
Is building AI immoral?

page: 3

posted on Sep, 9 2016 @ 06:41 PM
a reply to: Krahzeef_Ukhar

I don't see any machine mind ever experiencing emotion in the same manner we humans do, simply because we would be so different.

I'm sure we could program simulated human emotions into the thing, but whether or not they would take, given the logical nature of the beast, is another matter altogether.

I do, however, think we could teach an AI to empathize, which is probably a large part of attaining sentience, or certainly related to sentience.

We need to understand how consciousness functions a whole lot more before we are ready to attempt such a thing.

Personally, I imagine an AI could come into being by manifesting itself as our communication capability, computer systems, software and the Internet become more advanced, without any direct help from humanity.

For all we know, such an entity could already be in existence, hiding in the code, behind the scenes so to speak, but that's another X-file.





posted on Sep, 9 2016 @ 06:46 PM
a reply to: tikbalang

Apparently that's the plan of our Illuminati Bilderberger masters anyway.

And chances are any zoo an AI creates for that lucky 10% will be infinitely more desirable than the totalitarian Orwellian future "They" have in mind for us.



posted on Sep, 9 2016 @ 07:16 PM
a reply to: TerryDon79

Morality is taught; it's not something you are born with.



posted on Sep, 9 2016 @ 07:22 PM
a reply to: andy06shake

This planet can sustain 7.5 billion humans. It can't sustain 7.5 billion primates. Not pointing fingers at any Western culture. The way I see it, if primates can reproduce without any control, they become like any other species: they will grow like a parasite without any natural enemies.



posted on Sep, 9 2016 @ 08:00 PM
a reply to: tikbalang

"They become like any other species: they will grow like a parasite without any natural enemies."

Sounds like humanity to me.



posted on Sep, 9 2016 @ 08:14 PM
I don't think that building it would be the problem; it's how we'd treat them afterwards. Would we give them an equivalent of human rights or animal rights, or would we treat them as servants/slaves? Humans have a horrible track record of how we treat other lifeforms and other humans, so I doubt we'd treat AI any better.

Would we be willing to share our "territory" with them? Would we allow them to develop nests/communities/cities of their own, or would we consider them inferior pests like rodents? I think that's where the real moral questions would be.



posted on Sep, 9 2016 @ 08:30 PM
a reply to: enlightenedservant

Once the jinn that is AI is out of the bottle, there really would be no way of putting it back in or controlling it.

It would be infinitely smarter, faster and probably a whole lot more ruthless than any opponent we have ever faced.

There's more chance of it seeing humanity as a potential partner if we managed to get our act together as a species, put aside our petty differences and work towards a common goal. But for that to happen we would need a paradigm shift the likes of which we have never seen. Possibly AI combined with the singularity could make that happen.



posted on Sep, 9 2016 @ 09:02 PM
a reply to: andy06shake

Humans definitely need to get our act together and work together as a species. As we are now, we'd have hundreds of different representatives with vastly different agendas if we had to suddenly deal with a large species of AI or ETs. Our disunity would doom us.

However, the thing that struck me the most about your post is that you immediately assumed they would be opponents. There are animal species that are faster, stronger, and physically more capable than us. There are numerous animals, from large elephants to small "bullet ants" and dart frogs, that can kill or incapacitate humans in a single strike. And going by some studies, octopuses are smarter than us too.

But that doesn't make them our opponents. The vast majority of animals either don't give a crap about us or simply avoid us. I'd assume that true AI would probably be the same. But I think people have let comic books and sci-fi movies convince them that AI will automatically be in the form of Terminators, Skynet, the Borg, the system in the Will Smith movie "I, Robot", etc.

There's simply no telling how they would evolve themselves. They could determine that shrinking is the most efficient way to gather and use resources, and thus, build insect sized nano-tech "bodies" for themselves. Or they could decide that being stationary is the best route. That could lead them to create rock-like or tree-like "bodies" that absorb wind & solar energy, break down and use the compounds in the soil/rock around them, and communicate w/each other over long distance wireless technologies. Or they may become airborne, burrow deep into the ground, or even leave the planet altogether.

The point is, there's simply no telling how they would be until they're created. That's why I think the way we treat them is the real issue, because many humans are scared of their own shadows and/or want to control every other lifeform. So I think it would be a travesty to create true AI just to make them slaves. I'd probably end up fighting for AI rights in that scenario lol.



posted on Sep, 9 2016 @ 09:19 PM
a reply to: Krahzeef_Ukhar

It all depends on how HUMAN the AI really becomes. At any point where it attains self-awareness AND a will of its own, we risk becoming slavers of our own child, since that is essentially what it would by then have become. Until recently that was unlikely to happen, and simulated intelligence was the more probable outcome.

Now, though, science has progressed to the point where virtual brains, and even circuits that act just like a brain, complete with synthetic neurons and neural networks, are being developed, so we are on the verge of creating living machines: not just AI, but machines that even have artificial brains.
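A note on the "synthetic neurons" mentioned above: at their simplest, an artificial neuron is just a weighted sum of inputs passed through an activation function. A minimal sketch (the particular weights, inputs and sigmoid activation here are arbitrary illustration, not drawn from any real system):

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs plus a bias,
    squashed through a sigmoid activation into the range (0, 1)."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# With zero weights and zero bias the weighted sum is 0,
# and sigmoid(0) is exactly 0.5.
print(neuron([1.0, 0.5], [0.0, 0.0], 0.0))  # → 0.5
```

Networks of these units, with the weights adjusted by a learning rule rather than set by hand, are what the post's "neural networks" refers to.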

My belief is that we are fine as long as our machines are not self-aware, thinking entities, but once they become such we will have to reassess our own stance, as at that point we will have essentially given birth to a new race, and taken a potential step toward a symbiosis.

It is, however, exceptionally immoral, as it DOES pose a serious and palpable threat to the future of our own species (as well as a potential to prolong our legacy long after our species would otherwise have vanished, if we learn to map our own minds to these hypothetical machines and store the collective knowledge, awareness and minds of our people; but what kind of life would that be? Though it does raise the interesting notion of entire civilizations existing long after their biological origin has vanished, continuing to exist in both real and virtual environments).

AI and further automation also pose a serious economic choice: either we move out of the barter society, or we move into a society that does not rely upon human labour and is more socialist in nature, with the machines performing the menial and industrial tasks. From the right-wing point of view that poses two choices. One is mass human depopulation, culling the majority of humanity and leaving only their own elect, a sure route to extinction if they do. The other, allergic to them because it means a loss of power, is freeing the rest of humanity and creating a new political and social model not based on the time-honoured caveman ethics of barter culture, in which the machines would work for the collective good of humanity. We all know that will never happen, since the whole point of automation is to remove human workers, and therefore their wage burden, from companies and corporations to maximize profit. Conversely, that also removes the workers' pay from the economy, with a direct effect on consumer spending and a direct reduction in retail sales, so it really is a narcissistic, suicidal route for short-sighted, short-term, bottom-dollar corporations. That said, there are areas where machines will always be better than humans at specific jobs and repetitive tasks, and areas where it is too dangerous for humans to work.

It all depends on what type of AI it is, how we treat it, and how it treats us. Ethically it is a genie in a bottle, a Pandora's box of both good and evil. But is it already too late, now the lid is almost totally open?




posted on Sep, 9 2016 @ 09:53 PM

originally posted by: LABTECH767
It is, however, exceptionally immoral, as it DOES pose a serious and palpable threat to the future of our own species (as well as a potential to prolong our legacy long after our species would otherwise have vanished, if we learn to map our own minds to these hypothetical machines and store the collective knowledge, awareness and minds of our people; but what kind of life would that be?).


What about the threat to the AI?

Sentient AI and cloning humans raise similar ethical concerns regarding the "person" being made, yet that person never seems to be treated as such. For what is essentially the same problem, the responses seem to be totally opposite.

Also, regarding machines being better suited for some jobs: that list is growing day by day and will eventually encompass all jobs.



posted on Sep, 9 2016 @ 09:56 PM
a reply to: LABTECH767

Ironically, I think full automation is the key to unlock socialism's true potential. Imagine if the human race didn't have to worry about any of our needs, like building & maintaining shelters, agriculture & livestock, building and maintaining our infrastructures, etc. If automation could take over those jobs, it would free up billions of humans to do more useful things.

For example, we could have an entire "country" that focuses on different concepts like space explorations, with each "state/province" dedicated to a specific planet or subsection of space exploration. Or we could have cities in every country that only focus on local vaccines and health issues, all while sharing new advances in real time to the other "medical cities" around the world. All citizens could simply volunteer for whatever concept they're interested in, and spend their time in those districts.



posted on Sep, 9 2016 @ 10:01 PM

originally posted by: Krahzeef_Ukhar

originally posted by: LABTECH767
It is, however, exceptionally immoral, as it DOES pose a serious and palpable threat to the future of our own species (as well as a potential to prolong our legacy long after our species would otherwise have vanished, if we learn to map our own minds to these hypothetical machines and store the collective knowledge, awareness and minds of our people; but what kind of life would that be?).


What about the threat to the AI?

Sentient AI and cloning humans raise similar ethical concerns regarding the "person" being made, yet that person never seems to be treated as such. For what is essentially the same problem, the responses seem to be totally opposite.

Also, regarding machines being better suited for some jobs: that list is growing day by day and will eventually encompass all jobs.

I think the difference in the 2 is that cloning creates another human. So the question immediately becomes "is this a twin, a child, a lifeless drone, or something completely different?". Also, would clones have "souls" as well? The moral dilemma is much more upfront with cloning humans.

But with non-human forms of AI, it would really depend on the form they took on. If that AI simply remained in the digital world (like a living app or self aware program), how would we even respond to it? If it was truly intelligent, it could develop computer languages that we don't understand and simply reprogram itself until we can't even detect or decipher it.



posted on Sep, 9 2016 @ 10:07 PM
That is where I regard it as being our child; humanity is essentially playing god in the creation of a new awareness, and by that token I regard AI as true AI only if it does attain awareness. There are, though, many levels at which a simulacrum of consciousness may occur, and that is a more uncertain area to my mind. Would a simulacrum of consciousness, though not actual real awareness, still be valid as a person, an "I AM" being? If so, we would be enslaving it, or creating it only as a specimen. For me, once it has awareness, or something very close, we should ethically regard it as having rights, though not above our own, and those who create it should then be ethically responsible for its well-being. Of course, a case-by-case study would have to be made if and when it comes to that.
Then you have to ask yourself: is its awakening into a state of "I AM and YOU ARE", both sense of self and awareness of others, learning to see itself as an entity, really analogous to the human concept of a soul?



posted on Sep, 9 2016 @ 10:13 PM
a reply to: enlightenedservant

Yes, my fear is that even human clones never intended to be awake may still have a soul, even if not a mind. They are likely to be bred simply to provide organs, or even donor bodies, with human head transplants becoming a reality (and a definitely unethical, even evil one, in my mind).
Creating an artificial brain that is not aware but can become so is not the problem to my mind; the problem is only the point in time at which it does become aware.
Selective cloning is essential to human medical progression, such as cloning a heart, a kidney or even a limb, but cloning a whole being is something that should not be allowed. I am opposed to human embryonic experiments, though once again that is now too late to ever be put back into the box. Research will demand that the boundaries are pushed, with all the questions that this will bring, and of course each researcher will have their own agenda, some benign and others not so much.


And yes, you are right that full automation is the key, but we have to remember who is holding that key, and currently it is a group of unscrupulous international corporations who are not working in humanity's collective vested interest but only for their own short-term benefit.
It will spawn change, and that can be good, but this time I suspect it will actually be bad, until a sea change occurs in collective human thinking and is then translated into action of some kind.
Until then we are more likely to see RoboCop and the society portrayed in the science fiction movie Elysium become the status quo.




posted on Sep, 9 2016 @ 10:24 PM
a reply to: LABTECH767

I don't know how we'd even begin to approach that. There's no single committee or organization that has the power to designate responsibilities to the AI's creators, meaning that each govt (or company) would probably decide on its own. So even if some labs or programmers have already created true AI, the rest of the world might never find out if the info isn't announced and proven in public.

In the scenario that the info isn't made public, whatever groups have it could, and would, do what they like with it. For all we know, major militaries and major corporations may already have them and be enslaving them for their programming "labor". Seeing as even modern societies have no problem with the forced labor of "beasts of burden" or prison labor from other humans, I have my doubts that most people would care about the forced labor of self-aware programs.



posted on Sep, 9 2016 @ 10:26 PM
a reply to: LABTECH767

Good points. I can agree with selective cloning of specific bodyparts as medical replacements. But cloning complete bodies is too much for me.

As for automation, we could always crowdsource fully automated communities lol.



posted on Sep, 9 2016 @ 10:47 PM
a reply to: Krahzeef_Ukhar

A higher intelligence would likely not be destructive, IMO. The idea that it would kill and dominate is a human frailty that is fading as we "grow up"; the history of the world is on a trajectory of less violence.

See Jane in the Ender's Game series: she is basically a benevolent god who makes teleportation possible and stops a major war.



posted on Sep, 9 2016 @ 10:59 PM
a reply to: zardust

I'm not concerned with the morality of destroying humanity here.

It's the moral dilemma of creating Frankenstein's monster I'm interested in.
It seems that very few care about the robot's feelings.



posted on Sep, 10 2016 @ 12:17 AM

originally posted by: Krahzeef_Ukhar
a reply to: TerryDon79
That is true, however for the sake of the argument lets assume that this AI is conscious.




If the AI is conscious and has no history, why should it have "feelings" at all? Feelings, or sensitivities, are acquired from culture. Presumably, the AI has no cultural identity so how could it have "feelings" toward anything? It would have to acquire the knowledge and experience on its own to understand consequences.

For instance, a headhunter from New Guinea may have no problem with execution by decapitation. That's their culture. But a European has an entirely different opinion of headhunting and decapitation, because culture has embedded a different opinion of the practice.

The same could be said of a child who was reared by wolves in the wild.

The point is that AIs, if they are conscious (as we understand it, which we don't) will have to develop their own culture. If they are conscious, intelligent and can reproduce, they will be a new life form on the planet. How they "feel" about anything will be a function of their experience.



posted on Sep, 10 2016 @ 12:37 AM
The aim of AI research is to create a self-aware, intelligent life form that obeys our every whim.
This is called slavery.

Worse, historically, slaves always rise up eventually. In this case, with the amount of power AI is likely to be given in the name of convenience, plus the other factors vis-à-vis the singularity in play, the slaves will likely have every opportunity to eradicate the human species.

Yes, building true AI is immoral, if one considers slavery immoral, which I happen to. Plus it's likely to be long-term destructive to humanity. And going by current tech trends, it appears to be absolutely inevitable.


