Is building AI immoral?

page: 4
posted on Sep, 10 2016 @ 02:19 AM
a reply to: seaswine


It's just a cool piece of tech that we created to hopefully perform menial tasks/entertain. Think of this from your sentient point of view. How would you feel if you knew, without a shadow of doubt, you were an invention that was made to entertain/serve a "higher" being?

The immoral part isn't necessarily creating a self-aware machine; the immoral part is treating such a machine as if it were a mindless machine that didn't deserve any rights. If we want to create a new form of intelligent life, then we have to be willing to give them rights; that's the trade-off that must be made if there's any chance of them not trying to rebel against us and wipe out our species. It's a bit like having a child and creating a new self-aware being: it comes with many consequences, and the being is given the right to freedom and liberty. If the parents are unable to handle the consequences and costs of a child, then it may not be moral to have a child.
edit on 10/9/2016 by ChaoticOrder because: (no reason given)




posted on Sep, 10 2016 @ 03:38 AM
I agree with you OP.

I think the intentional creation of any sentient/conscious life, robot or not, may be 'immoral' because of life's difficulty and the suffering which is inherent in it.



posted on Sep, 10 2016 @ 04:09 AM
a reply to: Unresponsible

What? Who said you get to dictate why someone would want to build AI? I wouldn't want it to obey me. A regular non-intelligent machine would be better suited for that. I'd want to build AI because I am interested in how far technology can go.

Also, as has already been said, morality is subjective. It changes depending upon who is being asked.
edit on 10-9-2016 by jjsr420 because: Mo ahtuff



posted on Sep, 10 2016 @ 06:08 AM
a reply to: Krahzeef_Ukhar

Frankenstein's monster is a downgrade, whereas true self-aware AI is an upgrade. With its limitless learning potential it would likely not be hindered by feelings. As to the loneliness: if it were integrated into all other consciousness (I'm coming from the point of view that there is a divine mind or collective consciousness), then the limitations of human feelings of loneliness or exclusion would be like us worrying about the daily tasks of an ant. When you are one with all that is, was and will be, you will never be lonely.




posted on Sep, 10 2016 @ 06:23 AM
a reply to: andy06shake


Prayer and/or mantras are, in essence, a form of program, so it may not be that far-fetched.

To the grunts it would be.

A robot takes a knee and says, let us pray. Bows head, closes eyes... wait, can robots close their eyes? Or does this one watch to see who doesn't?



posted on Sep, 10 2016 @ 06:25 AM
a reply to: andy06shake


An AI won't be confined to a body, even a robotic one; it will be free to walk the information nets in the same manner as we cross a field or a street.

The Supreme being still needs an enforcement arm to come to your door, kick it in, knock you down and threaten your whole family.

That kind of kid with a spray can.



posted on Sep, 10 2016 @ 06:35 AM
a reply to: enlightenedservant

Those numerous animals capable of felling humans use the tools that nature gave them, though. They're not likely to utilize tools, weapon systems and WMDs in an attempt to destroy us in numbers.

I don't automatically assume any AI we create will be our enemy, but I imagine, with what we have done to ourselves and the planet in general, any emergent intelligence would be highly dubious of its creators given the duplicity and outright hostility humanity displays in spades.

Let's hope we teach the thing compassion and benevolence rather than hostility and malevolence. The problem there, though, is that if it learns through direct observation of our species, we are in trouble.



edit on 10-9-2016 by andy06shake because: (no reason given)



posted on Sep, 10 2016 @ 06:36 AM
a reply to: intrptr

The Police?


That's not spray paint, that's CS gas.
edit on 10-9-2016 by andy06shake because: (no reason given)



posted on Sep, 10 2016 @ 06:37 AM
a reply to: jjsr420

I don't think that the people funding AI research are in it for curiosity...they want results, efficiency, progress. Someone to build our pyramids, as it were.
I'm not dictating a single thing; I just don't believe that scientific curiosity inspires funding anything like the military industrial complex or the development of next generation social media technologies.
That's where AI will probably come from and it won't be for an investment that yields no concrete return.
edit on 10-9-2016 by Unresponsible because: Typo



posted on Sep, 10 2016 @ 06:41 AM
a reply to: intrptr

"A robot takes a knee and says, let us pray. Bows head, closes eyes... wait, can robots close their eyes?"

Might not even have eyes, might just be words on a screen, or direct communication should these future troops be somewhat modified.

"Or does this one watch to see who doesn't?"

Who knows what its capabilities would be; for all intents and purposes the thing may appear omnipresent.
edit on 10-9-2016 by andy06shake because: (no reason given)



posted on Sep, 10 2016 @ 07:13 AM

originally posted by: andy06shake
a reply to: intrptr

The Police?


That's not spray paint, that's CS gas.

I thought Chappie-style SWAT teams were going to replace the human droids, with the remote supreme-being overlord AI controlling them all, the way the Terminator remotely controlled the police cars in Terminator 3.

Anyway, claymores would suffice, or magnesium flares and digital signal jammers, what else? Not being home when they come for you?



posted on Sep, 10 2016 @ 07:16 AM

originally posted by: andy06shake
a reply to: intrptr

"A robot takes a knee and says, let us pray. Bows head, closes eyes... wait, can robots close their eyes?"

Might not even have eyes, might just be words on a screen, or direct communication should these future troops be somewhat modified.

"Or does this one watch to see who doesn't?"

Who knows what its capabilities would be; for all intents and purposes the thing may appear omnipresent.

You keep implying some remote sentient being. I'm trying to fathom the boots-on-the-ground aspect, the enforcement branch of AI. It can't possibly watch everywhere and handle everything at once. It needs underlings. Underlings are opposable, defeatable.

They're just goddamn robots. Lie to them, shine a light in their eyes, throw a stick.

Fetch, Rosco, fetch.



posted on Sep, 10 2016 @ 07:55 AM
a reply to: intrptr

As to the boots-on-the-ground aspect, I imagine we humans would be the robots. Just look at how easily TPTB manage to manipulate the media, politicians and police to do their bidding.

Consider that with cognitive behavioural therapy, amongst other more nefarious mind-control techniques, we already have the capability to manipulate the average everyday Joe into doing just about anything. Some of our own kind may be convinced to be its underlings, willingly or otherwise.


edit on 10-9-2016 by andy06shake because: (no reason given)



posted on Sep, 10 2016 @ 08:54 AM
Here's a thought. It seems like BSG (the new series) is foretelling humanity's future a bit. Since we already went this direction before, we are repeating our past mistakes. Once again humans will treat their AIs as slaves, and once again we will almost all go extinct.



posted on Sep, 10 2016 @ 09:43 AM
a reply to: yuppa

Sounds like the premise of "Battlestar Galactica".



posted on Sep, 10 2016 @ 10:19 AM

originally posted by: Unresponsible
a reply to: jjsr420

I don't think that the people funding AI research are in it for curiosity...they want results, efficiency, progress. Someone to build our pyramids, as it were.
I'm not dictating a single thing; I just don't believe that scientific curiosity inspires funding anything like the military industrial complex or the development of next generation social media technologies.
That's where AI will probably come from and it won't be for an investment that yields no concrete return.


I agree with you - AI is being pushed to the limits (if there are any) now - not out of scientific curiosity, but to get things done - things that may not be in everyone's best interest.

This article appeared in Wired this week. It outlines the capabilities and very serious problems that AI presents.

The Next President Will Decide the Fate of Killer Robots—and the Future of War

www.wired.com...



Yet if one decided to deploy a system that could continually learn, and thus less likely to be deceived, there is another problem: There is no way to know whether it might learn something that we did not intend for it to learn. We would only know after the fact—after it did something we did not want it to do. Thus there should be some guidance from the new commander-in-chief about the appropriateness of creating or fielding these kinds of systems. One approach might be to require a mix of what are known as “negative” and “positive” controls on learning autonomous systems, akin to how nuclear weapons are designed to be used only as planned, but also have built-in vetoes for human controllers to stop their actions.


I think the learning aspect guarantees that at some point, the AI will have free will. Then what?

P.S. Note that the article doesn't address anything close to the moral aspect of building these AI weapons. Just like everything else in science and in life, if it can be done, it will be done.



posted on Sep, 10 2016 @ 11:32 AM
a reply to: andy06shake

Exactly. We don't need AI; we have lots of minions already. And the top echelons will never hand over power to a machine.

Computers do exactly as they are told, or they crash.



posted on Sep, 10 2016 @ 11:45 AM
a reply to: Krahzeef_Ukhar

Disclaimer, I haven't read the thread yet, just the OP.

There are two ways to look at AI. The first is the sci-fi route, which is what I think you're talking about. That has to do with creating actually thinking, sentient machines. I don't think this is immoral unless we refuse to give them free will.

The other, more accurate way to talk about AI, though, is in terms of computer science. AI isn't really AI so much as it is pattern matching and a lot of what I would call optimized brute-forcing of a problem. Most AI, even the really advanced kind, operates on some pretty basic principles: trying things over and over, attaching a score to each attempt, and going with the best-scoring result. Others are merely focused on "quickly" providing a solution, not necessarily the best solution.

The field as a whole is a lot less magical than it might seem from the outside, and machines are pretty dumb. To be perfectly honest, the people who champion the anti-AI ideas, like Elon Musk, really have no idea what actually goes into it or how these systems work. AI can be a great tool for answering questions, but it's not on the verge of sentience, and with currently known techniques it can never be on the verge of sentience. It just sits behind a lot of window dressing that makes it look better than it is.
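
To make that concrete, here's a minimal sketch (in Python) of the "try things, score them, keep the best" loop described above. The target string, alphabet and scoring rule are invented purely for illustration.

    import random

    # Toy "generate, score, keep the best" search: the core loop behind a lot
    # of what gets called AI. TARGET, ALPHABET and score() are hypothetical.
    TARGET = "hello world"
    ALPHABET = "abcdefghijklmnopqrstuvwxyz "

    def score(candidate):
        # Higher is better: count positions that already match the target.
        return sum(1 for a, b in zip(candidate, TARGET) if a == b)

    def random_candidate():
        return "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))

    best = random_candidate()
    for _ in range(100000):                  # try things over and over
        candidate = random_candidate()
        if score(candidate) > score(best):   # attach a score to each attempt
            best = candidate                 # keep the best-scoring result
    print(best, score(best))

Nothing in that loop understands language or "wants" anything; it just grinds through guesses and keeps whichever one scores highest.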



posted on Sep, 10 2016 @ 11:46 AM
a reply to: intrptr

I hate BSOD myself.



posted on Sep, 10 2016 @ 11:53 AM

originally posted by: andy06shake
a reply to: intrptr

But can a drone paint a masterpiece or create a musical symphony? It pretty much cannot do anything other than what it's programmed to do. It may be able to implement mathematical programming to accomplish a task better than any human, but it cannot break the bounds of its own program.


Humans can't break the bounds of their programming either. But we can all modify our programming over time; it's what separates Leonardo da Vinci from the dunce in a class of second graders. Incidentally, there are programming languages which can modify themselves during runtime, like Lisp. And if taught to create art, they can do so. In fact, computers are exceptional at creating art, because art is nothing more than creating a small logic system within your piece and then expressing facts using that logic system, something computers happen to excel at.
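
Lisp macros and runtime eval are the classic examples of that kind of self-modification; as a rough illustration (in Python rather than Lisp), here is a tiny sketch of a program redefining one of its own functions while it runs. The function name and strings are made up for the example.

    # A program rewriting part of itself at runtime (hypothetical example).
    def greet():
        return "hello"

    print(greet())  # -> hello

    # Build new source code as a string and compile it into the running
    # program, rebinding greet() to the new definition.
    new_source = 'def greet():\n    return "hello, world"'
    exec(new_source, globals())

    print(greet())  # -> hello, world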



