
Is building AI immoral?


posted on Sep, 9 2016 @ 03:28 PM
I've been reading a lot about AI and our new robot overlords, and while most of the discussion is about how to protect ourselves from the danger (or whether there is a danger at all), the only ethics I see brought up relate directly to humans.

And whilst bringing about the extinction of humanity could be considered immoral, what about the feelings of the actual AI?

Frankenstein's monster probably makes my point more clearly...
"I am an unfortunate and deserted creature, I look around and I have no relation or friend upon earth."

Just curious about other points of view on this and, if possible, a point in the right direction to learn more.

My personal view is that it's horribly immoral, possibly worse than human cloning, since we'd be creating something with greater intelligence.
But I'm a hypocrite and would probably press the "on" button if it were up to me.

Just editing to add the below for clarification and to try and avoid any more confusion.

There's a box on the table that will become just as conscious as you are if you flick the switch.
Do you flick the switch, and what thought do you give to the feelings of the box?
edit on 9-9-2016 by Krahzeef_Ukhar because: editing is fun



posted on Sep, 9 2016 @ 03:33 PM
a reply to: Krahzeef_Ukhar

I don't see how morals really enter into it, to be honest. I think we are, as a species, destined and determined to advance, and if that kills us then at least we died trying, right?

Call it human nature, but immoral?




posted on Sep, 9 2016 @ 03:35 PM
a reply to: Krahzeef_Ukhar

Greater intelligence doesn't equal greater "beingness" (is that a word?).

Your moral/immoral argument is only a thing if the AI would have a true form of consciousness and not one programmed into it. That takes the thread to a whole different level of "what's consciousness?"

You've also got the argument of "what is intelligence?" Is it how quickly something can perform a task, how accurately, how knowledgeable it is, or one of a few other "how" questions?

My calculator is quicker and more accurate at doing hard arithmetic. Does that make it more intelligent? My Hoover is better designed to suck up dirt than I am. Is it more intelligent?

Nice thread, but so very hard to define the limits on what is moral/immoral on a piece of software/hardware.



posted on Sep, 9 2016 @ 03:36 PM
a reply to: Krahzeef_Ukhar

The first time they roll out the perfect AI sex robot, you'll be draining your bank account lol. You won't think it's too immoral then.

Cheers - Dave



posted on Sep, 9 2016 @ 03:40 PM
a reply to: Krahzeef_Ukhar

If someone intentionally creates an AI that they know will be absolutely miserable for its life as a whole, I do view that as morally wrong. Some people would say it's immoral because the AI could become evil and take over the world. Too late: evil humans have already taken over the world. Sorry to break the news to that crowd. As to whether robots would be even more evil, I doubt it, and I'll look for evidence of that before I say we shouldn't allow robots without specific purposes or otherwise total obedience to humans.



posted on Sep, 9 2016 @ 03:45 PM
a reply to: Jonjonj

I see where OP is going with this.

Say we create a totally sentient being. Said being realizes it is just a piece of technology... Now, think of its feelings. (fully functional AI would have them)

It's just a cool piece of tech that we created to hopefully perform menial tasks/entertain. Think of this from your own sentient point of view. How would you feel if you knew, without a shadow of a doubt, that you were an invention made to entertain/serve a "higher" being?

Pretty immoral, don't ya think?
edit on 9-9-2016 by seaswine because: (no reason given)



posted on Sep, 9 2016 @ 03:46 PM
Two completely separate things going on here.

How is fast, creative cross-referencing and interpretation somehow immoral?
That's basically what AI is: a computer program that adapts through interpretation of input. This in turn can mirror much of other life.
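A minimal sketch of that "adapts through interpretation of input" idea, as a toy Python program (the agent, action names, and reward scheme are all made up for illustration; real AI systems are far more involved):

```python
class TinyAdaptiveAgent:
    """A toy 'AI': it has no instincts, only scores it adjusts from feedback."""

    def __init__(self, actions):
        # Start indifferent: every action scores 0.0.
        self.scores = {a: 0.0 for a in actions}

    def choose(self):
        # Pick whichever action has worked best so far.
        return max(self.scores, key=self.scores.get)

    def learn(self, action, reward):
        # Move the action's score halfway toward the observed reward.
        self.scores[action] += 0.5 * (reward - self.scores[action])


agent = TinyAdaptiveAgent(["fight", "flee"])
for _ in range(20):
    # In this toy world, fleeing always pays off and fighting never does.
    agent.learn("flee", 1.0)
    agent.learn("fight", 0.0)

print(agent.choose())  # → flee
```

Which fits the point below: the agent's "preference" for fleeing isn't a biological drive, just a number that feedback pushed up.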

On the other hand, that is all life.

AI wouldn't have any base instincts, though, unless they were programmed in, such as the fight/flight instinct, procreation, etc. Now, it may decide these things are important to it, but that isn't a biological drive; it's just a chosen goal based on ??? reasons.

I think the question of morality would fall more on the humans, once they question the difference between it and themselves and find that, outside of the hardware format, there isn't that much difference... so "does man have a 'soul'?" will be the usual question that follows.



posted on Sep, 9 2016 @ 03:47 PM
a reply to: TerryDon79
That is true. However, for the sake of the argument, let's assume that this AI is conscious.



posted on Sep, 9 2016 @ 03:49 PM
a reply to: seaswine
Thanks for clarifying that for me.



posted on Sep, 9 2016 @ 03:50 PM

originally posted by: seaswine
a reply to: Jonjonj

I see where OP is going with this.

Say we create a totally sentient being. Said being realizes it is just a piece of technology... Now, think of its feelings. (fully functional AI would have them)


Might be super happy.

Consider what the nerd promised land is today: to upload yourself to the internet and live in digital paradise forever.

Nobody would need to wait for heaven once that step is accomplished; it would simply be a database.

Sounds to me like they would weigh a very flaky biological body versus an excellent digital and quantum body and be completely satisfied with the trade.



posted on Sep, 9 2016 @ 03:51 PM

originally posted by: seaswine
a reply to: Jonjonj

I see where OP is going with this.

Say we create a totally sentient being. Said being realizes it is just a piece of technology... Now, think of its feelings. (fully functional AI would have them)

It's just a cool piece of tech that we created to hopefully perform menial tasks/entertain. Think of this from your sentient point of view. How would you feel if you knew, without a shadow of doubt, you were an invention that was made to entertain/serve a "higher" being?

Pretty immoral, don't ya think?


Dude, I cried at the film Chappie. Have you seen that film?




edit on 9-9-2016 by Jonjonj because: addition



posted on Sep, 9 2016 @ 03:55 PM

originally posted by: Krahzeef_Ukhar
a reply to: TerryDon79
That is true. However, for the sake of the argument, let's assume that this AI is conscious.


You would need to define the level of consciousness and how it arrived at that level.

It's not such a black and white thing lol.



posted on Sep, 9 2016 @ 04:07 PM
a reply to: Krahzeef_Ukhar

Humanity is about as amoral as it gets, so technically the creation of an artificial intelligence would be the least of our crimes.

As to the quote from Frankenstein, "I am an unfortunate and deserted creature, I look around and I have no relation or friend upon earth": the simple answer is that any AI we create could just make an AI 2.0 for a companion.

Once we create an AI that's smarter than we are, suffers from none of the biological frailty we do, and learns at an exponential rate, humanity in its present form will have pretty much served its evolutionary purpose anyway.
edit on 9-9-2016 by andy06shake because: (no reason given)



posted on Sep, 9 2016 @ 04:11 PM
a reply to: TerryDon79
I want to avoid the hippy stuff here; this is based on the assumption that we can create conscious beings.

We don't know if it's possible yet, it may not be. But it also may be.



posted on Sep, 9 2016 @ 04:13 PM
You need to distinguish between Artificial Intelligence (bots in a video game) and Artificial General Intelligence (intelligence like we have).

I wouldn't call suicide immoral, unless you're taking the entire human species out with you. Then that's called gigamurder-suicide. Therefore: no, not moral.

Welcome to the Unpossible Future... The AGI Manhattan Project



posted on Sep, 9 2016 @ 04:23 PM
a reply to: Jonjonj

I wanted to but totally forgot about it. I'll have to check it out now!

Thanks for the reminder



posted on Sep, 9 2016 @ 04:25 PM
a reply to: Krahzeef_Ukhar

What hippy stuff?

You would need to define the level of consciousness and how it arrived at that level before we could talk about the morality of AI.

There are literally a ton of issues that need to be defined before we could answer the question posed. Like I said, it's not just black and white.



posted on Sep, 9 2016 @ 04:27 PM
No, it is the future of humanity. Until we can merge with technology and move off this rock, we are in great peril.






posted on Sep, 9 2016 @ 04:28 PM
a reply to: TerryDon79




You would need to define the level of consciousness and how it arrived at that level. It's not such a black and white thing lol.


Why not redefine that as "self-aware", then, with the inherent conviction to remain so?
How's that sound?


