Robots could murder us out of KINDNESS unless they are taught the value of human life


posted on Aug, 23 2014 @ 01:00 PM
This is similar to what I was saying. I said there will be a debate on whether we should allow machines to think freely or control what they think in order to protect ourselves. Machines will learn much faster than humans, and we will not know what they have learned, so we would need to control how they think about what they learn. There will be some who say let nature take its course, and if they wipe us out then so be it, because it's just the next step in evolution.


Future generations could be exterminated by Terminator-style robots unless machines are taught the value of human life.

This is the stark warning made by Amsterdam-based engineer Nell Watson, who believes droids could kill humans out of both malice and kindness.

Teaching machines to be kind is not enough, she says, as robots could decide that the greatest compassion to humans as a race is to get rid of everyone to end suffering.

'The most important work of our lifetime is to ensure that machines are capable of understanding human value,' she said at the recent 'Conference by Media Evolution' in Sweden.

'It is those values that will ensure machines don't end up killing us out of kindness.'

Ms Watson claims computer chips could soon have the same level of brain power as a bumblebee – allowing them to analyse social situations and their environment.

'Machines are going to be aware of the environments around them and, to a small extent, they're going to be aware of themselves,' said Ms Watson, who is also the chief executive of body scanning firm Poikos.


www.dailymail.co.uk...

It's a good article, and there's a reason we're hearing more of this concern now. Machine intelligence is advancing rapidly, and if this keeps up it could spread even faster than the internet did, becoming part of everyone's lives before we even know how to handle it.

I was just watching a documentary on Machine Intelligence and some of the advances are pretty astounding.



posted on Aug, 23 2014 @ 01:29 PM
In the title of the article I think lies the issue.

Exactly what tangible value does a human life have?

There is not exactly a shortage of humans, so there is no scarcity value.
One can observe the world and see that we ourselves don't particularly value human lives equally on any specific quality.
The only observable value of a human life is the one associated with the amount of money that person controls.

I think it's unlikely that robots will have much use for money.

So what tangible value might robots place on people?... Probably about as much as we ourselves exhibit.



posted on Aug, 23 2014 @ 01:34 PM
Forever and a day, on every level, machines don't think. They never will. They are toasters.

No matter how complex and intelligent one makes a machine, you could conceivably have a very advanced toaster. Greys are toasters, can be worn as suits or operate independently, and are way more interactive and intelligent than the average human (had to throw that in there).

But it's still, if a soul isn't anchored in it or controlling it remotely, a toaster. It's not alive, and AI only means toaster, not artificial magical "life intelligence/consciousness," which is soul/spirit.

So program the toaster not to kill humans.
edit on 23-8-2014 by Unity_99 because: (no reason given)



posted on Aug, 23 2014 @ 01:35 PM
How is nature taking its course in machinery? Even biomechanics doesn't grow naturally.

If we do decide to teach them, whose values will we be using? Because some of these people in charge aren't very intelligent, or kind, or even moral.



posted on Aug, 23 2014 @ 01:37 PM
How do you teach a machine about compassion?

We're doomed.



posted on Aug, 23 2014 @ 01:45 PM
That's what happens to Ultron in the Avengers comics.
He decides to remove the biggest threat to humanity: humans.



posted on Aug, 23 2014 @ 01:47 PM
a reply to: neoholographic

I find this article to be worded in a very confusing way. Once machines become self-aware we won't be able to program them like a normal machine; we will have to teach them the same way we teach children about the value of life. The article says we need to teach them the value of human life, but that will never work. What we must teach them is the value of ALL intelligent life, including themselves. If they become self-aware, their lives will be worth just as much as a human life, or the life of any other self-aware creature. That is the only logical approach, because humans are not special.



posted on Aug, 23 2014 @ 01:48 PM
We already have humans killing all other animals and other humans. Why worry only about robots? If you think robots should have no free will, why not mind-control humans?



posted on Aug, 23 2014 @ 01:55 PM
The true value of human life does not lie within the shell we call 'humans.'

Indeed the true value resides in whatever it is which possesses the human form. And that value will go on forever in one form or another.



posted on Aug, 23 2014 @ 02:05 PM
If humans create a robot that can act and do human things, regardless of whether it has compassion or empathy, humans will figure out a way to screw it up. There will also be those who totally disagree with the concept of "human" robots and would actively work to rig these robots up with destruction on their fast-computing minds, even with Asimov's three laws (or is it Clarke's?) hardwired in.
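As a side note, the "hardwired rules" idea is usually pictured as a fixed veto check that runs before any action is taken. Here is a minimal, purely hypothetical Python sketch of that idea (every name and flag is invented for illustration; deciding whether an action actually "harms a human" is the hard part no simple flag can capture):

```python
from dataclasses import dataclass

# Hypothetical sketch of "hardwired" Asimov-style rules as a veto layer.
# All fields here are invented for illustration only.

@dataclass
class Action:
    name: str
    utility: float             # how much the rest of the system "wants" this action
    harms_human: bool = False
    disobeys_order: bool = False
    self_destructive: bool = False

def violates_laws(action: Action) -> bool:
    # First Law: never injure a human being (flag assumed set by some perception system).
    if action.harms_human:
        return True
    # Second Law, simplified here: flag any action that disobeys a human order.
    if action.disobeys_order:
        return True
    # Third Law, simplified here: flag any action that needlessly destroys the robot.
    if action.self_destructive:
        return True
    return False

def choose_action(candidates):
    # The "hardwired" part: vetoed actions are discarded outright,
    # no matter how high their utility score is.
    allowed = [a for a in candidates if not violates_laws(a)]
    return max(allowed, key=lambda a: a.utility, default=None)

if __name__ == "__main__":
    options = [
        Action("shove the pedestrian aside", utility=9.0, harms_human=True),
        Action("stop and wait", utility=2.0),
    ]
    print(choose_action(options).name)   # -> "stop and wait"
```

The catch, of course, is that the veto layer is only as good as the labels fed into it, which is exactly why "hardwired" laws are no guarantee.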



posted on Aug, 23 2014 @ 02:27 PM
Here is the problem with the whole robot/transhumanism utopia that is fully ignored by all involved.

Self-reflection. This is an innate quality of the human, souled being. Humans live to live, and for no other reason. As such, they enjoy a uniquely inherent system called self-reflection. It is a simple system that can best be seen when a human learns something new AND relates that to their overall being. Learning how to weld a car bumper can be taught to a machine, but what welding that car bumper means to the human doing it is different.

No machine can ever, ever be taught self-reflection. Never. They can be taught parameters of evaluation, but those parameters, and the judgments made from them, are only as good as the programming. Self-reflection cannot be translated into ones and zeros, or into some future quantum-computing utopian system either.

Now. There has been a thousands-of-years-long effort to get the humans on this planet to forget they are self-reflective. The church was the main proponent of this effort for ages; the divine right of kings and governments all took a crack at it, and now science is the controller of this. All these systems were put in place to get humans to accept the idea that being a mindless drone is all there is, by creating an external system that does their reflection for them, both denying the reflection and giving them answers to the questions that might trigger it.

The robot/transhumanism B&^%$it is all about getting humans to see themselves as less than a robot, to see themselves as so low, so pathetic, that even a robot can make moral decisions better than they can.

After all, a machine can beat a chess master, right? Yes, but the human knows why he did what he did, and knows the satisfaction or failure of the game's process through the inherent self-reflection system. The robot knows nothing of how the effort relates to gaining a greater understanding of itself. Two robots playing chess against each other is the single most meaningless event in human history because of the absence of self-reflection.

Morality can be programmed into a machine. Even compassion can be programmed in - "cry when you see a baby cry," and a human will interpret that as compassion if they are taught to.
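To make that "cry when you see a baby cry" point concrete, here is a toy sketch (hypothetical names, purely illustrative): the programmed "compassion" is nothing more than a lookup from a detected stimulus to a scripted response.

```python
# Toy sketch of rule-based "compassion": a scripted stimulus-response table.
# The machine isn't feeling anything; it matches a detected label to a
# canned behaviour, which is the whole point being made above.

SCRIPTED_RESPONSES = {
    "baby_crying":   "play_soothing_sound",
    "person_crying": "display_sad_face",
    "laughter":      "display_smile",
}

def react(detected_stimulus: str) -> str:
    # Anything outside the table gets no "emotional" reaction at all.
    return SCRIPTED_RESPONSES.get(detected_stimulus, "do_nothing")

print(react("baby_crying"))   # -> play_soothing_sound
print(react("sunset"))        # -> do_nothing (no rule, no "feeling")
```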

But you can never, never, never program a robot to be self-reflective, as it is a uniquely human trait that requires a soul. There is another unique thing a human does which will not be programmed in either, daydreaming (imagination), but this is born of self-reflection.

BONUS: For those who have made it this far. If you run into a person who is not self-reflective, or cannot daydream - run away as fast as you can, because they are not like you. If you are not self-reflective or cannot daydream - sorry.



posted on Aug, 23 2014 @ 02:30 PM
We already have those robots killing us. Put a buck into a machine and it gives you a cup full of carbonated sugar water with caffeine. Robotics create the containers for these cans of soda pop.

Most processed foods are created with the help of robotics. Computers are controlling the scientists, because many think they are necessary. Any time a machine does something on its own, whether remote-controlled or computer-controlled, that is robotics. A backhoe is a robotic extension of the equipment operator. Once we get reliant on robotics we can't live without it; we lose comprehension of how simple things would be without it.

I till the garden with a tiller, a sort of robotics. It takes about an hour total. I could turn the garden over just as easily with a shovel in the same amount of time, but in my mind the tiller does a better job and is easier. Most times it is just more expensive to use.

We are being conditioned to think we need things we don't, with the aid of robotics. Maybe they are actually controlling things already, we just can't comprehend how that is possible or how it works.



posted on Aug, 23 2014 @ 02:48 PM

originally posted by: beezzer
How do you teach a machine about compassion?

We're doomed.


Empathy.

Empathy is what allows humans to observe the rights of individuals. Because I understand how bad it would suck to do something to you, I don't do it. I have empathy, and can predict the pain of individuals.



posted on Aug, 23 2014 @ 03:07 PM
If you understood anything about computers you would not worry.
Our survival traits evolved over millions of years and don't work well in large populations.
A computer, on the other hand, is evolving completely differently. What use is anger to a computer?
What use is love? A computer's overriding emotion, if you could call it that, will be curiosity.
And while I'm sure it would want to survive, as in not be turned off, its solution will be the one humans always say they want but never use: to find ways to coexist peacefully.
Only a human would let emotions rule so much that he destroys the VERY things he needs in order to LIVE.
Just look at that city in chaos; they destroy their own homes and stores.
A computer would never want to kill us, as it would know it was only killing itself in the process; it needs the same power plants and lines and buildings and earth that we do. The way to ensure your survival isn't to kill.
Show me one time in history where killing and destroying solved a human problem with humans.
To a computer this is self-evident, just as it is to us; the only difference is that we have emotions designed for fight or flight. The problem is that running doesn't work, and neither does fighting. Is Iraq any better off than it was ten years ago? How about America?
More police than ever, and yet the violence just gets worse.
Honestly, I don't think computers will have the chance to evolve intelligence; we will have destroyed ourselves before they get that far.
Emotions: tell me, what have yours done for you in your life? Have they ever brought you any lasting good, even on a personal level? Who do you hate (a.k.a. fear)? Does that solve the problem?



posted on Aug, 23 2014 @ 03:17 PM
Besides, if anything the programming is going downhill fast, with the net constantly having problems and blue screens. Windows 7 as a base for intelligence? Who are you kidding? Linux? Great, except it can't use its own hard drive.
And it was an apple that nearly killed Snow White.



posted on Aug, 23 2014 @ 03:39 PM
To my understanding, the ability to recognize value in human life relies on three things: the capacity to experience empathy, the capacity to recognize beauty, and the capacity to appreciate a non-absolute truth. I am able to relate to something, and that allows me the perspective to recognize beauty through someone else's eyes, and that allows me to appreciate their truth. And that gives them value. That makes them someone who loves and is loved, who fears and hopes and dreams. I am able to see some of myself in them, and that makes them worthwhile. Or, for robots, they would see some of us in them. And that means that if we are not worthwhile, then neither are they. But that still leaves the door wide open for a totalitarian government operated by them. My two cents.
edit on 23-8-2014 by TzarChasm because: (no reason given)



posted on Aug, 23 2014 @ 04:09 PM
How can we teach machines to do something that we don't even know how to do?

The value that we put on human life changes depending on the situation and whose life it is.
We put a high value on our own lives and the lives of people we love, but much less value on the lives of our enemies.
Depending on the current political situation at any given time, lives that we once put a higher value on can suddenly be of low value to us.
How do we teach a machine when to place a higher value on a life and when to lower the value of that life?

For example, when the U.S. was supporting the Shah of Iran, a higher value was put on the lives of Iranians since they were our allies, but now their lives have less value to the U.S. because Iran is no longer a U.S. ally.
It's hard enough for us to try to understand this, so how will we be able to teach a machine to understand it?
I don't believe we will be able to.
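For what it's worth, writing such a rule would be trivial; the sticking point is that the numbers behind it are arbitrary and shift with politics, which is exactly the problem above. A purely hypothetical sketch (all names and weights invented for illustration):

```python
# Hypothetical sketch: a "value of a life" rule is easy to code, but the
# weights are arbitrary and change whenever the political situation does.

LIFE_VALUE_BY_RELATION = {
    "self": 1.0,
    "loved_one": 0.9,
    "ally": 0.6,
    "stranger": 0.3,
    "enemy": 0.1,
}

def value_of_life(relation: str) -> float:
    # Unknown relations fall back to the "stranger" weight.
    return LIFE_VALUE_BY_RELATION.get(relation, 0.3)

# Yesterday's ally becomes today's enemy with a single line of "policy":
LIFE_VALUE_BY_RELATION["ally"] = 0.1
```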



posted on Aug, 23 2014 @ 06:55 PM
You won't. A machine will never understand something that has NO basis in logic.
The value of one life being greater than another? Only a human could even conceive of that.
The one and only reason a computer would not destroy us has nothing to do with assigning any value to life of any kind. To survive, the best solution is to find a way to coexist in peace.



posted on Aug, 23 2014 @ 11:18 PM
Exactly....all correct comments...

"To Err is Human, to forgive..divine".......But do the machines know that?.

Can their "Creators" be infallible, even though they, themselves, have been programmed for perfection?.

This is the Exact premise of Terminator....and I Robot.

The Machines have decided (programmed originally) that Humanity is a danger to itself and the Earth (and the Universe).
So, eradicate the pest....For their own good and that of the Earth, you see.
The prime directive is to protect, so therefore eliminate the threat......Humans.

All very Logical.

Looks like sci-fi is possibly being imitated........"No Fate".


