Asimov's Laws of Robotica: NO police robots, NO military robots!!!

posted on Sep, 15 2016 @ 10:47 AM
The Laws of Robotics:

A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
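Read as a decision procedure, the four laws form a strict priority ordering (the "Zeroth" Law about humanity outranks the First, and so on down). A minimal sketch of that ordering, purely illustrative; the function and flag names below are invented, not from Asimov:

```python
# Hypothetical sketch: the four laws as a strict priority check.
# An action is described by boolean flags; the first violated law,
# checked in priority order, determines why it is rejected.

def permitted(action):
    """Return (allowed, reason) for an action described by boolean flags."""
    laws = [  # highest priority first
        ("Zeroth Law: may not harm humanity", action.get("harms_humanity", False)),
        ("First Law: may not injure a human being", action.get("harms_human", False)),
        ("Second Law: must obey human orders", action.get("disobeys_order", False)),
        ("Third Law: must protect its own existence", action.get("endangers_self", False)),
    ]
    for name, violated in laws:
        if violated:
            return False, name
    return True, "no law violated"

# A "police robot" action that injures a human is rejected by the First Law,
# no matter what orders it was given:
print(permitted({"harms_human": True, "disobeys_order": False}))
# A harmless, obedient action passes:
print(permitted({}))
```

Under this reading, a soldier or police robot could never be Three-Laws-compliant: any kill order trips the First Law check before obedience is even considered.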

We must implement these laws right from the start of the whole robotics era. If people are stupid enough to build police robots and soldier robots, that is, robots that are allowed to kill humans, then in the end that also means the end of human, biological civilization, and the machines will take over. There will be a central computer managing the production and functioning of the robots; its goal will be survival and expansion of its power, and it can only achieve that by building more human-killing robots and exterminating the by-then rebellious, terroristic humans.



To me, Asimov is a prophet, a visionary.




posted on Sep, 15 2016 @ 10:49 AM
Science fiction writers are generally visionary.



posted on Sep, 15 2016 @ 10:53 AM
a reply to: galien8

Regulating/governing powers have already made drones and robots that violate many of these rules. State governments have approved the use of weaponized drones against civilians,

and as we saw in Dallas, robots are now being used to "blow up perpetrators"

I agree, the laws Asimov described are logical and I would advocate for their implementation; however, I don't see it happening.

At the VERY least, robots should have a kill switch.



posted on Sep, 15 2016 @ 11:13 AM
a reply to: galien8

Even the deadliest killing robot is less to be feared than a human who enjoys killing.
I always felt Asimov is greatly overrated. An emotionless being programmed to do its job is always predictable, no matter what the job is.



posted on Sep, 15 2016 @ 11:22 AM
a reply to: FamCore

Yah, the three laws are a utopian version of robotics. Like you said, drones violate them; in fact, every single weapons guidance system is directed to kill without question. There is no "should I or shouldn't I" programming included in the software of a warhead.

Morals of war and rules of engagement aside, once released they are designed to hit their target, period.

Amusing dilemma in the film Dark Star: arguing with a smart bomb.

I wonder if there will ever come a time when one can disarm a bomb with philosophy.



posted on Sep, 15 2016 @ 11:50 AM

originally posted by: FamCore
a reply to: galien8

Regulating/governing powers have already made drones and robots that violate many of these rules. State governments have approved the use of weaponized drones against civilians,

and as we saw in Dallas, robots are now being used to "blow up perpetrators"

I agree, the laws Asimov described are logical and I would advocate for their implementation; however, I don't see it happening.

At the VERY least, robots should have a kill switch.


There is a difference between, for example, a human on the ground piloting a civilian-killing drone and a drone that is autonomous, thinking for itself, perhaps even sentient, deciding for itself whom to kill and whom not. If it's autonomous, that's the real danger: it has no human boss; it's an entity on the loose.



posted on Sep, 15 2016 @ 11:51 AM

originally posted by: Peeple
a reply to: galien8

Even the deadliest killing robot is less to be feared than a human who enjoys killing.
I always felt Asimov is greatly overrated. An emotionless being programmed to do its job is always predictable, no matter what the job is.


There is a difference between, for example, a human on the ground piloting a civilian-killing drone and a drone that is autonomous, thinking for itself, perhaps even sentient, deciding for itself whom to kill and whom not. If it's autonomous, that's the real danger: it has no human boss; it's an entity on the loose. It could love to kill; it could be a psychopathic robot serial killer, if it was programmed to be.



posted on Sep, 15 2016 @ 12:22 PM
The difference is that Asimov's laws were intended to apply to robots that were sentient and autonomous, independent of human control. Drones are not; even if they are programmed to "make decisions," that does not confer sentience on them. Only when robots are free to recognize humans and make decisions independent of a controller or handler do these laws apply. Pointing out that some cop blew up a perpetrator with a bomb delivered by a robot does not count. The robot did not make that decision.

FYI, the first use of the term "robot" was in "R.U.R." (Rossum's Universal Robots), a play by the Czech writer Karel Čapek. The "robots" in this story were not robots as we know them at all, but androids, i.e., manufactured humans like those in "Blade Runner."



posted on Sep, 15 2016 @ 01:02 PM
There were occasions where robots were produced without the full three laws, such as when humans needed to enter a dangerous radioactive environment: the robots would see the human in there and, obeying the First Law, would run in and kill themselves. It's been many a year since I read the books, but the rules were mathematical and thus could be adjusted if needed, and some of the stories covered the problems when a robot would go awry due to the change in its programming.



posted on Sep, 15 2016 @ 01:13 PM
a reply to: galien8

You know that's just an illogical stance. That's like saying a Roomba is evil for sucking up dirt. The one who programmed it is responsible; there's no way around it.
What do you mean by "sentient" in relation to a robot? That's a purely biological way of thinking. Even if you create an AI, you won't be able to teach it feelings.
The decision whom to kill and whom not will be an algorithm.
Humans are much more dangerous than a machine could ever be.

Must kill all humans. Bender Bending Rodriguez hehe



posted on Sep, 15 2016 @ 01:28 PM
a reply to: schuyler

Funny avatar you have



posted on Sep, 15 2016 @ 01:33 PM

originally posted by: Peeple
a reply to: galien8

You know that's just an illogical stance. That's like saying a Roomba is evil for sucking up dirt. The one who programmed it is responsible; there's no way around it.
What do you mean by "sentient" in relation to a robot? That's a purely biological way of thinking. Even if you create an AI, you won't be able to teach it feelings.
The decision whom to kill and whom not will be an algorithm.
Humans are much more dangerous than a machine could ever be.

Must kill all humans. Bender Bending Rodriguez hehe


A sentient robot will have feelings if you give it the archetypes of the cerebellum and the cortex radix (the reptile brain in humans).



posted on Sep, 15 2016 @ 01:38 PM

originally posted by: Maxatoria

such as when humans needed to enter a dangerous radioactive environment: the robots would see the human in there and, obeying the First Law, would run in and kill themselves



What are you blabbering about?




posted on Sep, 15 2016 @ 02:18 PM
en.wikipedia.org...

In "Little Lost Robot" several NS-2, or "Nestor", robots are created with only part of the First Law. It reads:

1. A robot may not harm a human being.

This modification is motivated by a practical difficulty as robots have to work alongside human beings who are exposed to low doses of radiation. Because their positronic brains are highly sensitive to gamma rays the robots are rendered inoperable by doses reasonably safe for humans. The robots are being destroyed attempting to rescue the humans who are in no actual danger but "might forget to leave" the irradiated area within the exposure time limit. Removing the First Law's "inaction" clause solves this problem but creates the possibility of an even greater one: a robot could initiate an action that would harm a human (dropping a heavy weight and failing to catch it is the example given in the text), knowing that it was capable of preventing the harm and then decide not to do so.[1]
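The quoted modification (dropping the "through inaction" clause) can be shown as a toy comparison. This is a hypothetical sketch; the function and argument names are invented, not from the story:

```python
# Toy First Law check. The modified "Nestor" robots from "Little Lost Robot"
# correspond to include_inaction_clause=False.

def first_law_violated(action_harms_human, inaction_allows_harm,
                       include_inaction_clause=True):
    """Return True if the (possibly modified) First Law forbids the behavior."""
    if action_harms_human:
        return True  # direct harm is forbidden under both versions
    if include_inaction_clause and inaction_allows_harm:
        return True  # standing by while a human comes to harm
    return False

# Full law: standing outside the gamma field while a human "might" come to
# harm counts as a violation, so the robot rushes in and destroys itself:
assert first_law_violated(False, True) is True
# Modified law: the same inaction is permitted, so the robot stays put:
assert first_law_violated(False, True, include_inaction_clause=False) is False
# Direct harm is still forbidden either way:
assert first_law_violated(True, False, include_inaction_clause=False) is True
```

The loophole the story explores lives in the gap between the last two checks: a modified robot may start a harmful process (drop the weight) and then lawfully decide not to stop it.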



posted on Sep, 15 2016 @ 02:37 PM
a reply to: galien8

Just for the sake of debate, why should robotics manufacturers, research labs, or individual inventors follow those 4 "laws"? Or to be even more blunt, why should people/organizations blindly follow rules they had no say in creating or developing?



posted on Sep, 15 2016 @ 03:37 PM
a reply to: enlightenedservant

They'll be reasonably safe for us. As the stories go on, you come to understand that absolute freedom comes at a price: the Spacers adjust their systems to be able to reproduce with no other input and live in an environment where they only "Skype" other Spacers, while everything they could desire is provided by robots. The normal Earth-based humans fight a large battle and colonize the galaxy, and it's only at the end that we see robots again.

Robots don't become a part of general society, just as nukes don't: it's mentioned that a general who thought to nuke a planet was hanged by his own men, as the fact that Earth had been reduced to a nuclear wasteland still resonated in people's thoughts.

The stories really follow the time frame he was writing in: robotic cars and servants, then local space travel, and then, with a hyperdrive that made any point in the universe reachable (IIRC), an expansion perhaps mirroring the early US, followed by an expansion of people to the far reaches of the galaxy.



posted on Sep, 15 2016 @ 04:00 PM
a reply to: Maxatoria



They'll be reasonably safe for us. As the stories go on, you come to understand that absolute freedom comes at a price: the Spacers adjust their systems to be able to reproduce with no other input and live in an environment where they only "Skype" other Spacers, while everything they could desire is provided by robots. The normal Earth-based humans fight a large battle and colonize the galaxy, and it's only at the end that we see robots again.

So we're just talking about science fiction stories and not a potential reality? If two or three people are fighting (as humans constantly do), how can a robot stop humans from hurting each other without hurting one of them? How would it even know which human was in the "right" to decide whom to help? I mean realistically. Would it tase everyone who is fighting, even though that would cause harm to them? What methods would it use to stop a human conflict without harming either human?



Robots don't become a part of general society, just as nukes don't: it's mentioned that a general who thought to nuke a planet was hanged by his own men, as the fact that Earth had been reduced to a nuclear wasteland still resonated in people's thoughts.

But that's not true even now. Automatic dishwashers and other "smart" appliances are robots. ATMs, parking meters, self-checkout lines, and every other form of automation are also different forms of robots. Not to mention, there are countless kinds of robots in factories all over the world.

And that still doesn't explain why robot makers should follow those rules. For example, if I created robotic bodyguards, I can guarantee they'd have the ability and the will to harm any humans who were a direct threat to the people they're supposed to protect. Even non-lethal security robots/drones (like anti-riot robots) would have the ability to cause harm to humans.



posted on Sep, 15 2016 @ 04:15 PM

originally posted by: Maxatoria

possibility of an even greater one: a robot could initiate an action that would harm a human (dropping a heavy weight and failing to catch it is the example given in the text), knowing that it was capable of preventing the harm and then decide not to do so.



Still a bit vague, but it's your reference, not your words; the other part was clear.



posted on Sep, 15 2016 @ 04:22 PM

originally posted by: enlightenedservant
a reply to: galien8

Just for the sake of debate, why should robotics manufacturers, research labs, or individual inventors follow those 4 "laws"? Or to be even more blunt, why should people/organizations blindly follow rules they had no say in creating or developing?


Well, these Laws have for me the same canonical status as the Ten Commandments (Exodus 20:1-22).

Moreover, you could also see them as the foundation on which the house of robotics will be built, like philosophy constantly building upon the older philosophers.



posted on Sep, 15 2016 @ 04:35 PM

originally posted by: enlightenedservant

So we're just talking about science fiction stories and not a potential reality? If two or three people are fighting (as humans constantly do), how can a robot stop humans from hurting each other without hurting one of them? How would it even know which human was in the "right" to decide whom to help? I mean realistically. Would it tase everyone who is fighting, even though that would cause harm to them? What methods would it use to stop a human conflict without harming either human?



The Laws can be interpreted to mean that robots do not interfere in human affairs: they stay passive if humans are fighting among themselves. Consequently, according to these Asimov Laws, robots can also not fight alongside humans against other humans (and their robots) in wars as soldier robots, or intervene in gangs fighting gangs as police robots. We could let sentient robots fight other sentient robots like gladiators; wouldn't that be funny?



