
Asimov's Laws of Robotica: NO police robots, NO military robots!!!


posted on Sep, 15 2016 @ 04:37 PM
a reply to: galien8

I'm not trying to sound mean, but that doesn't answer the questions.

I'm asking why security companies or border patrol agencies should design robots that can't or won't harm humans? Or if someone or some group designs true AI, why shouldn't it be allowed to protect itself from human attacks? And last, why can't other people (or future AI) have a say in the rules for robots?



posted on Sep, 15 2016 @ 04:39 PM

originally posted by: galien8

originally posted by: enlightenedservant

So we're just talking about science fiction stories and not a potential reality? If 2 or 3 people are fighting (as humans constantly do), how can a robot stop humans from hurting each other without hurting one of the humans? How would it even know which human was in the "right" to see who to help? I mean realistically. Would it tase everyone who is fighting, even though that would cause harm to them? What methods would it use to stop a human conflict without harming either human?



The Laws can be interpreted to mean that robots do not interfere in human affairs at all: they stay passive when humans fight among themselves. Consequently, under Asimov's Laws, robots can neither fight alongside humans against other humans (and their robots) as soldier robots in wars, nor intervene in gang-versus-gang violence as police robots. We could let sentient robots fight other sentient robots like gladiators, though. Wouldn't that be funny?

But that would violate the 1st and 4th laws by allowing harm to humans/humanity through inaction.
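That conflict can be spelled out in a few lines. A minimal sketch in Python, assuming we encode only the First Law's two clauses (the function and the scenario encoding are invented for illustration, not anything from Asimov):

def first_law_permits(action_harms_human: bool, inaction_allows_harm: bool) -> bool:
    # 1st Law: a robot may not injure a human being or, through inaction,
    # allow a human being to come to harm.
    return (not action_harms_human) and (not inaction_allows_harm)

# Two humans are fighting: intervening harms one of them, while standing by
# lets the other come to harm. Neither option satisfies the law.
print(first_law_permits(action_harms_human=True, inaction_allows_harm=False))  # False
print(first_law_permits(action_harms_human=False, inaction_allows_harm=True))  # False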



posted on Sep, 15 2016 @ 05:14 PM

originally posted by: enlightenedservant
a reply to: galien8

I'm not trying to sound mean, but that doesn't answer the questions.

I'm asking why security companies or border patrol agencies should design robots that can't or won't harm humans? Or if someone or some group designs true AI, why shouldn't it be allowed to protect itself from human attacks? And last, why can't other people (or future AI) have a say in the rules for robots?


I don't think you're quite into the spirit of the thing. Why should companies design robots that conform to the law? For the same reason auto companies are required to install air bags: it's the law. Asimov, who proposed these laws in his robot stories of the 1940s and '50s, was trying to get us all to think about the implications of AI. That we're still talking about the Three Laws of Robotics shows that he was successful. "I, Robot" is a perfect example of what happens when things go astray.



posted on Sep, 15 2016 @ 05:22 PM

originally posted by: enlightenedservant
a reply to: galien8

I'm not trying to sound mean, but that doesn't answer the questions.

I'm asking why security companies or border patrol agencies should design robots that can't or won't harm humans? Or if someone or some group designs true AI, why shouldn't it be allowed to protect itself from human attacks? And last, why can't other people (or future AI) have a say in the rules for robots?



OK, now you're clear! Security firms, border patrol, the military, and the police should not even think of using robocops.

You made me think about it again. OK, maybe we should not give the Laws of Asimov a canonical religious status, or regard them as God-given or something. I need to run the scenarios in my head.



posted on Sep, 15 2016 @ 05:52 PM

originally posted by: schuyler

originally posted by: enlightenedservant
a reply to: galien8

I'm not trying to sound mean, but that doesn't answer the questions.

I'm asking why security companies or border patrol agencies should design robots that can't or won't harm humans? Or if someone or some group designs true AI, why shouldn't it be allowed to protect itself from human attacks? And last, why can't other people (or future AI) have a say in the rules for robots?


I don't think you're quite into the spirit of the thing. Why should companies design robots that conform to the law? For the same reason auto companies are required to install air bags: it's the law. Asimov, who proposed these laws in his robot stories of the 1940s and '50s, was trying to get us all to think about the implications of AI. That we're still talking about the Three Laws of Robotics shows that he was successful. "I, Robot" is a perfect example of what happens when things go astray.

Just the opposite. The laws that auto companies follow were debated and passed by civilian-elected governments. Asimov's "laws" were never debated, nor even discussed, by the public. So I'm basically asking why anyone should be required to follow his rules. People can do it if they choose to, but who has the right to require it or enforce it? The UN?



posted on Sep, 15 2016 @ 05:59 PM

originally posted by: galien8

originally posted by: enlightenedservant
a reply to: galien8

I'm not trying to sound mean, but that doesn't answer the questions.

I'm asking why security companies or border patrol agencies should design robots that can't or won't harm humans? Or if someone or some group designs true AI, why shouldn't it be allowed to protect itself from human attacks? And last, why can't other people (or future AI) have a say in the rules for robots?



OK, now you're clear! Security firms, border patrol, the military, and the police should not even think of using robocops.

You made me think about it again. OK, maybe we should not give the Laws of Asimov a canonical religious status, or regard them as God-given or something. I need to run the scenarios in my head.


No problem. Like I said, I just wanted to bring up another angle for the purpose of debate. I can agree with the laws to an extent, but that's because I'm a pacifist. I also wish that humans would actually follow similar rules. But I don't see how they'd be realistic in a world that shuns pacifism.

And even my form of pacifism includes the right to self defense. So theoretically, I'd be ok with artificial intelligence being able to protect itself from human attacks, just as I theoretically agree that all animals have the right to self defense. And by extension, I'd reluctantly agree with the idea of robot "guardians" using non-lethal attacks to protect a homeowner's home, to protect the children they're babysitting, or the clients they're protecting (like human bodyguards do).



posted on Sep, 15 2016 @ 06:12 PM

originally posted by: schuyler

Manufactured humans like those in "Blade Runner."



Blade Runner is my favorite film (because of Rutger Hauer, who is also Dutch and was the hero of my youth; he also played a knight in a youth TV series called "Floris").

But these "replicant" androids were biological, more like clones or something, because the Tyrell Corporation experimented with biotechnological tools to prolong the replicants' lifespans, which always resulted in a lethal virus, as the head of the company told Rutger's character. Anyway, if they were human, then they were superhuman, more like X-Men.



posted on Sep, 15 2016 @ 06:26 PM
Spoiler!
The bomb comes back and, um, says "Let there be light." And there was light...


originally posted by: intrptr
a reply to: FamCore

Yah, the three laws are a utopian version of robotics. Like you said, drones violate them; in fact, every single weapons guidance system is directed to kill without question. There is no 'should or shouldn't I' programming included in the software of a warhead.

Morals of war and rules of engagement aside, once released they are designed to hit their target, period.

Amusing dilemma in the film Dark Star: arguing with a smart bomb.

I wonder if there will ever come a time when one can disarm a bomb with philosophy.




posted on Sep, 15 2016 @ 06:34 PM

originally posted by: enlightenedservant

And even my form of pacifism includes the right to self defense. So theoretically, I'd be ok with artificial intelligence being able to protect itself from human attacks, just as I theoretically agree that all animals have the right to self defense. And by extension, I'd reluctantly agree with the idea of robot "guardians" using non-lethal attacks to protect a homeowner's home, to protect the children they're babysitting, or the clients they're protecting (like human bodyguards do).



Well, you're quite right: a sentient robot is also only human and should have the right to defend itself. Chinese cultures have no problem seeing, for example, a stone or a mountain as animated; they will be the first to accept a robot as animated. Please modify and extend the existing Four Laws of Robotics with your ideas, for argument's sake.



posted on Sep, 15 2016 @ 09:48 PM
a reply to: galien8

Uhh, I really don't know what kind of laws would work.

If the robots are sentient/true AI, I think they should have equal rights as humans or at least a form of "animal rights". But seeing as countless millions of animals are killed as livestock or for being "pests", I don't think laws of that caliber would be sufficient. But if the robots are no different than modern computers, the laws need to focus on human behavior, not robot behavior. Kind of like how programs don't hack; people hack by using programs. And armed drones don't kill; the human spotters and "pilots" use armed drones to kill.

So maybe we should just limit ourselves to making robots with limited functions. Things like ATMs, kiosks/interfaces, and machines in factories, since they don't have the ability to harm us. Or traffic lights and automated vacuum machines. Of course, I don't see many governments or militaries agreeing with this.

So hmm, I may have to put some thought into the laws to figure out if there's something that more people can agree on.



posted on Sep, 16 2016 @ 12:29 AM
a reply to: galien8

Every time there is a tale of killer robots, or computers, I wonder why they didn't use those laws. In reality, they'd likely be ignored. The best ideas usually are!



posted on Sep, 16 2016 @ 12:34 AM

originally posted by: Maxatoria
There were occasions where robots were produced without the full 3 laws, such as when humans needed to enter a dangerous radioactive environment: the robots would see the human in there and, obeying the 1st law, would run in and kill themselves. It's been many a year since I read the books, but the rules were mathematical and thus could be adjusted if needed, and some of the stories covered the problems when a robot went awry due to the change in its programming.



Yes, one of his first stories was about that logical quandary.

I believe the robot decided to do the right thing.
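The "mathematical, adjustable" rules described above can be pictured as tunable weights, one per law clause. A minimal sketch, with weights and names invented purely for illustration (nothing here is from the books):

from dataclasses import dataclass

@dataclass
class LawWeights:
    no_injury: float = 3.0          # 1st Law, action clause
    no_inaction_harm: float = 3.0   # 1st Law, inaction clause
    obedience: float = 2.0          # 2nd Law
    self_preservation: float = 1.0  # 3rd Law

def rushes_into_radiation(w: LawWeights) -> bool:
    # The robot destroys itself (3rd Law cost) to prevent harm it merely
    # witnesses, whenever the inaction clause outweighs self-preservation.
    return w.no_inaction_harm > w.self_preservation

standard = LawWeights()
modified = LawWeights(no_inaction_harm=0.0)  # inaction clause removed
print(rushes_into_radiation(standard), rushes_into_radiation(modified))  # True False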




posted on Sep, 16 2016 @ 04:10 AM
It should be said that the rules are not absolute. A polite "go jump off a cliff" or "go play in the fast lane" said to a robot would be overridden by the 3rd law, as it would understand the language use and act accordingly; however, a strong, authoritative command to kill oneself would probably override the 3rd law. Generally, it always seemed to be a balance, like a set of scales, and when the robot couldn't work it out, it would normally just shut off and basically die.

The 3 laws are a great starting point for robotics research, as they bring ethics into the mix: we consider ourselves above the robots, in some ways almost like slave masters, so how would we regard a sentient robot in real life?
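That "set of scales" balance, including the shut-off when the conflict can't be resolved, could be sketched like this (the potentials, margin, and action names are all invented for illustration):

def choose_action(potentials, deadlock_margin=0.05):
    """Pick the action with the lowest law-violation 'potential'.
    If the top candidates balance like a set of scales, shut off."""
    ranked = sorted(potentials.items(), key=lambda kv: kv[1])
    best, runner_up = ranked[0], ranked[1]
    if runner_up[1] - best[1] < deadlock_margin:
        return "SHUTDOWN"  # irresolvable conflict: the robot "basically dies"
    return best[0]

# A polite "go jump off a cliff" barely weighs anything against self-preservation;
# a strong authoritative command weighs obedience much more heavily.
print(choose_action({"obey (jump)": 0.90, "refuse": 0.30}))  # refuse
print(choose_action({"obey (jump)": 0.50, "refuse": 0.48}))  # SHUTDOWN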



posted on Sep, 16 2016 @ 04:01 PM

originally posted by: enlightenedservant
a reply to: galien8

If the robots are sentient/true AI, I think they should have equal rights as humans...

...So maybe we should just limit ourselves to making robots with limited functions.



Yes, they should have equal rights. We ourselves are only "avatars" in a Universe emulation, cyber entities; when we make sentient AI entities ourselves, they are just as good as the real thing and have a (cyber) soul too.

What I find good about your reasoning is that all cyber souls (biological or electronic) should have the right to defend themselves. Keep up the good work!

Making only limited robots will not work; people will just make clandestine sentient AI robots instead.



posted on Sep, 16 2016 @ 04:05 PM

originally posted by: LadyGreenEyes
a reply to: galien8

Every time there is a tale of killer robots, or computers, I wonder why they didn't use those laws. In reality, they'd likely be ignored. The best ideas usually are!


ROBOCOP: Give up your arms, or else there will be... TROUBLE!!!



posted on Sep, 16 2016 @ 04:13 PM

originally posted by: Maxatoria
It should be said that the rules are not absolute. A polite "go jump off a cliff" or "go play in the fast lane" said to a robot would be overridden by the 3rd law, as it would understand the language use and act accordingly; however, a strong, authoritative command to kill oneself would probably override the 3rd law. Generally, it always seemed to be a balance, like a set of scales, and when the robot couldn't work it out, it would normally just shut off and basically die.

The 3 laws are a great starting point for robotics research, as they bring ethics into the mix: we consider ourselves above the robots, in some ways almost like slave masters, so how would we regard a sentient robot in real life?


A sentient AI robot is also only human; it can have hurt feelings.




posted on Sep, 16 2016 @ 04:54 PM
a reply to: galien8

Military robots already exist in the form of drones. Once we manage to bestow autonomous control on our creations, there will be no surefire way of implementing these 3 laws without severely limiting any artificial intelligence's ability to think for itself. You can't have the illusion of free will while retaining control, due to the fact that the two contradict one another.

The best we can hope for, really, is that we teach our creations benevolence, but with Man for a God, I really don't see that happening.



posted on Sep, 16 2016 @ 06:07 PM

originally posted by: andy06shake
a reply to: galien8

The best we can hope for, really, is that we teach our creations benevolence, but with Man for a God, I really don't see that happening.



I got a new insight through this thread: if there come to be sentient AI robots with a psyche, a consciousness, a subconsciousness, with feelings, with a soul, emulating human psychology, do they have the right to defend themselves against humans, against other robots, against animals, etc., etc.???





posted on Sep, 16 2016 @ 06:34 PM
a reply to: galien8

Essentially, the question is: should human rights extend to include artificial intelligence? Kind of hard to convince monkeys to give equal rights to tools. Just look at how we treat one another, never mind the rest of the animals of our world.

Personally, I think if the thing has the ability to empathize and think for itself, it should indeed have similar rights to humans.

Thing is, though, our basic rights are being eroded away on a daily basis under the guise of maintaining our security and way of life. Chances are, by the time humanity develops a true artificial intelligence (10-50 years distant), we won't have any rights remaining.



posted on Sep, 16 2016 @ 09:20 PM
a reply to: galien8

I'm not a fan of Asimov's laws, and I don't agree with them for AI. They're thought-provoking material, but that's all; they're just not practical.


