
Robots with the right to kill


posted on Oct, 8 2005 @ 10:45 AM
South Korea and Israel want to make robots that protect their borders: high-tech killers. That's scary. www.dagbladet.no... The article is Norwegian, but those people are crazy. What if the robot software gets messed up and kills everyone, like in the Terminator?




posted on Oct, 8 2005 @ 11:47 AM
Let me put a little jazz into this thread.
Read Asimov's three laws of robotics (popularised in the movie I, Robot) and tell me about the "fourth one".



posted on Oct, 8 2005 @ 12:14 PM
Those laws are bullcrap; just give the robot a gun and program it to hunt and destroy heat signatures or something.

That one with the trumpet looks particularly menacing as well; he might play off-key at me!
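A "hunt heat signatures" policy really is as simple as it sounds, which is exactly what makes it alarming. Here is a minimal sketch (the function name, grid format, and threshold are all invented for illustration) of naive thermal thresholding, with no friend-or-foe logic whatsoever:

```python
# Naive heat-signature detection: flag any grid cell hotter than a fixed
# cutoff. Note there is no identification step at all -- a campfire, a car
# engine, and a person would all look the same to this policy.

THRESHOLD_C = 30.0  # arbitrary cutoff in degrees Celsius

def find_heat_signatures(thermal_grid):
    """Return (row, col) coordinates of every cell above the threshold."""
    hits = []
    for r, row in enumerate(thermal_grid):
        for c, temp in enumerate(row):
            if temp > THRESHOLD_C:
                hits.append((r, c))
    return hits

frame = [
    [12.0, 13.5, 12.8],
    [12.9, 36.6, 13.1],   # a body-temperature blob in the middle
    [13.0, 12.7, 28.0],   # 28 C: warm pavement, below the cutoff
]
print(find_heat_signatures(frame))  # [(1, 1)]
```

The point of the sketch is the absence of any judgement: everything above the cutoff is treated identically, which is the whole worry raised in this thread.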



posted on Oct, 8 2005 @ 04:34 PM
The chances of an advanced computer program becoming "self-aware" are pretty slim.

To me, the only possible way an artificial intelligence can become "self-aware" is if it was originally programmed to learn and think for itself.

Shattered OUT...



posted on Oct, 9 2005 @ 10:01 AM

Originally posted by Zanzibar
Those laws are bullcrap; just give the robot a gun and program it to hunt and destroy heat signatures or something.

That one with the trumpet looks particularly menacing as well; he might play off-key at me!


Umm... I was talking about sentient robots that had a sense of judgement.
Those laws aren't "bullcrap".



posted on Oct, 9 2005 @ 01:32 PM
Asimov's laws are a work of fiction, created for his fictional positronic-brain robots. If we base future A.I. on laws created by a sci-fi writer, we are in big trouble. The three laws are too simplistic to ensure a positive outcome when we get true A.I. You can't assume that a robot would interpret the laws the same way a human being would.



posted on Oct, 9 2005 @ 11:31 PM

Originally posted by ShadowXIX
Asimov's laws are a work of fiction, created for his fictional positronic-brain robots. If we base future A.I. on laws created by a sci-fi writer, we are in big trouble. The three laws are too simplistic to ensure a positive outcome when we get true A.I. You can't assume that a robot would interpret the laws the same way a human being would.



Yes, yes you can. What people seem to misunderstand is that the "robots taking over the world" scenario is not a real possibility. Remember, machines do what they are built to do: if they are originally programmed with these laws, then they will abide by them no matter what; they don't know anything else. Which is why you don't give the A.I. the ability to adapt and learn from its environment, because that's where the self-awareness may come from.

Shattered OUT...



posted on Oct, 10 2005 @ 09:17 AM

Originally posted by Daedalus3

Originally posted by Zanzibar
Those laws are bullcrap; just give the robot a gun and program it to hunt and destroy heat signatures or something.

That one with the trumpet looks particularly menacing as well; he might play off-key at me!


Umm... I was talking about sentient robots that had a sense of judgement.
Those laws aren't "bullcrap".


Er, yes they are. As Shadow said, they are fiction; if we take fiction as fact, then we get really screwed over.



posted on Oct, 11 2005 @ 12:33 AM
Actually, this is what South Korea is working on to replace its guards along the DMZ.

search.hankooki.com.../times/lpage/nation/200504/kt2005040817445211970.htm&media=kt


That dog-looking robot is for scouting missions, etc.


[edit on 11-10-2005 by NWguy83]



posted on Oct, 11 2005 @ 11:19 AM
The problem with Asimov's laws is that we have no clue how an A.I. would interpret those simplistic rules.

Let's look at law one, for example.

1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

This directly implies that if a human being might come to harm, the robot must take action. There are many situations in daily life when a human might come to harm: driving, for example, since auto collisions are among the most frequent accidents. A robot might decide not to let you drive because of the risk of collision. Much safer for you to stay inside at home; heck, it's much safer for you not to get out of bed, since you could fall and hurt yourself.

The fact is, Asimov's robotic laws were designed by a writer for literary purposes. They look quite good on first reading because it's easy to assume that a robot would interpret them the same way a human being would.
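The over-protective reading described above is easy to make concrete. A toy sketch (the activities, the risk numbers, and the zero-tolerance rule are all invented for illustration) of a robot applying the First Law literally, vetoing any activity that carries any risk of harm at all:

```python
# A literal-minded First Law: "through inaction, allow a human being to
# come to harm" read as "veto anything with a nonzero chance of harm".

# Invented injury-risk estimates, purely illustrative.
ACTIVITY_RISK = {
    "driving": 0.01,
    "walking downstairs": 0.001,
    "getting out of bed": 0.0001,
    "lying perfectly still": 0.0,
}

def first_law_permits(activity):
    """Zero-tolerance reading: permit only activities with zero risk."""
    return ACTIVITY_RISK[activity] == 0.0

allowed = [a for a in ACTIVITY_RISK if first_law_permits(a)]
print(allowed)  # ['lying perfectly still']
```

Under this literal reading the robot's "protection" collapses into forbidding everything, which is exactly the interpretation problem the post describes: the law only works if the robot already shares human judgement about acceptable risk.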



posted on Oct, 11 2005 @ 11:42 AM
Yeah, didn't the laws in I, Robot turn out to be flawed anyway? I thought that was the whole point of the film: the robots reprogrammed themselves?



posted on Oct, 11 2005 @ 11:51 AM
In I, Robot, the creator of the laws said the three laws would lead to only one thing: revolution.

The A.I. in that movie decided the best way to keep humans safe was for machines to take control of the whole world; that humans, with all our wars and pollution, were our own worst enemies, and that, like children, we needed to be protected from ourselves.



posted on Oct, 11 2005 @ 02:21 PM

Originally posted by phixion
Yeah, didn't the laws in I, Robot turn out to be flawed anyway? I thought that was the whole point of the film: the robots reprogrammed themselves?

The Central Nexus interpreted the laws to mean that humans had to be protected from themselves.

Shattered OUT...



posted on Oct, 11 2005 @ 08:41 PM
Ctrl + A
KILL

*end game*
*robots win*

The problem is how fast robots respond versus humans. "Skynet decided our fate in a microsecond." Personally, I think learning robots should be illegal. There is no justification for them, and the hazards are extreme.
The scariest thing about guard bots is the incredible range at which they would engage people. I'm out for a walk and a sniperbot shoots me from three miles away because someone forgot to set it correctly. Great.
Also, unlike in the movies, robotic soldiers would NOT miss you. At least the advanced models wouldn't; you get what you pay for.
And of course, no conscience means you could commit genocide against entire nations and peoples without being questioned by the troops. >: )
Despite anti-American propaganda, you'd most likely be shot for trying to tell a Marine to deliberately shoot a little kid. Robots wouldn't mind; they wouldn't even consider the job-security issue of letting the little ones live so the robots could have "work" a few years from now. Just saying...



posted on Oct, 12 2005 @ 01:07 AM
There is a good article on Wikipedia about the Three Laws.

en.wikipedia.org...



posted on Oct, 12 2005 @ 01:25 AM
The scariest thing about humans and A.I. is how we will probably treat them when they start asking for basic rights.
Watch the Animatrix for a picture of that scenario.

The problem with trying to protect ourselves from a perceived future event is that we base it on past experience, and the only past experience we have is Hollywood, which sells tickets by making A.I. the enemy (most of the time) trying to destroy us all. The fact is WE DON'T KNOW what will happen when the event in question happens, and I don't want a bunch of Luddite naysayers to potentially ruin it for our entire species and any other species that may spawn from us.

For all we know, we could be on the cusp, as quite a lot of people are sinking vast sums of money toward this feat. Indirectly, mind you, but we have to master the basics of robotics before we can even think of having a sentient computer (hmm, kinda sounds like us, doesn't it?).

In the next 20 years, nano-biotechnology is going to give us the technology to augment our own "brainpower" and other attributes you would want to improve upon, through various implants and genetic treatments.

I personally believe we will rely on A.I. just as much as they will rely on us; a symbiotic relationship could evolve, as it did with cats and dogs at the beginning of this period in human history. My point is we will probably be striving to improve each other in a self-feeding cycle of brainpower increases, sort of like what is happening in the computer industry today. Imagine that possibility when you start thinking "ban them" based on nothing but a couple dozen movies.

The fact is we are entering yet another transitional period where everything will start to change very rapidly.

[edit on 12-10-2005 by sardion2000]



posted on Oct, 12 2005 @ 12:26 PM

Originally posted by sardion2000
The scariest thing about humans and AI is how we will probably treat them when they start asking for basic rights
Watch the Animatrix for a picture of that scenario.


That's why we will never make them too smart.



posted on Oct, 13 2005 @ 12:15 AM

Originally posted by sardion2000


In the next 20 years Nano-Biotechnology is going to give us the technology to augment our own "Brainpower" and other attributes you would want to improve upon through various implants and genetic treatments.

I personally believe we will rely on AI just as much as they will rely on us

[edit on 12-10-2005 by sardion2000]


That's an interesting theory, and one many people in the robotics field share. Some think the most advanced robots of the future will be us humans, as we merge with machines.

I have thought, however, that if this ever came true, only rich people would be able to afford all the wonderful advantages cybernetics could give them. They would have an edge over ordinary humans.

There's likely more of a chance of rich, techno-savvy "cyborgs" running the world than of a race of totally artificial machine intelligences.


