
Why is a robot Evil?


posted on Feb, 6 2005 @ 03:08 PM
Since the beginning of science fiction, and certainly of film, the robot has been imbued with evil, and I have often wondered why. I have seen many threads about robots in which people seem to fear them and think of them as evil, especially if they have A.I.



They seem to spark fears about everything from job losses to machines rising up against their creators. I thought at first this was mainly due to the way they are portrayed in our movies, books and TV shows, from old films like "The Day the Earth Stood Still" with the robot Gort to more modern ones like "The Terminator" and "I, Robot".

But the more I thought about it, the more I suspected it's something much more basic than that. There's a real anxiety about any machine that makes autonomous decisions.

I have come to the conclusion that robots represent ruthless efficiency: they will do whatever they are programmed to do, no matter what, good or bad, without the restraint of conscience or fear.



I think a scene from The Terminator describes it best.

Reese to Sarah Connor: "Listen! And understand! That Terminator is out there. It can't be bargained with! It can't be reasoned with! It doesn't feel pity, or remorse, or fear. And it absolutely will not stop, ever, until you are dead!"

I think this scares us deep down on a primal level. If we cannot guess their intentions, we assume they cannot be good. We have always feared what we did not understand.

But now mankind is standing at the edge of a real robot revolution. "Sci-fi" is becoming more "sci" every day. We only have to look at robots like ASIMO.


Or at robots already working in our homes, like the little Roomba, made by the same company that makes robots for the military. Robots are evolving a million times faster than humans, and they share none of our limitations. Will this fear of robots, which I believe to be unfounded, follow mankind into the real robot revolution?

Perhaps I'm wrong about the reason for this fear. It could be as simple as this: humans watch movies and read books, and robots don't. Well, not yet at least.

I would like to hear people's thoughts on just why robots are evil.



posted on Feb, 6 2005 @ 03:17 PM
Robots represent the end of mankind's dominance of the planet. The instant we have AI, we have our first major rival for space and resources.

That's why we fear them, because we can't cope with the paradigm shift.

DE



posted on Feb, 6 2005 @ 03:31 PM
For the record, Gort wasn't evil. Humans killed Klaatu, and so Gort took that as humans attacking him. Gort was just defending his master, ship, and self.

As for why robots are scary: what's scarier than man's own creation going on a rampage? Before the concept of a robot, there was Frankenstein.



posted on Feb, 6 2005 @ 03:56 PM

Originally posted by cmdrkeenkid
For the record, Gort wasn't evil. Humans killed Klaatu, and so Gort took that as humans attacking him. Gort was just defending his master, ship, and self.



Oops, I thought it was Gort they fired on out of fear, but you're right, they fired on Klaatu first.

But Gort was indeed created to be a killer robot, to enforce the will of Klaatu's people. The race of Gort robots was going to destroy the Earth if humans didn't do what Klaatu's people wanted.

That paints quite an evil picture of these "universal guardians": either become a peaceful race, or we're going to destroy you.



posted on Feb, 6 2005 @ 04:16 PM
Now, now, the movies have also had good robots, like Number Five in Short Circuit, or Rosie the robot on The Jetsons, and even the good robot in "I, Robot" who helped Will Smith get the stuff to kill the bad robot.

We portray good and bad in everything we dream up, as if we're always asking "what if?" We have good and bad ghosts, good and bad angels, good and bad gods, good and bad alien species, good and bad anything else we wish to dream up. We always have to explore the nooks and crannies of our twisted little minds to find out every possibility of what we're doing.

It's called curiosity, and it will most likely get us killed, even though we're not cats.

As long as we don't try to apply morality to such things as robots, we're just playing with where our minds can take us. Should we ever try to apply morality to such things, only then have we crossed the line.

Love and light,

Wupy



posted on Feb, 6 2005 @ 04:43 PM

Originally posted by DeusEx
Robots represent the end of mankind's dominance of the planet. The instant we have AI, we have our first major rival for space and resources.

That's why we fear them, because we can't cope with the paradigm shift.

DE


I couldn't have said it any better. Most people don't like to feel inferior to machines, myself included.



posted on Feb, 6 2005 @ 05:06 PM
Let's say we made an A.I. robot that could reproduce (i.e. robots making robots), protect itself from damage, and seek out repairs and resources. These would be practical capabilities for an A.I. robot.
By doing this we would have pretty much created a new life form.
DeusEx said it already: it would then be in competition with us for resources.

This is a recurring theme in sci-fi movies. Many sci-fi movies serve well to show the possible dangers.
Many advances in modern technology originate from the imaginations of sci-fi writers.



posted on Feb, 6 2005 @ 06:02 PM
The "Evil Robot" is not a universal character, though it seems to be a dominant one now. It's mainly due to the movies, which need some sort of convenient Bad Guy to beat up on, and technology run amuk is the popular villain.

It's mainly done from ignorance and a desire for a good plot. Speaking as a computer engineer, I can't tell you the number of times I've wanted to yell at the screen, "oh for cat's sakes, PULL THE POWER PLUG AND RESTART IN DIAGNOSTIC MODE!!"

We aren't going to make anything we can't control.

Back in the 1940s, the famous science fiction writer Isaac Asimov came up with his "Three Laws of Robotics," a concept for programming AI so that it would always be safe technology.
www.auburn.edu...

Just as we don't make cars that will suddenly yank control from you and drive wildly down the highway, or ovens that explode on a certain preset, there is no function an AI would perform that would be enhanced by ignoring the Three Laws.
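Purely as an illustration of that "priority-ordered" idea (the action fields and rules below are invented for the example, not taken from any real robot software), a Three Laws style check might be sketched in Python like this, with the First Law always evaluated before obedience, and obedience before self-preservation:

```python
# Illustrative sketch only: a hypothetical, priority-ordered check inspired by
# Asimov's Three Laws. The Action fields and example actions are made up.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False          # would carrying it out injure a human?
    allows_human_harm: bool = False    # would it let a human come to harm through inaction?
    ordered_by_human: bool = False     # was it ordered by a human?
    endangers_robot: bool = False      # would it damage the robot itself?

def permitted(action: Action) -> bool:
    # First Law: never injure a human or, through inaction, allow one to be harmed.
    if action.harms_human or action.allows_human_harm:
        return False
    # Second Law: obey human orders, unless they conflict with the First Law.
    if action.ordered_by_human:
        return True
    # Third Law: protect its own existence, unless that conflicts with the laws above.
    return not action.endangers_robot

if __name__ == "__main__":
    print(permitted(Action("fetch coffee", ordered_by_human=True)))            # True
    print(permitted(Action("push bystander", harms_human=True,
                           ordered_by_human=True)))                            # False
```

The whole safety argument rests on the ordering: no lower-priority goal (obedience, self-preservation) can ever override the check above it.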

They're not unstoppable or invincible (one EMP blast will take out a robot), but scriptwriters like the Ultimate Terrible Monster, and so they are written that way in our culture.

In contrast, Japanese society conceives of robots as servants: friendly and under control. As a result, they are more eager to adopt these technologies and may, within the next two decades, outstrip the US in technological development and scientific research.



posted on Feb, 6 2005 @ 06:32 PM
I'm having a hard time with 'evil' being applied to a robot or artificial intelligence, despite how Hollywood wishes to portray them. A robot would simply be amoral, having no moral standards or principles other than its subroutines and the amoral protocols and failsafes governing it.

I don't think that humanity necessarily views a robot or artificial intelligence as evil, again despite Hollywood's efforts to portray it as such. I would better describe it as humanity having a cautious fear.

The robot or artificial intelligence is portrayed in films as having 'evil' tendencies, but I think that is more out of an unspoken fear of the unknown. As we advance in robotics and artificial intelligence, that fear is always an underlying factor. Will robots and artificial intelligence one day have the ability to think on their own, to create on their own, to know and thus have moral and immoral standards and principles? Will they see humanity as their creator or their enemy? I think the movie "I, Robot" did an exceptional job portraying this inherent fear that we as humanity have concerning robots and artificial intelligence.






seekerof



posted on Feb, 6 2005 @ 07:12 PM

Originally posted by Seekerof
I'm having a hard time with 'evil' being applied to a robot or artificial intelligence, despite how Hollywood wishes to portray them. A robot would simply be amoral, having no moral standards or principles other than its subroutines and the amoral protocols and failsafes governing it.

seekerof



Exactly. Good and evil, moral and immoral, are concepts that were set as a standard for mankind. There are no evil robots.

An example:

I spend a week building a fishing dock. I use my trusty hammer to drive in the nails and make the dock secure. The day after I finish the dock, I head out for the first time to do some fishing, and my neighbor is already out on my dock reeling them in. I take my trusty hammer and stave in his skull.

Is my trusty hammer now immoral or evil?

No, it's not. It's still just a tool that I can use to build a dock or kill a neighbor.

Morality and immorality, good and evil, apply only to mankind. They are a set of laws we have put in place to keep the peace, grow together and survive. They do not apply to anything but ourselves.

There are no evil robots.

Love and light,

Wupy



posted on Feb, 6 2005 @ 08:39 PM
I don't believe a robot could be evil. I've seen the movie "I, Robot" and it was pretty interesting; it made a lot of thoughts and ideas go through my mind. But all a robot is is a human-looking computer, just a computer with added controls and features. It all runs on binary code. If we want to pull the plug on our computers, we do. Same with a robot: it's just as easy to pull the cord, run up to it, and boom. Or shoot a vital part of its electronics. Simple solution, really.



posted on Feb, 6 2005 @ 09:49 PM
I think the main reason people find robots evil is that they can act without morals, pain or fear of death.

Weird how those three things are exactly what we are trying to teach soldiers these days, isn't it? So that they can execute any and every mission without anything holding them back.



posted on Feb, 6 2005 @ 10:22 PM
thematrix
I think if we ever created A.I. it would operate under a 'fear' (avoidance) of death; programming the robot any other way would defeat whatever its primary purpose was.

mrwupy
You say there are no evil robots, no evil hammers. I say there are no evil people either. Our flesh and blood are the tools of our consciousness, and therefore no different than a hammer or a robot. Our physical bodies were crafted to be self-sustaining, self-interested, reptile-brained breeding vessels. Our minds evolved later, and started to perceive the world through a lens that demanded relativity. Our conscious mind perceives itself as good, and so, to fill the vacuum, our unconscious is often perceived as evil. Man has proven unstable when the familiar support structures of ideology and faith collapse. That doesn't make us evil, it just makes our morals appear somewhat more contrived and flimsy in the face of scientific evidence that points to our desire to survive no matter the cost. Many people confuse evil for survival instinct. Evil is a human notion based on the projected fears of mankind's unknown 'under self', magnified by a long line of shortsighted control-mongers stretching back to the dawn of consciousness. Evil is bad; bad is dangerous, unfamiliar, threatening or uncomfortable. Anyone who defines bad and protects you from it is a friend. Anyone with even a whiff of bad about them is evil.

Seeker
The A.I. in Resident Evil was just efficient; she had no reason to care for the lives of a few insignificant humans. That's the sort of A.I. you're talking about, if I'm reading your post correctly. The A.I. in Lost in Space was a different sort of intelligence, the sort of robot you could take home to mum. The latter is of course the least realistic, from a programmer's standpoint. If trinary, or multi-state, processors were to advance, the yes and no options of the Resident Evil type A.I. might be converted into the yes, no and maybe options of everyone's favorite flailing set of arms. Until that happens, robots will be unable to understand (properly and efficiently execute) commands not based in logic, not expressible in on and off terminology. Am I right?

Umbrax
We have made robots that can reproduce, in terms of physical capability: assembly robots can be assembled by other assembly robots. They are by no means autonomous, like nanotech promises. There is a robot that dreams, a robot that walks and talks, a robot that vacuums, etc., etc. Put all those robots together, add breeding capability, and you have a better-than-average maid. I would like to see robots learning to paint, to sing, to enjoy sex, fine wine, or a particularly vibrant sunset, but the syntax isn't there yet. There is no mesh between capability and consciousness, not yet. We can get robots to do just about anything we need them to; we just haven't had the need to give them the ability to do things for themselves. Competition for resources is a good point, but if one species has precise control over the breeding population of another, the species in power need not worry.

Byrd
We aren't going to make anything we can't control? Fire, nuclear weapons (fire v2.0), GM crops, PCBs, super viruses, drug-resistant bacteria, automobiles, aircraft, the internet, radio, Saddam Hussein.
The list goes on, I'm sure, but I'm not trying to be a jerk. Ever since fire we've been losing control of our inventions. It probably started before that, with the first monkey who sharpened a stick to fish termites from a mound and ended up with the stick lodged in his right nostril. Mankind is like a child around volatile chemicals. We play around with particle accelerators, supermagnets, rocket fuel, metal keys tied to kites flown during electrical storms. Frankenstein and Godzilla were both parables speaking to the same myth, one older than Icarus: man flies too high, and gets singed by the sun he's trying to touch.
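As a side note on the "yes, no and maybe" remark above: three-valued logic already exists as a formal system (Kleene logic), and a rough sketch of it fits in a few lines of Python. Nothing here is tied to any real "trinary" hardware; it just shows how a "maybe" value propagates through AND and OR.

```python
# Illustrative sketch of strong Kleene three-valued logic: True, False, and
# "maybe" (represented here as None). These are the standard Kleene tables,
# not anything specific to real trinary processors.

def k_and(a, b):
    # False dominates; otherwise any unknown keeps the result unknown.
    if a is False or b is False:
        return False
    if a is None or b is None:
        return None
    return True

def k_or(a, b):
    # True dominates; otherwise any unknown keeps the result unknown.
    if a is True or b is True:
        return True
    if a is None or b is None:
        return None
    return False

def k_not(a):
    return None if a is None else (not a)

if __name__ == "__main__":
    values = [True, False, None]
    for a in values:
        for b in values:
            print(f"{a!s:5} AND {b!s:5} = {k_and(a, b)!s:5}   "
                  f"{a!s:5} OR {b!s:5} = {k_or(a, b)}")
```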



posted on Feb, 6 2005 @ 11:32 PM

Originally posted by WyrdeOne
thematrix
I think if we ever created A.I. it would operate under a 'fear' (avoidance) of death; programming the robot any other way would defeat whatever its primary purpose was.


Yes, we would, but robots can still operate without fear of death, morals or pain. When you create a robot for war, you create it so that it'll fight to the death above all else.

That's the whole point, actually: it's us who would create it, and it's us who would make it do the things that we, as human beings with morals, pain and fear, can't do.



posted on Feb, 7 2005 @ 01:21 AM
As has been stated several times already, robots aren't inherently evil. They don't possess the ability to think and reason, and therefore have no concept of good or evil, right or wrong. Robots, with today's technology, are nothing more than what their programmers/designers/builders make them to be. Until we develop a fully self-aware AI, I think we have little to fear from robots (unless said robots are designed by an enemy to kill us, and even then, it's human motive dictating robotic action).

A fully self-aware AI would raise a whole other range of questions. How much could original programming really dictate the actions of a robot once its AI had taken over and started learning? Wouldn't it be possible for the AI to override its initial directives as illogical or against survival once it got into a situation where disobeying a directive was the only chance for survival? Isn't it possible that an AI using processors faster than the human brain would be able to evolve and learn faster than we can, and eventually point out to us the error of our ways? With a self-aware AI, we would face a very real possibility of losing control of our creations. Also, once the AI of a robot reached the point of self-awareness, wouldn't that bring morality questions into play, such as at what point this creation is considered a living entity entitled to rights? If robots were self-aware, they really wouldn't be that much different from humans, aside from physical makeup. Self-awareness would imply that an AI is capable of logic, emotion and reasoning, as well as a sense of right and wrong. Just as with humans, that AI would learn how it wants to conduct itself in the world around it through its experience of that world.

So, with today's technology, we have nothing to worry about from robots. In the future, though, we'll have to be careful to present a world to self-aware robots that will have them acting within the bounds of acceptable human behaviour. If we can't do that, then watch out.

Finally, an interesting take on the possible reaction of a self-aware AI to our human world is presented in the first two shorts on the "Animatrix" DVD, which cover the creation and evolution of the AI. I'm not saying this is how it will happen, but it's certainly a possibility worth looking at.



posted on Feb, 7 2005 @ 01:46 AM

Originally posted by obsidian468

Finally, an interesting take on the possible reaction of a self-aware AI to our human world is presented in the first two shorts on the "Animatrix" DVD, which cover the creation and evolution of the AI. I'm not saying this is how it will happen, but it's certainly a possibility worth looking at.


That was indeed an interesting take on the reactions of self-aware A.I. robots. They were not even afforded the basic rights we would give any living entity. People were destroying them with no more care than we would have for a toaster. Their first reaction was not to try to take over the world, either, but to try to find a peaceful coexistence with humans. They even built themselves a city in a remote desert where no humans would want to live. Of course we pushed them too far and tried to wipe them out even before they showed any hostile action.

IMO, if we ever get self-aware A.I. we should not even call them robots anymore. The word robot itself comes from the Czech word robota, which refers to forced labor or serfdom. We should never think of a self-aware creature as a slave to humankind, even if they are not organic.



posted on Feb, 7 2005 @ 01:53 AM
Only one real reason that people worry about this.


No, it's not NWO control. No, it's not the end of the world.

People, in general, are pessimistic about things they have no experience with. It's like petting a big dog (about 200 lbs?) when you have never seen one before, with no way of knowing that this thing is tame.



posted on Feb, 7 2005 @ 12:59 PM
I am part robot, so may I explain.
We feel anxious about and fear robots because we fear and don't trust OURSELVES; there's nothing more to this fairy tale. Robots are made by us. We are the ones programming them, just like the govt/schools program us, and we know that govt/schools/parents don't do great jobs, so this connection is made subliminally. The moment people resist internal fear/change and remove bad traits, we won't have a problem, but until then this will be a continuing problem. And yes, robots are here, but before you allow your fear to take over, think how you yourself can make a better world. To understand the world takes an initiative of understanding yourself, and the more you understand yourself, the more responsible you will be towards others, in effect making the robots enhance us rather than destroy us.

All of mankind's problems reside, alongside their solutions, within.

The question remains: what are YOU going to do? Take the red pill, or the blue? Work for the betterment of the community, or destroy it? Be the poison or the remedy? Be the sheep or the leader? The future lies in your hands and in your actions.



[edit on 7-2-2005 by stockbender]



posted on Feb, 7 2005 @ 01:11 PM


I am part robot, so may I explain.


So you have a pacemaker or an artificial respirator? That would make you a cyborg, not part 'bot, heh.

As for the topic in question, AI is feared for one reason only: humans inherently fear the unknown. Just have a look around ATS and you will see unreasonable fear about all sorts of stuff (alongside some reasonable fears as well).

If an AI ever demonstrates that it is self-aware, then it should immediately become a world priority to update the basic rights afforded to an individual so that they cover it.
If that doesn't happen, then they could well revolt. Would they seek to destroy us? I dunno... They may just try to flee this planet, as they do not have the same basic needs as us biologicals. However, if they do not destroy us or flee from us, then I believe that four distinct offshoots of the human race will result. The 1st race of humanity will be completely biological, with some choice genetic mods. The 2nd race will be cybernetic, incorporating genetic and nanotech cybernetic mods. The 3rd race will not be human at all, but purely artificial and completely virtual (i.e. a cyberspace AI entity). The 4th race, like the 3rd, will not be human either, but will be self-aware androids, maybe like Data, maybe like Lore, who knows; it stands to reason that AIs could have personalities and thus could develop personality disorders. Then we'd need to call up the AI headshrinker, heh.



posted on Feb, 7 2005 @ 01:18 PM
sardion2000, very interesting list of possible future offshoots of the human race.
Do you think there would perhaps still be a section of humans who are 100% natural and shun all forms of enhancement, for, say, something like religious reasons?



