I think if we ever created A.I. it would operate under 'fear' (avoidance) of death; to program the robot any other way would defeat whatever
its primary purpose was.
You say there are no evil robots, no evil hammers. I say there are no evil people either. Our flesh and blood are the tools of our consciousness,
and therefore no different from a hammer or a robot. Our physical bodies were crafted to be self-sustaining, self-interested, reptile-brained
breeding vessels. Our minds evolved later, and started to perceive the world through a lens that demanded relativity. Our conscious mind perceives
itself as good, and so to fill the vacuum, our unconscious is often perceived as evil. Man has proven unstable when the familiar support structures
of ideology and faith collapse. That doesn't make us evil; it just makes our morals appear somewhat more contrived and flimsy in the face of
scientific evidence that points to our desire to survive no matter the cost. Many people mistake survival instinct for evil. Evil is a human notion
based on the projected fears of mankind's unknown 'under-self', magnified by a long line of shortsighted control-mongers stretching back to the dawn
of consciousness. Evil is bad; bad is whatever is dangerous, unfamiliar, threatening, or uncomfortable. Anyone who defines bad and protects you from it is a
friend. Anyone with even a whiff of bad about them is evil.
The A.I. in Resident Evil was just efficient; she had no reason to care about the lives of a few insignificant humans. That's the sort of A.I. you're
talking about, if I'm reading your post correctly. The A.I. in Lost in Space was a different sort of intelligence, the sort of robot you could take
home to mum. The latter is, of course, the least realistic from a programmer's standpoint. If trinary, or multi-state, processors were to advance, the
yes and no options of the Resident Evil type of A.I. might be converted into the yes, no, and maybe options of everyone's favorite flailing set of arms.
Until that happens, robots will be unable to understand (properly and efficiently execute) commands not based in logic, commands not expressible in
on-and-off terminology. Am I right?
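Incidentally, the yes/no/maybe idea doesn't strictly require exotic hardware; a "maybe" state can be simulated in ordinary software. Here's a minimal sketch (my own illustration, not anything from the post) of Kleene-style three-valued logic in Python, using None as the "maybe" state:

```python
# Kleene three-valued logic: True, False, and None ("maybe").
# Purely illustrative; function names are my own choices.

def not3(a):
    # NOT of "maybe" is still "maybe".
    return None if a is None else (not a)

def and3(a, b):
    if a is False or b is False:
        return False   # one definite "no" settles an AND
    if a is None or b is None:
        return None    # otherwise any "maybe" keeps it open
    return True

def or3(a, b):
    if a is True or b is True:
        return True    # one definite "yes" settles an OR
    if a is None or b is None:
        return None
    return False

# "maybe AND yes" stays maybe; "maybe OR yes" is a definite yes.
print(and3(None, True))   # None
print(or3(None, True))    # True
```

The point being: the binary machine underneath doesn't change, it just represents the third state explicitly, which is roughly how SQL handles NULL today.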
We have made robots that can reproduce, in terms of physical capability: assembly robots can be assembled by other assembly robots. They are by no
means autonomous, the way nanotech promises. There is a robot that dreams, a robot that walks and talks, a robot that vacuums, and so on. Put all those
robots together, add breeding capability, and you have a better-than-average maid. I would like to see robots learning to paint, to sing, to enjoy
sex, fine wine, or a particularly vibrant sunset, but the syntax isn't there yet. There is no mesh between capability and consciousness, not yet.
We can get robots to do just about anything we need them to; we just haven't had the need to give them the ability to do things for themselves.
Competition for resources is a good point, but if one species has precise control over the breeding population of another, the species in power need never fear being overrun.
We aren't going to make anything we can't control? Fire, nuclear weapons (fire v2.0), GM crops, PCBs, super-viruses, drug-resistant bacteria,
automobiles, aircraft, the internet, radio, Saddam Hussein.
The list goes on, I'm sure, but I'm not trying to be a jerk. Ever since fire we've
been losing control of our inventions. It probably started before that, with the first monkey who sharpened a stick to fish termites from a mound,
and ended up with the stick lodged in his right nostril. Mankind is like a child around volatile chemicals. We play around with particle
accelerators, supermagnets, rocket fuel, metal keys tied to kites flown during electrical storms. Frankenstein and Godzilla were both parables
speaking to the same myth, one older than Icarus. Man flies too high, and gets singed by the sun he's trying to touch.