reply to post by Druscilla
You are forgetting one very crucial aspect of human development, one not as familiar to roboticists and the like: childhood. During childhood, a
child has no free will. What they eat, drink, see, hear, touch, and learn are all a matter of their PARENTS' choosing. This is the programming phase
of human development, where the rules and guidelines that are supposed to govern our interaction with the world are taught to us.
Now, I don't need to tell you that the systems we have in place to program ourselves and imprint morality and values often fail. Bad programming and
faulty wiring in the head can leave those rules and guidelines at best blurred beyond use, or at worst totally re-written, a darker program installed
if you will. Those are, however, considered failures, and the consequences, even for a mere being of flesh, are often dire, not just for the
individual but for their whole society (Gein, Hitler, Manson, Mao, and so on).
Robots and artificial intelligences, however, are not as blatantly finite as we are. Harder, faster, stronger, incapable of fatigue, with raw
intelligence limited only by their access to raw data and their ability to process and store it, they have none of those weaknesses. Logic without
flesh can be cold, and here is the crux of the issue. When people go wrong, we as a species are very much used to dealing with it one way or
another. Wars get fought, manhunts are initiated, an entire legal system has been constructed, and a thousand other such constructs exist, PURELY
to deal with failures in mere flesh.
The three laws are there both to protect man from his designs and to protect the AI from having to be as flawed and terrible as those human examples
of sociopathy and psychopathy. You see, the systems we currently have in place to deal with the worst examples of inhumanity are themselves
imperfect. Murderers walk among us; rapists and child molesters are left to fester until, like a boil, they burst, and the consequences ruin minds
and hearts and tear families and society to bits like so much damp tissue.
Can you imagine how much harder it would be to deal with a robot under these circumstances? I would say that if we cannot install command-level
failsafes, then perhaps we should never build a truly artificial intelligence into an android or robot. If the moral arguments against installing
such programs are heavy enough, then the entire project ought to be abandoned, because the systems of governance, law, and law enforcement, and
those by which we hunt the truly dangerous amongst us, find it hard enough to deal effectively with mere humans with broken minds, let alone
self-aware AI robots with metal skins and minds which operate in a totally different way to our own.