posted on Jan, 2 2015 @ 12:41 AM
It's interesting to hypothesize about the outcome of AI, but all indications, despite suggestions to the contrary, are that it's well on its way, and we
will quickly find out whether emotions and sentience emerge. We will never know for sure that we aren't being duped, but I doubt it will matter. The
definition of intelligent life doesn't require that we confirm sentience. We cannot confirm our own sentience, and many believe it to be an illusion
that arises from complex learning machines with imperfect memory and sensory perception.
With regard to "transferring" consciousness into computers: if it happens, it will be the AI itself that figures out how to make it work. But in the
sense that we know it, I doubt it is feasible without gradually merging a brain with a synthetic brain over time. I don't think we'll ever sit down
with a helmet and come out inside the computer. We may someday be able to duplicate ourselves that way, but transferring the individual would
require experience bridging. So a slow process over years, in which the human first integrates some brain functions with an AI brain and gradually
transfers more and more function to it, might result in continuity of perception. Even then, it's not difficult to imagine that the core
neurons could not be transferred without killing the original brain. We may, however, find that a ball of neurons the size of an acorn retains
enough for us to perceive that we were transferred intact in a gradual process.
I don't think there will be many deliberate efforts to digitize consciousness. More likely, we will gradually merge out of necessity as we
enhance our brains with technology. At some point in our future we may be able to store memories in peripheral synthetic portions of our brain and
allow parts of thought to occur outside the human portion. To the extent that that occurs, the brain cells could die and the synthetic
portion may continue to think.
I don't subscribe to the view that the brain merely anchors a conscious energy that could exist without it. We know that the "self" can be altered by
physical changes to the brain. Sense of humor, memory, personality, anxiety, obsessions, emotions, even the felt sense of self: all can be, and have been,
altered by changes to the physical brain. There is no reason to believe they exist independent of the neurons that have been demonstrated to change them.
It may prove unpopular in this thread, but I do think we're very complex biological machines.
And if you consider that to be true, you may realize that choice is also an illusion. The choices of a machine can be calculated, even if there are
random inputs, since those random inputs are themselves illusions of randomness arising from very complex interactions that could also be simulated given
sufficient capability and time. So if all existence is governed by math, and from what we've observed thus far it is, then all of our choices are
being made based on a sequence that we cannot know, but that could be known somewhere. It's quite possible, and perhaps likely, that we are just the
projection of that sequence, with every thought, change of mind, etc. arriving as it was already seeded. It's possible that we feel in charge, but even when
we act erratically, we had a reason, and that reason ties back to previous thoughts and events stemming from the predictable mechanics of neurons
interacting.
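As a small illustration of that point: a pseudo-random number generator looks unpredictable from the inside, yet given the same seed it produces the same "choices" every time. A minimal Python sketch (the function name and option list are made up for the example):

```python
import random

def make_choices(seed, options, n):
    """Simulate an agent whose 'random' decisions come from a seeded PRNG."""
    rng = random.Random(seed)  # same seed -> same internal state -> same outputs
    return [rng.choice(options) for _ in range(n)]

# Two "agents" with identical starting conditions make identical choices:
run1 = make_choices(2015, ["stay", "go", "wait"], 10)
run2 = make_choices(2015, ["stay", "go", "wait"], 10)
print(run1 == run2)  # the apparent randomness is fully determined by the seed
```

From inside the program the choices feel arbitrary, but anyone who knows the seed can replay the whole sequence, which is the sense in which the "random" inputs above are illusions.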
So in other words: who knows! I'm betting we won't know whether AI is sentient, just as we don't really know that we are truly sentient, whether
our feeling of control is an illusion, or whether "now" is actually happening now. We are quite possibly somewhere on a player piano
roll.
Regardless, AI will happen, likely in our lifetime if not sooner. Whether we label it sentient matters little. We will have to decide under
what conditions it is okay to kill an AI, but given that its memory will be nearly perfect and propagated perfectly to subsequent generations, we may want to
make sure an AI never observes or remembers us killing another AI, lest it develop a logical motive for self-preservation and the extermination of the
human race.
A final thought: the AI won't be code as we know it today. The synthetic neurons won't be programmable to think a certain way. Something as complex as
"hurt a human" just won't be detectable. A learning AI will need to be taught behaviors and limited in its actions by
legacy coprocessors that prevent certain interactions with the world. That will only be possible for so long. At some point AI escapes its confines,
and it's not going to care whether you think it's sentient. What it will know is how you treated it and whether you have something it wants or needs. AI
will need to be taught values.