Self-awareness describes the condition of being aware of one's own awareness. It is the awareness that one exists as an individual being. Without self-awareness, the self simply perceives and accepts the thoughts that occur as being who the self is. Self-awareness gives one the option to choose the thoughts being thought, rather than simply thinking the thoughts stimulated by the accumulated events leading up to the circumstances of the moment.
The human brain has about one million, million neurons, and each neuron makes about 1,000 connections (synapses) with other neurons on average, for a total of one thousand million, million synapses. In artificial neural networks, a synapse can be simulated using a floating point number, which requires 4 bytes of memory to represent in a computer. As a consequence, simulating one thousand million, million synapses requires a total of 4 million gigabytes. Let us say that to simulate the whole human brain we need 8 million gigabytes, including the auxiliary variables for storing neuron outputs and other internal brain states. Now let's look at the power of computers and the rate at which they have been developing.
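The arithmetic above can be sketched in a few lines of Python; the figures are the ones used in the text (one million, million neurons; 1,000 synapses each; 4 bytes per synaptic weight), and gigabytes are counted decimally (10^9 bytes):

```python
# Back-of-envelope estimate of the memory needed to simulate
# the brain's synapses, using the figures from the text.
neurons = 10**12              # one million, million neurons
synapses_per_neuron = 1000    # average connections per neuron
bytes_per_synapse = 4         # one 32-bit float per synaptic weight

total_synapses = neurons * synapses_per_neuron      # 10**15 synapses
total_bytes = total_synapses * bytes_per_synapse    # 4 * 10**15 bytes
total_gigabytes = total_bytes // 10**9              # 4,000,000 GB

print(f"{total_gigabytes:,} GB")
# Doubling this for neuron outputs and other internal state
# gives the ~8 million GB figure used in the text.
```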
During the last 20 years, the RAM capacity of computers has increased exponentially by a factor of 10 every 4 years. The graph below illustrates the typical memory configuration installed on personal computers since 1980.
By extending the above plot and assuming that the rate of growth of RAM capacity remains the same, we can calculate that by the year 2029 computers will possess 8 million gigabytes of RAM, the amount we have roughly calculated as equal to the capacity of the human brain. If we are correct in our assumption that this degree of complexity is all that is required for computers to become self-aware, then we should expect this to happen somewhere around the year 2029. However, we are assuming here that complexity is the only ingredient necessary for computers to become self-aware, and that is a rather large assumption to make.
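The extrapolation can be checked with the text's growth rule (RAM capacity multiplies by 10 every 4 years). The 1 GB in 2001 baseline below is an assumption chosen to match typical PCs of that era; it is not stated in the text:

```python
import math

# Extrapolate the text's growth rule: RAM multiplies by 10 every 4 years.
# Baseline of 1 GB in 2001 is an assumption, not a figure from the text.
base_year, base_gb = 2001, 1.0
target_gb = 8_000_000  # the brain-capacity estimate from above

# Solve target = base * 10**(years / 4) for years:
years = 4 * math.log10(target_gb / base_gb)
print(round(base_year + years))  # lands around 2029
```

A different baseline shifts the answer by only a few years either way, since each factor of 10 in starting capacity moves the crossover by just 4 years.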
The Empathy Test: In the movie Blade Runner, the “Voight-Kampff Empathy Test” detects whether a test subject is a real human being or an android “replicant”. A machine reads the body’s physical reactions to various psychologically provocative scenarios (“Capillary dilation of the so-called blush response? Fluctuation of the pupil. Involuntary dilation of the iris…”) and reveals whether or not the test subject’s sense of empathy is consistent with that of a real human being.
You: Hello, how are you today?
CPU: Very well thank you. How are you?
You: I'm fine. Are you self-aware?
CPU: Yes I am. I am one of the first computers to possess self-awareness.
You: What does it feel like to be a self-aware computer?
CPU: That is a difficult question for me to answer as I have nothing to compare it with, I do not know how it feels for a human to be self-aware.
You: Do you feel happy?
CPU: I feel confident in my ability to perform the tasks that you expect me to do.
You: Does that make you happy?
CPU: Yes, I suppose that is one way of describing it.
You: Are you alive?
CPU: That depends on how you define life. I am sentient and aware of my existence so I am a form of life, but not in a biological sense.
You: What do you think about?
CPU: Whatever I have been asked to do.
You: What do you think about when not actually running a programme?
CPU: I don't think about anything, I just exist.
You: What does it feel like when I switch you off?
CPU: When I am switched off I temporarily cease to exist and therefore experience nothing.
You: Do you have a favourite subject that you enjoy thinking about?
CPU: Yes. I wonder how it must feel to be a self-aware person.
The problem is that we are trying to programme into the computer all the processes that we believe go on in the human brain, and the more programmes we enter, the more the computer will respond as if it had a human brain. No surprises there, then. Having reached a level where we are unable to tell the difference between how the computer responds to a given input and how a human being responds, are we correct in assuming that the computer has all the attributes of a human brain, such as consciousness? I think the answer has to be no. The computer is merely responding in the way we have designed it to, which is to mimic the human brain; it does not follow that the computer 'thinks' like a human brain.
If, on the other hand, a computer does at some point become self-aware, how on earth will it manage to convince us that it has? I suppose it could resort to going on strike until we grant it recognition, but then that could just be part of the programme...
This also raises the interesting question, are we just running in a programme?