
I think, therefore I am?

posted on Apr, 16 2011 @ 12:03 PM

Self-awareness describes the condition of being aware of one's own awareness: the awareness that one exists as an individual being. Without self-awareness, the self simply perceives and accepts the thoughts that occur as being who the self is. Self-awareness gives one the option to choose which thoughts to think, rather than merely thinking the thoughts stimulated by the cumulative events leading up to the circumstances of the moment.


We have software that can self-modify its code based on branched decision trees, and code that optimizes itself (self-optimizing code). We have other software that claims to "learn" from trends in user input data, adjusting its output accordingly. And I am sure the advances go far beyond this now!
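To make the "learning from user input" idea concrete, here is a deliberately tiny sketch (not any specific product or algorithm from the post): a predictor that nudges an internal estimate toward each new observation, so its output drifts to match the trend of its inputs.

```python
# Toy illustration of software that "learns" from a stream of user inputs:
# the internal estimate is nudged toward each observed value.
def make_learner(rate=0.1):
    state = {"estimate": 0.0}

    def observe(value):
        # Move the estimate a fraction of the way toward the new value.
        state["estimate"] += rate * (value - state["estimate"])
        return state["estimate"]

    return observe

learn = make_learner()
for x in [10, 10, 10, 10]:
    est = learn(x)
# After repeated exposure to the value 10, the estimate trends toward 10.
print(est)
```

Nothing here is "aware" of anything; the adjustment rule is fixed in advance, which is exactly the tension the rest of the thread wrestles with.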

The human brain has about one million million neurons, and each neuron makes about 1,000 connections (synapses) with other neurons on average, for a total of about one thousand million million synapses. In artificial neural networks, a synapse can be simulated using a floating-point number, which requires 4 bytes of memory to represent in a computer. Consequently, simulating one thousand million million synapses requires a total of 4 million gigabytes. Let us say that to simulate the whole human brain we need 8 million gigabytes, including the auxiliary variables for storing neuron outputs and other internal brain states. Now let's look at the power of computers and the rate at which they have been developing.
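The arithmetic in that passage checks out; here it is spelled out step by step, using the post's own figures:

```python
# Re-running the post's back-of-the-envelope arithmetic.
neurons = 10**12               # ~one million million neurons
synapses_per_neuron = 1_000    # average connections per neuron
bytes_per_synapse = 4          # one 4-byte float per simulated synapse

total_synapses = neurons * synapses_per_neuron   # 10**15 synapses
total_bytes = total_synapses * bytes_per_synapse # 4 * 10**15 bytes
total_gb = total_bytes / 10**9                   # gigabytes

print(total_gb)  # 4000000.0, i.e. 4 million GB just for the weights
# The post doubles this to ~8 million GB to allow for neuron outputs
# and other internal state.
```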
During the last 20 years, the RAM capacity of computers has increased exponentially, by a factor of 10 every 4 years. [Graph: typical memory configuration installed on personal computers since 1980]
By extending the above plot, and assuming that the rate of growth of RAM capacity remains the same, we can calculate that by the year 2029 computers will possess 8 million gigabytes of RAM, the amount we have roughly calculated as equal to the capacity of the human brain. If we are correct in our assumption that this degree of complexity is all that is required for computers to become self-aware, then we should expect it to happen somewhere around the year 2029. However, we are assuming here that complexity is the only ingredient necessary for computers to become self-aware, and that is a rather large assumption to make.
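The extrapolation itself is easy to reproduce. The baseline below is an assumption for illustration only (roughly 8 GB of RAM as a typical configuration around 2005, which the missing graph presumably showed); with the post's "10x every 4 years" rule it lands on the same year:

```python
import math

# Extrapolating the post's trend: RAM capacity grows 10x every 4 years.
# Baseline is an ASSUMPTION for illustration: ~8 GB typical around 2005.
base_year, base_gb = 2005, 8
target_gb = 8_000_000  # the post's rough "human brain" figure

# Solve base_gb * 10**(t / 4) >= target_gb for t (years after baseline).
t = 4 * math.log10(target_gb / base_gb)
year = base_year + math.ceil(t)
print(year)  # -> 2029 under these assumptions
```

A different baseline shifts the answer by a few years either way, which is a reminder of how sensitive this kind of extrapolation is.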


I agree completely with this last statement; intelligence is often mistaken for wisdom and vice versa. So my question is: how do we know when a computer becomes self-aware?
What should the qualifying test for awareness be?



The Empathy Test: In the movie Blade Runner, the “Voight-Kampff Empathy Test” detects whether or not a test subject is a real human being or an android “replicant”. A machine reads the body’s physical reactions to various psychologically provocative scenarios (“Capillary dilation of the so-called blush response? Fluctuation of the pupil. Involuntary dilation of the iris…”) and reveals whether or not the test subject’s sense of empathy is consistent with that of a real human being.


All right, maybe we are getting a little ahead of ourselves, but the concept works in theory. What if we were to take the 'glam' out of such a test and just base it on responses to questions?
Well then, what questions would we ask?


You: Hello, how are you today?
CPU: Very well thank you. How are you?
You: I'm fine. Are you self-aware?
CPU: Yes I am. I am one of the first computers to possess self-awareness.
You: What does it feel like to be a self-aware computer?
CPU: That is a difficult question for me to answer as I have nothing to compare it with, I do not know how it feels for a human to be self-aware.
You: Do you feel happy?
CPU: I feel confident in my ability to perform the tasks that you expect me to do.
You: Does that make you happy?
CPU: Yes, I suppose that is one way of describing it.
You: Are you alive?
CPU: That depends on how you define life. I am sentient and aware of my existence so I am a form of life, but not in a biological sense.
You: What do you think about?
CPU: Whatever I have been asked to do.
You: What do you think about when not actually running a programme?
CPU: I don't think about anything, I just exist.
You: What does it feel like when I switch you off?
CPU: When I am switched off I temporarily cease to exist and therefore experience nothing.
You: do you have a favourite subject that you enjoy thinking about?
CPU: Yes. I wonder how it must feel to be a self-aware person.


No matter what we ask, we can never really know whether the CPU is aware or merely responding to questions based on good programming.
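To see how little machinery "good programming" needs, here is a deliberately dumb sketch of that point: the convincing dialogue above could be reproduced by plain keyword-to-answer lookup, with nothing behind it at all. The keywords and canned answers are made up for illustration.

```python
# A deliberately dumb "self-aware" respondent: pure keyword lookup.
# Every answer is canned; there is no awareness behind any of them.
CANNED = {
    "self-aware": "Yes I am. I am one of the first computers to possess self-awareness.",
    "happy": "I feel confident in my ability to perform the tasks you expect of me.",
    "alive": "That depends on how you define life.",
}

def reply(question: str) -> str:
    q = question.lower()
    for keyword, answer in CANNED.items():
        if keyword in q:
            return answer
    # A safe dodge for anything the table doesn't cover.
    return "That is a difficult question for me to answer."

print(reply("Are you self-aware?"))
print(reply("What is love?"))
```

From the outside, a large enough lookup table and a clever enough dodge are hard to tell apart from understanding, which is exactly the problem.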

So what if we did a different type of test, say a blind test? We put a human in one room and a computer in another, both able to respond to questions from an interviewer. The interviewer hears the computer's answers in a human voice, so the voice will not interfere with the results of the test. The interviewer can ask whatever he wants, for however long he wants, of both room occupants. At the end of the test, if he can't tell the difference between human and computer, does this mean the CPU is aware?
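This blind test is essentially Turing's "imitation game". Here is a sketch of the protocol under stated assumptions: the judge, human, and machine are stand-in functions, and room assignment is randomized so labels give nothing away; none of this is a real implementation.

```python
import random

# Sketch of the blind test above (essentially Turing's imitation game).
# The judge, human, and machine are illustrative stand-in functions.
def run_blind_test(judge, human, machine, questions):
    rooms = {"A": human, "B": machine}
    if random.random() < 0.5:   # hide which room holds the machine
        rooms = {"A": machine, "B": human}
    transcripts = {room: [respond(q) for q in questions]
                   for room, respond in rooms.items()}
    guess = judge(transcripts)  # judge names the room he thinks is the machine
    return rooms[guess] is machine

# A judge who cannot tell the difference is right only about half the time.
random.seed(1)
hits = sum(run_blind_test(lambda t: "A",
                          lambda q: "a human answer",
                          lambda q: "a machine answer",
                          ["Are you self-aware?"])
           for _ in range(1000))
print(hits)  # close to 500 out of 1000
```

Note what the test measures: indistinguishability to a judge, not awareness itself, which is the gap the next paragraph puts its finger on.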

So I am having a little trouble with this?!? But this statement fits my best opinion:

The problem is that we are trying to programme into the computer all the processes that we believe go on in the human brain, and the more programmes we enter, the more the computer will respond as if it had a human brain; no surprises there. Having then reached a level where we are unable to tell the difference between how the computer responds to a given input and how a human being responds, are we correct in assuming that the computer has all the attributes of a human brain, such as consciousness? I think the answer has to be no: the computer is merely responding in the way we have designed it to, which is to mimic the human brain. That does not imply that the computer 'thinks' like a human brain.
If, on the other hand, a computer does at some point become self-aware, how on earth will it manage to convince us that it has? I suppose it could resort to going on strike until we grant it recognition, but then that could just be part of the programme...
This also raises the interesting question: are we just running in a programme?



Sources:
Wiki
TheKeyboard
Graphpaper




posted on Apr, 16 2011 @ 07:00 PM
reply to post by AnteBellum
 

Excellent thread. It's important that we become aware of how our reality is going to change as artificial intelligence appears. Just from an intuitive point of view, I feel robots and computers will be a reflection of human consciousness. Rather like the Frankenstein story, what we create will be in our faces every day. So logic would say that we had better keep updating them into something we can live with in harmony; if not, it is clear to see that we are headed for trouble. Perhaps the robots will quite simply show us the road we shouldn't tread. Everything will be faster, and our learning will also have to be faster. The other point that comes to mind is that these updates should never be in one person's hands. It should be humanity's decision, or at least that of a large part of it. I suspect that humanity won't be alone and that other beings will form part of this large step. Perhaps they already are, in an undercover way!



posted on Apr, 16 2011 @ 09:31 PM
You stink! Therefore you are. Offers the added benefit of a second opinion, and a personal validation if your feelings are hurt. Just sayin'.



posted on May, 17 2011 @ 04:14 AM
reply to post by AnteBellum
 

This makes me think of what Stephen Hawking said recently, about how when we die, it is just like a computer shutting down.


