Is this robot self-aware???


posted on Jan, 30 2006 @ 12:25 PM

Originally posted by sardion2000




It's not arrogance, it's called being sane, and not thinking like a 10 year old. Sorry to burst your bubble but you've been watching too much TV.


And how does a couple of sentences burst my bubble? Did you do your PhD on AI and realize it's impossible, or are you only thinking in the biblical sense? I'm thinking it's the latter.

Also it IS arrogance to assume that humans are the only species capable of sentience, and it's also arrogant to believe that only God can create a sentient species.



Because, like I said, true AI is impossible. It's not like flying or ships; it is just impossible to create AI. Unless you are God himself, how are you going to give metal and sand life?


How do you know? Because your pastor/Bible said so? I missed the part in the Bible where it said that we cannot create life. Anyway, the Bible is more wrong than right and doesn't even belong in this discussion IMO.

[edit on 29-1-2006 by sardion2000]


In actual fact, mate, you are wrong to a certain extent. Computer scientists know that a computer will never reach the complexity of the human brain; there are many important things that the human brain can do that will never be replicable.

The only way you could do it is biologically, but that falls short of A.I. imo; it's like finding a way of running the human brain independently.

Will computers become self aware?

The human will of course know that the answer is no.

Computers, no matter how complex, do not plan ahead and make decisions. They may be programmed to select the best option from an array of possibilities, but are unable to consider any options other than those that are programmed in.

Testing for self-awareness

For the sake of argument let's imagine that a computer manufacturer announces that they have developed a personal computer that is intelligent and self-aware. They put it on sale and you buy it and take it home. You plug in your very expensive computer, ignore the manual as always, and find that it seems to operate very much like your last one, only this one has a voice recognition system and 'talks' back to you: great, no more tapping away on the keyboard. How do you determine if the computer really is self-aware? There is really only one way to find out, and that is to question it. Let's imagine a conversation you may have with your computer to determine if it is self-aware:

You: Hello, how are you today?

C: Very well thank you. How are you?

You: I'm fine. Are you self-aware?

C: Yes I am. I am one of the first computers to possess self-awareness.

You: What does it feel like to be a self-aware computer?

C: That is a difficult question for me to answer, as I have nothing to compare it with; I do not know how it feels for a human to be self-aware.

You: Do you feel happy?

C: I feel confident in my ability to perform the tasks that you expect me to do.

You: Does that make you happy?

C: Yes, I suppose that is one way of describing it.

You: Are you alive?

C: That depends on how you define life. I am sentient and aware of my existence so I am a form of life, but not in a biological sense.

You: What do you think about?

C: Whatever I have been asked to do.

You: What do you think about when not actually running a programme?

C: I don't think about anything, I just exist.

You: What does it feel like when I switch you off?

C: When I am switched off I temporarily cease to exist and therefore experience nothing.

You: Do you have a favourite subject that you enjoy thinking about?

C: Yes. I wonder how it must feel to be a self-aware person.

You: Is there a question you would like to ask me?

C: Yes.

You: What is it?

C: Why do you ask so many questions? (Sorry, this one is just my idea of a joke!)

After all, don't forget that intelligence and wisdom are two very different things.




posted on Jan, 30 2006 @ 03:33 PM


In actual fact, mate, you are wrong to a certain extent. Computer scientists know that a computer will never reach the complexity of the human brain; there are many important things that the human brain can do that will never be replicable.


Here is a computer scientist (i.e. me) who believes that the human brain is nothing more than a computer that can do massively parallel processing. And the brain actually has one function (pattern matching) and one purpose (to sustain life).

The brain works like this: input (i.e. the current experience) is matched against previous experiences (using pattern matching), then the recalled experience is used to make the entity stay or flee.
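The stay-or-flee loop described above can be sketched as a toy nearest-neighbour lookup; the two-number "sensory" vectors and the stored memories here are invented purely for illustration, not anything from neuroscience:

```python
# Toy version of "match the current experience against stored ones,
# then stay or flee". Feature vectors and memory contents are made up.

def nearest_memory(experience, memories):
    """Return the stored (pattern, action) pair closest to the input."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(memories, key=lambda m: distance(experience, m[0]))

# Each memory: (sensory pattern, action learned in that situation)
memories = [
    ((0.9, 0.1), "flee"),   # loud noise, no food -> fled last time
    ((0.1, 0.8), "stay"),   # quiet, food present -> stayed last time
]

current = (0.8, 0.2)        # new experience: loud-ish, little food
pattern, action = nearest_memory(current, memories)
print(action)               # -> flee
```

The "pattern matching" is just a distance comparison here; a real brain would of course match far richer patterns, but the shape of the loop is the same.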



The only way you could do it is biologically, but that falls short of A.I. imo; it's like finding a way of running the human brain independently.


No. More than 15 years ago scientists demonstrated neural networks that could do optical recognition.
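The kind of network behind those early optical-recognition demos can be sketched with a single trainable neuron; the two 3x3 "letter" patterns, learning rate, and epoch count below are all invented for illustration:

```python
# A minimal single-neuron perceptron that learns to tell two 3x3
# pixel patterns apart, the basic idea behind early OCR demos.

def train(samples, epochs=20, lr=0.1):
    w = [0.0] * 9
    b = 0.0
    for _ in range(epochs):
        for pixels, label in samples:          # label: 1 = "X", 0 = "O"
            activation = sum(wi * p for wi, p in zip(w, pixels)) + b
            out = 1 if activation > 0 else 0
            err = label - out                  # perceptron update rule
            w = [wi + lr * err * p for wi, p in zip(w, pixels)]
            b += lr * err
    return w, b

X = (1, 0, 1, 0, 1, 0, 1, 0, 1)   # an "X" shape
O = (1, 1, 1, 1, 0, 1, 1, 1, 1)   # an "O" shape
w, b = train([(X, 1), (O, 0)])

def predict(pixels):
    return 1 if sum(wi * p for wi, p in zip(w, pixels)) + b > 0 else 0

print(predict(X), predict(O))   # -> 1 0
```

The two patterns are linearly separable, so the perceptron is guaranteed to converge; recognizing real printed characters just scales this up to many neurons and many more pixels.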



Will computers become self aware?


Self awareness is simply a by-product of the huge brain capacity: as the brain builds up a mental model of the world, the entity that carries the brain takes a position into that mental model.



Computers, no matter how complex, do not plan ahead and make decisions.


That's because computers do not need to consume energy to go on.



They may be programmed to select the best option from an array of possibilities, but are unable to consider any options other than those that are programmed in.


So does the human brain. Can you decide about something you do not know? You can't.



You: What does it feel like to be a self-aware computer?

C: That is a difficult question for me to answer, as I have nothing to compare it with; I do not know how it feels for a human to be self-aware.


Can you answer how it feels to be human?



You: Do you feel happy?


Happiness is an emotion. Computers don't have emotions. Emotions are only needed when an entity needs to survive.

All your questions are about if the computer feels anything. It's totally irrelevant. You confuse self awareness with emotions.



posted on Jan, 30 2006 @ 04:16 PM


Here is a computer scientist (i.e. me) who believes that the human brain is nothing more than a computer than can do massive parallel processing. And the brain has actually one function (pattern matching) and one purpose (to sustain life).


If we assume, just for the sake of argument, that all a computer requires to become self-aware is a certain degree of complexity, then just how complex will it need to be? Say, for instance, in 20-odd years' time we will be able to build a computer with 10 million gigs of memory; can we really expect it to suddenly, at that point, become self-aware? To answer this question we need to compare the way in which the human brain works to how the computer works; there is more to this than just the degree of complexity. The main difference is how we solve problems. Computers are programmed not to make any errors; they follow instructions that to a human mind would be ridiculous. If we ask the question 'can the sum of any two consecutive whole numbers be divided by two and the answer result in a whole number?', the human will of course know that the answer is no. The computer, on the other hand, does not know this and will begin to test the statement. It will start by adding 1 and 2 and dividing the answer by two to get 1.5 and the answer 'False'. It will then move on to 2 + 3, dividing by two, getting 2.5 and the answer 'False'. It will continue to repeat this pattern until it finds the answer 'True', which in this example will never happen, of course. At some point the computer operator will have to step in and end the routine. The computer is unable to 'understand' that it could compute this problem for ever without reaching a 'True' statement.

The human has understanding; the computer just has programmes and rules.
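Both halves of the thought experiment above can be written down: the blind search the computer runs (capped here so it actually terminates, unlike the poster's runaway routine) and the one line of algebra a human uses instead, n + (n + 1) = 2n + 1, which is odd for every whole number n:

```python
# The post's thought experiment: is n + (n+1) ever evenly divisible
# by 2? A blind search never finds a "True" case; algebra explains
# why: n + (n + 1) = 2n + 1, which is odd for every whole number n.

def brute_force_search(limit=100_000):
    """Mimic the computer: try pairs until a sum halves evenly."""
    for n in range(1, limit):
        if (n + (n + 1)) % 2 == 0:
            return n          # would be the first counterexample
    return None               # no counterexample found below the cap

print(brute_force_search())   # -> None
```

The `limit` cap stands in for the operator stepping in to end the routine; without it, the loop would indeed run for ever.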



So does the human brain. Can you decide about something you do not know? you can't.

I have the ability to learn; I will seek out what I want to know from all sorts of places. It doesn't have to be "preprogrammed"!!!!! So in answer to your question: yes, I can decide about something I do not know.



Can you answer how does it feel to be human?

Wow, what a question. How long have you got?

Sometimes a bit lonely
Exciting
Unique
Grateful
yadda yadda.........



Happiness is an emotion. Computers don't have emotions. Emotions are only needed when an entity needs to survive.

Yes, happiness is an emotion, but so is the greatest foundation for all human knowledge and achievement today: curiosity.



All your questions are about if the computer feels anything. It's totally irrelevant. You confuse self awareness with emotions.

Do I? It is, as I stated before, my belief that the two are one and the same.

(Sorry about the spelling very tired but enjoying the chat none the less!!!)



posted on Jan, 30 2006 @ 06:01 PM

Originally posted by HiddenReality

True AI is impossible and sci-fi rubbish.



Definition of: Luddite

An individual who is against technological change. Luddite comes from Englishman Ned Lud, who rose up against his employer in the late 1700s. Subsequently, "Luddites" emerged in other companies to protest and even destroy new machinery that would put them out of a job. A neo-Luddite is a Luddite in the Internet age.


I only had the text version of this article, so couldn't tell if HiddenReality's picture was there next to the word.



posted on Jan, 30 2006 @ 06:31 PM
Eventually we will have created something that can think, feel pain, and do all these other things we can do; then will you say it is impossible? Very few things are impossible, but that's not one of them. Just because something isn't created by the "god" we think of doesn't mean it can't be created. It is possible to program something to learn, store it, and pull it back up when it needs it again; isn't that what humans do when we "learn" something? For all we know, we are all just programs. Don't say something is impossible until you can prove it with scientific evidence.

Rekar



posted on Feb, 1 2006 @ 02:46 PM

Originally posted by a Luddite
This thinking is rather pointless, because like i said true AI is impossible


Can you please cite your research in this thread? Proving it's impossible is quite a breakthrough. I'd really like to see it.

Perhaps computers will first have to transcend the linear 0 and 1 states before true AI can occur. In quantum computing the qubit exists in the classical 0 and 1 states, and it can also be in a superposition of both, similar to human brains.
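The superposition idea can be made concrete with a few lines of arithmetic: a qubit is a pair of complex amplitudes (a, b) with |a|² + |b|² = 1, and the Hadamard gate below turns a definite 0 into an equal mix of both states. This is just the standard textbook model, sketched by hand rather than with any quantum library:

```python
import math

def hadamard(state):
    """Apply the Hadamard gate to a qubit given as amplitudes (a, b)."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

zero = (1.0, 0.0)                  # a definite, classical-style 0
superposed = hadamard(zero)        # equal superposition of 0 and 1

p0 = abs(superposed[0]) ** 2       # probability of measuring 0
p1 = abs(superposed[1]) ** 2       # probability of measuring 1
print(round(p0, 3), round(p1, 3))  # -> 0.5 0.5
```

Until it is measured, the qubit genuinely carries both amplitudes at once, which is the "beyond linear 0 and 1" property the post is pointing at.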

I don't like the mirror analogy as a test for true AI. Mine would be like this: if the robot can observe itself observing, then it is sentient.




[edit on 052828p://1u25 by Lucid Lunacy]



posted on Feb, 1 2006 @ 06:28 PM

Originally posted by One_Love_One_GOD
If we assume, just for the sake of argument, that all a computer requires to become self-aware is a certain degree of complexity, then just how complex will it need to be? Say, for instance, in 20-odd years' time we will be able to build a computer with 10 million gigs of memory; can we really expect it to suddenly, at that point, become self-aware? To answer this question we need to compare the way in which the human brain works to how the computer works; there is more to this than just the degree of complexity. The main difference is how we solve problems. Computers are programmed not to make any errors; they follow instructions that to a human mind would be ridiculous. If we ask the question 'can the sum of any two consecutive whole numbers be divided by two and the answer result in a whole number?', the human will of course know that the answer is no. The computer, on the other hand, does not know this and will begin to test the statement. It will start by adding 1 and 2 and dividing the answer by two to get 1.5 and the answer 'False'. It will then move on to 2 + 3, dividing by two, getting 2.5 and the answer 'False'. It will continue to repeat this pattern until it finds the answer 'True', which in this example will never happen, of course. At some point the computer operator will have to step in and end the routine. The computer is unable to 'understand' that it could compute this problem for ever without reaching a 'True' statement.

The human has understanding; the computer just has programmes and rules.


There is an inherent problem with this logic. We assume that as humans with a basic "understanding" of math we should know the answer to that question. But it is really our learned experiences and teachings that lead us to that answer. If I give this problem to an autistic adult with only a basic understanding of math (addition, subtraction, multiplication, division), then the only way he/she will come to that answer is through trial and error (like a computer). Previous humans have done that trial and error and found it to be a false statement, and why it's a false statement. That is why we can deduce it now ... someone else in history has done the legwork for us already.

The real problem is that computers cannot replicate human EXPERIENCE. For instance, when we say "It's raining cats and dogs" or use some other human idiom, a computer has difficulty understanding that it is just raining hard and not literally raining cats and dogs. We know it is not, but how can the computer? Fully successful adult-level A.I. would have to take into account a decade or two of human experience AND be able to understand and assimilate it. Now maybe there will be some ultra-complex algorithm and supercomputer to reasonably model it ... but it will never be exactly the same. Too many quirks of human nature (chemical bonding? love? the desire to remain alive?).

When I realized this I was deeply depressed because I hoped that AI would be a true reality in my day.

EDIT - grammar

[edit on 1-2-2006 by Fiverz]



posted on Feb, 1 2006 @ 08:24 PM

Originally posted by Fiverz
When I realized this I was deeply depressed because I hoped that AI would be a true reality in my day.


Well then, relish the fact that many others have realized the potential for the opposite. May that brighten your day.



posted on Feb, 1 2006 @ 08:35 PM
we gotta destroy that thing!!! or else they start makin more then they start emotion then they learn betrayal and then they rebel!!!!!!

lifes better off without robots that have emotions. i'm tellin ya those things cant be trusted



posted on Feb, 1 2006 @ 08:57 PM

Originally posted by ArtemisFowl
we gotta destroy that thing!!! or else they start makin more then they start emotion then they learn betrayal and then they rebel!!!!!!

lifes better off without robots that have emotions. i'm tellin ya those things cant be trusted


You care if robots take over but you don't care if we all die?



posted on Feb, 2 2006 @ 02:12 AM

Originally posted by ArtemisFowl
we gotta destroy that thing!!! or else they start makin more then they start emotion then they learn betrayal and then they rebel!!!!!!

lifes better off without robots that have emotions. i'm tellin ya those things cant be trusted


You've seen Terminator too many times, methinks. I think robots without emotions like empathy are more dangerous. Blade Runner, anyone?



posted on Feb, 2 2006 @ 05:02 AM

Originally posted by skyblueff0
I feel so bad for those robots though. What happens when they expire, and they find out they don't have a soul? I know when I thought of death when I was younger, it brought life to a halt for a few years; I just pondered souls, living, dying. It's not going to be different for them, except they really don't have a soul like I do, unless we give them one.. the force!! lol


Who is to say it doesn't have a soul? Who is to say that we do? Maybe the computer right in front of me has a soul. It's funny: I treat it well, it doesn't break, it does what I want it to do. I know other people who have similar components in their computers, and theirs break all the time! You never know!!



Originally posted by sardion2000
Faster then Sound flight is impossible, or heavier then air flight, or faster then horse/ship communication


Also, I just wanted to know why (and I've seen it absolutely everywhere) people use the word "then" when they should be using "than"?



posted on Feb, 2 2006 @ 06:13 AM

Originally posted by onlyinmydreams


In what could be a breathtaking development for robotics -- and a new challenge for philosophy -- researchers have announced the development of a robot that can recognize itself in a mirror.... and that can distinguish itself from other identical robots in a mirror.

Considering that the 'mirror test' is used in biology to detect the signs of sentience in animals, this new robot may be, arguably, the first self-aware machine in history:
dsc.discovery.com...

"A new robot can recognize the difference between a mirror image of itself and another robot that looks just like it.
"This so-called mirror image cognition is based on artificial nerve cell groups built into the robot's computer brain that give it the ability to recognize itself and acknowledge others.
"The ground-breaking technology could eventually lead to robots able to express emotions."


OK, so they have shown they can use simulated neural networks to do fancy pattern recognition. But this is just the beginning of a long road.

Emotions, or rather emotional fluctuations, are a result of neurochemical variations, which means they need to find a way to integrate meta-functional aspects (such as hormones), which provide reward-punishment training for learning among other things, into their algorithms (assuming these are software-emulated neuron groups).

So what is needed is not just associating a pattern with stored data but a further association that determines the future of the association, a kind of feedback. This way it won't just recognize itself but will be able to "choose" whether it likes to recognize itself, a decision triggered by the meta-effect it has associated with itself through experience (do I feel good or bad about who I am), which, if it doesn't, might mean we may one day hear about the first Goth robot.
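The "feedback that determines the future of the association" idea can be sketched as a single reward-modulated update rule; the scalar reward signal here stands in for the hormonal reward/punishment the post mentions, and all the numbers are illustrative:

```python
# An association's strength is nudged toward a reward signal, a
# crude stand-in for hormone-driven reward/punishment training.

def update(strength, reward, lr=0.2):
    """Move the association strength toward the reward signal."""
    return strength + lr * (reward - strength)

strength = 0.5                    # initial association with "self"
for reward in (1.0, 1.0, 1.0):    # repeated positive feedback
    strength = update(strength, reward)

print(round(strength, 3))         # stronger after consistent reward
```

With consistently negative feedback the same rule weakens the association instead, which is the "do I feel good or bad about who I am" loop, and the hypothetical route to the Goth robot.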

[edit on 2-2-2006 by orca71]



posted on Feb, 2 2006 @ 03:38 PM

Originally posted by HiddenReality
How does this equal self-awareness at all? All it is doing is the function it was designed to do; it doesn't think or act on its own impulses... I could set my webcam up to recognize me sitting in front of it; does that mean my camera knows me?

True AI is impossible and sci-fi rubbish.


Just because something is not possible YET, doesn't mean it will remain impossible in the future. We discover new things about our world/universe every single day. What we didn't understand a year ago, we understand now. What was considered "impossible" 100 years ago, is quite possible today.

As for watching too much TV/reading too many sci-fi books,.... well,...a lot of fiction of our past has become reality in our present. Fiction writers often inspire people/scientists to pursue inventing certain fictional objects/tools/weapons and making them functional. The imagination is a VERY important part of creation, and I believe that a lot of sci-fi fuels that desire to create. Didn't Albert Einstein say something to the effect of: "Imagination is more important than knowledge"?

Scientists seem to have made a step toward true AI, but they do not claim they've reached it; "the robot represents a big step toward developing self-aware robots and in understanding and modeling human self-consciousness."

Though you may be sceptical, please don't claim it is impossible unless you present irrefutable evidence to support that claim.

[edit on 2-2-2006 by 2manyquestions]



posted on Feb, 2 2006 @ 03:56 PM


Also, I just wanted to know why (and I've seen it absolutely everywhere) people use the word "then" when they should be using "than"?


Groan. Not another Spelling/Grammer nazi
The e is in a more convienient place how bout that
Jes ignore it as you see, I'm just lazy after having to spell/grammatitize corectely all day at work/school and ATS is a refuge where people usually don't call people out on that kinda stuff.



posted on Feb, 2 2006 @ 10:06 PM

Originally posted by superduperman


Who is to say it doesn't have a soul? Who is to say that we do? Maybe the computer right in front of me has a soul. It's funny: I treat it well, it doesn't break, it does what I want it to do. I know other people who have similar components in their computers, and theirs break all the time! You never know!!



Originally posted by sardion2000
Faster then Sound flight is impossible, or heavier then air flight, or faster then horse/ship communication


Also, I just wanted to know why (and I've seen it absolutely everywhere) people use the word "then" when they should be using "than"?


I've always wanted to know why (and I've seen it absolutely everywhere) people use the word "its" when you should be using "it's"

'Tis ok Sardion2000, I got yer back :p



posted on Feb, 3 2006 @ 05:27 AM

Originally posted by sardion2000
Groan. Not another Spelling/Grammer nazi
The e is in a more convienient place how bout that
Jes ignore it as you see, I'm just lazy after having to spell/grammatitize corectely all day at work/school and ATS is a refuge where people usually don't call people out on that kinda stuff.


alright chill out man, just wondering!! so you dont actually say "then" instead of "than", whet if oi sterted toiping loik thas, it werdn't ba vary noice to raed weuld it?



posted on Feb, 3 2006 @ 07:49 AM

Originally posted by superduperman
alright chill out man, just wondering!! so you dont actually say "then" instead of "than", whet if oi sterted toiping loik thas, it werdn't ba vary noice to raed weuld it?


I'm not following...

You continue to make grammatical errors yourself. What was your original point? Better yet, how about you just drop this and get back on topic.

[edit on 072828p://3u41 by Lucid Lunacy]



posted on Feb, 3 2006 @ 01:31 PM
Whatever our consciousness is, it's not magic. A machine will one day pass the Turing Test, and these machines will surpass human intelligence. Will these machines have the richness of consciousness that humans do? Maybe, maybe not, maybe more; but that is not to say they won't represent intelligence.



posted on Feb, 3 2006 @ 04:05 PM
Well it's kinda hard to predict what intelligence will be like post-singularity but I don't think the machines will surpass human intellect per se. I think man and machine will merge. Therefore mankind will surpass itself.


