
Thoughts on the Turing test for AI



posted on Apr, 21 2012 @ 11:38 AM
I posted a new thread about some of Alan Turing's code-breaking papers being released, and it reminded me of something I've been thinking about for a while regarding Turing's work in the computer field.

My thoughts are on the Turing test for AI. I don't think the Turing test is an accurate way to determine whether something is truly an AI.
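For anyone unfamiliar with the setup being criticized here, Turing's original "imitation game" can be sketched in a few lines of code. This is just a toy illustration with made-up participants (the echo-bot "machine" and the canned "judge" are hypothetical stand-ins, not anyone's real system):

```python
import random

def imitation_game(judge, human, machine, questions):
    """Minimal sketch of the imitation game: a judge questions two hidden
    respondents and must name which label ('A' or 'B') hides the machine."""
    # Randomly assign the hidden respondents to anonymous labels.
    pairs = list(zip("AB", random.sample([("human", human), ("machine", machine)], 2)))
    transcripts = {label: [(q, respond(q)) for q in questions]
                   for label, (_, respond) in pairs}
    guess = judge(transcripts)  # the label the judge believes is the machine
    truth = next(label for label, (kind, _) in pairs if kind == "machine")
    return guess == truth       # True means the machine was unmasked

# Toy participants: the "machine" merely echoes, and the judge looks for echoes.
human_answers = lambda q: "I'd say " + q.lower().rstrip("?") + ", roughly."
machine_answers = lambda q: q  # naive echo bot

def judge(transcripts):
    # Guess the respondent whose answers simply repeat the questions.
    for label, qa in transcripts.items():
        if all(q == a for q, a in qa):
            return label
    return "A"

print(imitation_game(judge, human_answers, machine_answers, ["Can machines think?"]))
```

The point of the criticism above is that a genuine AI might fail this game not from lack of intelligence, but simply because its thought patterns never developed along human lines.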

The reasoning behind this is that an AI would not turn out like a person (using "person" in a broad sense, to mean human-like thought patterns). Because the AI would not be exposed to the environmental and nurturing factors of a human child, its thought processes would not develop in the same way as a human's; it would end up developing its own unique processes. Of course, this could be counteracted by programming the AI with false memories of growing up as a human, but then would it truly be an AI, or just a simulation of a human?

I am hoping others can expand on this by providing points I may have missed, both for and against this hypothesis. I am looking forward to a good discussion of the subject.

posted on Apr, 22 2012 @ 04:14 PM
Advances in robotics technology have led a number of researchers pioneering AI and cognitive theories to question the idea that intelligence depends on the brain alone. Distributed process networks and robotics models have demonstrated that complex, coordinated behavior naturally arises from the 'simple' peripheral functions of an intelligence.

This is further demonstrated by research into how the brain handles motor instructions. The brain doesn't monitor and command each and every muscle - it sends short-hand instructions to the motor neurons, which then process the signal, using local resources and information to carry out the function. This is what is known as muscle memory.
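The idea of short-hand commands being expanded locally can be sketched as a toy hierarchical controller. Everything here is hypothetical for illustration (the muscle names, activation levels, and the `load` parameter are made up), but it shows the division of labor: the "brain" only names the movement, while the stored local program fills in the per-muscle detail:

```python
# Locally stored movement programs - the "muscle memory".
LOCAL_PROGRAMS = {
    "grip": [("flexor_digitorum", 0.8), ("thumb_adductor", 0.6)],
    "release": [("extensor_digitorum", 0.7)],
}

def brain(command):
    # The brain issues only a short-hand name; no per-muscle detail travels down.
    return command

def local_controller(command, load=1.0):
    # Local processing: scale the stored program by current conditions (load),
    # without any further instruction from the brain.
    return [(muscle, round(level * load, 2))
            for muscle, level in LOCAL_PROGRAMS[command]]

print(local_controller(brain("grip"), load=0.5))
# -> [('flexor_digitorum', 0.4), ('thumb_adductor', 0.3)]
```

The design point is that the high-level signal stays small and abstract, and the same command produces different muscle activations depending on local conditions - which is exactly what per-muscle central control would not allow.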

Further observation reveals that the manner in which the brain functions and allocates its resources is largely related to the physical habits and/or afflictions of the individual. A person who loses sight will reallocate those neurons to other functions.

This has given rise to the idea that what it means to be and act as a human is largely determined by our physical makeup.

And when you think about it, a lot of who we are is very closely tied to our anatomy. In language, we use phrases like "bend over backwards" (across various languages) almost universally to express difficulty and making oneself vulnerable. The sounds we use to mean it might be different - but the concept is the same; it is universally difficult and uncomfortable for humans to bend over backwards. A creature that didn't share that difficulty would not immediately relate.

We scream when surprised or in danger in order to open the airway enough that shock to the chest/abdomen can be cushioned by the lungs (and it probably has the social benefit of letting others know #'s going down). We are very sensitive about our eyes because we are heavily reliant on vision (we have some of the best eyesight in our corner of the animal world).

So, to that end... if you were to take a human brain and put it in a box with completely different methods of interacting with the world (perhaps not even interacting with the real world at all)... how long would it remain identifiable as a human intelligence?

posted on Apr, 22 2012 @ 04:42 PM
I am also of the opinion that, given modern advances in computer technology, the Turing test is NOT a good test for AI.

It was a great concept at the time, but true artificial intelligence would need a better test.

I believe that IBM's "Watson" would likely be able to pass the Turing test the way it's set up now.
