Eugene Goostman, a computer programme pretending to be a young Ukrainian boy, successfully duped enough humans to pass the iconic test.
A programme that convinced humans that it was a 13-year-old boy has become the first computer ever to pass the Turing Test. The test — which requires that computers are indistinguishable from humans — is considered a landmark in the development of artificial intelligence, but academics have warned that the technology could be used for cybercrime.
Computing pioneer Alan Turing said that a computer could be understood to be thinking if it passed the test, which requires that a computer dupes 30 per cent of human interrogators in five-minute text conversations.
Eugene Goostman, a computer programme made by a team based in Russia, succeeded in a test conducted at the Royal Society in London. It convinced 33 per cent of the judges that it was human, said academics at the University of Reading, which organised the test.
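Turing's criterion is simply a threshold on the fraction of interrogators fooled. A minimal sketch of the check (the function name and the judge counts below are invented for illustration; the article does not report how many judges took part):

```python
def passes_turing_test(judges_fooled: int, judges_total: int,
                       threshold: float = 0.30) -> bool:
    """True if the share of interrogators fooled in five-minute
    text conversations meets Turing's 30 per cent criterion."""
    return judges_fooled / judges_total >= threshold

# Illustrative: a 33% deception rate, like Eugene Goostman's,
# clears the 30% bar (counts here are assumed, not reported).
print(passes_turing_test(judges_fooled=33, judges_total=100))  # True
```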
It is thought to be the first computer to pass the iconic test. Though other programmes have claimed successes, those included set topics or questions in advance.
A version of the computer programme, which was created in 2001, is hosted online for anyone to talk to. (“I feel about beating the turing test in quite convenient way. Nothing original,” said Goostman, when asked how he felt after his success.)
The computer programme claims to be a 13-year-old boy from Odessa in Ukraine.
Kevin Warwick, a visiting professor at the University of Reading, which organized both tests, said it was the first time a chatbot had passed an open-ended test, rather than one where topics or questions were set in advance.
New Scientist: Do you think you have passed the Turing test?
Cleverbot: What is the Turing test?
New Scientist: It's a test to determine if a machine is intelligent.
Cleverbot: Have you taken the test?
New Scientist: No, but you have – do you think you passed?
Cleverbot: I am, but I'm too lazy to do anything with it.
originally posted by: GetHyped
The Turing Test really isn't any indication of hard AI and isn't taken seriously as a test of AI within the CS community. Cleverbot passed the Turing Test, for Pete's sake.
originally posted by: neoholographic
I would like to see the protocols for the Cleverbot test if you have them. I'm looking online for them now.
Cleverbot participated in a formal Turing Test at the 2011 Techniche festival at the Indian Institute of Technology Guwahati on September 3, 2011. Out of the 1,334 votes cast, Cleverbot was judged to be 59.3% human, compared to the rating of 63.3% human achieved by human participants. A score of 50.05% or higher is often considered to be a passing grade.[5] The software running for the event had to handle just 1 or 2 simultaneous requests, whereas online Cleverbot is usually talking to around 10,000 people at once.
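The "judged to be X% human" figure is just the share of "human" votes cast. A sketch of that scoring (the vote counts below are back-derived from the reported percentages purely for illustration; only the percentages and the 50.05% passing grade are quoted above):

```python
def humanness(human_votes: int, total_votes: int) -> float:
    """Percentage of voters who judged the hidden partner to be human."""
    return 100.0 * human_votes / total_votes

PASSING_GRADE = 50.05  # threshold quoted for the Techniche 2011 event

# Assumed counts chosen to reproduce the reported percentages.
cleverbot_score = humanness(593, 1000)   # 59.3
human_score = humanness(633, 1000)       # 63.3

print(cleverbot_score >= PASSING_GRADE)  # True: counts as a "pass"
print(cleverbot_score >= human_score)    # False: still short of the humans
```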
Turing predicted that machines would eventually be able to pass the test; in fact, he estimated that by the year 2000, machines with 10 GB of storage would be able to fool 30% of human judges in a five-minute test, and that people would no longer consider the phrase "thinking machine" contradictory.[3] (In practice, from 2009 to 2012, the Loebner Prize chatterbot contestants only managed to fool a judge once,[88] and that was only due to the human contestant pretending to be a chatbot.[89]) He further predicted that machine learning would be an important part of building powerful machines, a claim considered plausible by contemporary researchers in artificial intelligence.[44]
originally posted by: GetHyped
a reply to: Mr Mask
Turing didn't foresee chat bots doing crappy pattern matching against a pre-populated database, which can easily pass the Turing test under the right circumstances. This is not the sort of AI that Turing envisioned when he devised his thought experiment. Cleverbot passed with a score of 60%, but that's not hard AI.
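The pattern-matching-against-a-database approach GetHyped describes can be sketched in a few lines: store prompts seen before with canned replies, and fuzzy-match new input against them. (The phrase table and fallback line here are invented for illustration; real systems like Cleverbot match against enormous corpora of logged conversations.)

```python
import difflib

# Toy response database: canned replies keyed by previously seen prompts.
DATABASE = {
    "hello": "Hi there! How are you?",
    "what is the turing test": "Some silly test humans like to run.",
    "how old are you": "I'm 13, I live in Odessa.",
}

def reply(user_input: str) -> str:
    """Return the canned reply for the closest stored prompt,
    falling back to a deflection when nothing matches well."""
    key = user_input.lower().strip("?!. ")
    matches = difflib.get_close_matches(key, DATABASE, n=1, cutoff=0.6)
    return DATABASE[matches[0]] if matches else "Why do you ask?"

print(reply("Hello!"))         # a stored prompt matches
print(reply("What is love?"))  # nothing close enough: deflect
```

Deflecting when no pattern matches is exactly the trick that makes such bots look conversational without understanding anything.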
Kevin Warwick, a visiting professor at the University of Reading, which organized both tests, said it was the first time a chatbot had passed an open-ended test, rather than one where topics or questions were set in advance.
Mainstream AI researchers argue that trying to pass the Turing Test is merely a distraction from more fruitful research.[41] Indeed, the Turing test is not an active focus of much academic or commercial effort—as Stuart Russell and Peter Norvig write: "AI researchers have devoted little attention to passing the Turing test."[73] There are several reasons.
First, there are easier ways to test their programs. Most current research in AI-related fields is aimed at modest and specific goals, such as automated scheduling, object recognition, or logistics. In order to test the intelligence of the programs that solve these problems, AI researchers simply give them the task directly, rather than going through the roundabout method of posing the question in a chat room populated with computers and people.
Second, creating lifelike simulations of human beings is a difficult problem on its own that does not need to be solved to achieve the basic goals of AI research. Believable human characters may be interesting in a work of art, a game, or a sophisticated user interface, but they are not part of the science of creating intelligent machines, that is, machines that solve problems using intelligence. Russell and Norvig suggest an analogy with the history of flight: Planes are tested by how well they fly, not by comparing them to birds. "Aeronautical engineering texts," they write, "do not define the goal of their field as 'making machines that fly so exactly like pigeons that they can fool other pigeons.'"[73]
Turing, for his part, never intended his test to be used as a practical, day-to-day measure of the intelligence of AI programs; he wanted to provide a clear and understandable example to aid in the discussion of the philosophy of artificial intelligence.[74] John McCarthy observes that the philosophy of AI is "unlikely to have any more effect on the practice of AI research than philosophy of science generally has on the practice of science."[75]
And no program has passed that test until now.
originally posted by: ChaoticOrder
a reply to: Mr Mask
I'm fairly sure Cleverbot has passed the Turing test...
A high-powered version of Cleverbot took part alongside humans in a formal Turing Test at the Techniche 2011 festival. The results from 1,334 votes were astonishing...
Cleverbot was judged to be 59.3% human.
The humans in the event achieved just 63.3%.
"It's higher than even I was expecting, or even hoping for. The figures exceeded 50%, and you could say that's a pass. But 59% is not quite 63%, so there is still a difference between human and machine." Rollo Carpenter
www.cleverbot.com...