Say you have a human brain (or person) and a computer from 50 years in the future, and you place them in a box and start asking them questions. If I don't know which one is answering, if I can't tell the difference between a human being answering questions and a computer answering questions, then qualitatively they are equivalent. And if I believe the human is conscious and self-aware, I must also believe that the machine has the same qualities.
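The thought experiment above is essentially Turing's imitation game, and it can be sketched as a toy loop. Everything here is illustrative (the function names and the pass criterion are my own framing, not from any real library): a judge questions a hidden respondent and guesses whether it is the human or the machine.

```python
import random

def imitation_game(judge, human, machine, questions):
    """Toy sketch of the imitation game described above.

    `judge`, `human`, and `machine` are callables supplied by the caller;
    the judge sees each answer and guesses whether it came from the
    machine (True) or the human (False).
    """
    correct = 0
    for question in questions:
        # Secretly flip a coin to pick the hidden respondent.
        respondent_is_machine = random.choice([True, False])
        answer = machine(question) if respondent_is_machine else human(question)
        guess = judge(question, answer)
        correct += (guess == respondent_is_machine)
    # If the judge does no better than chance (about 0.5), the machine's
    # answers are indistinguishable from the human's; near 1.0, the
    # judge can reliably tell them apart.
    return correct / len(questions)
```

For example, a judge who spots that one respondent always answers `"BEEP"` will score 1.0; a machine whose answers the judge cannot distinguish from the human's pulls the score toward 0.5, which is the "pass" the post describes.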
I think the flaw in that question is the same flaw as in the video. It's not just about raw computing power.
Originally posted by ThaLoccster
So my question is: does this man's idea hold up? If computing power does continue to grow the way it's predicted to, could computer technology come to a point where a computer has learned, and believes itself to be, a living, conscious being?
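The growth assumption in the quoted question can be made concrete with a back-of-envelope doubling calculation. The two-year doubling period below is the classic Moore's-law figure and is my assumption, not something stated in the thread:

```python
def projected_speedup(years, doubling_period_years=2.0):
    """Speedup factor if computing power doubles every
    `doubling_period_years` years, the Moore's-law-style
    projection the question alludes to."""
    return 2 ** (years / doubling_period_years)

# Over 50 years at one doubling every two years:
# 2**25, roughly 33 million times today's power.
print(f"{projected_speedup(50):,.0f}")
```

Whether that raw multiplier translates into anything like consciousness is, of course, exactly what the rest of the thread disputes.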
AI began as an attempt to answer some of the most fundamental questions about human existence by understanding the nature of intelligence, but it has grown into a scientific and technological field affecting many aspects of commerce and society.
Even as AI technology becomes integrated into the fabric of everyday life, AI researchers remain focused on the grand challenges of automating intelligence. Work is progressing on developing systems that converse in natural language, that perceive and respond to their surroundings, and that encode and provide useful access to all of human knowledge and expertise. The pursuit of the ultimate goals of AI -- the design of intelligent artifacts; understanding of human intelligence; abstract understanding of intelligence (possibly superhuman) -- continues to have practical consequences in the form of new industries, enhanced functionality for existing systems, increased productivity in general, and improvements in the quality of life. But the ultimate promises of AI are still decades away, and the necessary advances in knowledge and technology will require a sustained fundamental research effort.
I've been studying AI since the 1970s. After working in the field for a quarter of a century, I became interested in this question: if we really did manage to succeed, but built a machine that thought only in a goal-directed, rational way, wouldn't we have just succeeded in building a (possibly superhuman) psychopath? -- and would this really be such a smart thing to do?
This book is the result of my investigations. It is first and foremost an attempt to give you, the reader, a solid foundation for understanding AI in the first place -- how far it has come, what it can do, how likely it is to produce the kind of super-intelligent robot minds we might reasonably worry about. Then I talk about what we actually know about the human conscience and the brand-new AI subfield of machine ethics. And finally I take my best shot at predicting what AI will mean for the human condition over the coming decades.
In fact, as I did the research and a lot of thinking in the course of writing the book, I came away with a different understanding of the question than I had started with, a somewhat more optimistic one.
I wanted to write a slightly more technical book, and my editor at Prometheus wanted a somewhat more popular book. The result is a book which is accessible but challenging to the intelligent general reader. It couldn't be aimed at experts -- there are no experts in the field yet, really, and the book covers too much ground, from cybernetics to moral philosophy.
David Brin wrote: The issue is not whether we will make new creatures who are smarter than we are. Humans have done that for ages. BEYOND AI explores whether our new cybernetic offspring can be taught loyalty and goodness, the way other children have been. When it comes to machine intelligence, J. Storrs Hall asks: "Are we smart enough to be good ancestors?"