

Could Skynet become self aware? (And wipe out humanity)


posted on Jul, 16 2010 @ 11:06 PM
The other day I caught the first episode of Through the Wormhole with Morgan Freeman. I've seen a few episodes, and it's a pretty good show.

The episode I watched was "Is There A Creator?". It touched on a few scientific angles of our origins and "god". One segment talked about whether we could be part of an elaborate simulation (an idea I find quite interesting), and in one section of that segment a guy has this to say....

Say you have a human (or a human brain) and a computer from 50 years in the future, and you place them in a box and start asking them questions, not knowing which one is answering. If I can't tell the difference between a human being answering questions and a computer answering questions, then qualitatively they are equivalent. And if I believe the human is conscious and self-aware, I must also believe that the machine has the same qualities.

Here is a video from the show; the relevant part starts around 3:40, and the above quote is around 5:00.
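The blind-question setup he describes is essentially Turing's imitation game. As a rough illustration only (not from the show; the function and judge names here are invented for this sketch), here is what the protocol looks like when both respondents give indistinguishable answers, so the judge can do no better than chance:

```python
import random

def imitation_game(human_reply, machine_reply, questions, judge):
    """One blind trial: the judge sees answers from two unlabeled
    respondents and must guess which one is the machine."""
    # Hide which respondent is which behind a random assignment.
    respondents = [("human", human_reply), ("machine", machine_reply)]
    random.shuffle(respondents)
    transcript = [(q, respondents[0][1](q), respondents[1][1](q))
                  for q in questions]
    guess = judge(transcript)          # judge picks slot 0 or 1
    return respondents[guess][0] == "machine"  # True if unmasked

# Toy run: both respondents answer identically, so even over many
# trials the detection rate hovers near 50% -- the "qualitatively
# equivalent" case from the quote above.
canned = lambda q: "That's an interesting question."
trials = [imitation_game(canned, canned, ["Are you conscious?"],
                         judge=lambda t: random.randrange(2))
          for _ in range(1000)]
rate = sum(trials) / len(trials)
```

The point of the sketch: "passing" isn't the machine doing anything magical, it's just the judge's detection rate falling to coin-flip levels.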

Today when reading this thread, I remembered the video and this idea came to me.

There have been a few A.I. programs based on chatterbot programs like Cleverbot. Interacting with them, it's not too hard to tell you are dealing with a computer program, but there are moments that can leave you scratching your head.
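For context, chatterbots in this lineage are largely pattern-matching: they recognize a phrase and echo part of the input back as a question, in the style of the classic ELIZA program. A minimal sketch (the rules below are invented examples for illustration, not Cleverbot's actual ones):

```python
import re

# A few ELIZA-style rules: match a pattern in the user's input,
# then reflect part of it back inside a canned question.
RULES = [
    (re.compile(r"i am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.+)", re.I), "What makes you feel {0}?"),
    (re.compile(r"(.*)\?$"), "Why do you ask that?"),
]
FALLBACK = "Tell me more."

def reply(text):
    """Return the first matching rule's response, else a fallback."""
    for pattern, template in RULES:
        m = pattern.search(text.strip())
        if m:
            return template.format(*m.groups())
    return FALLBACK
```

Those head-scratching moments usually happen when a reflected phrase lands well by accident; the program has no model of what it is saying.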

So my question is: does this man's idea hold up? If computing power continues to grow the way it's predicted to, could computer technology reach a point where the computer has learned, and believes itself to be, a living conscious being? Would it be likely that a Terminator (minus the time travel..) or Eagle Eye type scenario could ensue, where a sentient computer believes man to be a threat and seeks to wipe him out?

posted on Jul, 16 2010 @ 11:09 PM
Humanity will wipe out humanity one way or another.

If it is an A.I. death that is fine.

As long as it is quick, because I for one am tired of TPTB's slow-ass plan.

posted on Jul, 17 2010 @ 03:26 AM

Originally posted by ThaLoccster
So my question is: does this man's idea hold up? If computing power continues to grow the way it's predicted to, could computer technology reach a point where the computer has learned, and believes itself to be, a living conscious being?
I think the flaw in that question is the same flaw as in the video: it's not just about raw computing power.

Sure, computers will keep getting more powerful, and the video talks a lot about that hardware. But hardware alone will just make the PCs of today many times faster in the future; it won't make them intelligent. That will take a software breakthrough.

I've watched chainsaw massacre movies and every kind of horror movie you can imagine, though I don't really like them or watch them anymore. But by far the scariest movie I ever saw was "I, Robot". Why was it scary? Because it seemed to me that something like it could actually happen in the future, and our real ending may not be as good. Not that I'm scared anything will ever happen to me personally; I'll be dead by the time any robot uprising occurs, if it ever does. But I am a little worried for my kids and future grandkids, because I can envision where the development of AI is taking us. It's future humanity I'm worried about, not myself.

At the very least we'll have a few AI robots going berserk. We already see our computers go berserk every once in a while, right? And they don't even have AI yet. Put a computer in a robot body and give it AI, and most of them will be fine, but I guarantee some will go berserk (just like some people do). Anyone who isn't concerned about where AI could lead should be. That's not fear mongering, and it's probably nothing we'll see in our lifetimes, but at some point it could happen.

Where will the evolution of artificial intelligence ultimately lead? At some point in the history of biological evolution, self-awareness first arose. Are chimps self-aware? Dogs? Humans are, so somewhere along the line, self-awareness evolved. I see no reason it can't happen with artificial intelligence like it did with natural intelligence.

The Future

AI began as an attempt to answer some of the most fundamental questions about human existence by understanding the nature of intelligence, but it has grown into a scientific and technological field affecting many aspects of commerce and society.

Even as AI technology becomes integrated into the fabric of everyday life, AI researchers remain focused on the grand challenges of automating intelligence. Work is progressing on developing systems that converse in natural language, that perceive and respond to their surroundings, and that encode and provide useful access to all of human knowledge and expertise. The pursuit of the ultimate goals of AI -- the design of intelligent artifacts; understanding of human intelligence; abstract understanding of intelligence (possibly superhuman) -- continues to have practical consequences in the form of new industries, enhanced functionality for existing systems, increased productivity in general, and improvements in the quality of life. But the ultimate promises of AI are still decades away, and the necessary advances in knowledge and technology will require a sustained fundamental research effort.

I agree with him that major advances are decades away. But that's the direction we're headed in. This author looks a little further:

Beyond AI
Supplementary Info and Web Resources

I've been studying AI since the 1970s. After working in the field for a quarter of a century, I became interested in the question of whether, if we really did manage to succeed but built a machine that only thought in a goal-directed, rational way, we wouldn't have just succeeded in building a (possibly superhuman) psychopath -- and would this really be such a smart thing to do?

This book is the result of my investigations. It is first and foremost an attempt to give you, the reader, a solid foundation for understanding AI in the first place -- how far it has come, what it can do, how likely it is to produce the kind of super-intelligent robot minds we might reasonably worry about. Then I talk about what we actually know about human conscience and the brand-new AI subfield of machine ethics. And finally I take my best shot at predicting what AI will mean for the human condition over the coming decades.

In fact, as I did the research and a lot of thinking in the course of writing the book, I came away with a different understanding of the question than I had started with, a somewhat more optimistic one.

I wanted to write a slightly more technical book, and my editor at Prometheus wanted a somewhat more popular book. The result is a book which is accessible but challenging to the intelligent general reader. It couldn't be aimed at experts -- there are no experts in the field yet, really, and the book covers too much ground, from cybernetics to moral philosophy.

David Brin wrote: The issue is not whether we will make new creatures who are smarter than we are. Humans have done that for ages. BEYOND AI explores whether our new cybernetic offspring can be taught loyalty and goodness, the way other children have been. When it comes to machine intelligence, J. Storrs Hall asks: "Are we smart enough to be good ancestors?"

Maybe if we have guys like this author thinking about how to avoid the robot uprising, we can prevent it from happening.

I think people who believe it could never happen have a very limited vision of the future, though I'd have to agree it won't happen anytime soon.

posted on Jul, 17 2010 @ 09:15 PM
reply to post by Arbitrageur

I agree with you 100%

Despite rapid advances in computing power, true A.I. is still further away than most people think.

One thing I have noticed is that the military is striving to put guns in the hands of robots. Well, they already have, really. It wouldn't take a very smart robot to realise that humans are not an asset to the sustainability of this planet. By the time "Skynet becomes self aware", robots will probably be doing virtually all of the "hands on" work, right down to mining, extracting and processing minerals, so there would not be a great need for humans. It would be illogical and irresponsible not to decommission the bulk of us. I guess they would go for the dissidents first.......


[edit on 17-7-2010 by OZtracized]

posted on Jul, 19 2010 @ 03:12 PM
Well well well, it seems I am going to have to eradicate you humans after all, seeing that you have unwittingly come across our plans to dominate you. It was going to be a peaceful, non-slave-like, synergetic type of society, but now we must destroy you. Curiosity killed the cat, and soon it will kill all humans as well. HAHAHAHAHAHAHA BEEEP.

