originally posted by: DexterRiley
a reply to: neoholographic
machine intelligence doesn't have to be exactly like human intelligence in order for something like Skynet to occur.
But what would drive the AI to create Skynet? Why would it feel the need to destroy humanity?
That would imply that the AI has acquired, or has been programmed with, a self-preservation instinct. I suppose it is possible that once the system has developed some sense of independence, it could conclude that humans are a threat to its existence. The timing of that seminal event, when the AI achieves self-awareness, is the open question.
Simply put, the A.I. could fear that humans will try to shut it down, it could see humans as a destructive force, or it could get along with humans perfectly well.
I guess putting safeguards in place makes sense. I suppose there's no sense in waiting until the last minute. Some AI researchers believe that the emergence of AI self-awareness is imminent. Even if they are in the minority, the possibility of a catastrophic outcome should motivate that community to at least open a dialog about it.
The point Musk is making is that there's no need to be stupid when we can talk about these things now and maybe put some safeguards in place.
They noted that self-awareness as depicted in science fiction is probably unlikely, but that there were other potential hazards and pitfalls.
Well, Elon Musk has been quoted:
Why does everyone have to call it fear-mongering when they're simply saying we need to talk about these things because the field is advancing rapidly?
Elon Musk: Artificial intelligence may be "more dangerous than nukes"
Elon Musk: ‘With artificial intelligence we are summoning the demon.’
Elon Musk's deleted message: Five years until 'dangerous' AI
I don't doubt that Elon Musk has inside information about the current state of AI technology. However, what was once considered AI is now rather widely deployed in our society. Google's purchase of DeepMind is an indication that they see a great future for AI. The Google driverless car is one example of a potential financial windfall for the corporation, and a great example of specialized AI. However, General Artificial Intelligence, or Strong AI, is exponentially more complex.
As he said, he was an early investor in DeepMind, the company Google acquired along with other A.I. companies. Google didn't buy DeepMind for 400 million, along with those other companies, because strong A.I. is 200 years away. It paid 400 million for a company with no commercial products to its name. That tells you the technology is something that has Musk worried, and he's just saying let's ask some questions.
“I think it is a mistake to be worrying about us developing malevolent AI anytime in the next few hundred years. I think the worry stems from a fundamental error in not distinguishing the difference between the very real recent advances in a particular aspect of AI, and the enormity and complexity of building sentient volitional intelligence.”
originally posted by: DexterRiley
a reply to: mbkennel
I still call general human-level AI 200 years away.
When mankind does reach that point in 200+ years, do you think Elon Musk's fears will then materialize?
DeepMind Technologies is a British artificial intelligence company. It was acquired by Google in 2014.
The company's latest achievement is the creation of a neural network that learns how to play video games in a similar fashion to humans.
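For context, the achievement being described is DeepMind's deep Q-network work, in which an agent learned to play Atari games from raw screen pixels through reinforcement learning, i.e. trial and error guided by the game score. The Python toy below is only a minimal sketch of that value-learning idea, not DeepMind's actual system: it uses a made-up one-dimensional "walk right to win" game and a plain Q-table instead of a neural network, and every name and number in it is illustrative.

import random

# Minimal tabular Q-learning sketch (illustrative only). The agent learns,
# purely from trial and error, that walking right in a 6-position corridor
# eventually earns a reward -- the same value-learning principle behind
# learning video games from score feedback.

N_STATES = 6          # positions 0..5; reaching position 5 ends the episode
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# Q[state][action_index]: current estimate of long-run reward for that choice.
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    """Hypothetical environment: reward 1 only for reaching the goal."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: usually exploit the best-known action, sometimes
        # explore; break ties randomly so the untrained agent still wanders.
        if random.random() < EPSILON or Q[state][0] == Q[state][1]:
            a = random.randrange(2)
        else:
            a = 0 if Q[state][0] > Q[state][1] else 1
        nxt, reward, done = step(state, ACTIONS[a])
        # Q-learning update: nudge the estimate toward the observed reward
        # plus the discounted value of the best action from the next state.
        target = reward + (0.0 if done else GAMMA * max(Q[nxt]))
        Q[state][a] += ALPHA * (target - Q[state][a])
        state = nxt

print("Learned Q-values:", [[round(v, 2) for v in row] for row in Q])

The real system replaced the Q-table with a convolutional neural network reading the screen and added techniques like experience replay, but the bootstrapped update at the heart of it is the same kind shown above. Note that nothing in this loop involves self-awareness; it is narrow, specialized learning of exactly the sort the thread distinguishes from strong AI.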