Artificial intelligence has the potential to be as dangerous to mankind as nuclear weapons, a leading pioneer of the technology has claimed.
Professor Stuart Russell, a computer scientist who has led research on artificial intelligence, fears humanity might be 'driving off a cliff' with the rapid development of AI.
He fears the technology could too easily be exploited by the military for use in weapons, putting them under the control of AI systems.
In an editorial in Science, editors Jelena Stajic, Richard Stone, Gilbert Chin and Brad Wible, said: 'Triumphs in the field of AI are bringing to the fore questions that, until recently, seemed better left to science fiction than to science.
'How will we ensure that the rise of the machines is entirely under human control? And what will the world be like if truly intelligent computers come to coexist with humankind?'
The development of full artificial intelligence could spell the end of the human race... It would take off on its own, and re-design itself at an ever increasing rate
By Tim Urban
Note: The reason this post took three weeks to finish is that as I dug into research on Artificial Intelligence, I could not believe what I was reading. It hit me pretty quickly that what’s happening in the world of AI is not just an important topic, but by far THE most important topic for our future. So I wanted to learn as much as I could about it, and once I did that, I wanted to make sure I wrote a post that really explained this whole situation and why it matters so much. Not shockingly, that became outrageously long, so I broke it into two parts. This is Part 1—Part 2 is here.
originally posted by: mapsurfer_
Is this not what Jade Helm is about? They are already testing AI in the battlefield scene, which is pretty scary stuff considering a rogue AI system could escalate war to a whole new level. Think about a large corporation like Sony developing weapons controlled by gamers, and where exactly do you draw the line with drone technology? Apparently it knows no bounds. This has the potential to be the most lethal threat on the planet.
originally posted by: neoholographic
It ends with this:
"How will we ensure that the rise of the machines is entirely under human control?"
originally posted by: bigfatfurrytexan
a reply to: 11andrew34
"Eventually" would be an hour, maybe two. An hour, maybe two to become a God on Earth.
An AI system at a certain level—let’s say human village idiot—is programmed with the goal of improving its own intelligence. Once it does, it’s smarter—maybe at this point it’s at Einstein’s level—so now when it works to improve its intelligence, with an Einstein-level intellect, it has an easier time and it can make bigger leaps. These leaps make it much smarter than any human, allowing it to make even bigger leaps. As the leaps grow larger and happen more rapidly, the AGI soars upwards in intelligence and soon reaches the superintelligent level of an ASI system. This is called an Intelligence Explosion, and it’s the ultimate example of The Law of Accelerating Returns.
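The feedback loop described above can be sketched as a toy simulation. This is purely illustrative: the function name, the starting level, and the `gain_rate` parameter are arbitrary assumptions, not measurements of anything real. The one property it captures is the quoted argument's core claim: when each leap scales with current intelligence, growth compounds exponentially rather than linearly.

```python
# Toy model of an "intelligence explosion" (recursive self-improvement).
# All values are arbitrary illustrations chosen for this sketch.

def intelligence_explosion(start=1.0, gain_rate=0.5, steps=10):
    """Return the intelligence level after each improvement cycle.

    Each cycle, the leap is proportional to the current level:
    a smarter system makes a bigger leap (accelerating returns).
    """
    level = start
    history = [level]
    for _ in range(steps):
        level += gain_rate * level  # leap size scales with intelligence
        history.append(level)
    return history

levels = intelligence_explosion()
# After n cycles the level is start * (1 + gain_rate)**n — exponential,
# which is why the quoted passage describes the curve as "soaring".
```

If the leap were instead a fixed constant per cycle (no feedback), the same loop would produce only linear growth; the proportional term is what turns steady improvement into an explosion.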
Your words are well chosen in my opinion.
It's the most realistic, well-written, easy to understand piece I've seen on the ramifications of smarter-than-human artificial intelligence...