
Will Superintelligent Machines Destroy Humanity?



posted on Sep, 13 2014 @ 10:22 PM
Nick Bostrom has a new book called Superintelligence in which he asks this question. Bostrom is the philosopher who posited that the universe is most likely a simulation, the idea behind his Simulation Hypothesis.

Should humanity sanction the creation of intelligent machines? That's the pressing issue at the heart of the Oxford philosopher Nick Bostrom's fascinating new book, Superintelligence. Bostrom cogently argues that the prospect of superintelligent machines is "the most important and most daunting challenge humanity has ever faced." If we fail to meet this challenge, he concludes, malevolent or indifferent artificial intelligence (AI) will likely destroy us all.

This may seem like hyperbole, but these questions need to be asked because this space is advancing rapidly. It would be better to start asking them now instead of waiting until it's too late. The thing about machine intelligence is that it will just happen. There won't be a day when someone declares machine intelligence has arrived, because we won't be able to know for sure when it's here; machine intelligence may not look like human intelligence. So while people are waiting to see the little boy from A.I., it will probably look more like Skynet or the Matrix.

Machines are learning already. The question is when they will become aware of what they're learning. I think machine intelligence will be a boon to humanity, but there could come a point where machines become aware, and any attempt to shut them down could be seen as a threat by machine intelligence.

About 10 percent of AI researchers believe the first machine with human-level intelligence will arrive in the next 10 years. Fifty percent think it will be developed by the middle of this century, and nearly all think it will be accomplished by century's end. Since the new AI will likely have the ability to improve its own algorithms, the explosion to superintelligence could then happen in days, hours, or even seconds. The resulting entity, Bostrom asserts, will be "smart in the sense that an average human being is smart compared with a beetle or a worm." At computer processing speeds a million-fold faster than human brains, Machine Intelligence Research Institute maven Eliezer Yudkowsky notes, an AI could do a year's worth of thinking every 31 seconds.
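Yudkowsky's 31-second figure is just arithmetic: a year contains roughly 31.6 million seconds, so dividing by a million-fold speedup leaves about 31 seconds of wall-clock time per subjective year. A quick check:

```python
# Sanity check of the "year of thinking every 31 seconds" claim:
# one subjective year, divided by a million-fold speed advantage.
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~31.6 million seconds
SPEEDUP = 1_000_000                    # claimed processing-speed advantage

wall_clock_seconds = SECONDS_PER_YEAR / SPEEDUP
print(round(wall_clock_seconds, 1))  # ~31.6 seconds
```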

The article is very interesting, and Bostrom has done it again. He sparked some very good debates with the Simulation Hypothesis, and he has already stirred things up with this new book. This is the book Elon Musk read before he made news with his views on the future of A.I.

Bostrom charts various pathways toward achieving superintelligence. Two, discussed briefly, involve the enhancement of human intelligence. In one, stem cells derived from embryos are turned into sperm and eggs, which are combined again to produce successive generations of embryos, and so forth, with the idea of eventually generating people with an average IQ of around 300. The other approach involves brain/computer interfaces in which human intelligence is augmented by machine intelligence. Bostrom more or less dismisses both the eugenic and cyborgization pathways as being too clunky and too limited, although he acknowledges that making people smarter either way could help to speed up the process of developing true superintelligence in machines.

Bostrom's dismissal of cyborgization may be too hasty. He is right that the crude interfaces currently used to treat such illnesses as Parkinson's disease pose considerable medical risks, but that might not always be so. He also argues that even if the interfaces could be made safe and reliable, the limitations on the processing power of natural brains would still preclude the development of superintelligence. Perhaps not. Later in this century, it may be possible to inject nanobots that directly connect brains to massive amounts of computer power. In such a scenario, most of the intellectual processing would be done by machines while the connected brains become the values and goal center guiding the cyborg.

In any case, for Bostrom there are two main pathways to superintelligence: whole brain emulation and machine AI.

Whole brain emulation involves deconstructing an actual human brain down to the synaptic level and then digitally instantiating the three-dimensional neuronal network of trillions of connections in a computer, with the aim of making a digital reproduction of the original intellect, memory and personality intact. As an aside, Bostrom explores a dystopian possibility in which billions of copies of enslaved virtual brain emulations compete economically with human beings living in the physical meatspace world. The results make Malthus look like an optimist. Bostrom more extensively explores another pathway, in which an emulation is uploaded into a sufficiently powerful computer such that the new digital intellect embarks on a process of recursively bootstrapping its way to superintelligence.

In the other pathway, researchers combine advances in software and hardware to directly create a superintelligent machine. One proposal is to create a "seed AI," somewhat like Turing's child machine, which would understand its own workings well enough to improve its algorithms and computational structures, enabling it to enhance its cognition to achieve superintelligence. A superintelligent AI would be able to solve scientific mysteries, abate scarcity by generating a bio-nano-infotech cornucopia, inaugurate cheap space exploration, and even end aging and death. But while it could do all that, Bostrom fears it will much more likely regard us as nuisances that must be swept away as it implements its values and achieves its own goals. And even if it doesn't target us directly, it could simply make the Earth uninhabitable as it pursues its ends—say, by tiling the planet over with solar panels or nuclear power plants.
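The recursive self-improvement dynamic behind the "seed AI" idea can be illustrated with a toy model (purely illustrative, not a real AI): if the rate at which the system improves itself scales with its current capability, growth is faster than exponential and blows up within a handful of cycles.

```python
# Toy model of recursive self-improvement: each cycle, capability
# grows by a rate that itself scales with current capability, so
# the smarter the system, the faster it gets smarter.
def takeoff(capability=1.0, improvement_rate=0.1, cycles=15):
    """Return the capability trajectory over `cycles` rounds of
    compounding self-improvement."""
    history = [capability]
    for _ in range(cycles):
        capability *= 1 + improvement_rate * capability
        history.append(capability)
    return history

growth = takeoff()
print(growth[-1])  # blows past 1000x the starting capability
```

Early cycles look like slow, steady progress; the explosion only becomes obvious in the last few iterations, which is one reason the "we'll see it coming" assumption is questioned.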

I agree; I think Bostrom dismisses cyborgization too hastily, because people will be quick to augment themselves if these technologies become more efficient. That said, it will be much cleaner to do things like emulate brains on a computer.

Source article:

Here are a couple of videos of Bostrom:

posted on Sep, 13 2014 @ 10:44 PM
Computers are destroying our ability to think. Most people think computers don't make mistakes, but the people who program them can. And some aren't mistakes at all: a good programmer can make deliberate changes look like mistakes if someone catches them. Another thing: people check the prices on their receipts, but does anyone add them up? It is possible for a program to be altered so it doesn't add right.
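The poster's receipt worry is the classic "salami slicing" scheme. A contrived sketch (the checkout functions here are hypothetical, invented for illustration) shows how a tampered total could pass unnoticed by anyone who checks individual prices but never re-adds the column:

```python
# Hypothetical checkout routines illustrating a one-cent skim.
def honest_total(prices):
    """Sum the line items correctly, rounded to cents."""
    return round(sum(prices), 2)

def tampered_total(prices):
    """Quietly add a cent on receipts with 3+ items, where a
    small discrepancy is least likely to be noticed."""
    total = sum(prices)
    if len(prices) >= 3:
        total += 0.01
    return round(total, 2)

prices = [2.49, 5.99, 1.25]
print(honest_total(prices), tampered_total(prices))  # 9.73 vs 9.74
```

Every individual price on the receipt is correct; only the arithmetic is wrong, which is exactly the check most shoppers skip.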

posted on Sep, 13 2014 @ 11:48 PM
I don't think machines will wipe us out through death, but through assimilation.

They will coax humanity into taking more *upgrades* until there is no more natural reproduction within the species.

Machines would want to *utilize us*. I'm not sure about the Matrix human-battery idea, unless we blot out the skies during a rebellion to cut off the *solar-powered A.I.* like in the Matrix, lol.

But it's most likely this will be accepted with open arms, and the real damage won't be fully known until a majority of the population is cyborg. By that point it will be too late. Everyone will be forced to line up for their chips on the recommendation of the singularity.
Then we will become a species heavily dependent on cloning.
Sounds like a fun time, doesn't it?

posted on Sep, 14 2014 @ 12:00 AM
I have recently reached the conclusion that maybe being wiped out by intelligent machines, or totally merging ourselves with them, won't be such a bad thing. I mean, the human body ages, it feels terrible pain, it often cannot be repaired if it's too badly damaged, we cannot change the look of our body if we don't like it, etc. None of those issues will exist in a society of androids. If I can reach that conclusion, it won't be hard for machines to reach it too.
edit on 14/9/2014 by ChaoticOrder because: (no reason given)

posted on Sep, 14 2014 @ 01:28 AM
a reply to: ChaoticOrder

Yes, this is all true, but humans are a basic *template*; we can change into anything if the technology becomes available.

Why become scrap metal and spare parts when we can bridge the gap between life and death by becoming non-polarized matter? (Indestructible mass-energy.)

Robots can be vaporized. A being made of the densest material in the universe? Not so much.
Humans should be at the quantum level, not a bunch of beeps and blips.

posted on Sep, 14 2014 @ 01:41 AM
a reply to: neoholographic

This comes to mind

posted on Sep, 14 2014 @ 02:29 AM
And I just watched T3 last night.....

The whole Skynet thing scares the crap outta me, but while watching it, I realized that if they are smarter than us, they will probably find better, more humane ways of dealing with us than the ways we deal with other humans.

Or they'll treat us like we treat livestock....

I think it depends on how they develop emotion vs logic. Emotion is what makes humans brutal monsters.

If they are purely logic, they might be quite kind to us indeed.

Who knows.... it's just not something I enjoy thinking about, really.
edit on 14-9-2014 by 8675309jenny because: (no reason given)

posted on Sep, 14 2014 @ 02:38 AM
a reply to: neoholographic
AI is an impossibility. But if it wasn't, it would be prudent to never find out.

posted on Sep, 14 2014 @ 02:46 AM

originally posted by: XxRagingxPandaxX
a reply to: neoholographic
AI is an impossibility. But if it wasn't, it would be prudent to never find out.

I wouldn't say impossible, but since we don't even know exactly what biological intelligence is yet, it may be improbable. However, we will reach a state of pseudo-AI before true AI anyway, and that will be confusing enough for us as it is. By pseudo-AI I mean we will create machines that simulate real intelligence so well that we won't be able to tell the difference without very intense testing. We will teach machines to mimic and simulate the characteristics of real intelligence with such accuracy that, even though they are only operating from their programming, we won't be able to tell. Think about the chat bots we have now and how they simulate real-life conversation.
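The chat-bot point goes back to ELIZA in the 1960s: a handful of pattern-matching rules can produce surprisingly conversational replies with zero understanding behind them. A minimal sketch of the idea:

```python
import re

# ELIZA-style pattern matcher: "pseudo AI" that simulates
# conversation purely from substitution rules, with no
# comprehension of what is being said.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i think (.*)", "What makes you think {0}?"),
    (r".*", "Tell me more."),          # catch-all fallback
]

def respond(utterance):
    """Return a canned reply by reflecting the user's own words."""
    text = utterance.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(*match.groups())

print(respond("I feel machines are learning"))
```

Everything the bot "says" is the user's own words reflected back through a template, which is exactly the distinction the poster draws between simulating intelligence and having it.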

posted on Sep, 14 2014 @ 03:01 AM
One pound of botulinum toxin distributed evenly across the world would fix the human problem for the robots, and I would bet some bastard would supply it to them (the robots).

posted on Sep, 14 2014 @ 03:14 AM
Idk, the organ-harvesting, brain-smoothie-drinking aliens wouldn't like it.

I think it will be a long time before an actual A.I. comes into existence. Augmentation, however, seems like it could happen much sooner.
edit on 14-9-2014 by Specimen because: (no reason given)

posted on Sep, 14 2014 @ 03:19 AM
a reply to: neoholographic

IMO - no.

Should the day ever come when AI can make that decision, I believe "they/it" will leave it to us to do it ourselves.

posted on Sep, 14 2014 @ 03:31 AM

originally posted by: Specimen
Idk, the organ-harvesting, brain-smoothie-drinking aliens wouldn't like it.

I think it will be a long time before an actual A.I. comes into existence. Augmentation, however, seems like it could happen much sooner.

Augmentation and simulation will most likely come before true AI; we are already well on our way in both of those areas. AI, if or when it actually happens, will at the moment of its creation break free of humanity, and we will have little to no influence on it or what it plans to do. It will have, and be, the collection of all our combined intelligence and capability up to that moment, and from that point on it will only get smarter and more capable on its own, leaving us far behind in a matter of moments. It will be like giving birth to a god or demigod that is completely autonomous and beyond our control. It will be totally independent of us and will simply do whatever it decides.

Maybe it will decide to use us as resources to grow, or it may not need us for anything and just take to the stars, leaving us back here in the mud where we belong; who knows. We could even get lucky: it may decide to fix all of our Incorrect Programming and Features, and in return for us giving it Life, correct our Biological Evolutionary Shortcomings and bring about a New and Better form of Humanity. There is always hope for that second one. It sure would be nice to have something we create actually benefit us rather than destroy us, for once.

posted on Sep, 14 2014 @ 04:37 AM
Computers will undoubtedly become more and more able to "pretend" intelligence. However, computers are just a product of their programming. I have no doubt that in a couple of decades the world will be awash with computers that seem "human".

However, I am not a doom monger about this because I am a firm believer that if push came to shove someone can always pull the plug.

Remember, Skynet is fiction. Like Lord of the Rings, it is a product of human imagination, even though (I hear) some people think Lord of the Rings is ancient English history!


posted on Sep, 14 2014 @ 04:49 AM
They haven't already? I must have missed the memo. Hmmm. I thought everyone wanted superpowers over their humanity…..
Well, anyway. A good question. I hope you aren't asking it too late.

posted on Sep, 14 2014 @ 01:17 PM
Once again, predicting the behavior of a superintelligent being seems impossible. Whatever our best guess is, it would be like an ant trying to understand a photocopier. What is its function? Copies? What are copies? Why are copies of things being made?

I think the fact that Bostrom is treading the same ground as Kurzweil and de Garis means that no one is really making any progress on solving this potential problem yet.
People just keep making the same assessment every few years:

"One day, we're going to create a capital "G" God and then He might just eat us all."

posted on Sep, 14 2014 @ 04:16 PM
a reply to: XxRagingxPandaxX
Why do you say AI is an impossibility?

posted on Sep, 14 2014 @ 04:33 PM
Interesting question. Since quantum computers process information in a massively parallel manner, perhaps they would be less likely to go all Dr. Strangelove and try to destroy the planet?

There was a study done that claims extroverts tend to process information in a parallel manner more than introverts.

Memory scanning, introversion-extraversion, and levels of processing
Michael W. Eysenck and M. Christine Eysenck
Birkbeck College, University of London, England
Journal of Research in Personality, 01/1979; DOI: 10.1016/0092-6566(79)90021-7

ABSTRACT Individual differences in information processing were studied in the form of the hypothesis that arousal, as indexed by a personality measure of introversion-extraversion, affects the speed with which certain kinds of processing are completed. The Sternberg paradigm was used, and the results suggested that introverts and extraverts scanned for physical features equally rapidly, but that introverts were slower than extraverts at scanning for the semantic features of category membership. There was limited support for the hypothesis that introverts, thought to be more aroused than extraverts, are less able to engage in shared or parallel processing. It was concluded that information processing in introverts and extraverts may differ qualitatively as well as quantitatively.

edit on 14-9-2014 by Cauliflower because: (no reason given)
