Should humanity sanction the creation of intelligent machines? That's the pressing issue at the heart of the Oxford philosopher Nick Bostrom's fascinating new book, Superintelligence. Bostrom cogently argues that the prospect of superintelligent machines is "the most important and most daunting challenge humanity has ever faced." If we fail to meet this challenge, he concludes, malevolent or indifferent artificial intelligence (AI) will likely destroy us all.
About 10 percent of AI researchers believe the first machine with human-level intelligence will arrive in the next 10 years. Fifty percent think it will be developed by the middle of this century, and nearly all think it will be accomplished by century's end. Since the new AI will likely have the ability to improve its own algorithms, the explosion to superintelligence could then happen in days, hours, or even seconds. The resulting entity, Bostrom asserts, will be "smart in the sense that an average human being is smart compared with a beetle or a worm." At computer processing speeds a million-fold faster than human brains, Machine Intelligence Research Institute maven Eliezer Yudkowsky notes, an AI could do a year's worth of thinking every 31 seconds.
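Yudkowsky's figure is simple arithmetic worth checking: at a million-fold speedup, one subjective year of thought compresses into about 31.5 seconds of wall-clock time. A quick sketch (the million-fold factor is the article's assumption, not a measured value):

```python
# Sanity-check the claim: how long does one subjective year take
# for a mind running a million times faster than a human brain?
SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # 31,536,000 seconds
SPEEDUP = 1_000_000                    # assumed speed advantage

wall_clock_seconds = SECONDS_PER_YEAR / SPEEDUP
print(f"{wall_clock_seconds:.1f} seconds")  # prints "31.5 seconds"
```

The "31 seconds" in the quote is just this ratio, rounded down.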
Bostrom charts various pathways toward achieving superintelligence. Two, discussed briefly, involve the enhancement of human intelligence. In one, stem cells derived from embryos are turned into sperm and eggs, which are combined again to produce successive generations of embryos, and so forth, with the idea of eventually generating people with an average IQ of around 300. The other approach involves brain/computer interfaces in which human intelligence is augmented by machine intelligence. Bostrom more or less dismisses both the eugenic and cyborgization pathways as being too clunky and too limited, although he acknowledges that making people smarter either way could help to speed up the process of developing true superintelligence in machines.
Bostrom's dismissal of cyborgization may be too hasty. He is right that the crude interfaces currently used to treat such illnesses as Parkinson's disease pose considerable medical risks, but that might not always be so. He also argues that even if the interfaces could be made safe and reliable, the limitations on the processing power of natural brains would still preclude the development of superintelligence. Perhaps not. Later in this century, it may be possible to inject nanobots that directly connect brains to massive amounts of computer power. In such a scenario, most of the intellectual processing would be done by machines, while the connected brains serve as the value and goal center guiding the cyborg.
In any case, for Bostrom there are two main pathways to superintelligence: whole brain emulation and machine AI.
Whole brain emulation involves deconstructing an actual human brain down to the synaptic level and then digitally instantiating the entire three-dimensional neuronal network, with its trillions of connections, in a computer, with the aim of making a digital reproduction of the original intellect, memory and personality intact. As an aside, Bostrom explores a dystopian possibility in which billions of copies of enslaved virtual brain emulations compete economically with human beings living in the physical meatspace world. The results make Malthus look like an optimist. Bostrom more extensively explores another pathway, in which an emulation is uploaded into a sufficiently powerful computer such that the new digital intellect embarks on a process of recursively bootstrapping its way to superintelligence.
In the other pathway, researchers combine advances in software and hardware to directly create a superintelligent machine. One proposal is to create a "seed AI," somewhat like Turing's child machine, which would understand its own workings well enough to improve its algorithms and computational structures, enabling it to enhance its cognition until it achieves superintelligence. A superintelligent AI would be able to solve scientific mysteries, abate scarcity by generating a bio-nano-infotech cornucopia, inaugurate cheap space exploration, and even end aging and death. But while it could do all that, Bostrom fears it will much more likely regard us as nuisances that must be swept away as it implements its values and achieves its own goals. And even if it doesn't target us directly, it could simply make the Earth uninhabitable as it pursues its ends—say, by tiling the planet over with solar panels or nuclear power plants.
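The "recursive bootstrapping" dynamic behind the seed-AI idea can be illustrated with a toy model. Nothing here comes from Bostrom's book: the feedback coefficient and cycle count are made-up parameters, chosen only to show why growth that feeds on itself starts slowly and then accelerates sharply.

```python
# Toy model of recursive self-improvement (illustrative only).
# Assumption: each design cycle multiplies capability by a factor
# that itself grows with current capability, so smarter systems
# improve themselves faster.

def self_improvement(capability=1.0, feedback=0.1, cycles=10):
    """Return the capability trajectory over the given number of cycles."""
    history = [capability]
    for _ in range(cycles):
        capability *= 1.0 + feedback * capability
        history.append(capability)
    return history

trajectory = self_improvement()
# Early cycles barely move (1.0 -> 1.1 -> 1.22...), but the final
# cycles jump in ever-larger steps: the "explosion" in miniature.
```

The point of the sketch is qualitative, not predictive: with any positive feedback of capability on the rate of improvement, the curve bends upward, which is why Bostrom argues the transition from human-level AI to superintelligence could be abrupt.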
originally posted by: XxRagingxPandaxX
a reply to: neoholographic
AI is an impossibility. But if it wasn't, it would be prudent to never find out.
originally posted by: Specimen
Idk, the organ harvesting, brain smoothie drinking aliens wouldn't like it.
I think it will be a long time before an actual A.I. will come into existence. Augmentation, however, seems like it could happen a lot sooner.
Memory scanning, introversion-extraversion, and levels of processing
Michael W. Eysenck
M. Christine Eysenck
Birkbeck College, University of London, England
Journal of Research in Personality 01/1979; DOI: 10.1016/0092-6566(79)90021-7
ABSTRACT Individual differences in information processing were studied in the form of the hypothesis that arousal, as indexed by a personality measure of introversion-extraversion, affects the speed with which certain kinds of processing are completed. The Sternberg paradigm was used, and the results suggested that introverts and extraverts scanned for physical features equally rapidly, but that introverts were slower than extraverts at scanning for the semantic features of category membership. There was limited support for the hypothesis that introverts, thought to be more aroused than extraverts, are less able to engage in shared or parallel processing. It was concluded that information processing in introverts and extraverts may differ qualitatively as well as quantitatively.