"The survival of man depends on the early construction of an ultraintelligent machine (...)
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of [such] machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion', and the intelligence of man would be left far behind.
Thus the first ultraintelligent machine is the last invention that man need ever make."
Good links his ultra-intelligent machines to the survival of mankind.
originally posted by: MysterX
a reply to: jeep3r
Sooner or later, the machine will become what we consider to be a god.
It will design its own physical form, and it will engineer undreamt-of technologies to do this... it will be godlike.
And once the initial machine is built and starts making improvements upon its own design...the speed of change from the first impressive machine to the god will be more rapid than anyone could imagine.
Yudkowsky advances the Coherent Extrapolated Volition (CEV) model. According to him, our coherent extrapolated volition is our choices and the actions we would collectively take if "we knew more, thought faster, were more the people we wished we were, and had grown up closer together." Rather than a Friendly AI being designed directly by human programmers, it is to be designed by a seed AI programmed to first study human nature and then produce the AI which humanity would want, given sufficient time and insight to arrive at a satisfactory answer. The appeal to an objective though contingent human nature (perhaps expressed, for mathematical purposes, in the form of a utility function or other decision-theoretic formalism), as providing the ultimate criterion of "Friendliness", is an answer to the meta-ethical problem of defining an objective morality; extrapolated volition is intended to be what humanity objectively would want, all things considered, but it can only be defined relative to the psychological and cognitive qualities of present-day, unextrapolated humanity. Making the CEV concept precise enough to serve as a formal program specification is part of the research agenda of the Machine Intelligence Research Institute. Other researchers believe, however, that the collective will of humanity will not converge to a single coherent set of goals.
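The closing objection, that humanity's collective will may not converge on a coherent goal set, can be made concrete with a classic result from social choice theory. The sketch below is purely illustrative and is not MIRI's formalism: the agents, outcomes, and rankings are invented, and "extrapolated volition" is reduced to a simple preference ordering so that pairwise majority aggregation can be tested for coherence.

```python
# Toy illustration (invented agents/outcomes, not MIRI's actual model):
# each agent's "extrapolated volition" is a ranking of outcomes, and we
# ask whether pairwise majority vote yields one coherent collective will.

rankings = {                       # hypothetical extrapolated preferences
    "agent_1": ["A", "B", "C"],
    "agent_2": ["B", "C", "A"],
    "agent_3": ["C", "A", "B"],
}

def majority_prefers(x, y):
    """True if a majority of agents rank outcome x above outcome y."""
    votes = sum(1 for r in rankings.values() if r.index(x) < r.index(y))
    return votes > len(rankings) / 2

# A coherent collective will would need an outcome that beats every other
# outcome pairwise (a Condorcet winner).  Here A beats B, B beats C, yet
# C beats A: the majority preference cycles.
outcomes = ["A", "B", "C"]
winner = next((x for x in outcomes
               if all(majority_prefers(x, y) for y in outcomes if y != x)),
              None)
print(winner)  # None: no coherent aggregate ordering exists
```

The cycle (the Condorcet paradox) is the simplest case of the non-convergence worry: even perfectly rational individual preferences need not aggregate into a single coherent collective ranking.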
originally posted by: gosseyn
Interesting: Friendly_artificial_intelligence
originally posted by: conundrummer
I think the safest approach to avoiding a Terminator/Matrix-style doomsday, while still giving AI engineers the chance to pursue a better world through robots, is to keep the AI mostly virtual for its first generations.
originally posted by: MysterX
From a developmental point of view, he's right. Once the ultra-intelligent machines begin to design themselves, in theory every generation will be better and more intelligent than the last, until a machine is intelligent enough to conceive of anything, design anything and, ultimately, know everything.
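The generation-by-generation argument above can be sketched as a toy growth model. Nothing here is a forecast: the 10% improvement factor and the generation count are arbitrary assumptions, chosen only to show why compounding self-improvement outpaces fixed-rate human-driven design.

```python
# Toy model of "each generation designs a better successor" (assumed 10%
# gain per generation; the number is arbitrary, only the curve shape matters).

def self_improving(generations, gain_per_generation=0.10):
    """Each machine designs a successor better in proportion to its own
    capability, so capability compounds geometrically."""
    intelligence = 1.0
    history = [intelligence]
    for _ in range(generations):
        intelligence *= 1 + gain_per_generation
        history.append(intelligence)
    return history

def human_designed(generations, gain_per_generation=0.10):
    """Human engineers add a fixed increment each cycle: linear growth."""
    return [1.0 + gain_per_generation * g for g in range(generations + 1)]

gens = 100
print(f"after {gens} generations: "
      f"self-improving {self_improving(gens)[-1]:.0f}x vs "
      f"human-designed {human_designed(gens)[-1]:.0f}x")
```

Under these assumptions, a hundred generations of compounding improvement yield a roughly four-orders-of-magnitude gap over the linear baseline, which is the intuition behind "more rapid than anyone could imagine".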