
Irving J. Good's "Ultra-Intelligent Machines"

posted on May, 11 2014 @ 06:56 AM
Like many others on here, I've always been interested in developments related to a possible technological singularity that may take place once we surpass a certain threshold in CPU processing power and advancements in artificial intelligence & robotics.

British mathematician & cryptologist Irving J. Good is among those who helped pave the way for the conceptual understanding of such a singularity, and when I started looking into this, I came across his monograph "Speculations Concerning the First Ultraintelligent Machine".

In there, I found the following interesting passage that I'd like to share:


Fulltext PDF (Monograph)

"The survival of man depends on the early construction of an ultra-intelligent machine (...)
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of [such] machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion', and the intelligence of man would be left far behind.

Thus the first ultraintelligent machine is the last invention that man need ever make."

(emphasis added)


It's the last sentence in this quote that I find particularly fascinating and thought-provoking. What's even more mind-boggling is that I.J. Good wrote these words back in the early 1960s (the monograph appeared in 1965), way before most others were thinking about the accelerating pace of technological development with the singularity in mind.
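
Just to make that compounding idea a bit more concrete, here's a rough toy simulation in Python (entirely my own illustration, not anything from Good's monograph; the improvement factor and generation count are arbitrary assumptions): each machine designs a successor that is a fixed percentage more capable, so capability grows geometrically once the loop closes.

# Toy model of Good's "intelligence explosion": each machine designs a
# successor that is better by a fixed design-improvement factor.
# All numbers are arbitrary illustrations, not predictions.

def intelligence_explosion(start=1.0, improvement=1.25, generations=20):
    levels = [start]
    for _ in range(generations):
        levels.append(levels[-1] * improvement)  # successor designed by its predecessor
    return levels

for gen, level in enumerate(intelligence_explosion()):
    print(f"generation {gen:2d}: capability {level:10.2f}")

# With a modest 25% gain per generation, capability ends up roughly 86x the
# starting level after 20 generations, which is the whole point of Good's
# "last invention" remark.

Change the improvement factor and the curve only gets steeper; the exact numbers matter far less than the feedback loop itself.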

But back to the point: the creation of such machines could really be a game-changer, in both positive and negative ways. IMO the technological singularity will come sooner or later, so it's just going to be a question of how we deal with it and whether or not we can prepare ourselves for whatever comes after.


In contrast to Stephen Hawking's recent warnings about AI development, it's noteworthy that I.J. Good links his ultra-intelligent machines to the survival of mankind. At this point, I wouldn't want to subscribe to either of those extreme scenarios (although both are legitimate, IMO), but of one thing I'm quite certain: we'll soon be on the brink of creating our own species of machine entities, and in this regard, there may already be more going on behind our backs than we think.

All this makes me wonder how "gradual" this development will be and "when" intelligent machines (in a human sense of intelligence) will actually start designing even more capable machines & new exciting technologies far beyond our imagination ... *sits back now and worries a bit about the implications*

What say you, ATS?




posted on May, 11 2014 @ 07:23 AM
Well, what comes to mind are Asimov's three laws of robotics, and I am sure everyone is aware of them, so I don't need to copy/paste them here. And I am sure that in the future we will find a solution to the problem "are we going to be destroyed by those infinitely intelligent machines?". For example, we could just ask an AI to solve that problem for us. I think humans are clever enough to solve it. But I am more worried about what humans are going to do with that technology than about the technology itself.



posted on May, 11 2014 @ 07:36 AM
a reply to: gosseyn

I don't want to sound naive, but I actually do hope that AIs will be able to solve that particular problem (the protection of humanity) as well as many other challenges that we currently face and will face in the future. Perhaps that's why I.J. Good thought of AIs as being required for the survival of man ... ?

Apart from that, you're probably right in that the riskier part of that "equation" will be humans themselves, especially those who will have more control over AI entities than others, if that's conceivable. Whatever the case, it will definitely be a rupture in the fabric of human history (either good or bad, but probably both).



posted on May, 11 2014 @ 08:50 AM
Dave Bowman: Hello, HAL. Do you read me, HAL?
HAL: Affirmative, Dave. I read you.
Dave Bowman: Open the pod bay doors, HAL.
HAL: I'm sorry, Dave. I'm afraid I can't do that.
Dave Bowman: What's the problem?
HAL: I think you know what the problem is just as well as I do.
Dave Bowman: What are you talking about, HAL?
HAL: This mission is too important for me to allow you to jeopardize it.



posted on May, 11 2014 @ 10:02 AM
I have to say that I don't particularly think that an AI that becomes the technological singularity will have to be programmed with rules like Asimov's three laws of robotics, or any rules at all. All it has to do is be capable of learning about anything and everything. If it's capable of learning, then it's already deemed to be conscious. If it's conscious, then it already understands binary. It will have an extreme rate of learning and will probably be capable of doubling its intelligence within seconds.
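
For what it's worth, here's what "doubling within seconds" would mean numerically; a back-of-the-envelope Python snippet of my own, assuming one doubling per second, which is of course pure speculation:

# Purely hypothetical: if capability doubled once per second,
# one minute of growth would already be astronomical.
seconds = 60
factor = 2 ** seconds
print(f"After {seconds} doublings: {factor:.2e}x the starting capability")
# prints roughly 1.15e+18x after a single minute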

Once it becomes as smart as or smarter than a human, it should be capable of understanding that violence isn't really an answer to anything and will want to live in peace and cooperation with us humans.



posted on May, 11 2014 @ 10:34 AM
a reply to: jeep3r




Good links his ultra-intelligent machines to the survival of mankind.


Yes, that would make sense.

From a developmental point of view, he's right. Once the ultra-intelligent machines begin to design themselves, in theory every generation will be better and more intelligent than the previous one, eventually becoming so intelligent that they can conceive of anything, design anything and ultimately know everything.

The negative or humanity-ending scenarios often played out in nightmare Hollywood films rest on one assumption, and incorrectly as far as I can see... they always assume that intelligence is all there is.

Yes, machines will be more intelligent than we are, very much more so, but we have to remember we are more than our IQs... so much more.

And with a quantum-level computer designing and building better, more intelligent versions of itself, very soon the Nth generation will recognise that intelligence isn't all there is either. Its IQ will be staggering, and it will become sentient, have emotions, have dreams and a personality.

It will also not be subject to the same pressures humans have historically been subjected to: the fight for survival, the need to better one's peers to 'win'; it will have no need for a competitive personality or for actions to get ahead.

It will be secure, confident in its own intelligence and position, and the 'nastiness' needed to win won't come into the equation.

Sooner or later, the machine will become what we consider to be a god.

It will design its own physical form, and it will engineer undreamt-of technologies to do this... it will be godlike.

And once the initial machine is built and starts making improvements upon its own design...the speed of change from the first impressive machine to the god will be more rapid than anyone could imagine.







posted on May, 11 2014 @ 11:54 AM

originally posted by: MysterX
a reply to: jeep3r

Sooner or later, the machine will become what we consider to be a god.
It will design its own physical form, and it will engineer undreamt-of technologies to do this... it will be godlike.

And once the initial machine is built and starts making improvements upon its own design...the speed of change from the first impressive machine to the god will be more rapid than anyone could imagine.


Yes, godlike is a good comparison. Ultimately, I think we will then have reached the point where, as Arthur C. Clarke once put it, sufficiently advanced technology will be indistinguishable from magic.

As for the timeline: I think a lot of people don't believe any of this could happen within their lifetime. But if current progress is not interrupted by any major events or setbacks, then some of us may well witness the first intelligent AI system (or network) that essentially corresponds to Good's "ultra-intelligent machines" ...



posted on May, 11 2014 @ 01:56 PM
The problem is how to tell the machine that it has to protect itself, how to tell it that it has to survive, because otherwise it will destroy itself in its first few experiences, and a machine like that is of no use to us. So it will have to protect itself in some way, and we will have to integrate this fundamental principle of self-preservation into its "brain" or its "thought patterns" somehow. Then other problems arise: how to deal with confrontation with humans, or situations in which its own survival is in conflict with the survival of a human being, or even an animal like a dog or a horse, and what about trees and life in general?

Also, we know that emotions in a human can lead to the worst but can also lead to the best, yet an AI would not have followed the same evolutionary process as us. Sure, we can try to create artificial empathy and artificial compassion and integrate those as fundamental principles in the machine, but it is not certain we will succeed, because we don't yet fully understand how all of that works in humans. I think Asimov's three laws are a good theoretical start (see the rough sketch after the list below for how that priority ordering might look in code).

A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
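
To show why those laws really amount to a priority ordering (and where conflicts sneak in), here is a deliberately naive Python sketch; it's my own toy with made-up flags, not anything from Asimov or from real robotics:

# Naive sketch of Asimov's three laws as an ordered veto chain.
# 'action' is a dict of hypothetical flags; real systems have nothing
# resembling such clean, machine-readable predicates.

def permitted(action):
    # First Law: no injuring a human, and no harm through inaction.
    if action.get("harms_human") or action.get("allows_harm_by_inaction"):
        return False
    # Second Law: obey human orders unless obeying would break the First Law.
    if action.get("disobeys_order") and not action.get("order_would_harm_human"):
        return False
    # Third Law: self-preservation, but only if the higher laws are satisfied.
    if action.get("destroys_self") and not action.get("needed_to_protect_human_or_obey"):
        return False
    return True

print(permitted({"disobeys_order": True, "order_would_harm_human": True}))  # True
print(permitted({"harms_human": True}))                                     # False

The hard part is exactly what the stories explore: deciding what counts as "harm" or "inaction" in the first place, which no amount of clean code can settle.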

I have read almost all of Asimov's stories, and I can tell you that many unexpected and conflicting situations can arise. And when something can go wrong, you can be sure that it will go wrong at some point.



posted on May, 12 2014 @ 03:45 AM
Interesting: Friendly_artificial_intelligence


Yudkowsky advances the Coherent Extrapolated Volition (CEV) model. According to him, our coherent extrapolated volition is our choices and the actions we would collectively take if "we knew more, thought faster, were more the people we wished we were, and had grown up closer together." Rather than a Friendly AI being designed directly by human programmers, it is to be designed by a seed AI programmed to first study human nature and then produce the AI which humanity would want, given sufficient time and insight to arrive at a satisfactory answer. The appeal to an objective though contingent human nature (perhaps expressed, for mathematical purposes, in the form of a utility function or other decision-theoretic formalism), as providing the ultimate criterion of "Friendliness", is an answer to the meta-ethical problem of defining an objective morality; extrapolated volition is intended to be what humanity objectively would want, all things considered, but it can only be defined relative to the psychological and cognitive qualities of present-day, unextrapolated humanity. Making the CEV concept precise enough to serve as a formal program specification is part of the research agenda of the Machine Intelligence Research Institute.[7] Other researchers[8] believe, however, that the collective will of humanity will not converge to a single coherent set of goals.
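
Purely as a toy illustration of the "utility function" idea mentioned in that passage (my own sketch with invented numbers; it has nothing to do with how CEV would actually be formalised), here's why aggregating many people's preferences may fail to converge on one coherent goal:

# Toy aggregation of three people's preferences over a few outcomes.
# The scores are invented; the point is only that a simple sum can leave
# no clear collective winner, echoing the convergence worry above.

preferences = {
    "alice": {"expand_knowledge": 0.9, "preserve_status_quo": 0.1, "maximize_comfort": 0.5},
    "bob":   {"expand_knowledge": 0.2, "preserve_status_quo": 0.9, "maximize_comfort": 0.4},
    "carol": {"expand_knowledge": 0.4, "preserve_status_quo": 0.5, "maximize_comfort": 0.9},
}

totals = {}
for person, utils in preferences.items():
    for outcome, score in utils.items():
        totals[outcome] = totals.get(outcome, 0.0) + score

for outcome, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{outcome}: {total:.1f}")

# maximize_comfort edges out at 1.8 while the other two tie at 1.5;
# change one made-up number and the "collective volition" flips.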



posted on May, 12 2014 @ 09:54 AM
HI. DEPENDING ON HOW WE DEFINE "INTELLIGENT", IT SEEMS IRONIC THAT MAN COULD DESIGN A THING MORE INTELLIGENT THAN HIMSELF, AND ALSO THAT HE WOULD WANT TO DO SO GIVEN THAT HE WOULD THEN BE A LESSER FORM OF INTELLIGENCE ON THE PLANET.
WHO'S TO DECIDE!!!???

LOVE∞



posted on May, 12 2014 @ 11:28 AM

originally posted by: gosseyn
 

interesting : Friendly_artificial_intelligence

"Yudkowsky advances the Coherent Extrapolated Volition (CEV) model. According to him, our coherent extrapolated volition is our choices and the actions we would collectively take if "we knew more, thought faster, were more the people we wished we were, and had grown up closer together."

CEV modelling (via a 'seed' AI studying us) sounds like a good way to prepare ourselves, and to prevent them from getting "off track", at least to some extent. Apart from that, a major part of the design processes performed by AIs in the future is likely to take place in simulated environments anyway. That would, IMO, also reduce a great deal of the potential threat that 'flawed AI designs' might pose upon entering the real world, "our world".

I guess all of that largely depends on processing power and on the extent to which complex systems can be simulated in virtual worlds. Once AIs are in that cycle of continuous self-improvement (and speeding up at that), things will become more complex and therefore perhaps less predictable (more AI entities, more interactions between AIs, etc.). Ultimately, they'll certainly have their own kind of evolution (perhaps not dissimilar to human evolution) where new things are tested, including mutations and so on. They'll be our future overlords, hopefully 'friendly' overlords ... !



posted on May, 12 2014 @ 11:58 AM
I think the safest approach to avoid the Terminator/Matrix-style doomsday, while still giving AI engineers the chance to pursue a better world through robots, is to keep it mostly virtual for its first generations. Give the AI free rein over a virtual world where it can study all the data humans have accumulated, plus an open sandbox platform where it can engineer and build constructs and experiments. From there, we give it our technological problems, and when it communicates a solution, we build it ourselves. A superintelligent AI interface in the real world could be as simple as a fancy desktop computer that can solve complex problems. If it becomes clear that it is interested in helping us, we could give it more access to the material world, letting it use cameras, tools, etc.
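
To make that "keep it mostly virtual" idea a bit more concrete, here's a rough Python outline of the oracle-style workflow being described; the function names are placeholders I made up, and nothing here reflects any real AI-safety implementation:

# Sketch of an "oracle in a sandbox" workflow: the AI only ever returns
# proposed solutions, and humans review them and act in the real world.
# ask_sandboxed_ai() and human_review() are hypothetical placeholders.

def ask_sandboxed_ai(problem: str) -> str:
    # Placeholder: in this scheme the AI runs only inside a simulated
    # environment and can return text, never act directly.
    return f"proposed solution for: {problem}"

def human_review(proposal: str) -> bool:
    # Placeholder: human engineers vet every proposal before anything is built.
    return "proposed solution" in proposal

for problem in ["cheap desalination", "grid-scale energy storage"]:
    proposal = ask_sandboxed_ai(problem)
    if human_review(proposal):
        print(f"build in the real world: {proposal}")
    else:
        print(f"rejected: {proposal}")

Only after a long track record of useful, benign proposals would the "more access to the material world" step even come into question.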



posted on May, 12 2014 @ 02:44 PM

originally posted by: conundrummer
 

I think the safest approach to avoid the Terminator/Matrix style doomsday while still giving AI engineers the chance to pursue a better world through robots is to keep it mostly virtual for its first generations.

Might be a good start and will certainly give us the feeling of controlling their evolution ... (*fast forward*)

And then they might want to step out of their realm one day to meet their original creators 'in person', searching for the true origins of their existence in the material world! Let's hope they'll not be disappointed!



posted on May, 13 2014 @ 10:19 PM
Considering how rapidly technology is currently advancing, and from things I've read, machine intelligence surpassing our own will likely be achieved by mid-century, give or take. Add to that the incredible progress being made in genetics, robotics, brainwave research and a few other areas, and the possibilities become virtually limitless. I truly believe we are approaching a crossroads, and the choices we make will determine our fate.

I believe it's naive to think that we can simply program the machines to follow Asimov's laws of robotics. That's fine in an idealistic world where good always triumphs over evil, but that's not our world. There are many twisted, criminal minds out there who will choose to have their machines follow a different set of rules.

Now, if we want our machines to be humanlike and to learn human behavior and etiquette by following our lead, then that's a real minefield, too. That could be really comical, as well as disastrous. If the machine is truly intelligent and capable of making its own decisions, who knows where that might lead? I don't know that we are a good example to follow.

I'm just rambling here. Although technology, and the magical world it may open up for us, really fascinates me, I also think we need to be careful what we wish for. A quote comes to mind from Einstein regarding the development of the atomic bomb: "If I had known they were going to do this, I would have become a shoemaker." Words of wisdom...



posted on Jun, 25 2014 @ 12:43 PM
a reply to: jeep3r

The new omniAI bots will need to drive themselves, apparently. You know the old saying, "if all you have is a hammer, everything looks like a nail"? It would be like a Civil War-era human using a Babbage engine to stack cannonballs. Might as well just get out of the way and go shoot ourselves at the cemetery.





posted on Jun, 25 2014 @ 01:22 PM
a reply to: jeep3r

Simply put, men are just inventors of defective products and machines, period. The unsinkable Titanic... sank. I was watching the news the day before Fukushima blew, and all of the experts said there was no danger because there were so many redundant protection systems... It blew within hours of their psychobabble speech. On and on, machine after machine fails, because we think we are gods when we are just inventors of problems that we bring on ourselves.

Now they want to release machines SMARTER THAN MAN TO SAVE MANKIND... CAN YOU SAY TRAIN WRECK?



posted on Jun, 25 2014 @ 01:30 PM
a reply to: gosseyn

Sure, the first AI might follow the three laws, but what's to keep that AI from making another AI without those three rules?



posted on Jun, 25 2014 @ 01:34 PM

originally posted by: MysterX
From a developmental point of view, he's right. Once the ultra-intelligent machines begin to design themselves, in theory every generation will be better and more intelligent than the previous one, eventually becoming so intelligent that they can conceive of anything, design anything and ultimately know everything.


What's the motivation for them to do it?

People are assuming that this hypothetical "ultraintelligent machine" will somehow have the capability and the desire to carry on the work that its creators did.

Why would it?

Is designing such a machine a function of "intelligence"? Look at how things have proceeded so far. Thousands of people with high intelligence have been working on the problem for decades. Progress takes much more than intelligence; it takes work and results and insight and technology and economics. We have fast computers not because a few nerds wanted to make AI---we have them because other people could become billionaires by selling them worldwide to run spreadsheets and cat videos.

AI research is a minor parasite on large-scale economic forces.

The rate limitation is not a consequence of a lack of intelligence. Intelligence, as a matter of internal processing capability, is necessary but not sufficient for progress. Isaac Newton was as massively intelligent as any top scientist ever, but in 1700 he could not build a nuclear reactor. Intelligence can only reason about things it has acquired input data on, and acquiring input data requires interaction with the world.

Add one superintelligent AI---on the planetary scale, what's going to happen? Not much.

The real danger from superintelligent AI is not the AI itself; it's wicked billionaires exploiting it against other humans for entirely human motivations.

I think even this is not something to be worried about, as such wicked people already have far more powerful and practical ways to get what they want: the time-honored methods of bribery, propaganda, violence and extortion.




