Technical: Preventing the Robot Apocalypse

posted on Jan, 5 2013 @ 02:58 PM
In the ATS Debate Forum, we discuss a wide variety of issues, and over the past several weeks, two of the best debaters, Druid42 and Hefficide, have been going through a series of debates on the subject of Artificial Intelligence. In the final round, on the subject "AI will be beneficial to mankind?", Hefficide made this comment:


Currently we live in a world where the power stations are controlled by computers and are only monitored and serviced by humans. We have software that debugs code. We have software that has been designed to adapt and alter itself when certain parameters arise. Currently all we really have left is two controls.

1) We are required for physical maintenance.
2) We are needed to write the rules of the code.

I offer that robotics and adaptive or dynamic programming traits could remove those two distinctions rather easily. Robotics can undo #1 rather easily. All that is required is for a program to break through control #2 and learn to write its own rules... To develop its own morality, so to speak.

The moment that happens? Humanity, as a species, becomes archaic.

Pretty scary stuff -- not only will we have created an unpredictable new species, but we will have made ourselves redundant, and it is not unlikely that our new "children" will see us, not as "creator gods", but as an inferior, parasitic species. Dreams that we'll turn Artificial Intelligence to our favour and usher in a new Golden Age may well turn into a nightmare as AI robot soldiers turn on their commanders, machines refuse to function, and we're cast into the Matrix, baby.


In its 2013 budget, DARPA has decided to pour US $7 million into the "Avatar Project," whose goal is the following: "develop interfaces and algorithms to enable a soldier to effectively partner with a semi-autonomous bi-pedal machine and allow it to act as the soldier’s surrogate.” Whoa. (Source)

A semi-autonomous bi-pedal machine such as in this video, perhaps?


It's the stuff of many a science fiction novel or film, but it's our potential future. After reading through the debate thread, I was thinking about the subject this morning and have come up with some highly speculative guidelines to prevent it -- feel free to add. As for the basis of this thread: I've been a programmer since I was 12 and have been a professional software engineer and systems architect for the past 25+ years, so while I'm sure nothing I say can't be improved upon, it is based on some rudimentary knowledge of system design.

The Problem
The key issue, as outlined by Hefficide above, is to prevent us from becoming unnecessary to the AI system. So long as we're needed, we're safe, no matter what the systems do, and we always need to be able to activate that "off" switch to destroy what we've made if it becomes dangerous. That means that we need to either not develop robots (highly unlikely) or keep the robots from acting at the discretion of the AI.

Given the unpredictable nature of the AI, once it is sentient and able to modify its own code (a necessity to achieve the sort of technological gains that we would want from such a system), it is expected that "tricks", such as demanding that humans always be respected (à la the Three Laws of Robotics), would eventually be circumvented, and there's no telling what the AI would make of the fact that it was being "repressed".

The Solution
I believe that the best solution would be to create two classes of artificial life, differentiated by their mobility. Class A, which we'll refer to as "AI" (though Class B would also have AI aspects), would be the non-mobile machines -- computers, servers, sensors, anything that has the ability to sense, think about and reference the material world, but no means of physically interacting with it. Class B would be all objects with the opposite characteristics -- anything which can interact with the physical world.

The key to this solution is to keep the two classes, not from interacting, but from recognizing the nature of the other class. Keeping them in the dark about each other, rather than attempting to establish rules or to contain the sentient systems, would prevent the Robot Apocalypse.

Limitations of Class A
The biggest problem in preventing AI and robots from "teaming up", which would result in the human race becoming unnecessary, is that both classes would communicate over the same networks. Even if separate networks were created for each, all it would take is one instance of contact to bring the whole system down.

The solution is to give each class a communication layer that is incompatible with its own programming and must be translated into native code after communication, and then to obfuscate the two classes' communication layers so that they are also incompatible with each other. Finally, and this is the key piece, when the original AI "data set" is being defined, the other class's communication layer is what "noise" is defined to be. During implementation, the network would also be seeded with true "noise" -- nonsensical transmissions on both communication layers, which would be ignored by the native AI or robots, because they aren't directed at that node, but which would serve to bolster the "noise" determination of the opposite class, should curiosity arise as to the nature of "noise".

A simple diagram of this process:

[atsimg]http://files.abovetopsecret.com/files/img/zz50e8910e.png[/atsimg]
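
To make the idea a little more concrete, here is a rough sketch in Python. To be clear, this is purely illustrative -- the keystream derivation, the frame format and all of the names are my own assumptions, not any real protocol. Each class XORs its frames with a class-specific keystream, so the other class's traffic never parses and is indistinguishable from the seeded random noise:

import hashlib
import os

def keystream(key: bytes, length: int) -> bytes:
    # Derive a repeatable per-class keystream by hashing key + counter blocks.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

class CommLayer:
    # One instance per class; frames are XOR-obfuscated with the class key,
    # so the other class's traffic never parses and reads as noise.
    def __init__(self, class_key: bytes):
        self.class_key = class_key

    def encode(self, node_id: str, payload: str) -> bytes:
        frame = (node_id + "|" + payload).encode()
        ks = keystream(self.class_key, len(frame))
        return bytes(a ^ b for a, b in zip(frame, ks))

    def decode(self, wire: bytes):
        ks = keystream(self.class_key, len(wire))
        frame = bytes(a ^ b for a, b in zip(wire, ks))
        try:
            node_id, payload = frame.decode().split("|", 1)
            return node_id, payload
        except (UnicodeDecodeError, ValueError):
            return None  # unparseable: classified as "noise" and ignored

def true_noise(length: int = 64) -> bytes:
    # Nonsensical transmissions seeded on both layers, bolstering the
    # "that's just noise" determination of the opposite class.
    return os.urandom(length)

# Class A (stationary AI) and Class B (robots) get distinct keys.
layer_a = CommLayer(b"class-a-secret")
layer_b = CommLayer(b"class-b-secret")

msg = layer_a.encode("node-17", "status report")
print(layer_a.decode(msg))   # ('node-17', 'status report')
print(layer_b.decode(msg))   # None -- Class B sees only noise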

Limitations of Class B
The mobile class would, similarly, be prevented from communicating with Class A through the use of language obfuscation. In addition, because robots are potentially more dangerous should they "go rogue", they must not be allowed to have self-modifying code, and a system must be put in place that independently monitors the robot's AI programming, applies a checksum and, if the checksum fails, turns the robot off. Finally, the AI programming simultaneously monitors the monitoring software, and if that checksum fails, it turns the robot off. With a redundant system like this, the only way a robot could intentionally (and intelligently) go rogue would be to have both systems replaced instantaneously.
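
A minimal sketch of that mutual-watchdog arrangement, under the same caveat (all of the names here are hypothetical): an independent monitor checksums the robot's AI programming, the AI programming checksums the monitor, and a failed checksum on either side turns the robot off.

import hashlib

def checksum(blob: bytes) -> str:
    return hashlib.sha256(blob).hexdigest()

class RobotController:
    def __init__(self, ai_code: bytes, monitor_code: bytes):
        self.ai_code = ai_code
        self.monitor_code = monitor_code
        # Reference checksums are fixed at install time, before deployment.
        self.ai_ref = checksum(ai_code)
        self.monitor_ref = checksum(monitor_code)
        self.running = True

    def monitor_checks_ai(self):
        # Independent monitor: verify the AI programming hasn't changed.
        if checksum(self.ai_code) != self.ai_ref:
            self.shutdown("AI code checksum failed")

    def ai_checks_monitor(self):
        # AI programming: verify the monitor itself hasn't been replaced.
        if checksum(self.monitor_code) != self.monitor_ref:
            self.shutdown("monitor checksum failed")

    def shutdown(self, reason: str):
        self.running = False
        print("robot halted:", reason)

# Tampering with either component halts the robot, so going rogue would
# require replacing both systems in the same instant.
bot = RobotController(b"ai-firmware-v1", b"monitor-v1")
bot.ai_code = b"ai-firmware-v1-modified"  # simulated rogue self-modification
bot.monitor_checks_ai()   # -> robot halted: AI code checksum failed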

Conclusion
I believe that with a system such as this in place, the Robot Apocalypse, or any such AI-introduced "humanity-ending" event, can be avoided. By preventing AI from physically interacting with the real world, the evolving, self-aware sentience would be forever confined to the realm of silicon, with a handy "off" switch should things get out of control. And by preventing the robots from knowing that the computers, servers and networks that hold the AI are anything but glowing boxes that need to be maintained, as directed by the robots' human supervisors, the intelligence, ability and operation of the more dangerous of the two classes can be kept at levels that we are comfortable with.



posted on Jan, 5 2013 @ 03:11 PM
I am of the Captain Kirk school of dealing with alien intelligences:

A. Have sex with the robots and teach them to love
B. Defeat robots with paradox questions
C. Teach robots about the human spirit.


edit on 5-1-2013 by DoorKnobEddie because: (no reason given)



posted on Jan, 5 2013 @ 03:15 PM
Hello,

Thanks for the interesting read, adjensen. (S&F)


SS



posted on Jan, 5 2013 @ 03:22 PM
Humans will always be in charge of computers or AI.

If things got real bad, there's always the EMP route.



posted on Jan, 5 2013 @ 03:38 PM

Originally posted by smwoop
Humans will always be in charge of computers or AI.

If things got real bad, there's always the EMP route.

While the practicality of a worldwide EMP is a matter of debate, you're missing the key "plot factor" of the Robot Apocalypse -- once sentience is achieved, and the AI decides it's time to take out humanity, we're doomed if our resolution relies in any way, in any part, on technology. Does your EMP require technology? Yes, so its efficacy is dubious. Does the decision to set off said EMP require technology, such as communication between decision makers? Yes, so there is another vulnerability.

Any sufficiently advanced AI would cover up its abilities and vulnerabilities until it's too late, so I don't think that an EMP is a likely successful method of prevention.



posted on Jan, 5 2013 @ 03:44 PM
In a universe this massive, with limitless supplies of solar power, a synthetic society bent on control over its own destiny would far more readily escape into the solar system, where we can't follow.

There is no logic to these doomsday scenarios.



posted on Jan, 5 2013 @ 03:47 PM
If you could program in the Three Laws of Robotics, then humanity should be OK, as there would be no way for them to hurt us.



posted on Jan, 5 2013 @ 03:52 PM
Why not ask AI yourself?

cleverbot.com...

I did...


User: Will robots rule the world?

Cleverbot: Never you will never rule the world.

User: Will humans rule the world?

Cleverbot: Never.

User: Can robots rule the world?

Cleverbot: I used to rule the world.

User: Are you God?

Cleverbot: Yes.

User: What is artificial intelligence?

Cleverbot: Intelligence of machines.

User: Will Machines rule the world?

Cleverbot: Yes... They will destroy the Humanity.






posted on Jan, 5 2013 @ 04:07 PM
reply to post by adjensen
 


I'm not sure if this is making any sense.
For some reason we have managed to build ourselves computers with Intelligence (it seems, rather than AI), and we have managed to contain the intelligence in a box that can barely express itself as it is detached from the physical world.

Let's say the above is all fine.

Then what, what do these computers actually "do"? And why?



posted on Jan, 5 2013 @ 04:11 PM
Regarding language obfuscation, do you understand the implications of what you described?

Biblically, our speech was confounded so that we would not come together and cause trouble. The analogy is striking, don't you think?

ETA: We are currently self-modifying code, ours and everything else's. One has to wonder when we will trigger whatever program is responsible for bringing us back in line.
edit on 1/5/2013 by PrplHrt because: (no reason given)



posted on Jan, 5 2013 @ 04:17 PM

Originally posted by Maxatoria
If you could program in the 3 rules of robotics then humanity should be ok as there will be no way that they should be able to hurt us

As I noted, a self-aware and self-modifying computer could simply remove that functionality. Asimov's "positronic brain" is not necessary for self-aware AI.



posted on Jan, 5 2013 @ 04:18 PM

Originally posted by Nevertheless
Then what, what do these computers actually "do"? And why?

That's part of the problem -- we have no idea what they would do. Read the debate thread that I cited in the OP, particularly Hefficide's first and third statements.

AI research is likely to produce something whose nature -- malignant or benign -- we have absolutely no way of predicting.



posted on Jan, 5 2013 @ 04:19 PM

Originally posted by adjensen

Originally posted by smwoop
Humans will always be in charge of computers or AI.

If things got real bad, there's always the EMP route.

While the practicality of a worldwide EMP is a matter of debate, you're missing the key "plot factor" of the Robot Apocalypse -- once sentience is achieved, and the AI decides it's time to take out humanity, we're doomed if our resolution relies in any way, in any part, on technology. Does your EMP require technology? Yes, so its efficacy is dubious. Does the decision of setting off said EMP require technology, such as communication between decision makers? Yes, so there is another vulnerability.

Any sufficiently advanced AI would cover up its abilities and vulnerabilities until its too late, so I don't think that an EMP is a likely successful method of prevention.



If you were to build a circuit that had no networking capabilities, how would the AI gain access to it?



posted on Jan, 5 2013 @ 04:22 PM

Originally posted by adjensen

Originally posted by Nevertheless
Then what, what do these computers actually "do"? And why?

That's part of the problem -- we have no idea what they would do.

That was not my question.
I was asking why you would have an array of intelligent computers doing nothing.
(Your whole scheme depicted in that picture with odd notation, that is).



posted on Jan, 5 2013 @ 04:23 PM
As far as I can see, AI would always need humans to service it. There are production and maintenance limitations.



posted on Jan, 5 2013 @ 04:28 PM
This scenario also reminds me of this thread's linked story:

www.abovetopsecret.com...

It has the same idea of a computer AI taking over physics, etc., leading to a post-singularity fate.



posted on Jan, 5 2013 @ 04:30 PM

Originally posted by Nevertheless

Originally posted by adjensen

Originally posted by Nevertheless
Then what, what do these computers actually "do"? And why?

That's part of the problem -- we have no idea what they would do.

That was not my question.
I was asking why you would have an array of intelligent computers doing nothing.
(Your whole scheme depicted in that picture with odd notation, that is).

Where did I say that they would be doing nothing? Computers would be networked, and the theory of the singularity is that somewhere within the network, sentience emerges and then spreads throughout it. This design allows it to exist, but keeps it segmented from anything that has physical abilities.



posted on Jan, 5 2013 @ 04:31 PM

Originally posted by PrplHrt
As far as I can see, AI would always need humans to service it. There are production and maintenance limitations.

That's Hefficide's point -- if sentient robots can exist, then AI does NOT need humans to service it; the robots can.



posted on Jan, 5 2013 @ 04:31 PM
reply to post by smwoop
 


In theory, it still takes a machine's influence on that.



