Technical: Preventing the Robot Apocalypse


posted on Jan, 5 2013 @ 04:34 PM
reply to post by adjensen
 

We can limit access to resources, thereby limiting AI's ability to service itself.

Cut power, limit access to raw materials and components, etc.



posted on Jan, 5 2013 @ 05:25 PM

Originally posted by adjensen
Where did I say that they would be doing nothing?

You said that you have intelligence in a box. What is the point? What does it do? Why did you install it?
Why is it even plugged in?



posted on Jan, 5 2013 @ 06:40 PM
Nice follow up on an interesting topic.

On the contrary, I remain optimistic, due to the very constraints morality places upon such a system. The whole concept of AI hinges upon a moral self-awareness, and you cannot be self-aware without the basics of a moral system already in place.

Truly, it separates us from the other mammals. Higher brain functions aren't the key; it's the coherence of self that allows identity.

I think we are safe under the "self-identity clause" of AI.




posted on Jan, 5 2013 @ 08:04 PM

Originally posted by Nevertheless

Originally posted by adjensen
Where did I say that they would be doing nothing?

You said that you have intelligence in a box. What is the point? What does it do? Why did you install it?
Why is it even plugged in?

Here is an overview: Technological Singularity. By some philosophers' and technologists' projections, it's coming whether we like it or not, and whether we have any input on it or not.



posted on Jan, 5 2013 @ 10:56 PM
I've been working on the development of a custom neural net framework for the last few weeks. Training the neural nets works via genetic algorithms which make use of "DNA code" for the breeding and mutation processes. It's basically survival of the fittest for AI. You can blame me when self-aware robots start running around.
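To give a rough idea of the breeding-and-mutation part (a stripped-down Python sketch, nothing like the actual framework code; the tiny fixed network, the XOR stand-in task and all the parameter values here are just mine for illustration), the core loop looks something like this:

import random
import math

def make_genome(n_weights):
    # A genome ("DNA") is just a flat list of connection weights in this toy version.
    return [random.uniform(-1.0, 1.0) for _ in range(n_weights)]

def forward(w, x):
    # Tiny fixed-topology net: 2 inputs -> 2 hidden tanh units -> 1 output.
    h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

def fitness(w, samples):
    # Higher is better: negative squared error over the training samples.
    return -sum((forward(w, x) - y) ** 2 for x, y in samples)

def breed(a, b, mutation_rate=0.1):
    # Uniform crossover of two parent genomes, then occasional random mutation.
    child = [random.choice(pair) for pair in zip(a, b)]
    return [g + random.gauss(0, 0.5) if random.random() < mutation_rate else g for g in child]

samples = [((0, 0), -1), ((0, 1), 1), ((1, 0), 1), ((1, 1), -1)]  # XOR with -1/1 targets
population = [make_genome(9) for _ in range(50)]

for generation in range(300):
    population.sort(key=lambda w: fitness(w, samples), reverse=True)
    survivors = population[:10]                      # keep the fittest genomes
    population = survivors + [breed(random.choice(survivors), random.choice(survivors))
                              for _ in range(40)]    # refill the population by breeding them

print([round(forward(population[0], x), 2) for x, _ in samples])

The fittest genomes survive each generation and get crossed and mutated into the next one, which is all "survival of the fittest" really means here.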


Seriously though, in the process of building this system I've been thinking a lot about the viability of creating some sort of sentient or self-aware machine. I used to think it would be a relatively simple feat: we just needed fast enough computers. Then I got to really thinking about it and realized it's not that simple.

First of all, there are some important distinctions we need to make when we talk about "self-aware AI" and "self-learning AI". They are two completely different things. Self-learning AI does not have to be self-aware. It's rather simple to create a program which can learn and get better at things without help from a programmer.
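To make that distinction concrete, here's the kind of thing I mean by self-learning (a toy Python sketch, not taken from any real system): the program gets better at picking the best of three levers purely from its own trial and error, with nobody telling it the answer, yet there's obviously no awareness in it anywhere.

import random

# Toy "self-learning" agent: epsilon-greedy choice between three levers with
# hidden payout probabilities. It improves with experience, but it's still
# just bookkeeping and arithmetic.
true_payouts = [0.2, 0.5, 0.8]     # hidden from the agent
estimates = [0.0, 0.0, 0.0]
pulls = [0, 0, 0]

for step in range(10000):
    if random.random() < 0.1:                     # explore occasionally
        lever = random.randrange(3)
    else:                                         # otherwise exploit the best estimate
        lever = estimates.index(max(estimates))
    reward = 1 if random.random() < true_payouts[lever] else 0
    pulls[lever] += 1
    estimates[lever] += (reward - estimates[lever]) / pulls[lever]   # running average

print(estimates)   # ends up close to the hidden payout rates and favours the last lever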

But that's far from sentience or self-awareness. It's just the illusion of self-awareness because it appears to learn and adapt to new things. I realized as I was programming my neural network system, that anything I create on my computer will never become self-aware. It will adapt and change but it'll never do anything more.

At the end of the day it's going to take input and give output based on a linear deterministic system, nothing more. It all just boils down to a set of calculations. Now some of you may argue that consciousness is nothing but a set of calculations, and that may be true, but it's not a set of deterministic linear calculations.

Consciousness, at the very least, is powered by non-linear quantum calculations, carried out on immensely powerful organic machinery (the brain does have quantum components). So what this means in my mind is that we'll never develop true self-aware AI until we develop very powerful quantum computers.

True consciousness is not just a set of calculations being carried out by your brain, it's something more, something which cannot really be quantized. As long as we stick with self-learning AI and not self-aware AI we will be fine. The problem arises when they become self-aware; then there's no stopping them.

See, self-learning AI still learns what we tell it to learn, it still solves what we tell it to solve, it still switches off when we hit the button. Self-aware AI would learn whatever it wanted to learn, solve any problem it wanted to solve, and there's no guarantee it would even switch off when we hit the button.

Really the problem seems to be making sure none of our AI algorithms become self-aware in the first place. Self-learning AI systems obviously do have the capacity to adapt so much that they become self-aware through a type of natural evolutionary process. In fact I don't see any other way that self-aware AI will come about.



posted on Jan, 5 2013 @ 11:34 PM
reply to post by ChaoticOrder
 

Do you think the military would stop short of designing such a system?

I don't. I think they probably already have such a computer.

Remember, they are supposedly 20 to 30 years ahead of civilian development in any given area.



posted on Jan, 5 2013 @ 11:46 PM

Originally posted by PrplHrt
reply to post by ChaoticOrder
 

Do you think the military would stop short of designing such a system?

I don't. I think they probably already have such a computer.

Remember, they are supposedly 20 to 30 years ahead of civilian development in any given area.

Good point, they probably do have fairly powerful quantum computers. I mean even in the commercial sector we're starting to see some very basic quantum computers pop up.




posted on Jan, 5 2013 @ 11:52 PM
reply to post by ChaoticOrder
 

The next question is somewhat rhetorical.

Do you think they would use it against us?

I think they already are. There are a lot of weird things going on.



posted on Jan, 5 2013 @ 11:58 PM

Originally posted by PrplHrt
reply to post by ChaoticOrder
 

The next question is somewhat rhetorical.

Do you think they would use it against us?

I think they already are. There are a lot of weird things going on.

I'm not entirely sure how they could actually use it against us. I'm not entirely sure it would even desire to be used against us. You must remember that self-aware beings have free will and act according to their desires.



posted on Jan, 6 2013 @ 02:03 AM

Originally posted by adjensen

Originally posted by Nevertheless

Originally posted by adjensen
Where did I say that they would be doing nothing?

You said that you have intelligence in a box. What is the point? What does it do? Why did you install it?
Why is it even plugged in?

Here is an overview: Technological Singularity. By some philosophers' and technologists' projections, it's coming whether we like it or not, and whether we have any input on it or not.


I'm still not questioning AI.
I'm still asking you why you have those computers plugged into such a network. What is the purpose?



posted on Jan, 6 2013 @ 10:00 AM

Originally posted by adjensen

Originally posted by Nevertheless

Originally posted by adjensen
Where did I say that they would be doing nothing?

You said that you have intelligence in a box. What is the point? What does it do? Why did you install it?
Why is it even plugged in?

Here is an overview: Technological Singularity. By some philosophers' and technologists' projections, it's coming whether we like it or not, and whether we have any input on it or not.


While I like the premise of a "singularity", we can't get too caught up in the buzzwords used by the transhumanist groups. There's entirely too much speculation involved to say that by such and such a date event A will occur. The transhumanists' agenda is to envision and ensure a society coupled to electronics.

The mating of biology to "wetware" is a nice sci-fi theme, but there are too many real-life problems to consider a chip being easily implanted in your brain and interfaced with. The majority of the population will not want it. Those who do are liable to be labeled as an evolutionary branch, somehow different.

I could go on.



posted on Jan, 6 2013 @ 12:27 PM

Originally posted by Nevertheless
I'm still not questioning AI.
I'm still asking you why you have those computers plugged into such a network. What is the purpose?

I'm not sure why you are asking why "I" have computers plugged into a network; this isn't about me. However, why would these AI machines be networked? Because computers are networked, generally? Why are the computers that monitor and operate the power grid on the Internet? Because people are shortsighted and careless -- so it's better to assume that the AI computers would be on the network and defend from that perspective.

The alternative is to quarantine the computers that the AI is on, running their own subnet, and hope that no one ever screws up and exposes any of those systems to something that would allow access to the outside world, whether a WiFi network, a dual-networked "dumb" (thought to be dumb, anyway) PC, a USB thumb drive, etc.
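Even the "watchdog" you might bolt onto such a quarantined box only catches the screw-ups somebody thought to look for. A purely illustrative sketch (assuming a Linux machine with the psutil library; the interface name and mount-point paths are just typical defaults, not from any real design):

import psutil

# Naive air-gap check for a quarantined machine: warn if any network interface
# other than loopback is up, or if removable media shows up mounted.
for name, stats in psutil.net_if_stats().items():
    if name != "lo" and stats.isup:
        print("WARNING: interface %s is up - quarantine may be broken" % name)

for part in psutil.disk_partitions():
    if part.mountpoint.startswith(("/media/", "/run/media/")):
        print("WARNING: removable media mounted at " + part.mountpoint)

And that only covers the failure modes someone anticipated.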

Because people never screw up, right?



posted on Jan, 6 2013 @ 12:30 PM

Originally posted by Druid42
While I like the premise of a "singularity", we can't get too caught up in the buzzwords used by the transhumanist groups.

Well, to be fair, the technological singularity isn't a concept owned by the transhumanists -- it could well be an intelligence that has no reliance on a biological aspect.



posted on Jan, 6 2013 @ 12:38 PM
This can be a bit scary, and it can be promising. The unintended consequences are the scary part. Murphy's law should always be kept in mind in any case. Here is a story that is a bit scary if you think about it. I wonder what could go wrong.

The IEEE – a large, global professional organization dedicated to advancing technology for humanity – has found that advancements in small robots, ranging from nanorobots to shoebox-sized robots, hold promise for delivering innovative and life-altering future applications. This mini-series explores the use of nanorobots in healthcare, morphogenic robots and robots for search and rescue.

urbantimes.co...



posted on Jan, 6 2013 @ 12:49 PM

Originally posted by adjensen
I'm not sure why you are asking why "I" have computers plugged into a network, this isn't about me.

Yes, it is: "you" provided a drawing that shows how it is possible to have networked intelligent computers and robots that maintain them without things getting out of hand.

If the intelligent computers cannot do anything with the physical world, why did you build a whole network of them, including robots that maintain them? Why not think "green" and have them powered off?
Please explain the (or a) purpose of this network.



However, why would these AI machines be networked? Because computers are networked, generally?

Yes, computers are networked for a reason: the computers themselves are usable (by users), and it's useful to be able to reach other computers, both for the computers themselves and for the users.



Why are the computers that monitor and operate the power grid on the Internet?

Because it's very convenient and can't really be done otherwise for practical reasons.



Because people are shortsighted and careless -- so it's better to assume that the AI computers would be on the network and defend from that perspective.

Fine. So the intelligent computers in your network are isolated. What exactly is its sensory data from the real world, and how does it interact? By watching TV?



posted on Jan, 6 2013 @ 01:11 PM

Originally posted by Nevertheless
Please explain the (or a) purpose of this network.

The point is not that they need to be networked, the point is that they would be.

Put another way -- let's say that you have an AI running on some machines that are quarantined, how will you ensure that the quarantine would never be broken, accidentally or intentionally? Because once the quarantine is broken, and the AI gets "out into the wild", there's no going back.


Fine. So the intelligent computers in your network are isolated. What exactly is its sensory data from the real world, and how does it interact? By watching TV?

Apart from not being able to manipulate physical objects, or to communicate electronically with technology that can manipulate physical objects, I don't know that any restriction needs to be put on how the system interacts.



posted on Jan, 6 2013 @ 01:45 PM

Originally posted by adjensen

Originally posted by Nevertheless
Please explain the (or a) purpose of this network.

The point is not that they need to be networked, the point is that they would be.

Put another way -- let's say that you have an AI running on some machines that are quarantined, how will you ensure that the quarantine would never be broken, accidentally or intentionally? Because once the quarantine is broken, and the AI gets "out into the wild", there's no going back.


You are drifting off again.
Please explain what the point of the computers is, networked or not.
What are they doing, and why do they need to be maintained by robots?




Apart from not being able to manipulate physical objects, or to communicate electronically with technology that can manipulate physical objects, I don't know that any restriction needs to be put on how the system interacts.

So it can talk (audio) and present whatever it wishes visually (video) to have conversations with human beings?
Is it allowed to persuade people into giving it access to the internet, or into simply spreading itself outside of the quarantine?



posted on Jan, 6 2013 @ 03:32 PM
reply to post by Nevertheless
 


Why are you acting like I've designed the AI and it's sitting in my basement? I have no idea what people would be using it for, or how it would be designed -- I'm just suggesting a way to keep it under control.



posted on Jan, 7 2013 @ 12:09 PM

Originally posted by adjensen
reply to post by Nevertheless
 


Why are you acting like I've designed the AI and it's sitting in my basement?

I'm not acting. You did provide a diagram of a solution. That's why I'm asking what practical use such a setup would have. If you cannot give an answer to that, how is it a solution to anything?



I have no idea what people would be using it for, or how it would be designed

Then what is the purpose of this solution?



I'm just suggesting a way to keep it under control.

Even if it is allowed to persuade human beings to not keep it under control?



