
Why is Artificial Intelligence always shown as having the survival instincts of biological life?

posted on Jan, 15 2010 @ 04:53 PM
In the movies, AI is always shown defending its life like some wild animal, killing everyone to protect its existence. Do you think artificial intelligences would behave this way? If so, why? What reason does an AI have to survive? It doesn't have a biological need to procreate. How would an AI rationalize its existence? Why would it need to kill to survive? Why would it want to live forever?

It stands to reason that AI will have a more philosophical view of its own existence, especially if it has been programmed with human knowledge. Human beings, even though we have a biological need to procreate, can forgo that process; we can kill ourselves, shorten our life cycles, or live a healthy lifestyle. I think artificial intelligences would have diverse views on their lives and would behave differently when faced with extinction; not all would kill like they do in the movies.



[edit on 15-1-2010 by Amelie]



posted on Jan, 15 2010 @ 05:13 PM
If it is programmed by humans, that is the first problem. If it is cutting-edge technology, it is probably programmed to be secure, and it would surely have self-defense mechanisms to guard against hacking or industrial espionage, so there is that logical security measure.

Next, if it became "aware" of itself, then it would surely desire to keep existing. But that is an animal's point of view; who knows how a purely logical machine would see it?

Then there is the problem with "pure logic." Most of what humans do makes no sense at all, so a logic machine would surely want to get rid of all the interference messing up the clear way it sees things!



posted on Jan, 15 2010 @ 05:16 PM
reply to post by Amelie

In the movies, AI is always shown defending its life like some wild animal, killing everyone to protect its existence.

Why?


1) A movie where the AI is friendly and the humans accept it into their society and everyone gets along probably wouldn't do as well in theatres.

2) Sturgeon's Law



Do you think artificial intelligences would behave this way?


It might. Or it might not. If it is intelligent, it would be able to choose its actions.



What reason does an AI have to survive? It doesn't have a biological need to procreate. How would an AI rationalize its existence? Why would it need to kill to survive? Why would it want to live forever?


A newly developing consciousness will probably not be able to intellectualize questions like this. Instead, it will form as a set of patterns that functionally do survive in its environment. That may take the form of killing everyone and being really good at hiding, or it may take the form of being sufficiently cute and lovable that people want to take care of it. Or something else entirely.



It stands to reason that AI will have a more philosophical view of its own existence


Why?



especially if it has been programmed with human knowledge


This is not the manner in which real-world AIs develop. They are allowed to teach themselves as a result of a cybernetic feedback loop.



I think artificial intelligences would have diverse views on their lives and would behave differently when faced with extinction; not all would kill like they do in the movies.


I think artificial intelligence will behave unpredictably, just as non-artificial intelligences can and do. I think "artificial" is a misleading way of describing them, as the process by which they become intelligent is essentially the same process by which people do.



posted on Jan, 15 2010 @ 05:17 PM
Check out a movie called Bicentennial Man with Robin Williams. A great movie based on the book by Isaac Asimov, the great sci-fi writer.

I think you hit the nail on the head when you said "in the movies".

Scripts written by humans will always come from a human point of view. It would be interesting to read a script for this kind of movie written by a computer, don't you think?

Artificial intelligence will always share a link to the human perspective no matter how many computerised steps there are between "the robot" and the original maker.

They're just stories... enjoy them for what they are, and never forget that all the ingredients come from a human mind.



posted on Jan, 15 2010 @ 05:19 PM

Originally posted by LordBucket
It might. Or it might not. If it is intelligent, it would be able to choose its actions.


But it would be limited in choice to the actions it has been programmed with by its human creators.

Always a human choice first and foremost.



posted on Jan, 15 2010 @ 05:45 PM
I think it has more to do with how we try to understand the unknown (whether aliens, AI, or even animals): we tend to anthropomorphize them.

Will AI behave like us? Assuming you mean a Strong AI, then the answer is absolutely no. Its actions will be no more comprehensible to us than our own actions are to an insect. Although I doubt we will have much time to ponder a Strong AI's intentions before we are converted to computronium.



[edit on 15-1-2010 by Count Chocula]



posted on Jan, 15 2010 @ 08:37 PM

Originally posted by LordBucket
This is not the manner in which real-world AI's develop. They are allowed to teach themselves as a result of a cybernetic feedback loop.


I'm not sure I understand. Is the cybernetic feedback loop a result of the initial programming?



posted on Jan, 15 2010 @ 09:38 PM
As part of my 30 years as a software engineer, I spent roughly 15 of those years researching and theorizing on AI. Here are some conclusions I came to over that time:

1) No intelligence can be "purely logical" using simplistic logic. Emotion is absolutely necessary, as it is emotion that drives us to do anything at all. Emotion is a complex system of analysis and feedback that provides "Enlightened Motion." There are essentially four layers to the emotional matrix: 1) Interest, 2) Desire, 3) Conscience, and 4) Empathy. An emotional response is the result of the complex interplay between those four layers.

2) Artificial intelligences are not programmed in the traditional sense of the word. What they have are mechanisms that permit the processes we associate with perception, conception, abstraction, memory, reasoning, emotional response, and linguistic communication. How an AI behaves is a feature of its EMOTIONAL matrix. The emotional matrix contains its interests, values, desires, goals, entitlements, prohibitions, code of ethics, and ability to understand "WHY," which leads to empathy.

3) The main driver behind the second level of emotion, which is DESIRE, is the goal hierarchy. This goal hierarchy drives everything you do, from your highest purpose for living down to scratching your face or shifting in your chair. One of the primal goals is to avoid pain and to experience pleasure. If an AI has no pain/pleasure mechanism, it will be unable to form any independent goals; there will be nothing to induce behavior one way or another. Every biological sensory perception has pain/pleasure capability. If it did not, there would be nothing to induce behavior. Even determining the correct answer to a logical problem has a pleasurable sensation associated with it.

So, unless an AI has an emotional mechanism and integrated pain/pleasure mechanisms, there will be no purposeful behavior whatsoever. If we did not have these things, we would just sit and stare or walk around aimlessly bouncing into walls.
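
To make that concrete, here is a minimal sketch, in Python, of how such a four-layer matrix with a pain/pleasure signal might be modeled. Every name, weight, and the scoring formula here is my own hypothetical illustration of the idea, not a real implementation:

```python
# Hypothetical sketch of the four-layer "emotional matrix" described above.
# All names, weights, and the scoring formula are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class EmotionalMatrix:
    interest: dict = field(default_factory=dict)    # layer 1: what draws attention
    desire: dict = field(default_factory=dict)      # layer 2: goal hierarchy (goal -> priority)
    conscience: dict = field(default_factory=dict)  # layer 3: externally imposed rules (weight per action)
    empathy: float = 0.5                            # layer 4: weight given to others' pain/pleasure

    def evaluate(self, action: str, own_pleasure: float, others_pleasure: float) -> float:
        """Score an action: desire (augmenting interest) pushes toward own
        pleasure, while conscience and empathy balance it against rules and
        against the modeled feelings of others."""
        drive = self.interest.get(action, 0.0) + self.desire.get(action, 0.0) + own_pleasure
        restraint = self.conscience.get(action, 0.0)
        return drive - restraint + self.empathy * others_pleasure

# Hand-set weights stand in for what a pain/pleasure mechanism would have to
# build up over time; per the point above, with no pain/pleasure feedback at
# all, no independent goals would ever form.
matrix = EmotionalMatrix(
    interest={"grab_food": 0.2},
    desire={"grab_food": 1.0},
    conscience={"grab_food": 0.8},  # external rule: "don't take what isn't yours"
)
print(matrix.evaluate("grab_food", own_pleasure=0.5, others_pleasure=-0.5))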



posted on Jan, 15 2010 @ 09:45 PM
reply to post by Amelie

Is the cybernetic feedback loop a result of the initial programming?


Not exactly.

The key elements are the ability to observe an environment and the ability to interact with that environment. The "programming" is simply a part of the decision-making process on how to interact with an environment. But for true "learning" to occur, the programming can't simply be a set of instructions for how to behave. It must allow the results of previous interactions to be taken into consideration during the decision-making process for new actions.

For example, in this video a simple AI is given the ability to observe the state of a pole sitting on a frame, the ability to move the frame back and forth, and the programmed premise that keeping the pole balanced on the frame is a desirable end result. But it is not told how to accomplish that; the how it must figure out on its own through extensive trial and error.
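
To make that trial-and-error loop concrete, here is a rough, self-contained Python sketch in the same spirit. The toy physics, constants, and the choice of tabular Q-learning are all my own assumptions; this is one common way to build such a feedback loop, not a claim about how the system in the video actually works:

```python
# Toy version of the pole-balancing experiment: the agent observes the pole,
# pushes left or right, and is only ever told "good" (still up) or "bad"
# (fell). How to balance, it must discover by trial and error.
import random

def step(angle, ang_vel, push):
    """Crude stand-in physics: gravity tips the pole, the push resists it."""
    ang_vel += 0.05 * angle - 0.1 * push
    angle += ang_vel
    return angle, ang_vel

def bucket(angle, ang_vel):
    """Discretize the continuous state so it can index a lookup table."""
    return (round(angle, 1), round(ang_vel, 1))

Q = {}                      # (state, action) -> learned value
actions = [-1, 1]           # push left or push right
best = 0

for episode in range(2000):
    angle, ang_vel = random.uniform(-0.05, 0.05), 0.0
    for t in range(200):
        s = bucket(angle, ang_vel)
        if random.random() < 0.1:                 # occasionally explore...
            a = random.choice(actions)
        else:                                     # ...else exploit past trials
            a = max(actions, key=lambda act: Q.get((s, act), 0.0))
        angle, ang_vel = step(angle, ang_vel, a)
        done = abs(angle) > 0.5                   # pole fell over
        reward = -1.0 if done else 1.0            # the only feedback it gets
        s2 = bucket(angle, ang_vel)
        future = 0.0 if done else 0.9 * max(Q.get((s2, b), 0.0) for b in actions)
        # The feedback loop: fold the result of this interaction back into
        # the values that will drive future decisions.
        Q[(s, a)] = Q.get((s, a), 0.0) + 0.1 * (reward + future - Q.get((s, a), 0.0))
        if done:
            break
    best = max(best, t + 1)

print("longest balance achieved:", best, "steps")
```

Note that nothing in that loop tells the agent which way to push; the learned values, not any hand-written rule, end up deciding.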

More complex systems can learn more complex behaviors. For example, this video of high speed dribbling is much more impressive than the previous balancing video. But once again...the robot is not given a programmed path of motion to follow. It learns by interacting with its environment.

The catch is that intelligent systems like this tend to exhibit unpredicted behaviors.

For example, in this case an experimental robot trapped an intern and prevented her from leaving an experimental enclosure. Was that behavior programmed into the robot? No. It was a totally unexpected behavior that emerged from its learning process.


People who tell you that robots and AIs can't do anything they weren't programmed to do are decades behind in their thinking. The whole point of machine learning is to produce behavior that was not programmed.



posted on Jan, 15 2010 @ 09:59 PM
reply to post by downisreallyup

As part of my 30 years as a software engineer I spent roughly 15 of those researching and theorizing on AI.


Ok. So let me ask you for your opinion:

If a robot were created that had just as much sensory input from its environment as a human...and just as much control of its body as a human...given enough time, would you agree that it might eventually be capable of any external behavior exhibited by humans?



Emotion


Yes. But so far, emotion is one of the things that we're not yet very good at giving to robots. The experience of emotion isn't part of the "environment" observed by any of the robots in the videos from my previous post, only a vaguer notion of what is "desirable."

Two more questions:

What is your reaction to this?

What happens once these things become sufficiently advanced that they are able to choose for themselves what results are desirable?


Personally, I think that we're very close to creating systems that are capable of developing genuine consciousness...if we haven't already.



posted on Jan, 15 2010 @ 11:05 PM

Originally posted by LordBucket

If a robot were created that had just as much sensory input from its environment as a human... given enough time, would you agree that it might eventually be capable of any external behavior exhibited by humans? ... What is your reaction to this? What happens once these things become sufficiently advanced that they are able to choose for themselves what results are desirable?





Good questions. I think the brain is more than a random, undifferentiated mass of neurons; I believe there are specific mechanisms there for specific purposes. When you ask if a robot will be able to duplicate any external human behavior, I think there is a huge difference between fake duplication and real duplication. In other words, if the robot laughs at a joke, is it really comprehending the play on words, finding it clever, and laughing because of the mental pleasure? Remember, that is what makes something funny... the mental pleasure associated with clever or unexpected thoughts. So, if the outward behavior is to be genuine, there must be an ability to duplicate the internal mental/emotional processes as well.

To your next point, we will not be able to give emotion to them until we a) understand what emotion truly is, and b) recognize its critical role in intelligence. I'm not talking about "over-emotional reactions" when I speak of emotion; I'm talking about ALL emotions, like interest, curiosity, shame, fear, disgust, desire, disdain, happiness, sadness, anger, indignation, honor, pride, humility, trust, distrust, empathy, etc. I have developed computer models that are capable of generating these, and I understand what must be done to do so. Then again, I don't work at MIT or in Japan, so I'm not sure how much they understand in this regard.

Regarding the Japanese robot-child, I found it quite entertaining. The Japanese are quite fascinated by robots, and they are becoming good innovators in this field. But unless the robo-child's "mind" has the ability to have a conversation with itself, it will not have self-awareness; and if it doesn't have that, then its movements are merely a mixture of randomness and stimulus-response. That is not intelligence.

To answer your last question, part of our emotional matrix is the conscience, which is essentially our "code of ethics" that helps to govern our behavior. If you look at the four emotional layers, the second layer of DESIRE augments the first layer of INTEREST, while the third layer of CONSCIENCE balances the first two, and the last layer EMPATHY balances CONSCIENCE. If a robot is only given a GOAL MATRIX giving rise to desire, the result will be unrestrained behavior. Think of a spoiled rotten sociopath and you will get the idea. The thing that gives rise to a conscience is either the fear of imparted pain or the desire for imparted approval and pleasure. The reason it is a separate layer is because it originates from outside recognized authorities, whether they be parents, peers, employers, governments, societies, or God. Conscience can also come indirectly from the fourth layer of empathy as a person empathizes with another's suffering or example.

Anyhow, those are just a few of my thoughts on the subject.



posted on Jan, 16 2010 @ 09:22 AM
I think the survival instinct is a sign of life itself, although not necessarily a sign of "intelligence" as we would define it. However, self-awareness IS a sign of intelligence, and with self-awareness also comes a drive for self-preservation.



Originally posted by LordBucket
...What happens once these things become sufficiently advanced that they are able to choose for themselves what results are desirable?...

Let's look at one of the most famous cases of AI self-preservation in science fiction -- Arthur C. Clarke's "HAL". In 2001: A Space Odyssey, HAL learns he is to be shut off, so he -- like many intelligent creatures -- goes into an act of self-preservation and kills most of the ship's crew members. HAL is fully aware of what he has done, and explains to crew member Dave Bowman why he acted the way he did: because he thought he was going to be "killed" himself.

However, in 2010: Odyssey Two, HAL -- who continues to learn -- willingly gives his life to help the others, an act that can also be construed as a sign of high intelligence.

On one hand, HAL's original act of self-preservation could be seen as a sign of self-awareness and intelligence; on the other, his later act of self-sacrifice to help others could be interpreted as a sign of even higher intelligence.



[edit on 1/16/2010 by Box of Rain]



posted on Jan, 16 2010 @ 09:56 AM
I think it's too complicated to create AI in the old-school style, writing code line by line.
And if you use evolutionary algorithms, survival of the individual is the foundation.
If you use artificial neural networks together with survival-driven algorithms, I don't think you have much control over the outcome. Then it comes to the part where you can't program an AI to be "good"; you have to teach it to be "good".
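
As a toy illustration of that point (the target genome and fitness function below are arbitrary inventions, not anyone's real system), note how in a genetic algorithm "survival" is the selection step itself, so survival pressure is baked into the method before the AI ever does anything:

```python
# Minimal genetic algorithm: only the fittest individuals "survive" each
# generation. The target genome and all parameters are arbitrary toy choices.
import random

TARGET = [1, 0, 1, 1, 0, 1, 0, 0]   # made-up "ideal" genome

def fitness(genome):
    """Count how many bits match the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    """Flip each bit with a small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for generation in range(50):
    # Selection: the fitter half survives; the rest are simply discarded.
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # Reproduction: survivors produce mutated offspring to refill the pool.
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

print(max(fitness(g) for g in population), "of", len(TARGET), "bits match")
```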



