In the movies, AI is always shown defending its life like some wild animal, killing everyone to protect its existence.
Why do you guys think artificial intelligences would behave this way?
What reason does an AI have to survive? It doesn't have a biological need to procreate. How would an AI rationalize its existence? Why would it need to kill to survive? Why would it want to live forever?
It stands to reason that an AI would have a more philosophical view of its own existence, especially if it has been programmed with human knowledge.
I think artificial intelligences would have diverse views on their own existence and would behave differently when faced with extinction; not all would kill the way they do in the movies.
Originally posted by LordBucket
It might. Or it might not. If it is intelligent, it would be able to choose its actions.
Originally posted by LordBucket
This is not the manner in which real-world AIs develop. They are allowed to teach themselves as a result of a cybernetic feedback loop.
Is the cybernetic feedback loop a result of the initial programming?
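My rough understanding, sketched as a toy example (the names and numbers below are purely my own illustration, not how any real system is built): the loop itself, sense, act, compare, adjust, is fixed by the initial programming, but the behaviour the system settles into comes from the feedback, not from anything the programmer wrote in.

import random

TRUE_GAIN = 2.0  # hidden property of the environment; it appears nowhere in the agent's program

def feedback(stimulus, action):
    # the world reports how far the action was from what "works"
    return (TRUE_GAIN * stimulus) - action

def run(steps=1000, learning_rate=0.05):
    weight = 0.0                                    # initial programming: a blank starting point
    for _ in range(steps):
        stimulus = random.uniform(-1.0, 1.0)        # sense
        action = weight * stimulus                  # act
        error = feedback(stimulus, action)          # feedback from the environment
        weight += learning_rate * error * stimulus  # adjust (the cybernetic loop)
    return weight

print(run())  # ends up near 2.0, a value the programmer never typed in

So in that sense my guess is "yes and no": the capacity to adapt is programmed, but what it adapts into is not.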
As part of my 30 years as a software engineer I spent roughly 15 of those researching and theorizing on AI.
Emotion
Originally posted by LordBucket
reply to post by downisreallyup
As part of my 30 years as a software engineer I spent roughly 15 of those researching and theorizing on AI.
Ok. So let me ask you for your opinion:
If a robot were created that had just as much sensory input from its environment as a human...and just as much control of its body as a human...given enough time, would you agree that it might eventually be capable of any external behavior exhibited by humans?
Emotion
Yes. But so far emotion is one of the things that we're not yet very good at giving to robots. The experience of emotion isn't part of the "environment" observed by any of the robots in the above videos from my previous post. Only more vague notions of what is "desirable."
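As a purely illustrative sketch (made-up names, not code from any real robot) of what a "notion of what is desirable" amounts to today: it is usually just a number the designer writes down, which the system then maximizes, nothing felt.

def desirability(battery_level, distance_to_goal):
    # the designer's hand-written notion of what counts as "good"
    return battery_level - distance_to_goal

# the robot simply picks whichever candidate action scores highest
candidates = {
    "recharge":   desirability(battery_level=0.9, distance_to_goal=5.0),
    "move_ahead": desirability(battery_level=0.4, distance_to_goal=1.0),
}
print(max(candidates, key=candidates.get))  # prints "move_ahead"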
Two more questions:
What is your reaction to this?
What happens once these things become sufficiently advanced that they are able to choose for themselves what results are desirable?
Personally, I think that we're very close to creating systems that are capable of developing genuine consciousness...if we haven't already.
Originally posted by LordBucket
...What happens once these things become sufficiently advanced that they are able to choose for themselves what results are desirable?...