Originally posted by Paul_Richard
The biggest flaws in the Matrix/Terminator overthrow idea are programming and maintenance. The computers would have to outthink their makers (and other makers) and be able to maintain themselves without human intervention, which is by no means easy to accomplish.
Originally posted by Gazrok
We'll never be dumb enough to not have an "off" switch... I hope...
Develop metrics, measures, data, and analysis methods to quantitatively evaluate component technologies and integration strategies in order to accelerate the development of intelligent behaviors in unmanned vehicle systems.
Develop (learning-based) software technologies required for robust perception-based autonomy.
Originally posted by Ouizel
Even if the problems with the premise presented here didn't exist, what makes you think that if artificial intelligence became self-aware, it would want to destroy humans? In my opinion, that is the biggest flaw in those movies. There is no precedent to assume that machines would want to eliminate us. Furthermore, intelligence generally seeks intelligence, for discussion and the like, so there is every reason to assume that if AI became self-aware, it would likely be non-violent, since violence generally causes more problems than it solves. Violence is generally emotional, and if AI should happen to become self-aware, what's to say that it would be emotional? What part of the on/off switch equates to imbalanced hormones? What part equates to anger and fear? There is no equivalent for these things in a machine; hence, there is no reason to assume that it would want to destroy its creators.