In the dark, in the silence, in a blink, the age of the autonomous killer robot has arrived. It is happening.
They are deployed. And – at their current rate of acceleration – they will become the dominant method of war for rich countries in the 21st century. These facts sound, at first, preposterous. The idea of machines that are designed to whirr out into the world and make their own decisions to kill is an old sci-fi fantasy: picture a mechanical Arnold Schwarzenegger blasting a truck and muttering: "Hasta la vista, baby." But we live in a world of such whooshing technological transformation that the concept has leaped in just five years from the cinema screen to the battlefield – with barely anyone back home noticing.
Nato forces now depend on a range of killer robots, largely designed by the British Ministry of Defence labs privatised by Tony Blair in 2001. Every time you hear about a "drone attack" on Afghanistan or Pakistan, that's an unmanned robot dropping bombs on human beings. Push a button and it flies away, kills, and comes home. Its robot-cousin on the battlefields below is called SWORDS: a human-sized robot that can see 360 degrees around itself and fire its machine-guns at any target it "chooses".
At the moment, most are controlled by a soldier – often 7,500 miles away – with a control panel. But insurgents are always inventing new ways to block the signal from the control centre, which causes the robot to shut down and "die". So the military is building "autonomy" into the robots: if they lose contact, they start to make their own decisions, in line with a pre-determined code.
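The fallback behaviour described above can be sketched as a tiny state machine. This is purely illustrative, assuming nothing beyond what the article says; all names are hypothetical, since no real implementation details are given:

```python
from enum import Enum, auto

class Mode(Enum):
    REMOTE_CONTROL = auto()  # operator holds the link and full authority
    AUTONOMOUS = auto()      # link lost: robot follows its pre-determined code
    SHUTDOWN = auto()        # older fail-safe: robot simply "dies"

def next_mode(link_up: bool, autonomy_enabled: bool) -> Mode:
    """Pick the robot's control mode from the state of its radio link.

    Hypothetical logic: while the link is up, the remote operator stays
    in charge. If insurgents jam the signal, a robot with "autonomy"
    built in falls back to its pre-determined code; an older robot
    shuts down instead.
    """
    if link_up:
        return Mode.REMOTE_CONTROL
    return Mode.AUTONOMOUS if autonomy_enabled else Mode.SHUTDOWN
```

The worry in the article is precisely that middle branch: once `autonomy_enabled` is true, a jammed signal no longer stops the machine, it merely hands the kill decision to the software.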
We know the programming of robots will regularly go wrong – because all technological programming regularly goes wrong. Look at the place where robots are used most frequently today: factories. Some 4 per cent of US factories have "major robotics accidents" every year. And remember: these are robots that aren't designed to kill.
Robots find it almost impossible to distinguish an apple from a tomato: how will they distinguish a combatant from a civilian? You can't appeal to a robot for mercy; you can't activate its empathy.
If virtually no American forces had died in Vietnam, would the war have stopped when it did – or would the systematic slaughter of the Vietnamese people have continued for many more years?
There is some evidence that warbots will also make us less inhibited in our killing.
Originally posted by LiveForever8
I love the idea of 'robots', having grown up watching Terminator and being a fan of sci-fi, but there are certain aspects that do worry me. The idea that these robots will make their own decisions and fire at targets they 'choose' leads me to believe we aren't far away from being on the receiving end of some sci-fi film plot ourselves.
Is there really any danger of these robots running amok?