

Lethal Autonomy - Should the 'decision to kill' belong to a robot?



posted on Oct, 13 2013 @ 11:07 AM

Daniel Suarez covers a few inevitable issues the world will face as the future landscape of war develops toward the increasing use of robotics. Soon, it seems, a portion of the military will consist of robots roaming the war zone, robots designed with the ability to make "the kill decision" on their own. Thus, the ultimate question becomes: should we be designing conscious robots with the ability to make a kill decision?

Daniel discusses the real possibility of an 'anonymous war' involving new-age warlords. That's a scary thought: no one knowing who is responsible for sending a robot to kill someone, with the scenario taking place not on the battlefield but in a public street or restaurant. But this possibility seems much further away than the military projects currently in R&D.

What Daniel is really getting at are the questions of ethics, accountability, and plausible deniability at the international level. At the end of the video, he says 1) we need an international treaty on robotic weapons and 2) we need transparency, as "no robot should have an expectation of privacy in a public place."

Ronald C. Arkin and Lilia Moshkina also addressed this in their paper entitled Lethality and Autonomous Robots: An Ethical Stance, in which they discuss an ongoing survey meant to capture 1) prior knowledge and attitudes, 2) terms of acceptance and ethical issues, and 3) demographics. Results discussed HERE

It's hard for me to envision an entire army of robots fighting our wars, or an entire fleet used to control situations involving the public, i.e., riots and protests. But what I can envision, if conscious robots become a reality, is those robots coming to the conclusion that we humans are "unethical". At that point, wouldn't conscious robots be forced to 'save us from ourselves'?

Dear conscious robot designers,

Let's not forget to include that "On/Off" switch, K?


posted on Oct, 13 2013 @ 11:31 AM
Here's the deal: no matter the ethics involved, these gears are already turning and are definitely not going to stop anytime soon, especially when it comes to black ops. With how fast civilian tech is advancing, top-secret military tech must be getting tremendously more advanced. I like to remember that any government's military is surely working in secret on all sorts of majorly advanced technologies. I would not be surprised if top-secret ops are 25-50 years ahead of what civilians have.

This does really start to delve deep into some serious issues concerning the exponential deluge of technology.

These autonomous robots will surely exist in the not-too-distant future, and I would be willing to bet the farm many will be lethal. Just like with all other forms of technology, we will adapt. I do find the influx of drone-like RC vehicles within society intriguing. I was just reading another thread about a photo taken of an RC drone (a small helicopter) above a person's house. Great thread Op, S&F.


posted on Oct, 13 2013 @ 01:32 PM
Firstly, I am not yet convinced that a self-conscious robot is possible. I think this will be achieved in a slightly different way, with robots having learning algorithms which will give each one its own unique personality and bias. I am still unsure if one will be able to operate independently like a human and suddenly 'wake up' to its situation. We don't yet understand how our own consciousness arises, or its connection to our brain, let alone how a robot's mind would operate.

Really and truly, we have a duty to make sure no advanced AI robot can contemplate harming a human. However, this is meaningless in the long term, because robots WILL be used to kill, to fight wars, and to enforce other actions in general. You know what humans are like: anything we discover or invent will have good and bad applications. Even if we establish regulations and laws that don't allow robots to make lethal decisions, some maverick or rebel group or terrorist faction will use them for this purpose anyway.

I personally don't think we should ever give a robot access to that line of 'thought'/command. The choice to take another human's life is too complex, in my opinion, to be left to a cold, logic-processing machine.

I love robots in general though, and spent a few projects at uni working with them. In theory they should be one of our greatest inventions: a living organism from inorganic means. But instead they will be a slave race, and depending on how advanced AI can truly get, they may even attempt rebellions. We can't really say right now.

What I can say is that this is not a good path right now in our history. We are heading scarily towards a world where those with power will have nano implants, biological improvements, and robotic armies at their disposal. The divide between the rich and poor will eventually become the divide between the old mortal humans and the new transhuman cyborgs.

Something tells me we as humans are in some way not meant to reach this stage of technology right now. We are nowhere near a stable, unified race, which is what you'd expect to need in order to implement these future technologies safely.


posted on Oct, 13 2013 @ 01:56 PM
The only things I know we do have are some "free-fire zone" gadgets that explode if something moves, or fire off a round that seeks out a heat source. They are defined as active-denial weapons. It used to be that we dropped sensors in an area that could triangulate and give a rough position of any movement; an artillery base would then fire for effect on the indicated position. Many deer and monkeys evidently did not get the memo that their habitat had been designated a "free-fire zone," if the body counts were any indication.
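The triangulation those dropped sensors performed can be sketched in miniature. This is a purely illustrative 2D version under simplifying assumptions (three sensors, each reporting a range to the source; nothing here reflects any real fielded system): subtracting the first circle equation from the other two leaves a small linear system that yields the rough position.

```python
import math

def trilaterate(sensors, dists):
    """Rough 2D position fix from three sensor positions and their
    measured ranges. Subtracting the first circle equation from the
    other two linearizes the problem into a 2x2 system."""
    (x1, y1), (x2, y2), (x3, y3) = sensors
    r1, r2, r3 = dists
    # Linear equations from (circle 2 - circle 1) and (circle 3 - circle 1):
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1  # zero only if the sensors are collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Example: three sensors dropped at known points, movement at (3, 4).
sensors = [(0, 0), (10, 0), (0, 10)]
ranges = [math.dist(s, (3, 4)) for s in sensors]
print(trilaterate(sensors, ranges))  # recovers roughly (3.0, 4.0)
```

In practice the ranges would be noisy, so anything real would use more than three sensors and a least-squares fit rather than this exact solve.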

There will be much more autonomous functioning to reduce costs and free police officers for more human tasks. Humanoid walking robots would see more use for crowd control at games, strikes, and riots. Robots will patrol city centres and trouble spots where fights are likely to break out. Robots will have reasonable speech perception and be able to ask questions and respond to answers.

posted on Oct, 13 2013 @ 02:02 PM
I find it amazing that people feel they have the right to kill. I mean, if you think about it, the death penalty is like some kind of gift: choice of last meal and the "putting out of misery" itself. Life in prison would mean much more suffering for a lot of death row inmates, I'd guess.
The way I see it, methodical killing first appeared in farming. Obviously there is huge killing of plants in farming. Next came animal husbandry, and there's a lot of killing in that too. But then somebody decided to apply these principles to humans, and that's where things have gone awry. To even consider robots having the ability to kill... there's something pretty surreal about that, don't you think?

posted on Oct, 13 2013 @ 02:06 PM
reply to post by six67seven

Aren't we robots in a sense? And we still have the decision to kill. How do we decide that?

posted on Oct, 13 2013 @ 02:28 PM
For us to have safe robots in general society, they will have to obey certain laws pretty similar to Isaac Asimov's three basic laws of robotics. Taking such machines into warfare would render them pretty useless: all you need is a hint that there's a single human in the attack area, and the first law of robotics will cause them to refuse to carry out the orders. But if there are no humans to be hurt, then robot-vs-robot wars are perfectly imaginable and acceptable.
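That first-law veto is simple enough to sketch. The following is a toy illustration only, with made-up field names and classifications rather than any real targeting system: the robot refuses an engagement if any sensed contact could be human, and only proceeds in a pure robot-vs-robot scenario.

```python
def authorize_engagement(contacts):
    """Asimov-style first-law veto (toy model): engage only if every
    sensed contact is confidently classified as a robot. Anything
    human, or merely 'unknown', blocks the whole engagement."""
    if not contacts:
        return False  # nothing to engage
    return all(c["classification"] == "robot" for c in contacts)

# The mere hint of a human in the attack area vetoes the order:
print(authorize_engagement([{"classification": "robot"},
                            {"classification": "unknown"}]))  # False
print(authorize_engagement([{"classification": "robot"},
                            {"classification": "robot"}]))    # True
```

Treating "unknown" as possibly human (failing safe) is exactly the exploit the post describes: an adversary only has to make one contact look ambiguous to shut the machine down.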

posted on Oct, 19 2013 @ 01:34 PM
Ha ha ha! Everybody laughed when I told them I acquired cases of automatic armor-piercing incendiary rounds for home defense... Who's laughing now?

posted on Oct, 19 2013 @ 01:46 PM
reply to post by six67seven

But what I can envision, if conscious robots become a reality, is those robots coming to the conclusion that we humans are "unethical". At that point, wouldn't conscious robots be forced to 'save us from ourselves'?

Sounds like a Twilight Zone episode I watched once.

It turned out really bad for humans....
