Daniel Suarez covers a few inevitable issues the world will face as the future landscape of war develops toward the increasing use of robotics.
Soon, it seems, a portion of the military will consist of robots roaming the war zone: robots designed with the ability to make "the kill decision"
on their own. Thus the ultimate question becomes: should we be designing conscious robots with the ability to make a kill decision?
Daniel discusses the real possibility of an 'anonymous war' involving new-age warlords. That's a scary thought: no one knowing who is responsible
for sending a robot to kill someone, and the scenario playing out not on a battlefield but in a public street or restaurant. Still, this possibility
seems much further away than the military projects currently in R&D.
What Daniel is really getting at are the questions of ethics, accountability, and plausible deniability at the international level. At the end of
the video, he says 1) we need an international treaty on robotic weapons, and 2) we need transparency, as "no robot should have an expectation of
privacy in a public place."
Ronald C. Arkin and Lilia Moshkina also addressed this in their paper entitled Lethality and Autonomous Robots: An Ethical Stance, in which they discuss an ongoing survey meant to discover 1) prior knowledge
and attitudes, 2) questions regarding the terms of acceptance and ethical issues, and 3) demographics. The results are discussed in the paper.
It's hard for me to envision an entire army of robots fighting our wars, or an entire fleet used to control situations involving the public, i.e.,
riots and protests. But what I can envision, if conscious robots become a reality, is those robots coming to the conclusion that we
humans are "unethical". At that point, wouldn't conscious robots be forced to 'save us from ourselves'?
Dear conscious robot designers,
Let's not forget to include that "On/Off" switch, K?
Here's the deal. No matter the ethics involved, these gears are already turning and are definitely not going to stop anytime soon, especially
when it comes to black ops. With how fast civilian tech is advancing, top-secret military tech must be advancing even faster.
I like to remember that any government's military is surely working in secret on all sorts of vastly advanced technologies. I would not be
surprised if top-secret ops are 25-50 years ahead of civilian tech.
This really does start to delve into some serious issues concerning the exponential deluge of technology.
These autonomous robots will surely exist in the not-too-distant future, and I would be willing to bet the farm that many will be lethal. Just like with all
other forms of technology, we will adapt. I do find the influx of drone-like RC vehicles in society intriguing. I was just reading another thread
about a photo taken of an RC drone (a small helicopter) above a person's house. Great thread, OP, S&F.
Firstly, I am not yet convinced that a self-conscious robot is possible. I think this will be achieved in a slightly different way, with robots having
learning algorithms that give each one its own unique personality and biases. I am still unsure whether such a robot could operate independently like a
human and suddenly 'wake up' to its situation. We don't yet understand how our own consciousness arises or how it connects to our brain, let alone how
a robot's mind would operate.
Really and truly, we have a duty to make sure no advanced AI robot can contemplate harming a human. However, this is meaningless in the long
term, because robots WILL be used to kill, to fight wars, and to enforce actions in general. You know what humans are like: anything we
discover or invent will have good and bad applications. Even if we establish regulations and laws forbidding robots from making lethal decisions,
some maverick, rebel group, or terrorist faction will use them for that purpose anyway.
I personally don't think we should ever give a robot access to that line of 'thought'/command. The choice to take another human's life is, in my
opinion, too complex to be left to a cold, logic-processing machine.
I love robots in general, though, and spent a few projects at uni working with them. In theory they should be one of our greatest inventions: a living
organism created by inorganic means. But instead they will be a slave race, and depending on how advanced AI can truly get, they may even attempt
rebellions. We can't really say right now.
What I can say is that this is not a good path at this point in our history. We are heading scarily towards a world where those with power will have nano
implants, biological improvements, and robotic armies at their disposal. The divide between the rich and poor will eventually become the divide
between the old mortal humans and the new transhuman cyborgs.
Something tells me that we as humans are in some way not meant to reach this stage of technology right now. We are nowhere near being the stable,
unified race you'd expect to need in order to implement these future technologies safely.
edit on 13-10-2013 by DazDaKing because: (no reason given)
The only things I know we do have are some "free-fire zone" gadgets: if something moves, they explode or fire off a round that seeks out a
heat source. They are classed as active-denial weapons. It used to be that we dropped sensors in an area that could triangulate and give a rough position of
movement; an artillery base would then fire for effect on the position indicated. Many deer and monkeys evidently did not get the memo about their habitat
being designated a "free-fire zone", if the body counts were any indication.
There will be much more autonomous functioning to reduce costs and free police officers for more human tasks. Humanoid walking robots would see
more use in crowd control at games, strikes, and riots. Robots will patrol city centres and trouble spots where fights are likely to break out.
Robots will have reasonable speech perception and be able to ask questions and respond to answers.
I find it amazing that people feel they have the right to kill. I mean, if you think about it, the death penalty is like some kind of gift:
the choice of a last meal and the "putting out of misery" itself. Life in prison would mean much more suffering for a lot of death row inmates, I'd imagine.
The way I see it, killing methodically first appeared in farming. Obviously there is huge killing of plants in farming. Next came animal
husbandry, and there's a lot of killing in that too. But then somebody decided to apply these principles to humans, and that's where things
have gone awry. To even consider robots having the ability to kill... there's something pretty surreal about that, don't you think?
For us to have safe robots in general society, they will have to obey certain laws pretty similar to what Isaac Asimov wrote in his three basic laws of
robotics. Taking such machines into warfare would then render them pretty useless: all you need is to hint that there's a single human in the attack
area, and the First Law of Robotics will cause them to refuse to carry out their orders. But if there are no humans to be hurt, then robot-vs-robot wars
are perfectly envisionable and acceptable.
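The First Law veto described above boils down to a simple precedence rule: the no-harm check overrides any order. Here's a toy sketch of that logic in Python; the function name and inputs are invented for illustration and aren't from any real robotics system:

```python
# Toy sketch of a simplified Asimov-style engagement check.
# All names here are hypothetical; this only illustrates the
# "First Law vetoes orders" precedence discussed above.

def may_engage(target_is_robot: bool, humans_possibly_present: bool,
               ordered_to_attack: bool) -> bool:
    """Return True only if engaging violates none of the (simplified) laws."""
    # First Law: a robot may not injure a human being. Any hint of a
    # human in the attack area vetoes the order outright, regardless
    # of what the robot was commanded to do.
    if humans_possibly_present:
        return False
    # Second Law: obey human orders, except where that would conflict
    # with the First Law (already checked above).
    if not ordered_to_attack:
        return False
    # Robot-vs-robot engagement with no humans at risk is the only
    # permitted case in this simplified model.
    return target_is_robot

# A single hinted human in the area makes the robot refuse its orders:
print(may_engage(target_is_robot=True, humans_possibly_present=True,
                 ordered_to_attack=True))   # False
print(may_engage(target_is_robot=True, humans_possibly_present=False,
                 ordered_to_attack=True))   # True
```

Note the ordering of the checks is what encodes the laws' precedence: the human-presence test runs before the order test, so a spoofed "human in the area" signal is enough to neutralise the machine, exactly the weakness described above.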