posted on Jan, 28 2020 @ 01:30 AM
there are three main reasons why fully autonomous weapons are a bad idea.
1. hacking, errors, and loss of direct control
anything autonomous is computer controlled, be it standard algorithms or AI (more on that in another point).
anything so controlled is subject to hacking, control errors, or mechanical "Murphy's law" faults. you would find out about a problem far too late and, having no direct control, might not be able to stop it even if it was discovered early enough.
along with that, a human (with all our faults) can use reasoning, experience, and even a change of priorities/targets/orders better than any pre-programmed machine.
2. futuristic threats .... AI
one thing the first Terminator movie pointed out is that the AI became self aware, and when the people (an understandable and realistic action) wanted to "shut it off", it felt threatened... so it defended itself (as any organism that wants to "live" would) with lethal force...
that would be just one of many paths a self aware AI could take. others are insanity, evil intent, insecurity, or interpreting its "orders/directives" (I, Robot and the rules of robotics ring a bell) differently than humans intended, etc.
all of which "organic" intelligence (humans) does, so why would an AI, whose goal is to recreate that intelligence with mechanical parts, be any different?
hell, we dont understand our own "intelligence", cant predict our own behaviors (who will turn evil, go insane, or exactly when someone becomes self aware, etc.), cant cure our own illnesses, or hell, even stop evil people from going off the rails.
but we think we can predict when an AI goes sentient, know what its intentions are, and "shut it down"?
3. making war too easy and "neat"
the reason conflicts dont often go "global" (for lack of a better term) is that people not only see the horrors of war (all aspects), but also have to be the ones to "kill" the enemy, with the inevitable collateral damage. they (along with the people who survive a war) KNOW what dealing death is, and this plays a role not only in trying to stop the current war they are in (be it through negotiations, elimination of the enemy, or a combination of both), but in how they fight (preventing war crimes as best they can, limiting destruction, choosing what weapons to use or not use, etc.) and, most importantly, in whether they should engage in war at all.
when war becomes too automated, the killing gets easier. it is easier to justify because A. "its just machines", and B. you convince yourself it is (or will be) limited to "the bad guys".
if you have evil intent (ex. terrorists, or fascists like the nazis), then you get all the effect with no personal risk.
but worst of all, it becomes easier to start, easier to accept, and damn hard to end.
there are two great examples of this.
one is the book "Level 7".
the other is the Star Trek episode that took automated war to its extreme (but realistic) conclusion, "A Taste of Armageddon".
look, I am not saying we shouldn't try to limit the risks our brave military people face when fighting wars.
life is precious, and if war is necessary, then ways to help limit our losses are good.
but there has to be a LINE IN THE SAND (sorry, no better description) as to how automated we go.
just like we limit what weapons (ex. no going nuclear or chemical as standard options) and tactics (no total area annihilation) we use in a conflict unless there is no other choice.
the sad part in all these decisions is that this tech is already out there and being improved on.