

slaughter bots



posted on Jan, 22 2020 @ 11:44 PM
video source - combines fiction with op-ed

obligatory summary :

the development of cheap AI-driven autonomous assassin " drones " - that can target individuals - even cheap enough to individually target a demographic within an environment

nothing new - " count zero " had the slam hound in 1986

but an interesting presentation

the carrier YouTube channel has some very good stuff

posted on Jan, 23 2020 @ 02:37 AM
a reply to: ignorant_ape

I have seen this video before

no matter how many times I see it, it never fails to scare the crap out of me.
not as, say, a techno horror movie does... but because it is real, or damn near real life.

no matter how one tries to rationalize it or ignore it away, if you have any logic you know you can't.

the military ALREADY HAS autonomous weapons they admit can make kill decisions on their own.
their claims of "final human decision" ring hollow, and I suspect apply only when the cameras are on.

if they already have this tech, miniaturization of it is surely already being developed.

along with scientists trying every day to develop and improve AI.

any tech we develop will be developed or stolen by our enemies... not if but when....

I hope people realize that this video is a warning and wake up call......

sometimes horror sci-fi isn't a theme, but a warning...


posted on Jan, 23 2020 @ 02:56 AM
Autonomous kill weaponry is against Geneva conventions though. Calm thine breasticles.

posted on Jan, 23 2020 @ 03:22 AM
a reply to: Archivalist

sigh - " expanding projectiles " or " hollowpoint // dum-dum " projectiles - are banned under the hague declaration (commonly misattributed to the geneva convention)

but in some jurisdictions - are actually the preferred police load

so - unless you are at war with a state actor - the geneva convention is irrelevant

posted on Jan, 23 2020 @ 03:27 AM

originally posted by: Archivalist
Autonomous kill weaponry is against Geneva conventions though. Calm thine breasticles.

so are lots of things.

but unless the victors want to punish those who broke it, it's just words on paper.

sadly this genie is out of the bottle - that paper will not stop it


posted on Jan, 27 2020 @ 12:35 AM
Semi-autonomous vehicles are not, though - instead of having one person control one drone, just have them control a swarm. It might be stretching the rules, but at some point it is coming.

a reply to: scrounger

posted on Jan, 28 2020 @ 12:59 AM

originally posted by: CobaltCPD

Semi-autonomous vehicles are not, though - instead of having one person control one drone, just have them control a swarm. It might be stretching the rules, but at some point it is coming.

a reply to: scrounger

interesting idea and has some practical merit.

But the reality/practicality is that controlling a "swarm" would require autonomous target elimination/killing.

the idea of one person controlling one drone is more practical for direct kill/no-kill control (do I think the military always does this? NO, but that's another discussion).

for a swarm to be practical (be it single or especially multiple targets), each drone could not have a kill/no-kill control.

just as the video suggests, to be effective it is given a set of parameters for a target/targets and left to complete the mission.

as the example shows, everything from breaching the obstructions (protection), to target seeking, selecting, and attacking, to determining if termination was achieved or another attack was needed, was done autonomously.

either way, how many "geneva violations" go on DAILY that are not enforced?

this would be (IMO already) violated.


posted on Jan, 28 2020 @ 01:30 AM
there are three reasons why fully autonomous weapons are bad:

1. practical.....
anything autonomous is computer controlled, be it standard algorithms or AI (more on this in another point).
anything so controlled is subject to hacking, control error, or mechanical "murphy's law" faults. you would find out about it well too late and maybe not be able (due to no direct control) to stop it, even if discovered early enough.
along with that, a human (with all our faults) can use reasoning, experience, and even a change of priority/targets/orders better than a pre-programmed system.

2. futuristic threats .... AI

one thing the first terminator movie pointed out is that the AI became self aware, and when the people (an understandable and realistic action) wanted to "shut it off", it felt threatened... it defended itself (as any organism that wanted to "live" would) with lethal force...

this would be one of many outcomes that a self aware AI could take.. others are insanity, evil intent, insecurity, interpreting its "orders/directives" (I, Robot and the rules of robotics ring a bell) differently than humans intended, etc.
all of which "organic" intelligence (humans) does, so why would AI, whose goal is to recreate that but with mechanical parts, be any different?

hell, we don't understand our own "intelligence", can't predict our own behaviors (becoming evil, insane, or self aware, etc.), can't cure our own illnesses, or hell, even stop evil people from going off the rails.

but we can predict when an AI goes sentient, what its intentions are, and "shut it down"?


3. making war too easy and "neat"

the reason conflicts don't often go "global" (for lack of a better term) is that people not only see the horrors of war (all aspects), but also have to be the ones to "kill" the enemy, with the inevitable collateral damage. they (along with the people who survive war) KNOW what dealing death is and its outcomes.

this plays a role not only in trying to stop the current war they are in (be it negotiations, elimination of the enemy, or a combination of both), but in how they fight (preventing war crimes as best they can, limiting destruction, what weapons to use/not use, etc.) and, most importantly, whether they should engage in another one.

when war becomes too automated, the killing becomes easier, and it's easier to justify because A. it's just machines and B. you convince yourself it's just/will be limited to "the bad guys".
if you have evil intent (e.g. terrorists or fascists like the nazis), then you get all the effect with no personal risk.
but worst of all, it becomes easier to start, easier to accept, and damn hard to end.

there are two great examples of this

one is the book "level 7".
the other is the star trek episode where they took automated war to its extreme (but realistic) conclusion, "a taste of Armageddon".

look, I am not saying we shouldn't limit the risk to our brave military people who fight wars.
life is precious, and if war is needed, then ways to help us limit our losses are good.

but there has to be a LINE IN THE SAND (sorry, no better description) as to how automated we go.
just like we limit what weapons (e.g. not going nuclear or chemical as standard options) and tactics (total area annihilation) we use in conflict unless there is no other choice.

the sad part in all these decisions is that this tech is already out there and being improved on.


posted on Jan, 28 2020 @ 05:45 PM

originally posted by: scrounger
the sad part in all these decisions is that this tech is already out there and being improved on.

Well, that's where the money is. I forget who on ATS first said it, but it really seems like all war is these days is a way for the various militaries of the world to use up their old war machines and make new ones, either to sell or use. War is not about acquiring land or resources anymore, because you don't want to blow up potential customers. It's all about who can break more of the other side's toys so they can get more money to make new ones.

It sucks to have a military-driven economy, but that's where we are. That's why we have all this cool consumer electronic crap to placate and distract us. Constant and unending war.

posted on Jan, 28 2020 @ 06:42 PM
Send in the bots to eradicate coronavirus!

And anybody with coronavirus.

And anything that moves!!
