
Killbots Are Your Friends


posted on Aug, 3 2009 @ 01:35 AM
August 2, 2009: The U.S. Air Force recently released a report (Unmanned Aircraft Systems Flight Plan 2009-2047) in which they predicted the eventual availability of flight control software that would enable UAVs to seek out and attack targets without human intervention. This alarms many people, who don't realize that this kind of software has been in service for decades.

It all began towards the end of World War II, when "smart torpedoes" first appeared. These weapons had sensors that homed in on the sound of surface ships. Another type detected the wake of a ship and followed it until its magnetic fuze detected that it was underneath the ship, then detonated the warhead. The acoustic homing torpedoes saw use before the war ended, but the wake homers were perfected and put into service (by Russia) after the war ended.

Another post-war development was the "smart mine." This was a naval mine that lay on the bottom, in shallow coastal waters. The mine had sensors that detected noise, pressure, and metal. With these three sensors, the mine could be programmed to detonate only when certain types of ships passed overhead. Thus with both the smart mines and the smart torpedoes, once you deploy them, the weapons are on their own, seeking out and destroying a target. These weapons were not alarming to the general public, but aircraft that do the same thing are.
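The three-sensor detonation logic described above can be sketched in a few lines. This is a hypothetical illustration only: the sensor names, thresholds, and the "all three must agree" rule are assumptions made for clarity, not real fuzing parameters.

```python
# Hypothetical sketch of a bottom mine's target-selection logic:
# acoustic, pressure, and magnetic sensors, with a programmable
# profile describing the kind of ship the mine should engage.
# All names and numbers here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Contact:
    noise_db: float        # acoustic signature strength
    pressure_delta: float  # pressure wave from the hull passing overhead
    magnetic_ut: float     # magnetic anomaly, in microtesla

@dataclass
class TargetProfile:
    min_noise_db: float
    min_pressure_delta: float
    min_magnetic_ut: float

def should_detonate(contact: Contact, profile: TargetProfile) -> bool:
    # All three sensors must exceed the programmed thresholds before
    # the mine fires, which lets it ignore small craft and sweep gear.
    return (contact.noise_db >= profile.min_noise_db
            and contact.pressure_delta >= profile.min_pressure_delta
            and contact.magnetic_ut >= profile.min_magnetic_ut)

# A profile tuned for large steel-hulled ships:
large_ship = TargetProfile(min_noise_db=120, min_pressure_delta=5.0,
                           min_magnetic_ut=50)
freighter = Contact(noise_db=140, pressure_delta=8.0, magnetic_ut=90)
patrol_boat = Contact(noise_db=100, pressure_delta=1.2, magnetic_ut=10)
```

With this profile the freighter trips all three thresholds while the patrol boat trips none, so only the freighter would be engaged; requiring agreement from all three sensors is what makes the mine selective rather than indiscriminate.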

However, smart airborne weapons have also been in use for decades. The most common is the cruise missile, which is given a target location and then flies off to find and destroy the target. Again, not too scary. But a UAV that uses the same technology as smart mines (sensors that find, and software that selects, a target to attack) is alarming. What scares people is that they don't trust software. Given the experience most of us have with software, that's a reasonable fear.

But the military operates in a unique environment. Death is an ever-present danger. Friendly fire occurs far more often than people realize (or than the military will admit). Combat troops were reluctant to talk about friendly fire (mainly because of guilt and PTSD), even among themselves, and the military had a hard time collecting data on the subject. After making a considerable effort (several times after World War II), it was concluded that up to 20 percent of American casualties were from friendly fire. So military people and civilians have different attitudes towards robotic killing machines. If these smart UAVs bring victory more quickly, then fewer friendly troops will be killed (by friendly or hostile fire). Civilians are more concerned about the unintentional death of civilians or friendly troops. Civilians don't appreciate, as much as the troops do, the need to use "maximum violence" (a military term) to win the battle as quickly as possible.

The air force has good reason to believe that it can develop reliable software for autonomous armed UAVs. The air force, and the aviation industry in general, has already developed highly complex and reliable software for operating aircraft. For example, automatic landing software has been in use for over a decade. Flight control software handles many more mundane functions, like dealing with common in-flight problems. This kind of software makes it possible for military aircraft that are difficult (or, in the case of the F-117, impossible) to fly unaided to be controlled by a pilot. Weapons guidance systems have long used target recognition systems that work with a pattern recognition library, enabling many different targets to be identified and only certain ones to be attacked. To air force developers, autonomous armed UAVs that can be trusted to kill enemy troops, and not civilians or friendly ones, are not extraordinary, but the next step in a long line of software developments.
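The "pattern recognition library" idea in the paragraph above (identify many target types, attack only approved ones) can be sketched as a lookup plus an engagement check. Everything here is a made-up illustration: the signature tuples, class names, and engagement list are assumptions, not any real system's data.

```python
# Hypothetical sketch: a sensed signature is matched against a library
# of known target types, and only positively identified types on the
# approved engagement list may be attacked. Signatures here are fake
# feature tuples; a real system would use radar/IR/image features.

SIGNATURE_LIBRARY = {
    (1, 0, 1): "tank",
    (1, 1, 0): "truck",
    (0, 0, 1): "car",
}

ENGAGEMENT_LIST = {"tank"}  # only approved military targets

def classify(signature):
    # Unknown signatures return None rather than a best guess.
    return SIGNATURE_LIBRARY.get(signature)

def may_engage(signature) -> bool:
    # Fail safe: engage only if the target is positively identified
    # AND its type is on the approved list.
    target = classify(signature)
    return target is not None and target in ENGAGEMENT_LIST
```

The design choice worth noting is the fail-safe default: an unrecognized signature is never engaged, which is exactly the property that would let developers argue such a system can avoid civilians and friendly forces.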



posted on Aug, 3 2009 @ 07:21 AM
So... what are people afraid of? That something like The Matrix will happen? As long as these machines can't choose their own targets, I ain't worried.



posted on Aug, 3 2009 @ 07:40 AM
I have two problems with trusting machines. One, a machine is only as good as the information fed into it, either in real time or at production. I've seen way too much bad code in my short life, at varying levels of importance, to expect any sort of infallibility here.

Second, shorts happen all the time with hardware, and security breaches happen all the time with software. The last thing we need is some angsty 12-year-old hacking into a bunch of war machines.

Anybody see these "corpse eating" robots yet?

The article has been changed to say they'll only consume plant matter, but fuel from animal matter is still an option.

If your OS suffers a catastrophic failure, your computer crashes. If your OS gets a worm, some clown can see your MySpace password. If your killer flesh-eating robots crash or get infected, hopefully they just cease to function.



posted on Aug, 4 2009 @ 12:08 AM
You know, I had no idea that they were building prototype robots powered by eating flesh. Do you have the original descriptions where they talk about eating animal flesh?

So now a robot can hunt us, kill us, and eat us

nice

we build our own predators



posted on Aug, 4 2009 @ 07:22 PM

Originally posted by Golden Generic
So now a robot can hunt us, kill us, and eat us

nice

we build our own predators



The human race has to go sometime. I wouldn't be surprised if it's by our own creations.



posted on Aug, 11 2009 @ 11:47 AM
In 1975 I designed and built my first robot using two 10-speed tires, a bent piece of 1/4-20 threaded rod, two bipolar stepper motors, a motorcycle battery, and more DPDT relays than I can remember.

Frankly speaking, I would say I understand the subject.
The problem is not the machines...
it's the operators.

The joke is...
you have PhDs and billions of dollars in design, master craftsmen with master's degrees and millions of dollars building them, college kids with college degrees and $100,000 of training flying them... and high school dropouts maintaining them.

The next war could easily use warriors who use Xbox controllers
for weapons.

The real problem is simplicity and numbers. The people who are building robots today are all about complexity, and don't know tactics any more than mechanics.
It is a testament to the men and women of our military that their weapons work at all. The first UAV was Israeli... a remotely controlled airplane... our military has resisted robotics from its first introduction... and still does...

They resisted aircraft carriers also, and aircraft...

Computers are not the problem here... or programming.

Military thinking is... you want to have the right to kill your own.
And if the soldier never leaves the United States... drives a robot by satellite on the other side of the world, and the robot is dropped out of the back of a high-flying cargo plane... there is something wrong with that.

You take land with infantry; you lose the war without control of the air.

Well, get over it. You can't kill an army that doesn't show up.
Remove the head enough times... and nobody will want the job.

Who cares about the soldiers... I want to kill the guy that gives the orders... let the soldiers go home and work their farms, sell shoes, and work on cars... I want the jerk that gave them a uniform... and robots are a walk-up-and-shoot-just-one-guy kind of weapon that can make that happen...

Tell me where the jerk is... drop a "rifle on a robot" in and let's get it on.
But don't start telling me you want to limit the war and keep our boys out of Cambodia, with no-fly zones and honorable conduct of the war.
The simple fact is, unlimited bad guys get unlimited weapons used against them.

or...
Do unto others as they do unto you.

You make my life hard... I will make your life short!



