
Why are we teaching robots how to teach themselves to kill us?


posted on Oct, 19 2010 @ 07:41 PM
iCub robot learns to fire a bow and arrow into the bullseye in 8 shots!

The link above will take you to the article and a YouTube video (I can't embed to save my life, so here's the link, guys).

In this link you will see a humanoid robot that is programmed with an algorithm they call ARCHER:

Augmented Reward Chained Regression

After coding in the basic ability to hold the bow and so on, the bot then deductively figured out how to effectively use the bow and arrow to hit the bullseye in 8 tries... 8!!! I did archery in grade school; the average human is lucky to get proficient in eight thousand!!!

As the comments on the site so succinctly put it: "We're all gonna die!!"
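For the geeks: the clever part is that every miss tells the learner which way and how far to correct, so it can regress straight toward better release parameters instead of flailing at random. Here's a toy sketch of that idea in Python; the shot model, the noise, and every number are invented for illustration, and none of this is the actual iCub/ARCHER code:

```python
# A minimal sketch of error-driven learning in the spirit of ARCHER.
# The shot model and all constants are hypothetical, illustration only.
import numpy as np

rng = np.random.default_rng(0)

def shoot(params):
    """Toy shot model: params are (pitch, yaw) release angles.
    Returns the arrow's miss vector relative to the bullseye."""
    true_aim = np.array([0.3, -0.2])        # the unknown correct release
    noise = rng.normal(scale=0.01, size=2)  # motor/sensor noise
    return (params - true_aim) + noise

params = np.zeros(2)             # first guess: aim dead ahead
for attempt in range(1, 9):      # the iCub needed 8 tries
    miss = shoot(params)
    distance = np.linalg.norm(miss)
    print(f"shot {attempt}: missed by {distance:.3f}")
    if distance < 0.02:          # close enough to call it a bullseye
        print("bullseye!")
        break
    # The chained-regression idea: step the parameters against the
    # observed miss instead of searching blindly.
    params = params - 0.6 * miss
```

The real robot has to learn a whole arm's worth of joint parameters, not two numbers, but the principle is the same: when every error is informative, eight shots is plenty.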



posted on Oct, 19 2010 @ 08:32 PM
reply to post by roguetechie
 


The obvious answer to the question "Why are we teaching robots to teach themselves to kill us?" is: in order to kill us.

I'm not being flippant with that either.

Prepare to die.



posted on Oct, 19 2010 @ 08:51 PM
Gotta have rules. It all falls apart otherwise.

The Three Laws of Robotics are as follows:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.



posted on Oct, 19 2010 @ 09:01 PM
LOL, I know, it's sad when the answer to a question is that simple and that frightening... But then again, we live in a country whose armed forces produce thousands of PowerPoints a year about shortening and automating
"THE KILL CHAIN"

I mean, as a geek I love tech, and I love the idea of a robot I don't have to program that gets better at things as it goes. But teach robots to learn, then teach them to kill, then on top of that teach them to preserve themselves (and somewhere soon all of these will get combined), then add in more and more complex and competent swarming and data-sharing techniques, and the potential is there for some serious SKYNET carnage.

Note: some will say, bah, that won't happen anytime soon, but I say: do we really know when it will happen? When you combine Moore's law with ever more sophisticated evolutionary algorithms, we simply don't know where the thin red line is: the crucial complexity level beyond which things stop being our creations and start deciding they don't want to do what we tell them to. And hey, with gun control and nanny-state Nazis progressively disarming the public and anyone else they can (except their bodyguards and enforcers), the robots will probably be the only ones with the weapons that day... LOL, between politicians and armed robots we're so DEAD.



posted on Oct, 20 2010 @ 12:26 AM
We will have an answer to that question when Skynet becomes operational. I for one would like to welcome our new robotic overlords of benevolence and all knowledge.

The other answer would be so we can continue to kill each other, but we would then be able to blame the robots instead of ourselves.



posted on Oct, 20 2010 @ 12:40 AM
I think that by the time robots ever decide to kill humans, bows and arrows and even guns for that matter will be obsolete. All they need to do is create some self-replicating nanobots and send them out to infect humans. Humanity won't know what hit it.



posted on Oct, 20 2010 @ 02:02 AM
Why is it that people have so little faith in what our current technology can do?

This question is in response to the comment that by the time robots decide to kill us, guns etc. will be obsolete...

To clarify my point: I'm not talking about the mythical singularity (the one people might want to start lobbying Congress to put the brakes on, given the attempts to bring it on ASAP). There is an entire superstructure of parallel and individual efforts to spawn a SKYNET-type self-aware AI with beyond-human intelligence. Then there are the PDFs and articles in various university tech news feeds from early 2000 and the years around then talking about e-life (by the definition commonly agreed on at the time, e-life meant pieces of code from viruses and other sources that had, for lack of a better term, served their purpose, gotten loose onto the net, and were evolving and surviving on their own). All of those articles and PDFs, and every reference to them as far as I've been able to find, have been purged from the internet, or at least the parts of it I can get to. That's a thread in itself, but not what I'm talking about.

What I'm talking about is the same kind of intelligence, melded with deadly intent, that a hobo spider or a shark pack possesses. I'm talking about weaponry-based systems with just enough programming to be trained to do everything in their power to continue to function. These systems would most likely form a semi-autonomous ecosystem or hive-type SUITE of robots designed to keep each other running and pursue a moderately complex military tactical doctrine in the attempt to accomplish a mission. I'm envisioning something along the lines of the FCS program's robotics suite all bundled into one, with recon, repair, resupply, attack, and other systems packaged as a complete working group.

Now imagine a "veteran cadre" of such systems that have been upgraded and have "learned" in the field to monitor commo and gather heuristic intelligence. Now imagine an article appears on a website like fas.org declaring the stand-down and decommissioning of this cadre. The cadre sees a threat, and while bearing no malice (much like killer bees), THEY FIGHT TO THE DEATH to avoid being taken out of service, which would violate their programming to survive.

This is the threat I worry about.



posted on Oct, 20 2010 @ 07:52 AM

Originally posted by roguetechie
I did archery in grade school; the average human is lucky to get proficient in eight thousand!!!

At 3.5 metres from the target?

I have only one question: what happens if they move the target 0.5 metres closer?



posted on Oct, 20 2010 @ 08:54 AM
reply to post by roguetechie
 


Technology and advancement. I always wondered if we are in a paradox (the past is the future, the future is the past). As we advance, we seem to be getting a bit more stupid (imo). When we look back into our history, we see evidence that advanced technology was used, technology that even today we don't have.

Our ancestors seemed to know a great deal more about astronomy, the planet, magnetic fields, etc., while at the same time the culture was advanced, the people appeared to lead simple lives (a slower pace of life and so on), and almost everything they did had a purpose that fit into their culture very well.

If we aren't in a paradox, then I would like to vote for Bender from Futurama.



posted on Oct, 20 2010 @ 02:11 PM

Originally posted by snowspirit
Gotta have rules. It all falls apart otherwise.

The Three Laws of Robotics are as follows:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The only problem with that is, none of the computer programming schools have banned Victor Von Doom from attending, so as soon as he figures out how to reprogram the robots for his evil purposes, we are doomed.



posted on Oct, 20 2010 @ 02:20 PM

Originally posted by ArMaP
I have only one question: what happens if they move the target 0.5 metres closer?
I don't see any reason you can't put some kind of distance detection technology (maybe a laser rangefinder?) on the robot and get it to make adjustments based on the distance.

blacklightarrow.wordpress.com...


In February 2007, the US Defense Advanced Research Projects Agency (DARPA) announced that it was looking for a system that would detect all the elements within a sniper’s environment, collate it and produce a point of aim that would result in a cold bore (first shot) kill. In other words, they wanted a system that would read the range to target, the target’s movement, measure the speed and distance of the wind, gauge the humidity and temperature of the air, take into account the angle to the target, the altitude of the shooter, the power, calibre and weight of the bullet, and give the sniper a point to aim at.

Three and a half years on, DARPA has announced that it has selected Lockheed Martin to go ahead with the second phase of the development of the One Shot System. At the end of the first phase, DARPA tested Lockheed Martin’s system, and in spite of certain failures and shortcomings, announced that it was satisfied enough to award the $6.9 million for further development.

Lockheed Martin’s system consists of an off-the-shelf spotting scope coupled to a dedicated rifle scope. The spotting scope uses a laser to measure range and angle to target, while the externally-attached “magic box” simultaneously measures wind, air pressure, temperature and humidity. These calculations would then be analysed and fed to the rifle scope, presenting the solution as a red cross in the scope’s reticle. Presumably, the sniper would then move his rifle until his crosshairs mated with the red cross or alternately the red cross was laid on the target.

Replacing the human sniper with a robot sniper using such advanced technology is not unthinkable. Human snipers have to take their shots between heartbeats so their pulse doesn't throw off their accuracy, so right off the bat a robot has an advantage: it has no heartbeat to shoot between.
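To put rough numbers on it, here's a toy version of the kind of firing solution the article describes, using a flat-fire approximation (gravity and crosswind only, no drag). Real systems model drag, spin drift, air density, and more; nothing below comes from Lockheed Martin's actual One Shot system:

```python
# Toy first-shot firing solution: flat-fire approximation, illustration only.
G = 9.81  # gravity, m/s^2

def firing_solution(range_m, muzzle_velocity, crosswind):
    """Return (elevation, windage) holds in milliradians."""
    tof = range_m / muzzle_velocity   # time of flight, s (ignores drag)
    drop = 0.5 * G * tof ** 2         # how far the bullet falls, m
    drift = crosswind * tof           # crosswind deflection, m
    # Convert linear offsets at the target into angular holds
    elevation = drop / range_m * 1000.0
    windage = drift / range_m * 1000.0
    return elevation, windage

# e.g. an 800 m shot at 850 m/s with a 3 m/s full-value crosswind
elev, wind = firing_solution(800, 850, 3.0)
print(f"hold {elev:.1f} mrad up and {wind:.1f} mrad into the wind")
```

The point is that every input the article lists (range, wind, temperature, angle to target) just changes the numbers in a calculation like this one, and a computer does that math faster and more consistently than any human spotter.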



posted on Oct, 20 2010 @ 03:19 PM

Originally posted by Arbitrageur
The only problem with that is, none of the computer programming schools have banned Victor Von Doom from attending, so as soon as he figures out how to reprogram the robots for his evil purposes, we are doomed.


Damn those evil villains.

Doomed by Victor Von Doom


They always said technology would be the end of us.



posted on Oct, 20 2010 @ 04:23 PM
I remember when the idea of a robot doing something like this was PURE SCI-FI... less than a DECADE AGO!

DARPA, in the early part of this decade, had several of its DARPA Challenge prizes devoted to things such as this... I remember the first winner to develop a learning algorithm for a simple task: the entire company disappeared within a couple of months of the announcement that they'd won the prize!!

Now it's an everyday occurrence to see a robot teaching itself a new task! People don't even consider it remarkable.

This brings me to my next point. When it comes to software, especially now that we have software that optimizes software and can rewrite itself, the pace of progress has ceased being linear and become GEOMETRIC!

And with the level of hardware that comes out every year far outstripping our ability as humans to build software complex enough to really tax it (outside of very specialized areas this is the case: you can now play video games while writing a document and getting GPS DIRECTIONS on your cellphone and still not max out its processing power), these self-optimizing algorithms and evolutionary technologies have AMPLE space and capability to grow into.
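For anyone who hasn't watched one run, an evolutionary algorithm at its core is just a loop of selection and mutation. A bare-bones sketch follows; the fitness function and every constant here are arbitrary stand-ins, not any real self-optimizing system:

```python
# Minimal evolutionary algorithm: selection + mutation, illustration only.
import random

random.seed(42)

def fitness(x):
    """Toy objective with its peak at x = 3."""
    return -(x - 3.0) ** 2

population = [random.uniform(-10, 10) for _ in range(20)]
for generation in range(50):
    # Selection: keep the fitter half of the population
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # Variation: each survivor spawns a slightly mutated child
    children = [x + random.gauss(0, 0.5) for x in survivors]
    population = survivors + children

best = max(population, key=fitness)
print(f"best solution after 50 generations: {best:.3f}")  # approaches 3.0
```

Scale the genome up from one number to a program or a control policy, and give it real hardware to grow into, and you have the kind of open-ended self-improvement this post is worried about.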

I feel on a gut level that we humans have no idea what the hardware we already have can do, let alone the state of the art that pushes forward every single day. We are about to witness something that could be good or could be bad, but whatever it is, it's for sure going to be an eye-opener as the digital life forms we've built in labs really start doing self-optimization and the rest.

What we've seen already, including algorithms that teach robots to lie, plus the fact that robotics technology is being leveraged mostly for destructive purposes, gives me very little hope in a Three Laws-type ability to keep our creations from being able to hurt us.



posted on Oct, 20 2010 @ 04:29 PM
Since we lost the Civil War, the only thing you see our Federal Government fund is something that kills, helps to kill, or helps the helper kill.

Are we funding a robot to clean up the Pacific Garbage Patch (a floating trash island as big as the United States)???
Are we funding a robot to explore the bottom of the ocean?
Are we funding a robot to sit down and crank a generator to power our houses?

If it ain't fer killin' it's not Government funded.



posted on Oct, 20 2010 @ 07:44 PM
reply to post by Pervius
 
Well, it might have something to do with the U.S. Constitution.

It says something about providing a common defense, which I guess can involve killing people.

"We the People of the United States, in Order to form a more perfect Union, establish Justice, insure domestic Tranquility, provide for the common defence, promote the general Welfare, and secure the Blessings of Liberty to ourselves and our Posterity, do ordain and establish this Constitution for the United States of America."

I don't remember anything in the Constitution about floating islands.

You do raise a good point about where our priorities lie.


