
US military to develop 'ethical' robots

posted on Dec, 1 2008 @ 01:27 AM
Sinks $4 billion into scheme

By Aharon Etengoff in San Francisco @ Monday, December 01, 2008 6:31 AM

The US military is reportedly planning to sink approximately $4 billion into the development of "ethical" robots.

Colin Allen, a British robotics expert at Indiana University, has been asked by the US Navy to advise them on building robots that do not violate the Geneva Conventions.

"The question they want answered is whether we can build automated weapons that would conform to the laws of war. Can we use ethical theory to help design these machines?" Allen told the Telegraph.

Although Allen later claimed the Telegraph had "greatly exaggerated" the scope of his role on the project, he conceded that the story "successfully captured why we need to be thinking about these ethical issues now".

According to Ronald Arkin of the Georgia Institute of Technology, robots were capable of performing "more ethically than human soldiers." Arkin explained that robots could easily function without negative emotions that adversely affect human battlefield judgement.

It should be noted that advanced military drones and robots are currently controlled remotely by human handlers. However, researchers are attempting to develop software "soldier bots" capable of automatically identifying targets and weapons. In addition, the bots would be designed to distinguish between tanks, armed men, ambulances and civilians.
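To illustrate the kind of discrimination logic the article describes, here is a toy rule-based sketch. None of these names or rules come from any real "soldier bot" software; the categories (tank, ambulance, armed person, civilian) are simply the ones the article mentions, and the defaults are assumptions.

```python
# Hypothetical sketch of a rule-based target-discrimination step.
# The detection dict and its keys ("kind", "armed") are invented
# for illustration; real systems would work from sensor data.

def may_engage(detection: dict) -> bool:
    """Return True only if the detection is a clearly lawful target."""
    kind = detection.get("kind")      # e.g. "tank", "ambulance", "person"
    armed = detection.get("armed", False)

    if kind == "ambulance":
        return False                  # protected under the Geneva Conventions
    if kind == "tank":
        return True                   # military hardware
    if kind == "person":
        return armed                  # unarmed people are never targets
    return False                      # unknown object: default to holding fire

print(may_engage({"kind": "ambulance"}))               # False
print(may_engage({"kind": "person", "armed": True}))   # True
```

Even in this toy form, the hard part is obvious: everything hinges on the classifier getting "kind" and "armed" right in the first place, which is exactly the discrimination problem the researchers are wrestling with.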

Nevertheless, Noel Sharkey of Sheffield University has expressed concern over taking humans completely out of the loop. According to Sharkey, the concept of a robot granted the authority to decide whether or not to kill an individual was "terrifying".



posted on Dec, 1 2008 @ 01:30 AM
Does anybody else think this is just a complete load of crap and a waste of good American tax dollars?

No machines can develop or have emotions!

[edit on 1-12-2008 by TrainDispatcher]



posted on Dec, 1 2008 @ 01:42 AM
There are 3 Laws of Robotics, as told by Asimov:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
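Purely as an illustration, the three laws read like a priority-ordered veto list, where each law only applies if no higher law objects. A toy sketch (the action flags below are invented, not from Asimov):

```python
# Toy sketch of Asimov's Three Laws as a priority-ordered filter.
# "harms_human", "ordered", "self_destructive" are made-up flags
# standing in for judgements a real system would have to compute.

def permitted(action: dict) -> bool:
    """Decide whether an action is allowed under the Three Laws."""
    if action.get("harms_human"):
        return False                  # First Law vetoes everything else
    if action.get("ordered"):
        return True                   # Second Law: obey human orders
    # Third Law: avoid self-destruction when no higher law applies
    return not action.get("self_destructive")

print(permitted({"harms_human": True, "ordered": True}))  # False
print(permitted({"ordered": True}))                       # True
```

Of course, the whole debate in the article is about the First Law judgement ("does this harm a human?"), which no amount of tidy priority ordering can compute for you.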



posted on Dec, 1 2008 @ 02:16 AM
Oh, wouldn't robots like the screamers sword type 1 be kick ass on the battlefield. Air drop them on a country and come back later with it completely pacified. Might have a tough time distinguishing civilians though, unless it could smell gunpowder, explosives or gun oil residue and not just heartbeat.



posted on Dec, 1 2008 @ 02:27 AM
The question here is of how intelligent the robot will be. This is an issue of artificial intelligence (AI). It is a case of weak AI against strong AI. Strong AI is where the machines can have mental states, they can understand and feel emotions, have different mindsets and can act free from the views and opinions of their creators.

If we can build such robots then they will have to be trained in such a manner so as to reflect the views of the government that they belong to. "Ethical" ultimately means that which is ethical to that country, especially in warfare; capturing an enemy soldier may be ethical for one country and not for the other, for example.



posted on Dec, 1 2008 @ 02:32 AM

Originally posted by pluckynoonez
There are 3 Laws of Robotics, as told by Asimov:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


Oh plucky you forgot the 'zeroth' law:

0. A robot may not act against the military-industrial complex, or through inaction allow the military-industrial complex to come to harm.

There, fixed that for you!



posted on Dec, 1 2008 @ 07:29 AM
this is scary and breathtaking at the same time

we all know that this is inevitable and one day robots will be much more than science fiction and dreams


with increasing technology the world is at our fingertips only our fingertips are reading braille written on a tight rope suspended 1000 feet in the air

it will take a lot of sacrifice and decisions will need to be made that will be ultimately responsible for mankind's future

i just hope we take a productive approach with several failsafes which are used to perfect the technology rather than jumping into it without full understanding of the science involved


