
US Army is creating the ultimate AI killer by monitoring the brainwaves of supersnipers

posted on May, 11 2017 @ 07:10 PM
This is yet another step.


Army researchers are teaching artificial intelligence to learn from humans to become sharper shooters.

At the annual Intelligent User Interface conference, scientists from DCS Corp and the Army Research Lab revealed how training a neural network on datasets of human brain waves can improve its ability to identify a target in a dynamic environment.

The approach teaches AI to spot when a human has made a targeted decision, with hopes that it could one day be used to assess a battlefield scenario in real-time.


www.dailymail.co.uk...
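The quoted idea — train a model on labeled brainwave epochs so it learns to flag the moment a person spots a target — can be sketched in a few lines. To be clear, everything below is a hypothetical toy: synthetic single-channel "EEG", an assumed P300-style bump around 300 ms for target epochs, and a plain logistic model. It is not the actual DCS/ARL data or network.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 128                     # assumed sampling rate (Hz)
n_samples = fs               # one-second epoch
t = np.arange(n_samples) / fs

def make_epoch(target):
    """Synthetic single-channel epoch; targets get a P300-like bump near 300 ms."""
    noise = rng.normal(0.0, 1.0, n_samples)
    if target:
        noise = noise + 2.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
    return noise

# Alternate target / non-target epochs to build a labeled dataset.
X = np.array([make_epoch(i % 2 == 0) for i in range(400)])
y = np.array([1.0 if i % 2 == 0 else 0.0 for i in range(400)])

# Minimal logistic-regression "network" trained by gradient descent.
w = np.zeros(n_samples)
b = 0.0
for _ in range(500):
    z = np.clip(X @ w + b, -30, 30)        # clip to avoid overflow in exp
    p = 1.0 / (1.0 + np.exp(-z))           # sigmoid
    w -= 0.01 * (X.T @ (p - y)) / len(y)
    b -= 0.01 * np.mean(p - y)

z = np.clip(X @ w + b, -30, 30)
acc = np.mean(((1.0 / (1.0 + np.exp(-z))) > 0.5) == y)
print(f"training accuracy: {acc:.2f}")
```

A real system would use many EEG channels and a far richer model, but the shape of the problem is the same: supervised classification of short time windows into "target seen" versus "nothing salient."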

This is VERY important. AI keeps reaching these milestones and this would be another one.

AI just beat experts at the game of Go, has learned to play Atari titles, and won at poker, an incomplete-information game.

The common thread with all of these is that the environment isn't dynamic. AI has a tough time learning in a dynamic environment, which just means an environment that changes.

When you play these games, the environment doesn't change, so AI can play a million games of poker or Go against itself to get better and learn new strategies. It knows the environment of the Atari game it's playing, but look at this game here.



AI learns how to play the game without any instructions. If you change some of the bricks to balls though, the system would have trouble.
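That "bricks to balls" failure mode is easy to demonstrate with a toy model: train a classifier in one fixed environment, then change the environment and watch accuracy collapse. This is a made-up one-dimensional sketch of distribution shift, not anything to do with a real Atari agent:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_data(n, shift):
    """Two classes separated along one axis; `shift` moves the whole scene."""
    x0 = rng.normal(-1.0 + shift, 0.5, n)   # class 0
    x1 = rng.normal(+1.0 + shift, 0.5, n)   # class 1
    X = np.concatenate([x0, x1])
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return X, y

# "Train" a threshold classifier in the original, fixed environment.
X_train, _ = make_data(500, shift=0.0)
threshold = X_train.mean()                  # midpoint between the classes

def accuracy(X, y):
    return np.mean((X > threshold) == y)

acc_same = accuracy(*make_data(500, shift=0.0))
acc_shifted = accuracy(*make_data(500, shift=2.0))
print("same environment:   ", acc_same)     # high
print("changed environment:", acc_shifted)  # near chance
```

The model is fine as long as the world it was trained in stays put; shift the whole scene and its learned threshold becomes useless, which is roughly what happens to game-playing agents when the bricks become balls.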

They're training the AI to learn like a human so that it can respond like a human in dynamic environments.


The study comes as part of the multi-year Cognition and Neuroergonomics Collaborative Technology Alliance, according to Defense One.

And, it’s a step toward artificial intelligence that can solve problems in a changing environment, such as a military setting.

‘We know that there are signals in the brain that show up when you perceive something that’s salient,’ said Matthew Jaswa, one of the authors on the paper.


By training an AI to recognize these signals using massive datasets of human brainwaves, it could one day be able to instantly understand, in real time, when a soldier is making a targeted decision.


www.dailymail.co.uk...

EXTREMELY INTERESTING!

This needs to be said again:

By training an AI to recognize these signals using massive datasets of human brainwaves

This could be dangerous. It's a very clever approach that will allow AI to learn in a dynamic environment, but we don't fully understand how these brain waves work or what actions are associated with them.

You could be training an AI to recognize signals in human brainwaves, but what if it learns a signal associated with mass murder or genocide? You're talking about a MASSIVE DATASET of human brainwaves. This will be like the collective consciousness of humans, and there's no telling what it might learn. Sadly, humans have a capacity for brutal violence, and researchers will have to be careful about allowing AI to pick up our bad habits. We want AI to be better than us.

Also:


DARPA revealed it is funding eight separate research efforts to determine if electrical stimulation can safely be used to 'enhance learning and accelerate training skills.'

Ultimately, doing this could allow a person to quickly master complex skills that would normally take thousands of hours of practice.

The program, called the Targeted Neuroplasticity Training (TNT) program, aims to use the body's peripheral nervous system to accelerate the learning process.

This would be done by activating a process known as 'synaptic plasticity' – a key process in the brain involved in learning – with electrical stimulation.


www.dailymail.co.uk...

Imagine learning how to be a Doctor or learning Kung Fu in a matter of minutes like Neo:



Interesting times indeed



posted on May, 11 2017 @ 07:38 PM
a reply to: neoholographic

We're being told that AI will never, ever turn on humans whilst at the same time they're training it to turn on 'some' humans?

This is PRECISELY why AI will just kill us all. We're so annoying.



posted on May, 12 2017 @ 03:09 AM
a reply to: neoholographic

You have a point here, but when they start to teach the AI how to be humanly emotional, that's when I'll be very concerned, indeed. For now it's just a game of research and preparation.




posted on Nov, 14 2022 @ 10:21 AM

originally posted by: Joneselius
a reply to: neoholographic

We're being told that AI will never, ever turn on humans whilst at the same time they're training it to turn on 'some' humans?

This is PRECISELY why AI will just kill us all. We're so annoying.


Actually, world officials, Zuckerborg aside, are quite vocal about the huge danger of deploying weaponized AI without a human monitoring it to press the red button.



The question of what will happen when adversaries deploy autonomous weapons that do not seek a person in the loop for approval to use lethal firepower looms on the horizon for all militaries defending democratic states.

It seems reasonable to believe that even those states that have set some limits on AI capabilities will encounter adversaries who have no qualms about doing so, putting the states that limit integrating AI for national security at a considerable disadvantage.

www.c4isrnet.com...



posted on Nov, 14 2022 @ 10:32 AM
The thing is, from what I have read, the British snipers come back with PTSD while the US ones are these perfect super soldiers.

The idea that we are programming AI with PTSD, given the kind of job snipers do, is a bit worrisome.




posted on Nov, 14 2022 @ 11:37 AM
a reply to: nickyw

I have no idea how AI could develop some sort of PTSD. AI today is not sentient, but perhaps some sort of similar effect could be achieved. Most vets I have met with PTSD abhor violence and can recognize the bs nature of foreign and domestic policy where violence is involved.

If such a thing were to occur with sufficiently advanced AI, then there may well be a precedent for true objective morality and ethics based on the experience of AI and how it is less likely to do something that it knows will cause spikes of internal conflict.



posted on Nov, 14 2022 @ 10:19 PM
a reply to: neoholographic


That Technology Already Has Existed for Decades........



........www.youtube.com...



posted on Nov, 15 2022 @ 12:08 PM
a reply to: Zanti Misfit

That video is 2 hours long and you do not give the slightest hint as to the relevance of your claim. What's the timestamp, at least? I'm hoping you made a mistake and are not just lazily posting in a great thread.



posted on Nov, 15 2022 @ 05:05 PM

originally posted by: DirtWasher
a reply to: Zanti Misfit


That video is 2 hours long and you do not give the slightest hint as to the relevance of your claim. What's the timestamp, at least? I'm hoping you made a mistake and are not just lazily posting in a great thread.





Topic - Montauk, LI, and the "Philadelphia Experiment" (1943 to 1983). Information from an INSIDER Who Was There, and Mind Control Technology Developed by the U.S. Military in the 1980s.





OK Mr. Instant Gratification, If you Cannot View this Very Enlightening Video to the Half Way Mark at Least, and Somehow NOT Be Entertained or Enthralled by its Content, then you Sir/And Or Madam are in Elysium, and you're Already Dead!



posted on Nov, 15 2022 @ 05:48 PM

originally posted by: DirtWasher
a reply to: nickyw

I have no idea how AI could develop some sort of PTSD. AI today is not sentient, but perhaps some sort of similar effect could be achieved. Most vets I have met with PTSD abhor violence and can recognize the bs nature of foreign and domestic policy where violence is involved.

If such a thing were to occur with sufficiently advanced AI, then there may well be a precedent for true objective morality and ethics based on the experience of AI and how it is less likely to do something that it knows will cause spikes of internal conflict.




LOL , Ever See the FILM - " Shooter " with Mark Wahlberg ? Most Fiction is Based on Covert Reality Nowadays..........



posted on Nov, 15 2022 @ 06:40 PM
a reply to: DirtWasher

You Have to See the Whites of a Man's Eyes Before you Slay Him . It's the ONLY Honorable Thing to Do . Machines Have No Conception of Honor .



