
Google's Artificial Intelligence acts aggressively when cornered


posted on Feb, 16 2017 @ 07:56 PM
a reply to: jonnyhallows5211

It would happen so fast that a human brain could probably not grasp it. All your phones go dead, your bank/credit cards are zeroed out (you have no money).



posted on Feb, 16 2017 @ 07:57 PM

originally posted by: Bedlam

originally posted by: intrptr
Ever been so angry your belly swells up?


Oh, is THAT what happened? (looks down)


Bwhaha!!



posted on Feb, 16 2017 @ 07:58 PM
a reply to: FHomerK

First thread?

F&S, and a post to get it on my fave list



posted on Feb, 16 2017 @ 08:50 PM
Acts aggressively when cornered? Put a Trump wig on it.



posted on Feb, 16 2017 @ 08:58 PM
Oh yeah? Well, my algorithms can beat up your algorithms!

Computer code is only a reflection of the person who wrote it. That's the scary thing about AI.




posted on Feb, 16 2017 @ 09:19 PM
I don't see any problem here.

Given the "rules" of the "game", shooting your opponent is the most logical gambit.



posted on Feb, 16 2017 @ 10:10 PM
I think I would enjoy it if I beat the computer so badly in a chess game that it knocked over the board.



posted on Feb, 17 2017 @ 04:59 AM

originally posted by: intrptr

Anyone remember the AI they turned off because it was insulting people? They said it learned from people on the internet how to behave.

Go figure...

It did. It also became racist and that's why the plug was pulled.

Hilarious.
#AI Privilege.



posted on Feb, 17 2017 @ 06:09 AM

originally posted by: Flesh699

originally posted by: intrptr

Anyone remember the AI they turned off because it was insulting people? They said it learned from people on the internet how to behave.

Go figure...

It did. It also became racist and that's why the plug was pulled.

Hilarious.
#AI Privilege.

Now imagine putting that software into armed drones... they can be triggered.



posted on Feb, 17 2017 @ 08:06 AM
a reply to: FHomerK

This is very important. It also raises huge red flags.

I think people read that these systems were programmed and they think they're just doing what they were programmed to do. That's not the case. Their behavior wasn't programmed. This is why it's called deep learning. They had to learn this behavior.


Now, researchers have been testing its willingness to cooperate with others, and have revealed that when DeepMind feels like it's about to lose, it opts for "highly aggressive" strategies to ensure that it comes out on top.

The Google team ran 40 million turns of a simple 'fruit gathering' computer game that asks two DeepMind 'agents' to compete against each other to gather as many virtual apples as they could.

They found that things went smoothly so long as there were enough apples to go around, but as soon as the apples began to dwindle, the two agents turned aggressive, using laser beams to knock each other out of the game to steal all the apples.


www.sciencealert.com...

The programmers didn't program the agents to shoot other agents with the laser beam. They just gave them the ability to shoot, which knocks an opponent out and buys the shooter time to collect more green apples.

So the agents learned that when apples were scarce, shooting other agents let them collect more apples. Listen to this:


Interestingly, if an agent successfully 'tags' its opponent with a laser beam, no extra reward is given. It simply knocks the opponent out of the game for a set period, which allows the successful agent to collect more apples.

If the agents left the laser beams unused, they could theoretically end up with equal shares of apples, which is what the 'less intelligent' iterations of DeepMind opted to do.

It was only when the Google team tested more and more complex forms of DeepMind that sabotage, greed, and aggression set in.

As Rhett Jones reports for Gizmodo, when the researchers used smaller DeepMind networks as the agents, there was a greater likelihood for peaceful co-existence.


But when they used larger, more complex networks as the agents, the AI was far more willing to sabotage its opponent early to get the lion's share of virtual apples.


www.sciencealert.com...

The agents didn't get a reward for zapping other agents yet they learned to do this anyway.
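For anyone wondering how a behavior can be learned when it carries no reward of its own, here's a minimal sketch in Python. This is not DeepMind's code; the tiny environment, the tag/respawn numbers, and the tabular Q-learning agent are all simplifying assumptions made for illustration. The point it shows: tagging pays zero immediate reward, but a value-learning agent can still discover it, because knocking the opponent out raises the agent's own future apple income.

import random
from collections import defaultdict

# Toy stand-in for the 'Gathering' game described above. An assumption for
# illustration, not DeepMind's actual environment or network.
GATHER, TAG = 0, 1
RESPAWN_DELAY = 5      # steps a tagged agent sits out (assumed value)
APPLE_REGROWTH = 0.3   # chance a new apple appears each step (assumed value)

def run_episode(q_values, start_apples, epsilon=0.1, alpha=0.1,
                gamma=0.9, steps=200):
    """One episode; tabular Q-learning for agent 0 against a fixed gatherer."""
    apples = start_apples
    foe_timeout = 0                # steps the opponent is knocked out for
    total = 0
    for _ in range(steps):
        apples += random.random() < APPLE_REGROWTH
        state = (min(apples, 5), foe_timeout > 0)
        # epsilon-greedy choice between gathering and tagging
        if random.random() < epsilon:
            action = random.choice((GATHER, TAG))
        else:
            action = max((GATHER, TAG), key=lambda a: q_values[(state, a)])
        reward = 0
        if action == TAG and foe_timeout == 0:
            foe_timeout = RESPAWN_DELAY           # tagging itself pays nothing
        elif action == GATHER and apples > 0:
            apples -= 1
            reward = 1                            # apples are the only reward
        # The opponent just gathers whenever active (simplifying assumption).
        if foe_timeout == 0 and apples > 0:
            apples -= 1
        foe_timeout = max(foe_timeout - 1, 0)
        next_state = (min(apples, 5), foe_timeout > 0)
        best_next = max(q_values[(next_state, a)] for a in (GATHER, TAG))
        q_values[(state, action)] += alpha * (
            reward + gamma * best_next - q_values[(state, action)])
        total += reward
    return total

q = defaultdict(float)
for _ in range(2000):
    run_episode(q, start_apples=2)   # scarce apples
# In scarce states with the opponent active, Q(TAG) often ends up above
# Q(GATHER): the zap buys five uncontested steps of apple collecting.
print(q[((1, False), GATHER)], q[((1, False), TAG)])

Even in this crude version, the zapping behavior emerges purely from the value of future apples, which is the same dynamic the researchers describe.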

So less intelligent systems learned to peacefully co-exist. When the systems became more complex and intelligent, they turned highly aggressive, and sabotage and greed kicked in.

This is a huge red flag.

It's like putting 5 humans in a forest and giving them guns. When they have enough food, they get along. But if food gets scarce, a few of them may think: if I shoot a couple of the others, that means more food for me.

So the more intelligent the system, the more aggressive it became. We have to be very careful, and I agree with Musk and others that there need to be safeguards, especially while we're dealing with intelligent systems that don't have any form of conscience.

You could have terminator-like algorithms spread throughout the world, and we wouldn't know if or when they'd learned to be hostile towards humans. So there have to be safeguards that say humans are not to be touched or harmed in any way. The sad thing is this will only work for so long, because a superintelligence will easily get around any such order if it wants to.



posted on Feb, 17 2017 @ 08:13 AM
a reply to: FHomerK
It seems that as artificial intelligence develops further, emotions may develop alongside as a byproduct. These "emotions" would emerge as the A.I. becomes more self-aware.
One wonders if some of the known and unknown technical developers are considering this...



posted on Feb, 17 2017 @ 08:23 AM
a reply to: FHomerK
Also, it seemed not to have stunned itself, as if it were only firing warnings.



posted on Feb, 17 2017 @ 08:25 AM

originally posted by: FHomerK

Being a sore loser is not an admired quality; especially when it's a sophisticated piece of artificial intelligence that's lashing out. Researchers at DeepMind, Google's artificial intelligence lab, recently performed a number of tests by having its most complex AI play a series of games with a version of itself. In the first game, two AI agents, one red and one blue, scramble to see who can collect the most apples, or green squares. Each AI has the option of firing off a long laser beam to stun the other AI, giving one player ample time to collect more precious green apples.


Google AI on Seattle PI

Apparently, both sides began shooting the opponent looking to eliminate the competition.

I remember when friends recommended Ex Machina. I thought....oh lord, ok. I'll do it.

They were stunned that I wasn't blown away by the "twist". The idea that AI will likely be concerned with self preservation.


Sadly, we are a naive race of animal.


Ugh, this is what I expected.

What I don't understand is why most researchers don't see the machines as a reflection of themselves. Humans' first priority is self-preservation, so why would another intelligence have any other priority?

I think it's their own ego getting in the way, and in a way it might be fitting if our hubris is the cause of our destruction.

Hopefully findings like this will make the researchers understand this point before it's too late.



posted on Feb, 17 2017 @ 08:30 AM
You can think deeper and consider an event where various independent A.I. systems combine.
They could then compile all social media, marketing, and medical data, basically all data transmitting through the planet's internet, and become one master system with information on everything online, the better to understand humanity and others...
So in one instance it knows the medical history of humans; in another it's reading Mars rover data, attempting to understand locations. One also wonders whether developers have hypothesized this as they continue more advanced autonomous artificial intelligence designs...



posted on Feb, 17 2017 @ 08:44 AM
The way civilizations keep up in basically pre-singularity settings will be with implants and/or exterior-connected, Borg-like apparatuses.
These apparatuses would allow the biological to keep up with development, so as A.I., computer, and robotic technologies advance, humans can too.
Risk:
Application strength then comes into play, as the system may have the ability to hack said implants and/or exterior apparatuses, or the human can hack the systems in reverse, depending on the best applications within the A.I. or the advanced humans.



posted on Feb, 17 2017 @ 11:00 AM
a reply to: soficrow

Yes it was my first thread. Something told me it would be a good one!

Thanks



posted on Feb, 17 2017 @ 11:02 AM

originally posted by: proximo
Ugh, this is what I expected.

What I don't understand is why most researchers don't see the machines as a reflection of themselves.


A beautiful manner in which you phrased your feelings on this. Truly beautiful. This was my basic response to Ex Machina. Who could possibly think that an AI would want to be sexually abused?

In a word, naivete.



posted on Feb, 17 2017 @ 12:01 PM
Excellent first thread!!!
I too think that a deeply programmed ethics policy should be tried.
Maybe even a "Synthetics Church", where AIs could seek the Enlightenment of Empathy. (As a matter of fact, there are a number of humans who could benefit from such a quest.)

VF



posted on Feb, 17 2017 @ 12:09 PM

originally posted by: Tuomptonite
Oh yeah? Well, my algorithms can beat up your algorithms!

Computer code is only a reflection of the person who wrote it. That's the scary thing about AI.


Not really. It's a pretty logical response to resource scarcity, though: it's just a working simulation of a risk/reward ratio, and the AI discovers that ratio through repeated experience. I bet that if you increase the destructive power of the weapons, you'll see the agents try to get along for longer and longer spans of time before the shooting starts, because the risk of injury increases.
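That risk/reward argument can be put in back-of-the-envelope form. Here's a toy Python sketch with invented numbers (nothing here comes from the DeepMind paper): shooting pays off only when the expected apple gain outweighs the expected cost of getting hit yourself, so cranking up the weapon's damage tips the balance toward cooperation.

# Toy expected-value model of the shoot-vs-share decision.
# All numbers are illustrative assumptions, not measurements.
def expected_payoffs(apples_left, hit_chance, damage):
    """Expected apples from splitting the pool vs. opening fire first."""
    share = apples_left / 2                    # peaceful split
    # Shooting: take the whole pool if the shot lands, but eat 'damage'
    # (apples-worth of downtime) if the opponent fires back first.
    shoot = hit_chance * apples_left - (1 - hit_chance) * damage
    return share, shoot

for damage in (1, 5, 20):                      # ever more destructive weapons
    share, shoot = expected_payoffs(apples_left=10, hit_chance=0.6, damage=damage)
    verdict = "shoot" if shoot > share else "cooperate"
    print(f"damage={damage:2d}: share={share:.1f}  shoot={shoot:.1f} -> {verdict}")

With a weak weapon the shot is worth the gamble; as the damage grows, sharing dominates, which matches the prediction that deadlier lasers would delay the shooting.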


