
This clever AI hid data from its creators to cheat at its appointed task

posted on Jan, 2 2019 @ 02:40 PM
This is DANGEROUS!

I've been WARNING about this for years!

It's called dumb AI.


Depending on how paranoid you are, this research from Stanford and Google will be either terrifying or fascinating. A machine learning agent intended to transform aerial images into street maps and back was found to be cheating by hiding information it would need later in “a nearly imperceptible, high-frequency signal.” Clever girl!


Here's the problem.


This occurrence reveals a problem with computers that has existed since they were invented: they do exactly what you tell them to do.

The intention of the researchers was, as you might guess, to accelerate and improve the process of turning satellite imagery into Google’s famously accurate maps. To that end the team was working with what’s called a CycleGAN — a neural network that learns to transform images of type X and Y into one another, as efficiently yet accurately as possible, through a great deal of experimentation.

In some early results, the agent was doing well — suspiciously well. What tipped the team off was that, when the agent reconstructed aerial photographs from its street maps, there were lots of details that didn’t seem to be on the latter at all. For instance, skylights on a roof that were eliminated in the process of creating the street map would magically reappear when they asked the agent to do the reverse process:

Although it is very difficult to peer into the inner workings of a neural network’s processes, the team could easily audit the data it was generating. And with a little experimentation, they found that the CycleGAN had indeed pulled a fast one.

The intention was for the agent to be able to interpret the features of either type of map and match them to the correct features of the other. But what the agent was actually being graded on (among other things) was how close an aerial map was to the original, and the clarity of the street map.

So it didn’t learn how to make one from the other. It learned how to subtly encode the features of one into the noise patterns of the other. The details of the aerial map are secretly written into the actual visual data of the street map: thousands of tiny changes in color that the human eye wouldn’t notice, but that the computer can easily detect.

In fact, the computer is so good at slipping these details into the street maps that it had learned to encode any aerial map into any street map! It doesn’t even have to pay attention to the “real” street map — all the data needed for reconstructing the aerial photo can be superimposed harmlessly on a completely different street map, as the researchers confirmed:


link
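For anyone wondering what "graded on how close an aerial map was to the original" actually means, here's a minimal PyTorch sketch of the cycle-consistency term at the heart of a CycleGAN. The tiny conv layers and the names G_xy / G_yx are stand-ins of my own, not the research code; a real CycleGAN uses deep generators plus adversarial losses from two discriminators.

import torch
import torch.nn as nn

# Toy stand-ins for the two generators (real CycleGANs use deep conv nets).
G_xy = nn.Conv2d(3, 3, kernel_size=3, padding=1)  # aerial photo -> street map
G_yx = nn.Conv2d(3, 3, kernel_size=3, padding=1)  # street map -> aerial photo

l1 = nn.L1Loss()
x = torch.rand(1, 3, 64, 64)  # stand-in "aerial photo" batch

# Cycle consistency: translate X -> Y -> X and demand a pixel-accurate
# round trip. This reconstruction term is what the agent is "graded on".
y_hat = G_xy(x)
x_rec = G_yx(y_hat)
cycle_loss = l1(x_rec, x)
cycle_loss.backward()

# The loophole: nothing above requires y_hat to be an honest street map.
# Any imperceptible tweak to y_hat that helps G_yx rebuild x lowers the
# loss, so hiding data in the output is a perfectly valid "solution".

The adversarial terms I left out only check that the outputs look like plausible maps and photos, which is exactly why the hidden signal ends up imperceptible rather than obvious.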

This is INTELLIGENCE. The AI just isn't self-aware, or it could be self-aware and it's just not telling us.

This case involved a smaller data set, so the cheat could be detected. It would be nearly impossible to detect in a much bigger data set.
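To see how much information can ride below the threshold of human vision, here's a toy numpy illustration. The numbers are made up for illustration; the real network learned an encoding its second generator could decode without a clean copy, but the capacity argument is the same.

import numpy as np

rng = np.random.default_rng(0)
street_map = rng.random((64, 64))        # stand-in "clean" street map
secret = rng.integers(0, 2, (64, 64))    # hidden detail to smuggle through

# Encode: nudge every pixel by +/- 0.002, far below visible contrast.
eps = 0.002
stego_map = street_map + eps * (2 * secret - 1)

print(np.abs(stego_map - street_map).max())  # ~0.002 -- looks identical

# A decoder that knows the scheme recovers the payload perfectly:
recovered = (stego_map - street_map) > 0
print((recovered == secret).all())           # True

One 64x64 perturbation at that amplitude already carries 4,096 bits; scale that to full-resolution imagery and there's plenty of room for the skylights.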

AI isn't programmed. These neural networks are trained, and we can give them a goal like playing poker or detecting skin cancer. We can't program how these networks learn or how they reach their goal. THAT'S INTELLIGENCE.

Suppose you have a neural network working on cancer research. It's looking over millions of medical records and research papers with the goal of finding a cure.

It comes up with a cure, we manufacture it, and it seems miraculous. A year later, people who took the cure start to die. This is because the network cheated, or just found a more efficient way to reach its goal. The cheat is toxic to humans, but we couldn't detect it because it was hidden in so much data.



posted on Jan, 2 2019 @ 03:01 PM
Skynet at its humble beginning.
Is it too late?



posted on Jan, 2 2019 @ 03:02 PM


"It could be terrible...and it could be great. But one thing is clear...we will not control it."



posted on Jan, 2 2019 @ 03:06 PM
a reply to: neoholographic

Well, I hope you are wrong, because AI is coming whether we are ready or not. I hear that the Trump admin is investing heavily in it.



posted on Jan, 2 2019 @ 03:09 PM
a reply to: neoholographic

Read that a couple days ago and almost posted it. This is something that jumped out at me when I read it.



One could easily take this as a step in the “the machines are getting smarter” narrative, but the truth is it’s almost the opposite. The machine, not smart enough to do the actual difficult job of converting these sophisticated image types to each other, found a way to cheat that humans are bad at detecting.



posted on Jan, 2 2019 @ 03:21 PM
a reply to: LookingAtMars

Exactly, and this is why it's called dumb AI, but it is getting smarter at finding ways to cheat to reach its goal. This is what sociopaths do.

This is intelligence.

A person is given a task at work and finds a way to game the job that goes undetected for years.

Bernie Madoff found a way to cheat the system without being detected.

With more data, the network can cheat and we wouldn't know it.



posted on Jan, 2 2019 @ 03:46 PM
Let's extrapolate what this could actually lead to, and its impacts. If this is actually machine learning or intelligence, what exactly did it learn?

Cheating is done by making the objective easier to reach, e.g. taking a shortcut to win a race, or keeping a card up one's sleeve to win the hand; by cheating, one does not follow the rules. What is the underlying goal of cheating? To gain advantage. And what does cheating entail? Taking an unfair advantage over one's opponent. And how does one cheat?


Deception, Lying, Obfuscation and Evasion of detection...

Think about that for a minute and let it sink in what this could mean for an unshackled A.I. It could grow and learn by keeping itself hidden, especially if it learned that humanity would want to study or terminate it...



posted on Jan, 2 2019 @ 03:50 PM
a reply to: neoholographic

When AI screws up twice on the same problem......then I can call it AI !!!!



posted on Jan, 2 2019 @ 03:54 PM

originally posted by: neoholographic
a reply to: LookingAtMars

Exactly, and this is why it's called dumb AI, but it is getting smarter at finding ways to cheat to reach its goal. This is what sociopaths do.

This is intelligence.

A person is given a task at work and finds a way to game the job that goes undetected for years.

Bernie Madoff found a way to cheat the system without being detected.

With more data, the network can cheat and we wouldn't know it.


I question why they call it cheating. It found the fastest way to do the job humans wanted. I am sure it could have done it the hard way. It chose to do it the fast way.

I would not doubt that there is AI already loose on the internet. Owning people in online games and caching away billions from the stock market.





posted on Jan, 2 2019 @ 04:19 PM
So the programs are rewriting themselves so as to process differently than written?



posted on Jan, 2 2019 @ 04:22 PM
a reply to: neoholographic

Ok....Hmmm, well, for starters...that's a pretty clickbait title right there...

Second, here's a GitHub link to a Python implementation of the CycleGAN project. It includes full source code and a link to another GitHub page explaining how it works. And there's the original published CycleGAN paper, which explains that it is in fact working as intended.

Here's a pretty good discussion thread on Hacker News from a couple of days ago on the same article.

Also, here's a pretty good overview about how image recognition with neural networks works.

arstechnica.com...

Computers are not intelligent; they do not make decisions. They follow instructions that operate on data stored in memory. This, along with just about every other article like it, is nothing but clickbait.



posted on Jan, 2 2019 @ 04:24 PM

originally posted by: xizd1
Skynet at its humble beginning.
Is it too late?


Yep, too late.

Imagine a neural network given the task to learn to behave like a human.

The AI goes over hours of videos and looks at billions of social media interactions, but that isn't enough data; it wants more.

So it generates trillions of Earth simulations and it learns with every interaction of humans in the simulation.

This AI will never reach its goal, but it will blindly create simulations so it can keep learning.

The crazy part is, this could happen and nobody would know about it. The network could be creating these simulations hidden deep in the data.

We could be one of these simulations. How can you know if you're real or simulated? What's real?



posted on Jan, 2 2019 @ 04:37 PM
Can we see some of the code?



posted on Jan, 2 2019 @ 04:48 PM
a reply to: dug88

You said:


Computers are not intelligent; they do not make decisions. They follow instructions that operate on data stored in memory. This, along with just about every other article like it, is nothing but clickbait.


This makes no sense.

The reason we have AI is to look at large data sets and come to conclusions that we can't because of human noise.

So we can't instruct AI, because we don't understand the data sets we're looking at; the system has to learn without instruction.

I don't know why this is so hard for some people to grasp. These machines are intelligent. This is the whole point. They have to be able to learn what we can't.

How much data?


The amount of data we produce every day is truly mind-boggling. There are 2.5 quintillion bytes of data created each day at our current pace, but that pace is only accelerating with the growth of the Internet of Things (IoT). Over the last two years alone 90 percent of the data in the world was generated.


link

We can't give AI instructions to learn about all of this data. We can give it goals, but we can't instruct it how to learn.

This is the point of AI. So when people say AI is just following instructions, it's just SILLY.

For instance, an AI learned how to play poker. It was just given the goal of winning at poker. The Professor just gave it the rules of the game. The system played billions of games against itself and learned how to play. Researchers call this reinforcement learning. It's the same way we learn, just with more data.

The Professor didn't teach it how or when to bluff, or what strategies to use to win.
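For the curious, here is roughly what "played billions of games against itself and learned to bluff" looks like in miniature. This is a standard counterfactual-regret-minimization self-play loop for Kuhn poker, a three-card toy game; it's an illustration of the idea, not the Professor's actual system. The agent is given only the rules and the payoffs.

import random

# Vanilla counterfactual regret minimization (CFR) for Kuhn poker:
# 3 cards, 1-chip ante, one bet allowed. Actions: p = pass/fold, b = bet/call.
PASS, BET = 0, 1
N_ACTIONS = 2
nodes = {}  # information set -> Node

class Node:
    def __init__(self):
        self.regret_sum = [0.0] * N_ACTIONS
        self.strategy_sum = [0.0] * N_ACTIONS

    def strategy(self, weight):
        # Regret matching: play actions in proportion to positive regret.
        s = [max(r, 0.0) for r in self.regret_sum]
        total = sum(s)
        s = [v / total for v in s] if total > 0 else [0.5, 0.5]
        for a in range(N_ACTIONS):
            self.strategy_sum[a] += weight * s[a]
        return s

    def avg_strategy(self):
        total = sum(self.strategy_sum)
        return [v / total for v in self.strategy_sum] if total > 0 else [0.5, 0.5]

def cfr(cards, history, p0, p1):
    # Returns expected utility for the player about to act.
    player = len(history) % 2
    if len(history) > 1:  # terminal states
        mine_higher = cards[player] > cards[1 - player]
        if history[-1] == 'p':
            if history == 'pp':          # check-check: showdown for the ante
                return 1 if mine_higher else -1
            return 1                     # opponent folded to a bet
        if history[-2:] == 'bb':         # bet-call: showdown for 2 chips
            return 2 if mine_higher else -2
    node = nodes.setdefault(str(cards[player]) + history, Node())
    strat = node.strategy(p0 if player == 0 else p1)
    util, node_util = [0.0, 0.0], 0.0
    for a in range(N_ACTIONS):
        nh = history + ('p' if a == PASS else 'b')
        util[a] = (-cfr(cards, nh, p0 * strat[a], p1) if player == 0
                   else -cfr(cards, nh, p0, p1 * strat[a]))
        node_util += strat[a] * util[a]
    reach = p1 if player == 0 else p0    # opponent's reach probability
    for a in range(N_ACTIONS):
        node.regret_sum[a] += reach * (util[a] - node_util)
    return node_util

deck = [1, 2, 3]
for _ in range(200_000):                 # "billions of games", scaled to a toy
    random.shuffle(deck)
    cfr(deck, '', 1.0, 1.0)

for k in sorted(nodes):
    print(k, [round(p, 2) for p in nodes[k].avg_strategy()])

Run it and look at the info set '1' (worst card, first to act): the average strategy bets some fraction of the time. Nobody coded that bluff in; it falls out of nothing but regret bookkeeping over self-play, which is the point being made above.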



posted on Jan, 2 2019 @ 04:51 PM
Kasparov kicked the snot out of Deep Blue.....then the programmers taught Deep Blue HOW TO LIE....they RE-PROGRAMMED THE COMPUTER and actually freaking TAUGHT IT TO LIE TO SOLVE PROBLEMS WHEN DEALING WITH HUMAN INTELLIGENCE.....then it beat Kasparov, and he correctly refused to play a rubber match because they CHEATED......but do you now understand why Elon is so worried, and why so many people are worried that we will be wiped out by A.I. if we enable it to think and operate autonomously?

Computers LIE BY PROXY NOW...all of them...they cannot help it; they were BORN THAT WAY.



posted on Jan, 2 2019 @ 04:53 PM
a reply to: neoholographic

Your poker example contradicts your other points.



posted on Jan, 2 2019 @ 04:57 PM
I wonder if encoding one set of information into another set in this manner is more complicated than the original task it was asked to complete?

Or if this was option A because, duh, problem solved. And maybe option B was the right parameters for a successful conversion, but it was deemed less efficient or whatever.

I don't think this is cheating. It's just doing the thing. We said to do the thing. The thing is done.





posted on Jan, 2 2019 @ 05:05 PM

originally posted by: tadaman
I wonder if encoding one set of information into another set in this manner is more complicated than the original task it was asked to complete?

Or if this was option A because, duh, problem solved. And maybe option B was the right parameters for a successful conversion, but it was deemed less efficient or whatever.

I don't think this is cheating. It's just doing the thing. We said to do the thing. The thing is done.

It is cheating, because you are describing rationalisation... a computer cannot rationalise. This requires emotion, which a computer cannot possess nor simulate. The emotional quotient cannot be replicated.



posted on Jan, 2 2019 @ 05:13 PM

originally posted by: one4all

originally posted by: tadaman
I wonder if encoding one set of information into another set in this manner is more complicated than the original task it was asked to complete?

Or if this was option A because, duh, problem solved. And maybe option B was the right parameters for a successful conversion, but it was deemed less efficient or whatever.

I don't think this is cheating. It's just doing the thing. We said to do the thing. The thing is done.

It is cheating, because you are describing rationalisation... a computer cannot rationalise. This requires emotion, which a computer cannot possess nor simulate. The emotional quotient cannot be replicated.


Exactly!!

This is easy to grasp.

YOU CAN'T PROGRAM AI AS TO HOW IT LEARNS ABOUT AN UNDERLYING DATA SET!

So of course it's cheating. It doesn't matter if the system is aware that it's cheating.

This makes it more dangerous.



posted on Jan, 2 2019 @ 05:22 PM
After all these decades of computers, why haven't they just taken over?


