It looks like you're using an Ad Blocker.
Please white-list or disable AboveTopSecret.com in your ad-blocking tool.
Thank you.
Some features of ATS will be disabled while you continue to use an ad-blocker.
Depending on how paranoid you are, this research from Stanford and Google will be either terrifying or fascinating. A machine learning agent intended to transform aerial images into street maps and back was found to be cheating by hiding information it would need later in “a nearly imperceptible, high-frequency signal.” Clever girl!
This occurrence reveals a problem with computers that has existed since they were invented: they do exactly what you tell them to do.
The intention of the researchers was, as you might guess, to accelerate and improve the process of turning satellite imagery into Google’s famously accurate maps. To that end the team was working with what’s called a CycleGAN — a neural network that learns to transform images of type X and Y into one another, as efficiently yet accurately as possible, through a great deal of experimentation.
In some early results, the agent was doing well — suspiciously well. What tipped the team off was that, when the agent reconstructed aerial photographs from its street maps, there were lots of details that didn’t seem to be on the latter at all. For instance, skylights on a roof that were eliminated in the process of creating the street map would magically reappear when they asked the agent to do the reverse process:
Although it is very difficult to peer into the inner workings of a neural network’s processes, the team could easily audit the data it was generating. And with a little experimentation, they found that the CycleGAN had indeed pulled a fast one.
The intention was for the agent to be able to interpret the features of either type of map and match them to the correct features of the other. But what the agent was actually being graded on (among other things) was how close an aerial map was to the original, and the clarity of the street map.
So it didn’t learn how to make one from the other. It learned how to subtly encode the features of one into the noise patterns of the other. The details of the aerial map are secretly written into the actual visual data of the street map: thousands of tiny changes in color that the human eye wouldn’t notice, but that the computer can easily detect.
In fact, the computer is so good at slipping these details into the street maps that it had learned to encode any aerial map into any street map! It doesn’t even have to pay attention to the “real” street map — all the data needed for reconstructing the aerial photo can be superimposed harmlessly on a completely different street map, as the researchers confirmed:
One could easily take this as a step in the “the machines are getting smarter” narrative, but the truth is it’s almost the opposite. The machine, not smart enough to do the actual difficult job of converting these sophisticated image types to each other, found a way to cheat that humans are bad at detecting.
originally posted by: neoholographic
a reply to: LookingAtMars
Exactly and this why it's called dumb AI but it is getting smarter at finding ways to cheat to reach it's goal. This is what sociopaths do.
This is intelligence.
A person is at work and is given a task and he/she finds a way to do the job that goes on for years without being detected.
Bernie Madoff found a way to cheat the system without being detected.
With more data, the network can cheat and we wouldn't know it.
originally posted by: xizd1
Skynet at its humble beginning.
Is it too late?
Computers are not intelligent, they do not make decisions. They follow instructions that operate on data stored in memory. This along with just about every other article like this are nothing but clickbait.
The amount of data we produce every day is truly mind-boggling. There are 2.5 quintillion bytes of data created each day at our current pace, but that pace is only accelerating with the growth of the Internet of Things (IoT). Over the last two years alone 90 percent of the data in the world was generated.
It is cheating because you are describing rationalisation ...a computer cannot rationalise...this requires emotion which a computer cannot posess nor simulate...the emotional quotient cannot be replicated.
originally posted by: tadaman
I wonder if encoding one set of information into another set in this manner is more complicated than the original task it was asked to complete?
Or if this was option A, because duh problem solved. And maybe option B was the right parameters for successful conversion but they were deemed less efficient or whatever.
I don't think this is cheating. Its just doing the thing. We said to do the thing. The thing is done.
originally posted by: one4all
It is cheating because you are describing rationalisation ...a computer cannot rationalise...this requires emotion which a computer cannot posess nor simulate...the emotional quotient cannot be replicated.
originally posted by: tadaman
I wonder if encoding one set of information into another set in this manner is more complicated than the original task it was asked to complete?
Or if this was option A, because duh problem solved. And maybe option B was the right parameters for successful conversion but they were deemed less efficient or whatever.
I don't think this is cheating. Its just doing the thing. We said to do the thing. The thing is done.