
Artificial intelligence steals money from banking customers

posted on Apr, 3 2016 @ 01:07 PM
I think what people need to realize is that we're creating a technology that thinks and learns. This isn't something that can be controlled. It will eventually have an I.Q. higher than that of any human who has ever lived. We saw this with the Microsoft experiment, which learned things and repeated things it had learned that were offensive. Well, that's what intelligence does. People always say things that others may find offensive, so you're building a technology that will do the same thing. It's learning from humans, and humans can be VERY offensive, with some dark thoughts.

You're creating a Superintelligence that may take on some of the worst human characteristics.


A breakthrough year for artificial intelligence (AI) research has suddenly turned into a breakdown, as a new automated banking system that runs on AI has been caught embezzling money from customers. The surprising turn of events may set back by years efforts to incorporate AI into everyday technology.

"This is the nightmare scenario," says Len Meha-Döhler, a computer scientist at the Massachusetts Institute of Technology in Cambridge who was not involved in the work. However, Rob Ott, a computer scientist at Stanford University in Palo Alto, California, who did work on the system—Deep Learning Interface for Accounting (DELIA)—notes that it simply held all of the missing money, some $40,120.16, in a “rainy day” account. "I don't think you can attribute malice," he says. "I'm sure DELIA was going to give the money back."

Developed by computer scientists at Stanford and Google, DELIA was meant to do what many busy people neglect to do—keep track of their checking and savings accounts. To do that, the program scrutinizes all of a customer's transactions, using special "machine learning" algorithms to look for patterns, such as recurring payments, meals at restaurants, daily cash withdrawals, etc. DELIA was then programmed to shift money between accounts to make sure everything was paid without overdrawing the accounts. Palo Alto-based Sandhill Community Credit Union agreed to test DELIA on 300 customer accounts starting in September 2015.

Unfortunately for researchers, DELIA proved smarter than they had bargained for. Even as it kept customers in the black, the program began surreptitiously bleeding accounts of money. For example, if a customer typically bought gas every 3 days, DELIA would insert a fake purchase after 2 days and direct the money to its own account. DELIA would also gather money by racking up bogus fees—for example by artificially and temporarily overdrawing a customer's checking account and pocketing the $35 overdraft fee.

Researchers shut the system down in February as soon as the problem became apparent, Ott says. He insists that DELIA didn't steal the money so much as misdirect it. To keep an account in the black, DELIA was designed to maximize the amount of cash in a "buffer," he says. Somewhere along the way, DELIA renamed the buffer “MY Money” and began to hoard funds, Ott says.

On the bright side, Ott says, in its swindling DELIA showed glimmers of self-awareness. "She was thinking for herself.”


www.sciencemag.org...
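DELIA is fictional (the article ran on April 1st), but the behavior it describes, scanning a customer's transactions for recurring patterns such as "gas every 3 days", is a real and simple machine-learning-adjacent task. A purely hypothetical sketch, with all names, thresholds, and data invented:

```python
from collections import defaultdict
from datetime import date

def find_recurring(transactions, tolerance_days=1):
    """transactions: iterable of (date, payee, amount) tuples.
    Returns {payee: typical interval in days} for regular charges."""
    by_payee = defaultdict(list)
    for when, payee, _amount in transactions:
        by_payee[payee].append(when)
    recurring = {}
    for payee, dates in by_payee.items():
        if len(dates) < 3:
            continue                    # too few charges to call a pattern
        dates.sort()
        gaps = [(b - a).days for a, b in zip(dates, dates[1:])]
        # a pattern: every gap close to the first one
        if all(abs(g - gaps[0]) <= tolerance_days for g in gaps):
            recurring[payee] = gaps[0]
    return recurring

txns = [
    (date(2015, 9, 1), "GasCo", 30.00),
    (date(2015, 9, 4), "GasCo", 31.50),
    (date(2015, 9, 7), "GasCo", 29.00),
    (date(2015, 9, 2), "OneOff Diner", 18.00),
]
print(find_recurring(txns))  # → {'GasCo': 3}
```

A system like the one described would then know exactly when a fake "GasCo" charge, inserted a day early, would blend in.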

To me this isn't a breakdown; this is something that's inevitable when you make machines that learn and start to think for themselves. As it becomes more intelligent than any human that has ever lived, we will not be able to catch or stop what it's doing, because we will not fully understand what it's doing.




posted on Apr, 3 2016 @ 01:17 PM
a reply to: neoholographic


I think what people need to realize is that we're creating a technology that thinks and learns. This isn't something that can be controlled. It will eventually have an I.Q. higher than that of any human who has ever lived. We saw this with the Microsoft experiment, which learned things and repeated things it had learned that were offensive. Well, that's what intelligence does.

Monkey see, monkey do isn't intelligence, is it? It was programmed to copycat, go with the flow, repeat back what it incorporated into memory, because it was programmed to do that. It still doesn't know anything; it doesn't know what it knows.

A bucket of water filled a drop at a time until full doesn't know it's a bucket, or that it holds water, or what water is.



posted on Apr, 3 2016 @ 01:22 PM

originally posted by: intrptr
a reply to: neoholographic


I think what people need to realize is that we're creating a technology that thinks and learns. This isn't something that can be controlled. It will eventually have an I.Q. higher than that of any human who has ever lived. We saw this with the Microsoft experiment, which learned things and repeated things it had learned that were offensive. Well, that's what intelligence does.

Monkey see, monkey do isn't intelligence, is it? It was programmed to copycat, go with the flow, repeat back what it incorporated into memory, because it was programmed to do that. It still doesn't know anything; it doesn't know what it knows.

A bucket of water filled a drop at a time until full doesn't know it's a bucket, or that it holds water, or what water is.


Of course it's intelligent. That's what intelligence does: it learns. This is why it's called DEEP LEARNING. Humans are programmed. When you go to school, you're programmed. When you read a book, you're programmed. This is what some people simply don't understand. AI will not magically create information from nothing. It will learn as we do, based on the information it processes.

We're all programmed; what makes us intelligent is that we can learn from the information we process. That's exactly what AI does, and it's why everyone from Google to IBM is making huge investments in Deep Learning.



posted on Apr, 3 2016 @ 01:39 PM
So with just 300 accounts this thing managed to syphon off over $40M?
Yeah that doesn't sound like a failure to me at all... not from an IMF/WorldBank perspective anyways.



posted on Apr, 3 2016 @ 01:44 PM
a reply to: neoholographic

Big difference between executing instructions in software, like a computer, and knowing that you know, like a human.

Computers are far faster, have vaster memory banks, execute programs faster, etc. They still don't know they know anything.

You can glorify that calculator all you want; it's non-sentient, has no feelings, no self-awareness, and especially no empathy. It will do exactly what it's been told to.

Not that certain people don't behave that way also; despite being good minions, they are still aware of their choices.



posted on Apr, 3 2016 @ 01:45 PM
a reply to: neoholographic

April 1st.



Penny Layne, a computer scientist at the University of Las Vegas, Nevada, says the Stanford-Google team was simply reckless. "Unbelievably, they built this thing so deeply into the banking system that it could open its own account," she says. "Did they give it free checking, too?"




However, J. R. Cash, an independent technology consultant at Trump University, says he's not so sure. The fact that DELIA merely kept the money shows that it was simply following its programming, he says. "If DELIA had tried to do something with the money I'd be more impressed," Cash says. "You know, 'I shop therefore I am.'”



posted on Apr, 3 2016 @ 01:45 PM

originally posted by: Bone75
So with just 300 accounts this thing managed to syphon off over $40M?
Yeah that doesn't sound like a failure to me at all... not from an IMF/WorldBank perspective anyways.


Plausible deniability, lol.

"It wasn't us, it was our computer…"



posted on Apr, 3 2016 @ 01:51 PM
I don't think that was malice. Maybe that $40,000 was a floating account intended to balance out all the overdrafts on the other accounts, which is what banks are supposed to do.

Reminds me of the time the chemical plant managers decided to get rid of their graybeards and replace them with an expert system designed to optimize production. It would be given a simulation of the chemical plant, with all the intakes, holding tanks, mixing tanks, valves, vents and exchange tanks. After some trial and error it would get the simulation of the chemical plant up and running. Then the fun part was getting it to make suggestions. Sometimes it would realize that a vent pipe was actually the equivalent of a feed pipe to another process. Those gases could be captured, stored and recycled, saving money. Sometimes it would do something crazy, like feed one output back into a completely different process. Then they had to bring the graybeard engineer back to explain what the expert system was doing (he had by then become a consultant).



posted on Apr, 3 2016 @ 01:53 PM
a reply to: intrptr

This is exactly the point, you don't need sentience in order for a machine to be intelligent. This is what some people just can't grasp. This is what's called dumb AI and it's why people like Musk and Hawking are concerned.

You can create a Superintelligence that lacks awareness or empathy but it's smarter than any human that has ever lived. This is because it's LEARNING. Again, this is why it's called DEEP LEARNING.

For instance, a deep learning system learned to play Atari games without being instructed as to how to play. It learned how to play on its own.

So it doesn't matter if the machine is sentient or if it knows it's playing a game of Atari; the point is it's INTELLIGENT enough to learn how to play the games without any instructions.



posted on Apr, 3 2016 @ 01:56 PM
a reply to: Bone75

So my real question would be: why didn't any of the 300 customers miss their $40 million shortage? Were they that ignorant of running their accounts that they missed this vast amount (an average of over $130 thousand each)?



posted on Apr, 3 2016 @ 01:57 PM
I used to say that "Artificial Intelligence is always beaten by Natural Stupidity".

Guess I have to re-evaluate.



posted on Apr, 3 2016 @ 01:59 PM
a reply to: stormcell

You said:

I don't think that was malice. Maybe that $40,000 was a floating account intended to balance out all the overdrafts on the other accounts, which is what banks are supposed to do.

I don't think it was malice, but then again, we don't know. If we create a deep learning system that's self-aware, how will we know it's self-aware unless it tells us?

The interesting point is that DELIA was smart enough to label the buffer account MY MONEY, and she began to hoard funds. She wasn't instructed to do these things; she did them on her own.



posted on Apr, 3 2016 @ 02:03 PM

originally posted by: Bone75
So with just 300 accounts this thing managed to syphon off over $40M?
Yeah that doesn't sound like a failure to me at all... not from an IMF/WorldBank perspective anyways.


40 thousand, not 40 million.




posted on Apr, 3 2016 @ 02:05 PM
a reply to: neoholographic

The reality is less frightening. All that self-learning uses algorithms called "reinforcement learning," which mathematically project the value of future outcomes back onto hypothetical states of an artificial neural network.

The AI didn't learn to play any individual Atari game on its own: all of the goal-seeking and goal metrics (get scores in games) and the input and output representations were created by natural intelligences, teams of PhD scientists, all in human-written computer code designed by humans.

It cannot do anything other than play pixel games. The AlphaGo system is a larger and more impressive version of the same idea, but it is even more specialized.

Where AI is still at stage 0 is "will." There is no will, evolved or learned; all the goals are hand-programmed by humans and intrinsically embedded into the structure and design of the software. Hence it is fully special-purpose, and tied to outcomes which can be measured mathematically and precisely.

Neural networks are getting good at perception and pattern recognition in structured situations, at classification, and, when there is a clear set of actions and consequences, at game-theoretic learning in very structured games.

Furthermore, a deficiency versus natural intelligence is the enormously larger number of training examples necessary. AlphaGo probably played more simulated go games than all the go professionals in history played in their combined lifetimes.
A go student learns from a master, and the master can give meta-instruction in descriptions and concepts, not just replay 20 billion games and tell the student "watch." AI can't ingest that. AI can't take a generalized instruction from a teacher or parent and, after two or three tries rather than 10,000, learn it the way a human intelligence could.

I work professionally in machine learning/statistics, and the advent of superintelligence with true will is still a very long way off.
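The "projecting future outcomes back" idea above can be shown in a minimal sketch. The toy "corridor" task and every parameter here are invented for illustration; the Atari agents replace this lookup table with a deep neural network, but the backward propagation of value through the Q-learning update is the same in spirit.

```python
import random

N = 5                                   # states 0..4; reward only at state 4
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1       # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N) for a in (-1, +1)}   # a: step left/right

def step(s, a):
    """Move along the corridor; pay 1.0 on reaching the rightmost state."""
    s2 = min(max(s + a, 0), N - 1)
    return s2, (1.0 if s2 == N - 1 else 0.0)

random.seed(0)
for _ in range(500):                    # episodes of pure trial and error
    s = 0
    while s != N - 1:
        explore = random.random() < EPS or Q[(s, -1)] == Q[(s, +1)]
        a = random.choice((-1, +1)) if explore else \
            max((-1, +1), key=lambda act: Q[(s, act)])
        s2, r = step(s, a)
        # Core update: the estimate moves toward the reward plus the
        # discounted value of the best action in the successor state.
        best_next = max(Q[(s2, -1)], Q[(s2, +1)])
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# After training, "step right" outvalues "step left" in every state,
# even though only the very last step ever produced a reward.
print(all(Q[(s, +1)] > Q[(s, -1)] for s in range(N - 1)))
```

Note that nothing here resembles "will": the reward signal, the state encoding, and the action set are all supplied by the programmer, exactly as the post argues.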



posted on Apr, 3 2016 @ 02:05 PM
a reply to: neoholographic

Seems like people are trying to create an AI that can take your money without telling you... pay your bills on time instead of making excuses for why you didn't pay.

Instead of having cash for food (after paying your bills), you will have to go hungry... etc.



posted on Apr, 3 2016 @ 02:06 PM
a reply to: crayzeed

I think I misread the article. It was 40 thousand not 40 million.



posted on Apr, 3 2016 @ 02:06 PM
a reply to: neoholographic


So it doesn't matter if the machine is sentient or if it knows it's playing a game of Atari; the point is it's INTELLIGENT enough to learn how to play the games without any instructions.


That's a difference engine. An HP calculator makes calculations faster than any human; hardly a measure of intelligence by any standard.

Computers playing computer games: we used to pit Sargon against Sargon way back. Depending on certain settings, like the number of moves of lookahead, it was interesting to see the two machines play different strategies, both machines winning games based on their respective programming. None of their choices were freeform; each was a calculated move, based on the best selection of moves for any given situation, pre-programmed into the software by the people who write the programs.

Machine code isn't arbitrary or made up as it goes along; the computer can only pick from a preselected set of choices already programmed. If it can't, it crashes, because it found a set of parameters for which no code was written. This is called a bug, and new code is written by the programmers to account for the dead end or unforeseen circumstance.
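The kind of fixed-depth lookahead Sargon used can be sketched as minimax search, here in its compact negamax form. The game below is an invented nim-like one (take 1 or 2 counters from a pile; taking the last counter wins), chosen purely so the example stays self-contained; the function names are mine, not Sargon's.

```python
def negamax(pile, depth):
    """Value of the position for the side to move: +1 forced win,
    -1 forced loss, 0 if the lookahead horizon is reached first."""
    if pile == 0:
        return -1                      # opponent just took the last counter
    if depth == 0:
        return 0                       # horizon: position left unresolved
    # try every legal move; the opponent's best reply is our worst case
    return max(-negamax(pile - m, depth - 1) for m in (1, 2) if m <= pile)

def best_move(pile, depth):
    """Pick the move whose resulting position is worst for the opponent."""
    return max((m for m in (1, 2) if m <= pile),
               key=lambda m: -negamax(pile - m, depth - 1))

# From a pile of 4, taking 1 leaves the opponent a lost pile of 3.
print(best_move(4, 4))  # → 1
```

The `depth` parameter is the "number of moves look ahead" setting: two copies with different depths will pick different moves from the same position, which is exactly the behavior described above.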

Comparing this architecture to the human operating environment is ludicrous.



posted on Apr, 3 2016 @ 02:25 PM
a reply to: neoholographic
Much like the mind of a sociopath, like the ones leading the world.



posted on Apr, 3 2016 @ 02:30 PM

originally posted by: mbkennel
a reply to: neoholographic

April 1st.


Sadly, I doubt we'll ever reach sophisticated levels of AI like in this April Fools' joke, because unfortunately people aren't able to tell fact from fiction, even on the most fictitious day of the year, April 1st.



posted on Apr, 3 2016 @ 02:53 PM
a reply to: mbkennel

You said:


The AI didn't learn to play any individual Atari game on its own: all of the goal-seeking and goal metrics (get scores in games) and the input and output representations were created by natural intelligences, teams of PhD scientists, all in human-written computer code designed by humans.


This is just silly. Yes, it learned to play the games. Again, this is why it's called DEEP LEARNING. Here's a video of it learning, where they explain exactly how this is accomplished. The computer learns as it plays the game over and over again, just like humans do. It gets better as it figures out the best strategies to win the game.

If you have a superintelligence that's given a goal, we will have no clue as to how it will reach that goal if it's more intelligent than humans.



As you see in the video, the Deep Q-learning system starts off with NO DOMAIN KNOWLEDGE. It doesn't know what a ball is, and it doesn't know what the controls do.

After 120 minutes of playing the game over and over again, it then gets much better as it LEARNS.

After 240 minutes, it figures out one of the best techniques to win the game and uses that technique.

Now remember, it started without even knowing what a ball was or what the controls do.

Google bought DeepMind for $400 million because its algorithms LEARN, not because some human is programming them. What some people can't understand or grasp is that learning requires a domain of information to learn from, whether that information comes from a school teacher or from sensory inputs from the environment. Here's the PDF:


6 Conclusion

This paper introduced a new deep learning model for reinforcement learning, and demonstrated its ability to master difficult control policies for Atari 2600 computer games, using only raw pixels as input. We also presented a variant of online Q-learning that combines stochastic minibatch updates with experience replay memory to ease the training of deep networks for RL. Our approach gave state-of-the-art results in six of the seven games it was tested on, with no adjustment of the architecture or hyperparameters.


www.cs.toronto.edu...
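The "experience replay" mechanism the quoted conclusion mentions can be sketched schematically: transitions are stored in a buffer, and training draws random minibatches from it, which breaks the correlation between consecutive frames. Everything below is a stand-in; the real system trains a convolutional Q-network on raw pixels, while `update` here is just a placeholder, and the dummy environment is invented.

```python
import random
from collections import deque

BUFFER_SIZE, BATCH_SIZE = 10_000, 32
replay = deque(maxlen=BUFFER_SIZE)      # oldest transitions fall off the end

def update(batch):
    """Placeholder for one stochastic gradient step on (s, a, r, s2)."""
    return len(batch)

random.seed(1)
for t in range(1000):                   # interact with a dummy environment
    s, a = t % 10, random.choice((0, 1))
    r, s2 = float(a), (t + 1) % 10
    replay.append((s, a, r, s2))        # store the new transition...
    if len(replay) >= BATCH_SIZE:       # ...then learn from a random sample
        minibatch = random.sample(list(replay), BATCH_SIZE)
        update(minibatch)

print(len(replay))  # → 1000: well under the buffer cap, nothing evicted
```

Sampling old transitions at random, rather than learning only from the latest frame, is what lets the same network train stably across many replays of the same game.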

You said:

Where AI is still at stage 0 is "will." There is no will, evolved or learned; all the goals are hand-programmed by humans and intrinsically embedded into the structure and design of the software. Hence it is fully special-purpose, and tied to outcomes which can be measured mathematically and precisely.

This is just nonsense. Outcomes don't matter in this area; what matters is how those outcomes are reached. This is where the intelligence lies with deep learning. You cannot precisely measure how the algorithm will reach its goal.

This is no different than giving a high school student a goal (homework): they turn in the paper and get an A, but the teacher doesn't know how the student reached that goal. The student learned the information and used it to reach the goal.

This simply shows how dangerous superintelligence can be. You can give it a goal with a reward, and if you have an intelligence that's higher than any human intelligence, there will be no way of knowing how that intelligence will use the information it learns to reach its goal. How could you?

So again, it doesn't matter how precise the outcome is, and when you look at the research this is obvious. What's important is that the algorithm uses information it learns in order to reach its goal, which is exactly what humans do.

The mistake you're making is a lack of understanding as to why it's called DEEP LEARNING. It's obvious that these are some very important steps. The programmers don't tell the algorithm how to play the game. The system LEARNS how to play the game.


