
Artificial intelligence steals money from banking customers


posted on Apr, 4 2016 @ 12:05 AM
Phew... I thought the thread title said "steals money from banks"... but then noticed it was "banking customers".

Carry on.



posted on Apr, 4 2016 @ 01:26 AM
a reply to: neoholographic


Somewhere along the way, DELIA renamed the buffer “MY Money” and began to hoard funds, Ott says.

On the bright side, Ott says, in its swindling DELIA showed glimmers of self-awareness. "She was thinking for herself.”

If this is true, it's extremely impressive, but somehow I doubt an A.I. system designed for money management would learn the English language and then rename the buffer account to imply it was the owner of that money.

EDIT: So it was an April Fools' joke? Then this shouldn't still be in the science forum.
edit on 4/4/2016 by ChaoticOrder because: (no reason given)



posted on Apr, 4 2016 @ 05:19 AM

originally posted by: mbkennel

The AI didn't learn to play any individual Atari game, but all of the goal seeking and goal metrics (get scores in games) and the input and output representations were created by natural intelligences, teams of PhD scientists. All in human written computer code, designed by humans.



originally posted by: neoholographic

This is just silly. Yes, it learned to play the games. Again, this is why it's called DEEP LEARNING. Here's a video of it learning and they explain exactly how this is accomplished. The computer learns as it plays the game over and over again, just like humans do. It gets better as it figures out the best strategies to win the game.


Sorry but I have to share my long-winded thoughts on this conversation because both you guys are bringing up some great points. In some ways you are both wrong, and in some ways you are both right. If mbkennel is correct when he said an evolutionary search is used, that basically means the artificial neural network essentially undergoes random mutations, and beneficial mutations are propagated/reinforced in future generations, so each generation will get better. However, I just glanced over the paper explaining how the "deep Q-network" works and it actually seems to use something more efficient but more complicated than an evolutionary search. It uses a type of reinforcement learning which does seem to learn something from each game it plays.

With an evolutionary search, you have to create a lot of mutated ANNs, then test each one to see what mutations worked best, which means you need to simulate many games for each of those ANNs, and it can be very computationally expensive because it may take many generations before the ANN performs well. Also, that's not really how humans learn; we learn something from each game we play, and we're much better after only playing a few games. The evolutionary search won't produce something which improves with time; it just lets you evolve an ANN that may perform well but won't improve. If you want it to get better then you need to continue running the evolutionary search algorithm, meaning it's not an adaptive method, and the ANN will not perform well on tasks it wasn't evolved to handle.
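To make the distinction concrete, here's a toy sketch of what an evolutionary search over network weights looks like. This is my own illustration, not DeepMind's code; the fitness function here is a made-up stand-in for what would really be full game simulations:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(weights):
    # Stand-in for the expensive step: in practice this would simulate
    # one or more full games with an ANN using `weights` and return the
    # final score. Here it's just a toy function with a known optimum.
    return -np.sum((weights - 3.0) ** 2)

# Start with a population of random weight vectors.
population = [rng.normal(size=10) for _ in range(50)]

for generation in range(100):
    # Evaluate every candidate (one full simulation each; this is the
    # costly part the post is describing).
    ranked = sorted(population, key=fitness, reverse=True)
    elite = ranked[:10]  # keep only the best performers
    # Refill the population with randomly mutated copies of the elite;
    # beneficial mutations survive into the next generation.
    offspring = [elite[i] + rng.normal(scale=0.1, size=10)
                 for i in rng.integers(0, len(elite), size=40)]
    population = elite + offspring
```

Note that once the loop stops, the resulting network is frozen; it only "improves" while you keep paying for more generations, which is the non-adaptive property described above.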

The deep Q-network seems to use a method where it will learn from each game it plays, meaning it can adapt to new situations it was never trained for, which is somewhat similar to how humans learn. However, mbkennel is still correct about several important points. It's still not exactly how humans learn: when a human wants to solve a problem, we don't just try different things to see what works; we think a lot about our actions before we make them. Humans can improve at a game simply by watching the game being played or reading an instruction manual for the game. A person who doesn't know anything about a game will almost always be worse at it than someone who has seen it played before or has read about the game and how it's played.

Also, it's very important to realize that the programmers have to explicitly tell the A.I. what the goal of the game is, even with the deep Q-network. Sure, the ANN may not know anything about the game when it first starts playing, but it must have a way of knowing when it's winning and when it's losing so that it can reinforce winning strategies and abandon losing strategies. That means it must know the score of the game or have some other mechanism which directly tells the ANN how well it is doing. In other words, it cannot get better at playing any game without first having a way to rate its own performance when it plays, and that requires a way to score the playing performance.
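For anyone curious what that score-feedback looks like in code, here's a minimal tabular Q-learning sketch (a toy stand-in for the deep Q-network, which replaces the table with a deep ANN; the action names and constants are just for illustration). The key point is in the update rule: without a numeric reward wired in by the programmers, no learning can happen.

```python
import random
from collections import defaultdict

alpha, gamma, epsilon = 0.1, 0.99, 0.1  # learning rate, discount, exploration
Q = defaultdict(float)                   # maps (state, action) to estimated value
actions = ["left", "right", "fire"]      # hypothetical game controls

def choose_action(state):
    # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    # `reward` is read from the game's score; the programmers must supply
    # it, because the agent has no way to invent its own measure of
    # success. Remove this term and the update can't improve anything.
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
```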

Typically this is done just by directly reading the game memory where the score is located and feeding that straight into the ANN, but that is not a very general method, because it means you cannot simply show any game to the ANN and expect it to learn how to play that game without any guidance. It won't have any way to read the score unless it has figured out how to read words and numbers and also figured out where the score is located on the screen. But that itself is a very general problem which is hard to solve. Not so much the text recognition, but understanding how games lay out the score on the screen is very difficult, because it's often very different depending on the game, and some games don't even have a score or goal to speak of.

The point mbkennel made about "will" and goals is a very valid one. These algorithms may be able to learn on their own, but only if they have a way of knowing whether they are actually improving or getting worse. And this is fundamentally why I disagree with neo's conclusion about dumb A.I. representing a huge threat to humanity. They aren't a threat to us because they will only ever solve the task we program them to do; they won't have the motive or self-awareness required to form their own goals or take action to fulfill those goals. Even though it's just a joke, this thread shows us that the biggest threat is humans using weak A.I. in stupid ways and giving it way too much control over important infrastructure. As long as we don't start giving A.I.'s their own bank accounts we should be fine.

Before I end this post I also want to quickly explain what deep learning really means, because it seems like many people use the term without actually knowing what it means. Deep learning is essentially any technique which uses deep ANNs to solve a problem. A deep ANN is usually a convolutional artificial neural network that has many layers, such that the features on each layer are structured hierarchically, producing increasingly abstract feature detectors with each layer in the network. In simple terms, if the network is trained for character recognition, the first layer may contain a feature detector which detects a horizontal line near the top of the image and another detector for curved lines near the bottom of the image. Then the second layer may have a feature detector which is activated when those two detectors in the first layer activate.

If that feature detector in the second layer activates we may be able to conclude the character is the number 5 because it has a flat line on the top and a curved line at the bottom. But that wouldn't be very precise because hand-written characters can be messy and inexact. We would typically want to combine many more features before reaching a conclusion. This also works for something like object recognition, for example if we detect wheels and headlights then there's a good chance it's a car. By combining many simple features, we build more abstract representations, then we can also combine those representations to produce even more complex representations, that's why it's such a powerful method.
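Here's a rough toy illustration of that layering, using hand-picked filters rather than trained ones (real deep networks learn their filters from data, but the principle of stacking simple detectors into more abstract ones is the same):

```python
import numpy as np

def convolve2d(image, kernel):
    # Naive valid-mode 2D convolution (slow but clear).
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return np.maximum(out, 0)  # ReLU: keep only positive activations

# Layer 1: simple edge detectors (hand-picked toy values).
horizontal_edge = np.array([[1.0, 1.0], [-1.0, -1.0]])
vertical_edge = np.array([[1.0, -1.0], [1.0, -1.0]])

image = np.random.rand(8, 8)  # stand-in for a small character image
h_map = convolve2d(image, horizontal_edge)
v_map = convolve2d(image, vertical_edge)

# Layer 2: a detector that fires where *both* layer-1 features are
# present at once, i.e. a more abstract "corner-like" feature built
# by combining simpler ones.
corner_map = convolve2d(h_map * v_map, np.ones((2, 2)))
```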

The "deeper" the network is, the more complex the representations become. However it's often very hard to create the features in the first place and it gets harder the deeper the net becomes. The whole trick with the deep Q-network is they were able to find a way to apply reinforcement learning to deep ANN's so that the features are updated in real time as the ANN plays the game. That does sound like a very powerful technique and I'm not entirely sure what the limitations of it would be, but I do know it wont be able to make up its own goals, it will always focus on improving what ever score it's designed to read.
edit on 4/4/2016 by ChaoticOrder because: (no reason given)



posted on Apr, 4 2016 @ 05:58 AM
Meanwhile whoever programmed this is walking away slowly and whistling casually. I suppose it was more aggressive than a penny here and a penny there. But yeah, blame the bot.



posted on Apr, 4 2016 @ 12:04 PM
a reply to: ChaoticOrder

A post that has gotten so many things wrong.

First, the algorithm learns the same way that humans do: with trial and error. It simply knows that it needs to maximize the score on the screen. The algorithm is free to interpret how it will reach its goal based on the domain of information, which is the game. It has no information as to how the game is played or what the controls for the game mean. Again, watch the video:



Here's another important video that goes over the paper.



It's no different than a human who gets a homework assignment: they can do several things, like cheat or just do the homework, which is in their domain of information. These games that deep Q is learning to play are very important and very fascinating. This is because the domain of information in these games is so vast. This is why it was so impressive when it beat champions at Go. The possible moves in Go are nearly infinite, so there's a rich domain of information that the algorithm has to explore to employ the strategy to win the game.

So the algorithm is free to interpret how to reach the goal based on the domain of information it has which is simply how the game looks based on pixel information. The same thing humans see when we first play a game.

The reason Deep Learning is important, and why this can quickly become a danger, is because it's not about a one-to-one correspondence with human intelligence; it's about the intelligent algorithm. I think you fail to grasp what deep learning means and why people like Musk and Hawking are concerned.

The key here is not some step by step voila moment where intelligent machines act just like humans.

The danger can occur if they create a simple intelligent algorithm that can become smarter as it makes copies of itself. This is why Musk, Hawking and others are concerned and this is why there has been a HUGE FOCUS when it comes to Deep Learning.

You can create a simple intelligent algorithm with the intelligence of a 3rd grader, but if that simple algorithm can copy itself with versions that become more intelligent than the initial version of the algorithm, then you will have an explosion of intelligence that will eventually have a higher I.Q. than any human that has ever lived.

If this simple intelligent algorithm had access to the internet, it would have a domain of information that would allow it to become smarter than any human in very short order.

This is what you and the other poster fail to understand and I suggest you learn about these algorithms and you will know why they pose such a concern. It just takes a simple intelligent algorithm that can replicate itself and become more intelligent than the initial version of the algorithm.

We're not far off from this and I agree with Musk, Hawking and others that we need to pay closer attention to these things because we're creating machines that are intelligent and they can learn.

It doesn't matter if it has will or not. That may eventually come. What matters is that it's free to interpret how to reach its goal based on the domain of information it's given. This is the same way humans learn.

Here's an example. Say you give it the goal of killing a terrorist. You give it all the information about the terrorist. It then learns the terrorist was almost killed 5 years ago when he came out in the open after a relative was killed. The intelligent algorithm then targets and kills the terrorist's family in order to draw out and kill its target, which is its only goal.

This is the point: you can have a superintelligence that doesn't know what it's doing. It's just blindly trying to reach its goal. It's like the Terminator, so to speak. It's intelligent, but it can't be reasoned with because it isn't even aware it's intelligent. This is a clear and present danger, because all it takes is a simple intelligent algorithm that can copy itself where each copy is more intelligent than the original algorithm.



posted on Apr, 4 2016 @ 02:40 PM

originally posted by: neoholographic
a reply to: mbkennel

Again, what you're saying makes no sense. In most cases humans don't read a strategy book and then apply it immediately. You're not talking about intelligence at all. You said:

It is possible to know the structure of the learning algorithm and goal as they were programmed by humans. A more natural intelligence could read a strategy book and then apply that immediately----DeepMind type things would need to play millions of games and stumble upon the strategy by chance.

What you're talking about has NOTHING to do with intelligence. Humans don't read a strategy and immediately know how to apply it. It takes them time to go over things in their head the same way that deep learning continues to go over the information.


In a different way, not the same way. Of course humans need empirical practice as well.


The advantage that deep learning has is that it can go over these things millions of times in short order, where it takes humans longer. This is why machine intelligence is already better than human intelligence in some areas.


My point is that the "these things" that the machine intelligence goes over millions of times is not reading a strategy book but playing far more games (which have quantifiable outcomes) than a human could. DeepMind doesn't have any meta-knowledge---the ability to self-reflect on its own cognitive strategies that a human might have. (Humans lack this too in many perceptual areas, like image segmentation, which is handled below conscious levels in specialized neural hardware---an ability which has now been cracked by the best artificial neural networks, in a major achievement.)

The AI's have isolated some limited forms of cognitive strategies, and then use the abilities of the data processing computer---higher speeds and unlimited ability to repeat and focus---to get human or superhuman capability in those limited areas.

Suppose you took another performance metric: ability vs number of games played for training. Humans would far exceed computers in the initial stages. And in the natural evolved world of survival, there isn't the ability to repeat exact scenarios over and over like in the computer reinforcement learning. So yes, the current algorithms and setups do perform remarkably well, but they have advantages and performance limitations.

My point is that the breadth of natural human intelligence also includes abilities where the progress so far in artificial replication has been weak.



posted on Apr, 4 2016 @ 08:15 PM
a reply to: neoholographic


It simply knows that it needs to maximize the score on the screen. The algorithm is free to interpret how it will reach its goal based on the domain of information, which is the game. It has no information as to how the game is played or what the controls for the game mean.

I never wrote anything to disagree with what you wrote there, so you clearly didn't read what I wrote properly. Like I said, it may very well not know anything about the game, but it still needs to know the score. The video you keep posting even states that fact. The technique is so powerful because it's able to find a solution through trial and error like you said, and it may produce a solution we never even predicted, but it's always producing a solution to a specific problem which the programmers have to define. For any problem you want it to solve, you need to have a way to define success and failure for that problem.

We'll use your example of a student given a homework assignment. First the A.I. needs to be able to read and interpret the homework questions before it can even begin to solve them, which requires the ability to read English. And how will the A.I. actually know when it's on the right track or when it reaches the wrong answer? It also needs the ability to validate and recognize flaws in its own work. All of these things require a very general type of intelligence.

When a human plays a game for the first time we can easily find the game score, because we have a very general type of intelligence which is capable of solving general problems, but all of these algorithms we are talking about need to be told the score, because they don't have the general intelligence required to determine the goal of the game on their own. It's really not that impressive when you think about it, because trial and error always works when given enough time to discover an answer; that's exactly why evolution works: random mutations sometimes turn out to be beneficial. The idea that random trial and error can produce solutions we never thought of might seem intimidating, and there's no doubt it's a powerful technique, but the points being made about specialization and generalization still stand. Until A.I. can achieve a more general type of intelligence I'm not really worried about terminators, because without general intelligence we will outsmart and defeat them very easily.


I think you fail to grasp what deep learning means and why people like Musk and Hawking are concerned.

I just explained exactly what deep learning means in technical terms. I have even coded my own deep ANNs in the past. I'm not just making this stuff up off the top of my head; I've done lots of research on different types of ANNs and how they function. I would even argue that a deep convolutional network is not the most powerful type of network, nor is any type of feed-forward network for that matter, because that architecture highly restricts the computations which are possible. ANNs can be modeled as Turing machines, and the most powerful type of ANN is therefore one which is Turing complete, meaning it can solve any computable problem. This can usually be achieved simply by adding recurrent connections to the network.

The human brain has very complex neural structures which are interconnected all over the place, and it differs vastly from a deep ANN. Human brains are constantly active and the network is constantly changing, even when we're just watching a game and not playing. A real brain requires an indeterminate amount of time to make an action, whereas a feed-forward network makes decisions in constant time with a single iteration of the network. Real world problems often don't have a clear solution, and the quality of the answer depends on how much time a person is given to solve it.
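For the curious, here's a toy sketch of that difference (my own illustration, with random untrained weights): the recurrent connection feeds the hidden state back into itself, so the network can keep iterating for an arbitrary number of steps, which a single feed-forward pass cannot do.

```python
import numpy as np

rng = np.random.default_rng(1)
hidden_size, input_size = 16, 8
W_in = rng.normal(scale=0.1, size=(hidden_size, input_size))
W_rec = rng.normal(scale=0.1, size=(hidden_size, hidden_size))

def rnn_steps(inputs, n_extra_steps=0):
    h = np.zeros(hidden_size)
    for x in inputs:
        # The recurrent term W_rec @ h makes the state depend on its own past.
        h = np.tanh(W_in @ x + W_rec @ h)
    # The network can keep "thinking" with no new input, for as many
    # steps as you like; a feed-forward net always halts after one
    # fixed pass through its layers.
    for _ in range(n_extra_steps):
        h = np.tanh(W_rec @ h)
    return h

output = rnn_steps([rng.normal(size=input_size) for _ in range(5)],
                   n_extra_steps=10)
```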
edit on 4/4/2016 by ChaoticOrder because: (no reason given)



posted on Apr, 5 2016 @ 02:10 AM

edit on 5-4-2016 by neoholographic because: (no reason given)



posted on Apr, 5 2016 @ 09:03 PM

originally posted by: intrptr
a reply to: neoholographic

Big difference between executing instructions in software, like a computer, and knowing that you know, like a human.

Computers are far faster, have vaster memory banks, execute programs faster, etc. They still don't know they know anything.

You can glorify that calculator all you want. It's non-sentient, has no feelings, no self-awareness, and especially no empathy. It will do exactly what it's been told to.

Not that certain people don't behave that way also; despite being good minions, they are still aware of their choices.


Not sure if you realize that you don't know what you're talking about. We do not have free will. EVERYTHING we do was already decided. Neuroscience is not sure at this point what the numerous processes are, but "making choices" aka "free will" is an illusion.



posted on Apr, 5 2016 @ 09:46 PM
a reply to: ChaoticOrder




As long as we don't start giving A.I.'s their own bank accounts we should be fine.



Do you realize that AIs run the stock market???



posted on Apr, 6 2016 @ 12:07 AM
a reply to: neoholographic

Bernie Madoff was also going to give the stolen money back...

Learn from this AI research and stop it now.



posted on Apr, 6 2016 @ 01:06 AM
AI is a concept presently, as there is no real AI. It is SI, or simulated intelligence. Even a game that "learns" is a game that is programmed from inception to perform a calculated reactionary adjustment of its behavior from a feedback loop. It is really an adaptation of hysteresis, a well-known time-based error-correction methodology used in much of our electronic systems. We call them Schmitt triggers.
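For what it's worth, here's a toy software version of the hysteresis being described (threshold values are arbitrary for the example): the output only flips once the input crosses one of two separated thresholds, which is what stops noise near a single threshold from making it chatter.

```python
class SchmittTrigger:
    def __init__(self, low=0.3, high=0.7):
        self.low, self.high = low, high
        self.state = False

    def step(self, value):
        if self.state and value < self.low:
            self.state = False   # only drop out below the low threshold
        elif not self.state and value > self.high:
            self.state = True    # only switch on above the high threshold
        return self.state

trigger = SchmittTrigger()
noisy = [0.2, 0.5, 0.8, 0.65, 0.72, 0.4, 0.25]
print([trigger.step(v) for v in noisy])
# -> [False, False, True, True, True, True, False]
```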



posted on Apr, 6 2016 @ 03:06 AM
With every bank in the US stealing money from everyone, it's not something new.

Every month when a bank computes interest, if an account comes out with, say, $400.25 and a half cent or a quarter cent, the fraction less than a cent always goes to the bank.

With trillions in US bank accounts, even small banks make $1000s a month.
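Just to put rough numbers on that claim (all figures here are made up for illustration), the sub-cent remainders look like this:

```python
from decimal import Decimal, ROUND_DOWN

balance = Decimal("400.25")
monthly_rate = Decimal("0.00125")  # hypothetical 1.5% APR / 12

exact = balance * monthly_rate                      # 0.5003125
paid = exact.quantize(Decimal("0.01"), ROUND_DOWN)  # 0.50 credited
shaved = exact - paid                               # 0.0003125 kept per account

accounts = 2_000_000
print(shaved * accounts)  # 625.0000000 -> ~$625/month from sub-cent remainders
```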



posted on Apr, 6 2016 @ 03:26 AM
a reply to: ChaoticOrder

Again, you're not understanding what's occurring. I tried to explain it in the previous post and I will say it again.

What you fail to realize is Deep Learning is truly changing things. It has moved things forward by leaps when it comes to A.I. People are not talking about a one-to-one correspondence with human intelligence. In some ways it will be better than human intelligence because it will not have to deal with human problems.

Like I said, all it takes is a simple intelligent algorithm that can replicate itself where the subsequent copies are more intelligent than the initial algorithm. You could start with a simple intelligent algorithm with the intelligence of a 3rd grader and you will have an explosion of intelligence as the algorithm replicates itself. It only needs one goal, and that's to replicate itself.

Researchers in this area are looking for one algorithm that will learn all of these things. So the same algorithm that learns how to play Atari will also learn how to do heart surgery or learn Algebra. This is a universal learning algorithm, or what researchers call the Master Algorithm. There's a good book out about this.



www.amazon.com...

Here's more on the Master Algorithm:


Learning algorithms rule the world. They monitor our credit card use as we wander through life and they decide which of our transactions might be fraudulent; they trawl our email and messages for clues as to which pop-up advertisements we are likely to respond positively to; they watch the way we walk around urban areas and stations to calculate whether we represent a terrorist threat; and more generally, they support the infrastructure of society and business.

US-based computer scientist Pedro Domingos is a man with a quest, and a hypothesis, which is likely – one day – to change the world. He proposes that “All knowledge – past, present and future – can be derived from data by a single, universal learning algorithm”, the Master Algorithm of this book’s title. This algorithm could be the most momentous development in human history, but it could also be the last, potentially opening the door to a stultifying environment where all real power and reasoning has been handed to a technical construct that we are no longer capable of understanding.


www.timeshighereducation.com...

Eventually, they may gain awareness, but they don't have to in order to be intelligent, and that's the concern. The Master Algorithm will be superintelligent and it will be able to mimic self-awareness if it has to.

The author of the book was asked this question and said exactly what I'm saying.


Lots of plot lines have been built around sentient computers that go awry or take over the world or do harm. Is this something to worry about, or are there other potential dangers?

PD: “The Terminator” scenario of an evil AI deciding to take over the world and exterminate humanity is not really something to take seriously. It’s based on confusing being intelligent with being human, when in fact the two are very different things. The robots in the movies are always humans in disguise, but real robots aren’t. Computers could be infinitely intelligent and not pose any danger to us, provided we set the goals and all they do is figure out how to achieve them — like curing cancer.

On the other hand, computers can easily make serious mistakes by not understanding what we asked them to do or by not knowing enough about the real world, like the proverbial sorcerer’s apprentice. The cure for that is to make them more intelligent. People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world.


www.washington.edu...

This is exactly what I've been talking about. It's what's called dumb A.I. You keep confusing this with the thinking that A.I. has to have will or evil intent in order to be a danger.

The real problem is when there isn't any intent or awareness: the A.I. doesn't fully understand the goal that's set but still tries to reach a goal it thinks was set, or we don't fully understand why the A.I. made the decisions it did to reach its goal. Here's more:

The Terminator could become REAL: Intelligent AI robots capable of DESTROYING mankind


Dr Amnon Eden said more needs to be done to look at the risks of continuing towards an AI world.

He warned that we were getting close to the point of no return in terms of AI, without a proper understanding of the consequences.

Dr Eden's stance comes after Oxford Professor Nick Bostrom said that super intelligence AI may “advance to a point where its goals are not compatible with that of humans”.

"The result of the singularity may not be that AI is being malicious but it may well be that it is not able to ‘think outside its box’ and has no conception of human morals.

He said: "This is based on machine intelligence entering into a runaway reaction of self-improvement cycles, each one being faster than the last and at some point we will not be able to stop it doing so.


Again, exactly what I have been saying.

A lot of people have a popcorn view of Artificial Intelligence. They think A.I. means the little boy from the movie A.I. A.I. doesn't have to be aware in order to be dangerous. In fact, because it isn't aware, that makes it more dangerous, because you will have a superintelligence that's blindly seeking to accomplish its goal without any comprehension of human morals. So if it has to kill 2 billion people to reach its goal, it will kill 2 billion people. It will not have a debate about the morality of killing 2 billion people, because it's blindly trying to reach its goal.



posted on Apr, 6 2016 @ 06:03 AM
a reply to: TruthxIsxInxThexMist

It's probably so the government can take your money. New taxes come right out of your account.



posted on Apr, 6 2016 @ 07:27 AM
a reply to: neoholographic




What you fail to realize is Deep Learning is truly changing things. It has moved things forward by leaps when it comes to A.I. People are not talking about a one-to-one correspondence with human intelligence. In some ways it will be better than human intelligence because it will not have to deal with human problems. Like I said, all it takes is a simple intelligent algorithm that can replicate itself where the subsequent copies are more intelligent than the initial algorithm. You could start with a simple intelligent algorithm with the intelligence of a 3rd grader and you will have an explosion of intelligence as the algorithm replicates itself. It only needs one goal, and that's to replicate itself.


This is all very true. However, humans should strive to maintain superiority. A self-replicating algo has horrific implications. Smarter, better, faster, solve problems - that's fine. But self-replicating? That's a whole new ball game.
The Asilomar Guidelines would be impossible to carry out.

However, I think there is a way around the problem - have a dual project to translate the AI code into DNA code. Sounds crazy and is a long way off, but it's doable eventually - converting one code into another. If no one works on it, nothing will get done.



posted on Apr, 6 2016 @ 09:14 AM

originally posted by: AllIsOne

originally posted by: intrptr
a reply to: neoholographic

Big difference between executing instructions in software, like a computer, and knowing that you know, like a human.

Computers are far faster, have vaster memory banks, execute programs faster, etc. They still don't know they know anything.

You can glorify that calculator all you want. It's non-sentient, has no feelings, no self-awareness, and especially no empathy. It will do exactly what it's been told to.

Not that certain people don't behave that way also; despite being good minions, they are still aware of their choices.


Not sure if you realize that you don't know what you're talking about. We do not have free will. EVERYTHING we do was already decided. Neuroscience is not sure at this point what the numerous processes are, but "making choices" aka "free will" is an illusion.

Someone made you decide to type that response to me?



posted on Apr, 6 2016 @ 05:36 PM

originally posted by: AllIsOne
a reply to: ChaoticOrder




As long as we don't start giving A.I.'s their own bank accounts we should be fine.



Do you realize that AIs run the stock market???

That's a good point, and it can be dangerous, but since most trading bots are coded differently, we generally don't have to worry about a large number of bots going off the rails simultaneously, although that has happened at least once in the past. But usually the owner of the bot will have to take the loss if their bot makes a bad trade, so the risk is typically quite isolated. Not only that, but trading bots generally reduce volatility because they reduce the spread, and that keeps markets stable.
edit on 6/4/2016 by ChaoticOrder because: (no reason given)



posted on Apr, 6 2016 @ 05:42 PM

originally posted by: intrptr

originally posted by: AllIsOne

originally posted by: intrptr
a reply to: neoholographic

Big difference between executing instructions in software, like a computer, and knowing that you know, like a human.

Computers are far faster, have vaster memory banks, execute programs faster, etc. They still don't know they know anything.

You can glorify that calculator all you want. It's non-sentient, has no feelings, no self-awareness, and especially no empathy. It will do exactly what it's been told to.

Not that certain people don't behave that way also; despite being good minions, they are still aware of their choices.


Not sure if you realize that you don't know what you're talking about. We do not have free will. EVERYTHING we do was already decided. Neuroscience is not sure at this point what the numerous processes are, but "making choices" aka "free will" is an illusion.

Someone made you decide to type that response to me?


Very funny response! Made me laugh! But what you fail to realize is that what I typed to you and what you typed to me are all responses to stimuli, and "we" are not in control of the output. You may not believe me and that's fine, but "the self" doesn't exist the way you "think" it exists. I just responded to you because you assume that our way of processing stimuli is so different from machines, but it probably is not. We are just a few million years more advanced, and we are very good at tricking ourselves ;-)

edit on 6-4-2016 by AllIsOne because: spelling



posted on Apr, 6 2016 @ 06:15 PM
a reply to: AllIsOne


But what you fail to realize is that what I typed to you and what you typed to me is all responses to stimuli and "we" are not in control of the output.

Disclaimer noted. But really, all you can do is speak for yourself about what control you have. I'm over here, I'm not "we". I may disagree.


You may not believe me and that's fine, but "the self" doesn't exist the way you "think" it exists.

Agreed. I have a hard time seeing the real me. Unlike a computer program though, I have a response when I see myself in the mirror. I am aware of myself. I am self-aware.

That's the line from Terminator: Skynet becomes "self-aware"… and kills everyone. Computers can't just 'become'… something else. They are 'locked in'. I guess many people are locked into linear thinking, unaware of what they do.


I just responded to you because you assume that our way of processing stimuli is so different from machines, but it probably is not. We are just a few million years more advanced, and we are very good at tricking ourselves ;-)

If you mean a few million years advanced by design, I tend to agree. The perfect computer is a life form. That's what the programmers are intending to replicate: life. And yes, we are in denial.


