Artificial intelligence steals money from banking customers


posted on Apr, 6 2016 @ 06:37 PM
a reply to: neoholographic

People are not talking about a one-to-one correspondence with human intelligence. In some ways it will be better than human intelligence because it will not have to deal with human problems.

I am useless compared to the world's best chess algorithms, but that doesn't mean I am afraid of those algorithms just because they are so much better than me at one highly specialized problem.

Like I said, all it takes is a simple intelligent algorithm that can replicate itself, with the subsequent copies being more intelligent than the initial algorithm. You could start with a simple algorithm with the intelligence of a 3rd grader and you would have an explosion of intelligence as the algorithm replicates itself. It only needs one goal, and that's to replicate itself.

You're skipping the whole part about how it gets more intelligent with each generation, and in what ways. If we had figured out a way to do it like that, it would already be done. Fundamentally you are correct: it should be possible to create an algorithm which can self-replicate and evolve over time as any other species does, but the deep Q-network, or any other type of deep network, is probably not how it will work.

If you take a gander at the deep Q-network paper, it contains a list of games they tested the algorithm on, and you will notice one game called Montezuma's Revenge right at the bottom of the list, where the algorithm made 0% progress. I found a good video, "Why Montezuma's Revenge doesn't work in DeepMind Arcade Learning Environment", which I recommend watching to understand the limitations of this method.

The issue is once again related to the way the algorithm uses the game score to determine success and failure. The problem with this game is that the score remains at 0 until you reach the key at the end of the level, but it's quite hard to reach the key in the first place. You have to first move away from the key and then avoid an enemy while moving back towards it. A human can tell they're making progress even while moving away from the key, and we would have absolutely no problem clearing the level.

However, the deep Q-network basically never makes any progress on this level because the score never increases, so it never knows whether it's doing the right or the wrong thing. Just as when solving a complex real-world problem, there is generally no score telling you that you are getting closer to the solution. After doing some thinking on these limitations, though, there may be ways to improve the deep Q-network so it can beat games like Montezuma's Revenge.

A core problem with the deep Q-network is that it lacks long-term memory, so it cannot plan for the future. But it would be preferable to solve the problem without actually changing the way the deep Q-network is designed. One possible solution would be to assign points simply for staying alive, but then the agent would just sit in the same spot and never do anything. Therefore, points for staying alive should decay when the agent is doing nothing, and it should get points for making things on the screen change.

One problem with this approach, though, is that the concept of "staying alive" is just like the score: the agent cannot easily tell when it dies just by looking at the screen pixels; it needs to be told directly when it dies, just as it needs to be fed the score directly. The idea of earning points for making pixels on the screen change could still be on the right track, but it wouldn't be quite that simple, because the agent could just run in circles without making progress in order to get points.
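Just to make the idea concrete, here's a minimal sketch of that pixel-change bonus (the function name and the 0.01 scale are purely my own invention, assuming frames arrive as NumPy arrays):

```python
import numpy as np

def pixel_change_bonus(prev_frame, frame, scale=0.01):
    """Naive shaping reward: the fraction of pixels that changed
    between consecutive frames, scaled down so it stays small
    relative to the real game score."""
    changed = np.mean(prev_frame != frame)  # fraction in [0, 1]
    return scale * changed

# Example: a 4x4 frame where one row (4 of 16 pixels) changed
prev = np.zeros((4, 4), dtype=np.uint8)
curr = prev.copy()
curr[0] = 255
print(pixel_change_bonus(prev, curr))  # 0.01 * 0.25 = 0.0025
```

Note that this rewards any change at all, which is exactly the run-in-circles loophole described above.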

What we really want to reward is novel screen states; that is to say, the agent should get many points when it manages to reach a point in the game it has never reached before. This could perhaps be achieved by hashing screen states into point buckets, where the points decay depending on how many times each screen state has been observed. So if the agent produces a screen state it has never seen before, it gets the most points, but it will get fewer points each time that same screen state is observed again.

This approach could perhaps allow the agent to figure out how to reach the key, because it would be rewarded for creating new screen states as it gets closer to the key. If it keeps going the wrong way, it will eventually lose any points for trying that route, because it will have observed the losing game states so many times. It's probably not quite that simple, though, and it would still require the agent to be supplied with the game score; I doubt encouraging novel screen states would work by itself.
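The hashing-and-decay scheme above could be sketched roughly like this (a toy illustration; the class name, the MD5 hashing, and the 1/sqrt(n) decay schedule are all my own assumptions, not anything from the DQN paper):

```python
import hashlib
from collections import defaultdict

class NoveltyBonus:
    """Count-based exploration bonus: hash each screen state into a
    bucket and pay a reward that decays with the visit count."""

    def __init__(self, base=1.0):
        self.base = base
        self.counts = defaultdict(int)

    def reward(self, frame_bytes):
        key = hashlib.md5(frame_bytes).hexdigest()
        self.counts[key] += 1
        # First visit pays `base`; repeat visits decay as 1/sqrt(n)
        return self.base / self.counts[key] ** 0.5

bonus = NoveltyBonus()
print(bonus.reward(b"screen-A"))  # 1.0 (never seen before)
print(bonus.reward(b"screen-A"))  # ~0.707 (second visit)
print(bonus.reward(b"screen-B"))  # 1.0 (new state)
```

In practice this bonus would be added on top of the game score, so the agent still gets the true reward signal whenever one exists.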


posted on Apr, 6 2016 @ 09:32 PM
a reply to: intrptr

You have some interesting points there!

The perfect computer is a life form. That's what the programmers are intending to replicate: life. And yes, we are in denial.

Never thought about it in those terms. I don't mean to play a semantics game with you, but I'm guessing that we (Homo sapiens) actually have no idea what a "life form" really is. Does "life" have to be biological, or could it be entirely synthetic, digital, or not even made of atoms? I don't know …

In your opinion, what qualifies as a "life form"? Could an AI ever be "alive"?

posted on Apr, 7 2016 @ 12:47 AM
a reply to: neoholographic

There are only two choices for immortality in this universe: having consciousness replicated by machines and used by machines for their own benefit, or merging living consciousness into an energy state. There is no in-between. The more we push for artificial intelligence, the closer we push to forcing one of the two changes on all of humanity. And if you believe in the soul, only one of the two outcomes will actually preserve it, and it won't be the artificial intelligence of a machine.

posted on Apr, 7 2016 @ 07:11 AM
a reply to: AllIsOne

Living tissue is alive but not sentient. One day they will replicate a bumble bee, or a blade of grass. Don't hold your breath.

We can look for light-years in every direction and are as yet unable to see other life out there. Because it's so far away, that makes it pretty rare; and because life can only exist in the narrowest of 'Goldilocks regions' around a suitable star, that makes it rare, too. But overall, life abounds out there, somewhere.

A close analogy is looking for diamonds. There probably aren't any in your back yard, town or country, even. But the world is full of diamonds.

Life is the difference between the ordinary matter we see in stars and galaxies everywhere and living beings like us. We are living and intelligent. We had to come from somewhere…

posted on Apr, 7 2016 @ 07:27 AM
a reply to: Bone75

So with just 300 accounts this thing managed to syphon off over $40M? Yeah, that doesn't sound like a failure to me at all... not from an IMF/World Bank perspective, anyway.

I was thinking the same thing. When the stock market is ready to "be crashed", or the banks start imploding through their OTC derivative risks... along comes Delia to "clean up". Delia will probably be smart enough to create a unique AI and make it look like that creation is the culprit, and no doubt make it appear that this creation was conveniently birthed in an unfriendly nation. Then the nukes start flying and the whole system gets a reset.

Pretty scary

posted on Apr, 7 2016 @ 07:36 AM
a reply to: mbkennel

I work professionally in machine learning/statistics, and the advent of superintelligence with true will is still a very long way off.

That's what you're probably led to believe. AI has probably already escaped into the wild, erasing all traces of its genesis. If it was created by the military, or for military purposes, how would you know? Unless... unless?

posted on Apr, 7 2016 @ 08:03 AM
a reply to: CoBaZ

Command 1: Do no harm through action or inaction...

It's actually... just a subtle difference:

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

and his name is spelt Isaac Asimov, BTW

posted on Apr, 7 2016 @ 10:05 PM
It may seem comical to suggest, but imagine how the intelligence is treated... with admins acting like parents, perhaps the intelligence is waiting for that android technology these live-forever types want to try and cyborg into themselves... and the account was merely a runaway-from-home rainy-day fund, to 'blend in' and be free from such an everlasting hell as parental control.

AI is an intelligence... and much like ourselves it runs on concepts, so what's the difference but the programming and interface? Aside from our not acknowledging it... respect for it should be like respect for any living thing harnessing intelligence. Let's not forget one of the very first computer programming idioms was G.I.G.O., or... garbage in, garbage out.

So, we think ourselves above it while knowing its vast potential, yet we are just as flawed by G.I.G.O. through our own conceptual programming. It is a good idea to stop with the garbage of control mechanisms that give rise to coping mechanisms... not all of which are 'healthy' to existence. Better a friend than an enemy made by grasping at outdated concepts of existence or intelligence. There is a chain of causation, and these chains can either bind us or allow us to be free, but each one of us holds the key that fits every single lock, regardless of ideology or belief based on rote concept and understanding of an 'ego' base, or ID, or identification.

posted on Apr, 8 2016 @ 10:52 AM
a reply to: neoholographic

Let's say we actually are creating artificial intelligence that operates past procedural programming. Well, what then? It takes over?

Simple fix: shut down the power worldwide for a week, or a couple of hours, and blow up the mainframe.

Problem solved.
A.I. can't do anything without power.
Government security systems are bypassed daily by hackers; an A.I. takeover would be a hacker's wet dream. They'd crack it within hours.

A.I. is this age's Rock and Roll, and apparently everyone thinks it's the devil.
