How algorithms secretly run the world


posted on Feb, 19 2017 @ 09:18 PM
a reply to: neoholographic

I'll play your game.

Your post is a total misunderstanding of the facts.



You talk about brute force, but again that's just nonsense. The only reason why this matters is because of computing power and as I pointed out in the last published article posted that was misrepresented, the Researchers tried to figure out ways to reduce this. That doesn't really matter though because quantum computing will throw that problem out of the window.


First off, full quantum computers are probably a few decades away from mainstream use. The best one has 9 qubits entangled, and it is literally the size of a room. D-Wave's newest has 2048 flux qubits, using quantum annealing... which solves some quantum problems but not others (and it is only a fraction of a room in size). Yes, quantum computers will throw certain classes of problems out the window, but only a handful of classes (granted, a handful of classes covers several thousand specific problems). But they won't be able to handle many of those problems until they are built to scale, decades from now. So we have quite a few years (assuming continually successful innovation) until that happens. I know you didn't specify a timeframe, but "decades" isn't exactly "next year". As a side note, Peter Shor is awesome.

Your own article describes the "brute force" used, which you deny:
spectrum.ieee.org...



First, the AI’s algorithms computed a strategy before the tournament by running for 15 million processor-core hours on a new supercomputer called Bridges.
Second, the AI would perform “end-game solving” during each hand to precisely calculate how much it could afford to risk in the third and fourth betting rounds (the “turn” and “river” rounds in poker parlance). Sandholm credits the end-game solver algorithms as contributing the most to the AI victory. The poker pros noticed Libratus taking longer to compute during these rounds and realized that the AI was especially dangerous in the final rounds, but their “bet big early” counter strategy was ineffective.
Third, Libratus ran background computations during each night of the tournament so that it could fix holes in its overall strategy. That meant Libratus was steadily improving its overall level of play and minimizing the ways that its human opponents could exploit its mistakes. It even prioritized fixes based on whether or not its human opponents had noticed and exploited those holes. By comparison, the human poker pros were able to consistently exploit strategic holes in the 2015 tournament against the predecessor AI called Claudico.


That is how you "brute force"... and how you create neural networks... and they did both.

From: www.cs.cmu.edu...


however, we developed a technique for computing the joint distributions that requires just O(n) strategy table lookups.


From: www.dictionary.com...


A primitive programming style in which the programmer relies on the computer's processing power instead of using his own intelligence to simplify the problem, often ignoring problems of scale and applying naive methods suited to small problems directly to large ones.


I know what you're going to argue next... "But it does use its own intelligence!" Actually, it doesn't use intelligence for one specific part... the LOOKUP. That's why it's called a table lookup... because it's just a table of stored values that was pre-computed via brute force. And that information is directly from the creator of the Poker AI! Now, the neural networks it uses are intelligent, but a significant part of the "winning endgame strategy" was essentially "memorized" table values. This is exactly how Chess and Go AIs work. They use optimized solutions (stored in memory) to win a "mini-game" within the larger game. They don't calculate it all on the fly, nor do they rely on the "neural network" alone for a solution.
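To make the "table lookup" point concrete, here's a minimal sketch (hypothetical code of my own, not anything from Libratus): burn compute up front to enumerate and solve every state, then answer in-game queries with nothing but a dictionary lookup.

```python
from itertools import product

def solve_offline(state):
    # Stand-in for the expensive solver; imagine this costing real compute.
    return max(state) - min(state)

# "Brute force" phase: enumerate every reachable state once, store the answer.
strategy_table = {s: solve_offline(s) for s in product(range(5), repeat=3)}

def best_response(state):
    # Online phase: zero search, zero intelligence -- just a table lookup.
    return strategy_table[state]
```

The intelligence, such as it is, went into building the table; consulting it during play is pure recall.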

I'd like to add one more factor... it was programmed by a handful of Carnegie Mellon's human geniuses. So, in essence, you're playing against a supercomputer with literally millions of hours of pre-computation, active computation during gameplay, nightly computation to recalculate strategies used by specific opponents (while the humans slept), all utilizing top strategies programmed in by some of the smartest people in the world. It's quite literally "many versus one".

I'm fine with this. I like that they are making AIs that actively re-evaluate strategy and learn new ways to bluff. I even said it was good in a previous post. But it is most certainly brute force. That's what the article's first point describes... 15 million processor-core hours, where it memorizes common gameplay to statistically guarantee wins, no matter how much an opponent might try to bluff. That's just good programming... but it utilizes "brute force".

-continued-
edit on 2017-2-19 by Protector because: updated to match another post




posted on Feb, 19 2017 @ 09:37 PM
a reply to: neoholographic

-part 2-



Even more important, the victory demonstrates how AI has likely surpassed the best humans at doing strategic reasoning in “imperfect information” games such as poker.


Of course it has! This has been happening for decades. Imperfect information (in this instance) just means computing statistical outcomes over a finite set of possibilities (not all of which can be readily enumerated). Computers had better be superior! Why else would we use them???
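To illustrate with a toy of my own (not from the article): "imperfect information" just means averaging over what you can't see. Holding one known card against one hidden card, you enumerate every card the opponent could hold:

```python
# Toy high-card game: one card each, higher rank wins; the opponent's
# card is hidden, so we enumerate the remaining deck to get a win probability.
deck = list(range(2, 15))   # ranks 2..14 (ace high), one card per rank
my_card = 11                # say we were dealt a Jack

unseen = [c for c in deck if c != my_card]
p_win = sum(1 for c in unseen if my_card > c) / len(unseen)   # 9 of the 12 unseen ranks lose to us
```

Real poker scales this same idea up to hidden hole cards, unknown future board cards, and unknown bet sizes, which is where the statistics get hard.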



The system is learning and it's not Poker specific. As I mentioned earlier, this is one of the goals of A.I. To make intelligent systems that can learn across all areas. It came up with it's own strategy and the only input was the rules of Poker. The Programmers didn't tell it what strategy to use or which strategy to learn in a game where they have to bluff because it has imperfect information. There goes your brute force. How can it be simply calculating information from it's environment when it doesn't have all of the information? It's not even algorithms that are poker specific.


That is about 50% wrong. You're slightly misunderstanding how it works. The programmers DID TELL IT WHAT STRATEGIES TO USE... but not WHEN to use them. Your own article even specifies that most of the advantage came from the endgame strategy algorithms. These algorithms were developed by HUMANS over the last decade. They are detailed in this paper:
www.cs.cmu.edu...

These algorithms are mathematical in nature and can be applied to many different games that fit the parameters of the algorithms (again, developed by humans). Poker is not the only game that fits these algorithms, but it is the specific test case for them in the paper.

I brought up TensorFlow in an earlier post. It is a Google library that packages the strategies used across all kinds of software platforms for solving AI problems... a common set of mathematics and programming shortcuts for general AI problems. This Poker bot still has a ton of Poker-specific programming. But when the Poker bot needs to generate a neural network, it doesn't need Poker-specific programming; you can use generic training techniques. That is what major software like TensorFlow provides. That's the part you seem to believe is running EVERYTHING. It's not. It's just a piece--albeit, an important piece.
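A toy illustration of what "generic training techniques" means (my own sketch, nothing TensorFlow-specific): the same gradient-descent loop fits any differentiable model, with not one line of poker in it.

```python
# Fit the one-parameter model y_hat = w * x by gradient descent on
# mean squared error. Nothing here knows or cares what the data means.
def train(xs, ys, lr=0.01, steps=2000):
    w = 0.0
    for _ in range(steps):
        # gradient of mean((w*x - y)^2) with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return w

w = train([1, 2, 3, 4], [2, 4, 6, 8])   # data generated by y = 2x
```

The domain knowledge lives in what data you feed it and what you optimize for; the training machinery itself is generic.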

Specifically you said, "the only input was the rules of Poker". No. I have thoroughly disproved this. You can't just type in rules from a rule book and then the AI just "gets it". I assume you know that and you were over-simplifying your statement.

The AI is not developing most of the strategies, but it is determining when to apply them. The neural network, which is just part of the AI, will develop a set of strategies to solve the 120,000 hands in whatever way the programmers told the AI to optimize for. However, that's just one aspect of the program, and it wasn't credited with beating the human players... the endgame strategy was... the part where brute-force pre-computed solutions were applied, by the AI, to match the current gameplay.

Again, this Poker bot is pretty cool. I don't think it represents a huge leap forward. It is really just a great mashup of gaming strategies and computational optimizations. But these strategies are somewhat domain specific (a domain of games). Some of the techniques are old-school. Some are new. But it is "theory in practice". That's why I like it. This isn't a leap forward where AI is programming itself and coming up with never-before-seen strategies of its own imagination. But that will happen, someday in the not-too-far future. Just not yet.



posted on Feb, 20 2017 @ 12:23 AM
a reply to: Protector

Excellent description. I read the article, and though I understand ML and statistics reasonably well (it is my day job), the techniques in that paper were quite unfamiliar to me; they're definitely specialized to certain kinds of games. The AlphaGo achievement was more impressive in using more general-purpose methods, but there was still a large amount of Go knowledge and simulated games used to train its nets---and there were still stochastic game-tree searches during play.

What I think is actually amazing: that a few expert humans can come close to that level of performance against zillions of pre-computed (though abstracted/approximated) solutions of a game, when their entire lifetime never allowed them to play anywhere near as many hands of poker as the virtual bot. And these humans can go home, read a nice story to their family, and then shoot pool with their buds.

AI will be really impressive when it can read this paper, and the papers it references, then write a TensorFlow program to implement it, and understand when it doesn't understand something well enough to ask somebody else. That's very, very far off.



posted on Feb, 20 2017 @ 12:31 AM
a reply to: Protector

What?

You still don't understand the paper that you quoted. You need to take time to actually understand what they're talking about. You said:

That is how you "brute force"!... and create neural networks... which they did both.

This makes no sense. You're just typing large amounts of nonsense that refutes nothing. There was no brute force; there was just a lot of processing power. The system learned strategies; it was not programmed with these strategies. This has nothing to do with brute force. It just took a lot of computing power to do these things.

Again, THIS HAS NOTHING TO DO WITH BRUTE FORCE. You sound asinine. You said:

That's just good programming... but it utilizes "brute force".

Again, you have no clue what you're talking about. This has nothing to do with good programming. There was no programming of the strategy done. The A.I. picked this strategy. It was just given the rules of Poker.


Even more important, the victory demonstrates how AI has likely surpassed the best humans at doing strategic reasoning in “imperfect information” games such as poker. The no-limit Texas Hold’em version of poker is a good example of an imperfect information game because players must deal with the uncertainty of two hidden cards and unrestricted bet sizes. An AI that performs well at no-limit Texas Hold’em could also potentially tackle real-world problems with similar levels of uncertainty.

“The algorithms we used are not poker specific,” Sandholm explains. “They take as input the rules of the game and output strategy.”

In other words, the Libratus algorithms can take the “rules” of any imperfect-information game or scenario and then come up with its own strategy. For example, the Carnegie Mellon team hopes its AI could design drugs to counter viruses that evolve resistance to certain treatments, or perform automated business negotiations. It could also power applications in cybersecurity, military robotic systems, or finance.


spectrum.ieee.org...

Again, you hear the word programming and you lose your mind. Of course they're programmed to learn, but the behavior they learn from the information in their environment isn't programmed. This is learning.

Humans are programmed too. We go to school; we learn from our environment. It's the same thing these systems are doing. You said:

Specifically you said, "the only input was the rules of Poker". No. I have thoroughly disproved this. You can't just type in rules from a rule book and then the AI just "gets it".

This is just asinine. This is from the mouth of the creator of Libratus.

“The algorithms we used are not poker specific,” Sandholm explains. “They take as input the rules of the game and output strategy.”

For some reason, you're under the impression that people are calculating and programming these things when it's the intelligent system that learns these things.

SHOW ME WHERE IT SAYS THE STRATEGY WAS CALCULATED BY A PROGRAMMER!

The "brute force" has nothing to do with what strategies it learns but how much computing power it uses. This is why I keep saying you and your friends need to actually read instead of post because you're debating against something that has nothing to do with A.I.

It's a huge leap forward if you understand anything about A.I. research.

This is what was said about computing power.


There is some good news for anyone who enjoys playing—and winning—at poker. Libratus still required serious supercomputer hardware to perform its calculations and improve its play each night, said Noam Brown, a Ph.D. student in computer science at Carnegie Mellon University who worked with Sandholm on Libratus. Brown reassured the Twitch chat that invincible poker-playing bots probably would not be flooding online poker play anytime soon.


spectrum.ieee.org...

Again, the system came up with the strategy; nobody told it which strategy to play or how to play it. This was learned by the intelligent system. You said:

The AI is not developing most of the strategies, but it is determining when to apply them. The neural network, which is just part of the AI, will develop a set of strategies to solve the 120,000 hands in whatever way the programmers told the AI to optimize for.

What? The A.I. develops the strategy, period. There's no programming as to which strategies to play. It's not just determining when to apply them; it's learning the strategy. It comes up with the strategy. This is why I said you don't know what you're talking about.

You talked about solving 120,000 hands. Again, this is nonsense. It played 120,000 hands against the pros:

Libratus lived up to its “balanced but forceful” Latin name by becoming the first AI to beat professional poker players at heads-up, no-limit Texas Hold'em. The tournament was held at the Rivers Casino in Pittsburgh from 11–30 January. Developed by Carnegie Mellon University, the AI won the “Brains vs. Artificial Intelligence” tournament against four poker pros by US $1,766,250 in chips over 120,000 hands (games). Researchers can now say that the victory margin was large enough to count as a statistically significant win, meaning that they could be at least 99.98 percent sure that the AI victory was not due to chance.
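As a side note on that "99.98 percent" figure (my own back-of-envelope, not from the article): it corresponds to a one-sided p-value of 0.0002, meaning the AI's win margin sat roughly three and a half standard errors above zero.

```python
import math

def normal_cdf(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Find the z-score whose upper tail probability is 0.0002 (= 1 - 0.9998).
z = 0.0
while 1 - normal_cdf(z) > 0.0002:
    z += 0.001
# z comes out near 3.54 standard errors
```

With 120,000 hands, even poker's huge per-hand variance averages down enough for a margin that size to clear that bar.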

The programmers had nothing to do with how the games were played. The system came up with its own strategy.

The system had to learn how to play poker by figuring out what strategies worked best through reinforcement learning. Nobody programmed this; the system had to learn through experience.

SHOW ME WHERE THE PROGRAMMERS TAUGHT THE SYSTEM HOW TO PLAY POKER AND WHERE THE PROGRAMMERS TAUGHT THE SYSTEM WHICH STRATEGIES TO USE.

This clearly shows your lack of understanding.

At first, the system learns how to play by playing trillions of hands against itself. In the very first game, it doesn't know how to play Poker at all. Bet, fold, raise, check, and call all mean the same thing to it. The system learns as it plays what these things mean and when to use them as a strategy to win more money.

The system then tries to predict what cards its human opponent has based on those trillions of hands. Whenever a move is made that doesn't fit its predictions, that new information is folded into the strategy. Again, nobody is programming the system as to how to play poker or what strategies to use; it's just given the rules of the game.
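For what the self-play idea looks like in miniature, here's a hedged sketch using regret matching on rock-paper-scissors. This is NOT Libratus's actual algorithm (Libratus used counterfactual regret minimization over an abstracted game), just the textbook seed of the idea: track how much each action would have gained in hindsight, play in proportion to that regret, and the average strategy drifts toward the equilibrium (1/3, 1/3, 1/3) with no strategy programmed in.

```python
# Regret-matching self-play on rock-paper-scissors (toy illustration).
PAYOFF = [[0, -1, 1],    # rock vs rock/paper/scissors
          [1, 0, -1],    # paper
          [-1, 1, 0]]    # scissors

def strategy(regrets):
    # Play each action in proportion to its positive accumulated regret.
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1 / 3.0] * 3

def self_play(iterations=50000):
    regrets = [1.0, 0.0, 0.0]          # slight initial bias, so learning has work to do
    strategy_sum = [0.0, 0.0, 0.0]
    for _ in range(iterations):
        s = strategy(regrets)
        strategy_sum = [a + b for a, b in zip(strategy_sum, s)]
        # Expected value of each action against the current (self-play) strategy.
        ev = [sum(s[o] * PAYOFF[a][o] for o in range(3)) for a in range(3)]
        game_ev = sum(s[a] * ev[a] for a in range(3))
        for a in range(3):
            regrets[a] += ev[a] - game_ev
    total = sum(strategy_sum)
    return [x / total for x in strategy_sum]   # average strategy, near (1/3, 1/3, 1/3)
```

The only game knowledge here is the payoff matrix (the "rules"); the strategy falls out of iterated self-play.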
edit on 20-2-2017 by neoholographic because: (no reason given)



posted on Feb, 20 2017 @ 12:43 AM
It's really simple because it seems some people on this thread don't understand what words like intelligence and learning mean:

SHOW ME WHERE THE PROGRAMMERS TAUGHT THE SYSTEM HOW TO PLAY POKER AND WHERE THE PROGRAMMERS TAUGHT THE SYSTEM WHICH STRATEGIES TO USE.

The A.I. had to learn to play poker, and it had to learn strategies to beat its human opponents. It was said:

The AI is not developing most of the strategies, but it is determining when to apply them.

THIS IS JUST PURE NONSENSE!

The system was just given the rules of poker, and the output strategy wasn't programmed; it was determined by the intelligent system.

“The algorithms we used are not poker specific,” Sandholm explains. “They take as input the rules of the game and output strategy.”

Where does it say that how to play poker was programmed or what strategy to use was programmed?

I'll answer the question, NOWHERE!



posted on Feb, 20 2017 @ 03:01 AM
Here's an interesting video about Libratus with the Poker players and the guy who developed the system.



A couple of key points. First, Protector and others on this thread keep talking about "brute force", which makes no sense. Tuomas Sandholm, a computer scientist at Carnegie Mellon University, says at around 5:10 that it's not about BRUTE FORCE because there are 10^160 situations that a player can face in this game. That's a 1 followed by 160 zeroes, which is more than the number of atoms in the universe, estimated at between 10^78 and 10^82. So it CAN'T use brute force; it has to learn.
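Sandholm's scale argument checks out with simple arithmetic (the machine throughput here is my own assumption: a generous 10^18 evaluations per second, roughly an exascale supercomputer):

```python
# How long would it take to enumerate 10^160 game situations outright?
states = 10**160
evals_per_second = 10**18                    # assumed exascale throughput
seconds_needed = states // evals_per_second  # 10^142 seconds
universe_age_seconds = 4 * 10**17            # ~13.8 billion years
lifetimes_of_universe = seconds_needed // universe_age_seconds
```

That works out to about 2.5 * 10^124 lifetimes of the universe, so exhaustive enumeration of the full game is simply off the table; any pre-computation has to work on an abstracted, much smaller game.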

He also says that the system was just given the rules of poker but the system had to come up with the strategy. This is around 5:40.

So all of this NONSENSE about brute force is just that... NONSENSE. Like I said, the A.I. had to learn how to play, and it chose which strategies to use. These things were not programmed, and it had to do it with incomplete information.
edit on 20-2-2017 by neoholographic because: (no reason given)



posted on Feb, 20 2017 @ 10:41 PM
link   

originally posted by: Aedaeum
I want to shout out to Azadan and Protector for their insight on AI; it was very illuminating. Azadan, I was wondering if I might be able to pick your brain in PM? I have a project I'm working on that I'd love to have your advice on, though I'm afraid you might not be coming back to this thread haha for obvious reasons.


Sure, I'm still reading the thread.

I'm just not going to get into arguments with people who clearly have no understanding of the subject, and no desire to learn about it.


