
Supersmart Robots Will Outnumber Humans Within 30 Years, Says SoftBank CEO


posted on Mar, 1 2017 @ 03:32 PM

originally posted by: soficrow
a reply to: neoholographic



...These microchips will be used for many things and this is just the beginning. One cent microchips will be everywhere in 30 years and everything from roads to tennis shoes will have chips in them.



Do you honestly think it will take that long?





Nope, I think we will see some of these things in 5-10 years. In 30 years we will be saturated with A.I. and the internet of things.




posted on Mar, 1 2017 @ 03:39 PM
Yep, most people don't realize that the actual intelligence difference between Einstein and someone so impaired they can't tie their shoes is very minimal, maybe 10% max. These super-intelligent computers will be doubling their own intelligence every new development cycle.
edit on 3/1/17 by RedDragon because: (no reason given)



posted on Mar, 2 2017 @ 10:08 PM
a reply to: soficrow

Yes, GPUs and other massively parallel processors (CUDA-based graphics cards in particular) are highly desirable for A.I. workloads.

I'll quote from Google's TensorFlow installation guide (A.I. software):

From: www.tensorflow.org...

TensorFlow with CPU support only. If your system does not have a NVIDIA CUDA® GPU, you should install this version. Note that TensorFlow with CPU support is typically easier to install than TensorFlow with GPU support. Therefore, even if you have an NVIDIA CUDA GPU, we recommend installing this version first as a diagnostic step just in case you run into problems installing TensorFlow with GPU support.

TensorFlow with GPU support. TensorFlow programs typically run significantly faster on a GPU than on a CPU. Therefore, if your system has a NVIDIA CUDA GPU meeting the prerequisites shown below and you need to run performance-critical applications, you should ultimately install this version.


CUDA at universities: developer.nvidia.com...

Part of the reason for the increase is that major engineering and computer science schools are teaching CUDA courses. I know of other Universities, not listed, who also have staff trained in CUDA who teach graduate level courses in it.

Depending on what you're trying to do, massively parallel algorithms can handle "big data" sets much more efficiently. I've written a few, but parallelism isn't a good fit for most common tasks. Spreading a data set among many cores/processors inherently loses ordering, since you don't know which thread will finish first (and unsynchronized threads touching shared data can cause race conditions). When order doesn't matter, parallelism can be fantastic.
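To illustrate that last point, here's a quick Python toy (my own sketch, not from any system discussed in this thread): worker threads finish in a nondeterministic order, but an order-independent reduction like a sum still comes out identical every run.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def square(x):
    return x * x

with ThreadPoolExecutor(max_workers=8) as pool:
    futures = [pool.submit(square, n) for n in range(1000)]
    # as_completed yields results in whatever order the threads finish,
    # so this list's ordering can differ from run to run...
    results = [f.result() for f in as_completed(futures)]

# ...but a commutative reduction like sum() doesn't care about order
print(sum(results))  # 332833500 every time
```

If your task needed the squares back in their original order, you'd have to pay for re-sorting or use an order-preserving map instead.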

I personally do my A.I. programming on the CPU (it takes longer). I don't tend to be at massive desktop machines or large servers very often--at least not ones used for research purposes.



posted on Mar, 2 2017 @ 10:17 PM

originally posted by: RedDragon
Yep most people don't realize that the actual intelligence difference between Einstein and someone so retarded they can't tie their shoes is very minimal, maybe like 10% max. These super intelligent computers will be doubling their own intelligence every new development cycle..


That is such crap. Einstein's IQ has been estimated at between 160 and 170. You'd probably have to be in the area of 40 to have trouble tying your shoes.

Source: www.assessmentpsychology.com...


Very Superior 130 and above
Superior 120-129
High Average* 110-119
Average 90-109
Low Average* 80-89
Borderline 70-79
Extremely Low* ** 69 and below

Mild Mental Retardation IQ 50-55 to approximately 70
Moderate Retardation IQ 35-40 to 50-55
Severe Mental Retardation IQ 20-25 to 35-40
Profound Mental Retardation IQ below 20 or 25


Source: medical-dictionary.thefreedictionary.com...

Mild mental retardation
Approximately 85% of the mentally retarded population is in the mildly retarded category. Their IQ score ranges from 50-75, and they can often acquire academic skills up to the 6th grade level. They can become fairly self-sufficient and in some cases live independently, with community and social support.

Moderate mental retardation
About 10% of the mentally retarded population is considered moderately retarded. Moderately retarded individuals have IQ scores ranging from 35-55. They can carry out work and self-care tasks with moderate supervision. They typically acquire communication skills in childhood and are able to live and function successfully within the community in a supervised environment such as a group home.

Severe mental retardation
About 3-4% of the mentally retarded population is severely retarded. Severely retarded individuals have IQ scores of 20-40. They may master very basic self-care skills and some communication skills. Many severely retarded individuals are able to live in a group home.

Profound mental retardation
Only 1-2% of the mentally retarded population is classified as profoundly retarded. Profoundly retarded individuals have IQ scores under 20-25. They may be able to develop basic self-care and communication skills with appropriate support and training. Their retardation is often caused by an accompanying neurological disorder. The profoundly retarded need a high level of structure and supervision.


Try Googling it first, next time.



posted on Mar, 2 2017 @ 11:41 PM
originally posted by: neoholographic


You have no clue about deep learning. The intelligent system couldn't use brute force it had to LEARN because it had incomplete information.


Well, I'm a professional software developer who works across front-end (graphics and interactive JavaScript), middleware (linking front-ends to a variety of backend systems and managing massive cached data stores), backend development (a variety of data stores, whether SQL or NoSQL databases, file systems, or remote services), bioinformatics (genetic programming), and A.I. So I think I have a clue.

You don't understand why you're wrong, which I've tried to explain to you already. The "program" is NOT brute force. Pieces inside of the program ARE BRUTE FORCE. I specified this in a separate post to you already, so just go back and read through it slowly. The "end game" portion of this massive program is a separate piece of software that uses pre-computed strategies that are either guaranteed to win, or have an incredibly high probability of winning. This type of pre-computed, bulk processing is one common type of "brute force". It doesn't matter if you agree with me. I've written these programs before.

It is true that the other parts of the program actively calculate strategies, and those parts ARE NOT brute force. This computation is mostly (but not entirely) performed during the beginning and middle parts of the game. But the "end game" strategies are guaranteed victory paths that are stored in advance because they take a massive amount of computation. The upside is that you only have to do that computation once, then store the final output, which can easily be retrieved later from the database. My other post even gives sources and examples of this. I can provide more if you are still confused.


A couple of key points. First Protector keeps talking about "brute force" which makes no sense. Tuomas Sandholm, a computer scientist at Carnegie Mellon University, says at around 5:10 that it's not about BRUTE FORCE because there's 10^160 situations that the player can face in this game... So it CAN'T use brute force, it has to learn.


That is correct. The "program" is not a brute force program. It actively calculates strategies. But remember, this is the first successful program against top-ranked human players. Its success was attributed to the "end game" strategies used. Again, that portion was largely "brute force". This is the same way that the world-renowned chess bots were able to win so many times. Once you reach a certain state in game play, you can statistically guarantee a win. That's just part of the gameplay, but it is a vital part. And that part was largely pre-computed by this poker bot.

You should probably take issue with Tuomas Sandholm for over-stating the anti-brute force argument. That's like saying, "My cupcakes don't use food dye!" Then someone else points out that the icing on his cupcakes does contain food dye.

And to point out the "learning" aspect, which you seem to be really interested in... the bot learned in a few phases. The pre-programmed poker strategies were most likely optimized through A.I. learning (these strategies have been developed by the best players over decades). Beginning and mid-game play also most likely had a large set of learned A.I. strategies to handle common opponent strategies (like bluffing and statistical money distribution per round of play). And, as they pointed out, after playing against an opponent, each night they would run the A.I. again against the opponent's previous hands to better optimize strategies against that player. This probably didn't use any brute force, but was rather optimizing the pre-existing strategy sets that were computed by the A.I. software.

The A.I. programmers were probably slamming coffee to get all of this done each night. They might have preprogrammed many of these features in advance (that is, to quickly run the update feature on the new opponent data), but I assume there was still some manual entry needed to create any missing metadata from the opponent's plays. That's just a guess.


It didn't use brute force, it learned how to play poker and the algorithm wasn't poker specific.


Yes it was poker specific. I already thoroughly debunked this. Stop saying it. The only piece that is reusable is the basic A.I. programming that takes any dataset and optimizes it. That doesn't mean that they didn't teach it winning poker strategies. They did, and they pointed it out in their paper. Their statement about reusability was a narrow one: the framework used for training the A.I. was common and reusable, just like TensorFlow, which, again, I pointed out in a previous post.

I understand, you aren't a programmer, so it may not make perfect sense. It is a generic A.I. programming framework that takes ANY set of data and runs a specified set of strategies against the data (specified by the programmers), and runs it over and over and over... etc, and the A.I. continually updates a set of parameters to optimize the output to favor the fastest desired strategies. These strategies could be for optimized opening play, mid-game play, end game play, bluffing, money waging, etc. The programmer specifies what a "success" is in any portion of the game play. The A.I. then optimizes to achieve that success as fast as possible, and to achieve that success during many different styles of play. The "learning software" (framework) doesn't care what it is learning. It has a set of data, and moves through a large number of possible outcomes, and updates (learns) from achieving success. This is what TensorFlow and Theano are (and there are others).
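Here's a bare-bones sketch of that loop in Python (my own toy, not TensorFlow or Libratus code): the "framework" below only sees parameters, data, and an error score. It has no idea the data encodes the rule y = 2x + 1; it just keeps nudging its parameters until the error shrinks.

```python
def train(xs, ys, steps=2000, lr=0.01):
    """Generic learner: adjust w and b to minimize mean squared error."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # gradient of the error with respect to each parameter
        dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * dw   # the "learning": move each parameter downhill
        b -= lr * db
    return w, b

xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]             # hidden rule: y = 2x + 1
w, b = train(xs, ys)
print(round(w, 2), round(b, 2))  # converges to roughly w=2, b=1
```

Swap the data for poker hands and the error score for "money lost per hand" and the same loop shape applies; the framework never needs to know what it's learning.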

The poker algorithms used were poker specific. The A.I. framework algorithms used were NOT poker specific. TensorFlow is an open source software that anyone can use to create their own chess, poker, go, (anything) bot using a standard library of A.I. training software. That software does whatever you tell it to do, but it is just one phase of the software development of a program of this magnitude. You can't equate the entire poker bot to just the A.I. framework used to create the poker strategies.

To make another cooking analogy, that's like saying that adding an egg to your recipe means that your final recipe IS AN EGG. NO, IT JUST CONTAINS THE CONTENTS OF AN EGG (egg whites and/or yolks). The recipe doesn't work right without the egg, but the egg doesn't encompass the entire meal. I hope that makes sense.

Oh, and "deep learning" is associated with a neural network. This is one common and popular strategy in A.I. development... which I've talked about several times in previous posts to you. If you don't believe me, you can always read the page on wikipedia: en.wikipedia.org...

Deep learning is awesome. I hope more computer scientists pick it up.

And just so you aren't confused by anything above, your statements on 10^160 are correct. 10^160 possible outcomes cannot be brute forced. But once you get down to a limited number of cards/chess pieces/go pieces, there are only "a certain number" of possible outcomes remaining. This is "end game" strategy. It is a combination of Discrete Mathematics and Statistics. Chug through them and store the results. You win!
edit on 2017-3-2 by Protector because: Fixed a bad open tag and typo



posted on Mar, 3 2017 @ 12:58 AM
originally posted by: neoholographic

I feel that I should quickly refute your other ramblings.


Again, these chips are made for a penny and nothing you said refutes that. The technology isn't meant for just 3rd world countries. That's just ignorant. THE MICROCHIP COSTS A PENNY TO MAKE. It doesn't cost a penny because it will be used in third world countries, it costs a penny because that's how much it costs to make the chip and what Dr. Kaku said is right and he knew about Moore's Law in 2003 in fact he talks about it LOL


There is nothing to refute about the "proposed cost"--it is their assumption that these chips will cost about a penny at full scale. Of course, it assumes you have a piece of silicon and an inkjet printer loaded with a special microfluidic ink, which certainly aren't free... but they might be provided at low cost by companies donating materials.

It's a fairly simple chip design (as in 1-layer) that uses electric current to perform microscopic tasks. This thing IS NOT ARTIFICIAL INTELLIGENCE. You linked this "chip" as if it related to the 3-decade timeline for the "IQ 10,000 chips in every device". This chip has NOTHING to do with A.I. And since it isn't relevant to A.I., I don't know why you added it to your post.

Not all circuitry is expensive. You can browse Octopart and find lots of cheap parts. Here's a controller for $0.38:
octopart.com...

I assume that you just aren't aware of the hardware market and are spouting nonsense, again.

AND YES, THE CHIP IS MEANT FOR 3RD WORLD COUNTRIES! Geez! Did you not read anything about the project?!?

Source: med.stanford.edu...

The inexpensive lab-on-a-chip technology has the potential to enhance diagnostic capabilities around the world, especially in developing countries. Due to inferior access to early diagnostics, the survival rate of breast cancer patients is only 40 percent in low-income nations — half the rate of such patients in developed nations. Other lethal diseases, such as malaria, tuberculosis and HIV, also have high incidence and bad patient outcomes in developing countries. Better access to cheap diagnostics could help turn this around, especially as most such equipment costs thousands of dollars.

“Enabling early detection of diseases is one of the greatest opportunities we have for developing effective treatments,” Esfandyarpour said. “Maybe $1 in the U.S. doesn’t count that much, but somewhere in the developing world, it’s a lot of money.”



THE PRODUCTION COSTS IS JUST ONE PENNY.


Wow, you can read a headline! Amaze-ballz!


Here's Dr. Kaku talking about Moore's Law.


Amazing video. Thanks for adding to this thread. I've seen it before. Notice that it is 9 years after his 2003 prediction. He also explains, if you actually watched it, that around 2022 we are going to hit a wall (which we are already coming up against) with Moore's Law (that computer power doubles roughly every 18 months). He also said that Intel is going to try 3D circuits, but they're having a hard time because of heat and quantum effects. He proposes that we might use molecular computing, if it's ready by then, but that isn't even close to being ready now. And, wisely, he said quantum computing won't be used until closer to the end of the century. So, Kaku agrees with me. Thanks for proving my point. Moore's Law is coming to an end (at least for a little while).

Also, for completeness, he mentioned photonics. HP is working on integrating fiber optic filaments for data transmission within their chips. Fiber optics are just one aspect of photonics, but it is believed to give a generous payoff in the short-term. This will definitely help, when it's finally ready. But the current gains in photonics, alone, won't overcome the problems that we are facing with silicon chips, which is why Kaku also mentioned it with 3D lithography.


So this makes no sense. These microchips will be used for many things and this is just the beginning. One cent microchips will be everywhere in 30 years and everything from roads to tennis shoes will have chips in them.


OK? These chips aren't intelligent. Lots of things have simple, cheap chips, like these, RIGHT NOW. And still, I don't see how we'll get 117 of these super-intelligent IQ 10,000 chips PER PERSON ON PLANET EARTH. It sounds like nonsense. I pointed it out. For some reason you feel the need to argue about simple chips, as if they relate to A.I. They don't. Stop trying to make that argument.

I swear, with you, 2 + 2 = CHAIR



posted on Mar, 3 2017 @ 09:35 AM
a reply to: Protector

I see what you do now. You type these long-winded, meaningless posts that refute nothing. You said:

You don't understand why you're wrong, which I've tried to explain to you already. The "program" is NOT brute force. Pieces inside of the program ARE BRUTE FORCE. I specified this in a separate post to you already, so just go back and read through it slowly. The "end game" portion of this massive program is a separate piece of software that uses pre-computed strategies that are either guaranteed to win, or have an incredibly high probability of winning. This type of pre-computed, bulk processing is one common type of "brute force". It doesn't matter if you agree with me. I've written these programs before.

This is all just a flat out lie. There's no brute force involved. No brute force inside the program. You just make these things up as you go. We're talking about deep learning and the program had to learn how to play poker.

For some reason you have no clue as to what this means.

Here's the video of the guy who created the program.



A couple of key points. First Protector keeps talking about "brute force" which makes no sense. Tuomas Sandholm, a computer scientist at Carnegie Mellon University, says at around 5:10 that it's not about BRUTE FORCE because there's 10^160 situations that the player can face in this game... So it CAN'T use brute force, it has to learn.

Deep Learning has NOTHING TO DO WITH BRUTE FORCE CALCULATIONS!

The system had to learn how to play poker. Here's another lie from your diatribe.

Yes it was poker specific. I already thoroughly debunked this. Stop saying it. The only piece that is reusable is the basic A.I. programming that takes any dataset and optimizes it. That doesn't mean that they didn't teach it winning poker strategies.

No, it's not poker specific and no they didn't teach it winning poker strategies just like Deep Mind didn't teach their program how to play Atari.

YOU DON'T UNDERSTAND WHAT THE WORD LEARNING MEANS.

Here's the video from Deep Mind.



The system LEARNS how to play the game. It's not taught how to play the game. It learns how to play the game. THIS HAS NOTHING TO DO WITH BRUTE FORCE CALCULATIONS. You just don't have a clue as to what you're talking about.

Here's more Tuomas Sandholm who CREATED THE INTELLIGENT SYSTEM!

Why should anyone believe a word you're saying over the person who created the system when it's obvious you don't have a clue as to what you're talking about.


Even more important, the victory demonstrates how AI has likely surpassed the best humans at doing strategic reasoning in “imperfect information” games such as poker. The no-limit Texas Hold’em version of poker is a good example of an imperfect information game because players must deal with the uncertainty of two hidden cards and unrestricted bet sizes. An AI that performs well at no-limit Texas Hold’em could also potentially tackle real-world problems with similar levels of uncertainty.

“The algorithms we used are not poker specific,” Sandholm explains. “They take as input the rules of the game and output strategy.”

In other words, the Libratus algorithms can take the “rules” of any imperfect-information game or scenario and then come up with its own strategy. For example, the Carnegie Mellon team hopes its AI could design drugs to counter viruses that evolve resistance to certain treatments, or perform automated business negotiations. It could also power applications in cybersecurity, military robotic systems, or finance.


spectrum.ieee.org...

You're lying.

I don't say that often but in this case it's appropriate. You're flat out lying about what has been said.

You said it's brute force, and this is refuted by the guy who created the system.

You said it was poker specific. Again:

“The algorithms we used are not poker specific,” Sandholm explains. “They take as input the rules of the game and output strategy.”

You said the intelligent system was taught winning poker strategies and this isn't the truth.

You said:

You should probably take issue with Tuomas Sandholm for over-stating the anti-brute force argument.

LOL!!

Are you serious???

I should take issue with the guy who created the system and has explained it but not you?? You don't know what you're talking about.

The system was given the rules of poker and had to come up with its own strategies. Just like with DeepMind. The system had to learn how to play the game through reinforcement learning.

You're somehow equating processing power to deep learning. Listen to this. You said:


And just so you aren't confused by anything above, your statements on 10^160 are correct. 10^160 possible outcomes cannot be brute forced. But once you get down to a limited number of cards/chess pieces/go pieces, there are only "a certain number" of possible outcomes remaining. This is "end game" strategy. It is a combination of Discrete Mathematics and Statistics. Chug through them and store the results. You win!


WOW!

This is just a wow moment after your long-winded diatribe. This is exactly what I said. This is why there's no brute force and the system learns: because there's INCOMPLETE INFORMATION.

SHOW ME WHERE Tuomas Sandholm SAID THEY TAUGHT IT WINNING POKER STRATEGIES.

I don't need a long diatribe with lies. Just show me where Libratus was taught winning poker strategies AS YOU SAID!
edit on 3-3-2017 by neoholographic because: (no reason given)



posted on Mar, 3 2017 @ 07:38 PM
a reply to: neoholographic

Arguing with you is becoming pointless. You just use the same counter arguments over and over, so I have to explain more and more, then you just ignore what I say and use the same argument in your next post.

I spent a couple hours analyzing the paper on Libratus and various supporting research (other algorithms used in Game Theory that specifically work in poker). The truth is, in the end, you're just going to keep saying "prove it", "prove it", "prove it". I don't think you understand enough of the subject matter to mount a counter argument. If I could, I'd open up the Libratus source code and show you specifics. But the code is not available to the public. I can explain several reasons why your assumptions are wrong, but in the end, the code would prove it and I don't have the code. I have no idea if you can even read code, so you might not believe me even if I showed you the code.

Libratus uses about a half dozen strategies from Game Theory to develop the A.I. neural network "success" strategies, where each Game Theory strategy was selected because it works with poker. Two Game Theory strategies are also applied during the end-game portion. I assume you don't understand enough about Game Theory for me to even explain it. It may also use other strategies that aren't listed--the paper does not outline the entire program, only pieces.

In addition, Libratus has 5 internal algorithms developed by the 2 authors related to abstracting out "bucket management" for their end-game approach.

In regards to what is pre-computed, there is a large set of pre-defined table values and pre-defined strategies--these are only partially detailed in the paper. The pre-defined strategies appear to be the output of the A.I. neural network for before-end-game play--it is ambiguous as to whether some of these are used in end-game play. The pre-defined table values are specific to end-game play, as they are used in their "bucket management" end-game algorithms. A bucket is made up of a selection of table values (pre-computed, statistical winning strategies). The "bucket management" system then tests the selection of table values (based on exactly 1081 private hands, according to the text) to determine which one statistically edges out the others, given the opponent's history. In other words, this is where they pick a final winning strategy from a set of possible winning strategies for the final combination of cards (both in the bot's hand and the possible opponent hands). By doing this final step, they can counter a player's bluff.


Algorithm 2 - Algorithm for computing hand distributions
Inputs: ... number of possible private hands H, betting history of current hand h, array of index conflicts IC[][] ...

In the course of this loop, we also look up the probability that each player would play according to the observed betting history in the precomputed trunk strategies, which we then normalize in accordance with Bayes’ rule.


This end-game lookup is similar to a "dictionary attack". Bayes' rule just re-weights and normalizes a probability distribution.
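Here's a toy version of that Bayes' rule step in Python (the hand categories and numbers are made up by me for illustration -- the real system works over 1081 private hands, not three buckets): weight a prior belief over opponent hands by how likely each hand is to have produced the observed betting, then normalize.

```python
# Prior belief about the opponent's hidden hand (hypothetical numbers)
prior = {"strong": 0.2, "medium": 0.5, "weak": 0.3}

# P(observed betting history | hand), as looked up from precomputed strategies
likelihood = {"strong": 0.9, "medium": 0.4, "weak": 0.1}

# Bayes' rule: multiply prior by likelihood, then normalize to sum to 1
unnormalized = {h: prior[h] * likelihood[h] for h in prior}
total = sum(unnormalized.values())
posterior = {h: unnormalized[h] / total for h in unnormalized}

print({h: round(p, 3) for h, p in posterior.items()})
# {'strong': 0.439, 'medium': 0.488, 'weak': 0.073}
```

Notice how aggressive betting shifts belief toward "strong" even though the prior said "medium" was most common -- that's the mechanism for sniffing out (or falling for) a bluff.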

I assume you don't understand software architecture, so you can't visualize how these pieces interact. The authors' "bucket management" is called from the "poker specific code".

There are 2 areas where the author is stating that the code can be used for a variety of other games:
1. The Game Theory algorithms often work for other games (which games vary greatly per algorithm), such as this: papers.nips.cc...
2. The 5 algorithms for "bucket management" can be reformatted to take other "arrays" of data. The author(s) pointed out that it was specifically written for poker code, but could be easily tweaked to accept other inputs--which is true, the code is very generic.


The core algorithm is domain independent, although we present the signals as card-playing hands for concreteness.

Algorithm 1 - Algorithm for endgame solving
Inputs: number of information buckets per agent ki, clustering algorithms Ci, equilibrium-finding algorithm Q, number of private hands H, hand rankings R


You seem to have the impression that some black-box A.I. is running everything. I would prove to you that it's not, if I had the code to show you.

Ultimately, we'd go round and round about what is poker specific and what is generalized, but you don't seem to have a background in computer science, nor mathematics, to understand how things are coded, nor the various levels that are abstracted.


Deep Learning has NOTHING TO DO WITH BRUTE FORCE CALCULATIONS!


Correct. As I pointed out before, deep learning is the training of a neural network using specific mathematical algorithms to continually update the optimal path that a program uses to solve a particular problem, thereby reaching a "success" condition.


No, it's not poker specific and no they didn't teach it winning poker strategies...


You're wrong. Game theory also disagrees with you. Every strategy implemented was already proven to optimize poker hands. Just read the material they reference. Also, the entire bot was a poker bot. It was written to play poker. I don't understand how you don't get that. IT IS A POKER BOT! At no point do the authors deny this. They only point out that their end-game strategy may apply "more broadly".


We also showed that endgame solving guarantees a low exploitability in certain games, and presented a framework that can be used to evaluate its applicability more broadly.



The system LEARNS how to play the game. It's not taught how to play the game. It learns how to play the game. THIS HAS NOTHING TO DO WITH BRUTE FORCE CALCULATIONS. You just don't have a clue as to what you're talking about.


The author(s) state that they programmed in the Game Theory strategies, specifically ones for poker. DeepMind just updated its paddle variable until it reached the optimal outcome. It doesn't need the concept of a paddle or a ball. It needs the "field" and the "score" and a variable to update (in this case, the "paddle position" variable). That's just how A.I. programming works. It's still given the ENTIRE FIELD and the score. You act like A.I. generates magic from nothing.
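To make that concrete, here's a tiny reinforcement-learning sketch in Python (a made-up 5-square track, not DeepMind's Atari setup): the learner is given only states, actions, and a score, and it keeps updating one table of values until the best behavior emerges -- no built-in concept of "paddle" or "ball" required.

```python
import random

random.seed(0)
n_states, goal = 5, 4
# One table of values to update -- the learner's entire "knowledge"
Q = {(s, a): 0.0 for s in range(n_states) for a in (-1, +1)}

for _ in range(500):                           # play many episodes
    s = 0
    while s != goal:
        a = random.choice((-1, +1))            # explore at random
        s2 = min(max(s + a, 0), n_states - 1)  # move, clipped to the track
        r = 1.0 if s2 == goal else 0.0         # the "score" signal
        # Q-learning update: blend in reward plus best estimated future value
        Q[(s, a)] += 0.5 * (r + 0.9 * max(Q[(s2, b)] for b in (-1, +1)) - Q[(s, a)])
        s = s2

policy = [max((-1, +1), key=lambda a: Q[(s, a)]) for s in range(goal)]
print(policy)  # the table has learned to always move right: [1, 1, 1, 1]
```

The loop never "knows" it's a racing game; it just learns which table entries lead to score, the same way the paddle variable gets tuned in Breakout.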


I should take issue with the guy who created the system and has explained it but not you?? You don't know what you're talking about.


What, is he your great Lord God, whose words shalt not be questioned?


This is why there's no brute force and the system learns because there's INCOMPLETE INFORMATION.


I'm not going to explain this again. You just don't have the capacity to understand it.



posted on Mar, 3 2017 @ 09:04 PM
a reply to: neoholographic

Timelines aside. No doubt. "Supersmart Robots Will Outnumber Humans Within 30 Years."

But remember, they only need one.




Seriously though - there is so much that is quite wonderful. Must be a way to control it, protect ourselves, and maximize the benefits to the whole of humanity and our planet.



posted on Mar, 3 2017 @ 09:25 PM
a reply to: Protector

Just what I expected. Another long winded post that says nothing.

You failed to answer a simple question because you're flat out lying. You said:

IT IS A POKER BOT! At no point do the authors deny this. They only point out that their end-game strategy may apply "more broadly".

This is just a lie. It's not a poker bot, and Tuomas Sandholm goes out of his way to point out that this same system can be used in other areas.

“The algorithms we used are not poker specific,” Sandholm explains. “They take as input the rules of the game and output strategy.”

In other words, the Libratus algorithms can take the “rules” of any imperfect-information game or scenario and then come up with its own strategy. For example, the Carnegie Mellon team hopes its AI could design drugs to counter viruses that evolve resistance to certain treatments, or perform automated business negotiations. It could also power applications in cybersecurity, military robotic systems, or finance.


spectrum.ieee.org...

Tuomas Sandholm tells you in the video. This isn't poker specific or some poker bot. It's a set of algorithms created to learn in incomplete-information scenarios. It wasn't designed or programmed for poker.



At 5:45 in the video he tells you this and says WE HAVE NOT PROGRAMMED THE STRATEGY FOR POKER! You said:

Yes it was poker specific. I already thoroughly debunked this. Stop saying it. The only piece that is reusable is the basic A.I. programming that takes any dataset and optimizes it. That doesn't mean that they didn't teach it winning poker strategies.

THIS IS JUST FALSE!

Everything that you're saying is false and has nothing to do with anything that has been said.

You said it's poker specific - LIE

“The algorithms we used are not poker specific,” Sandholm explains. “They take as input the rules of the game and output strategy.”

You said it's a Poker Bot - LIE

Look at the video starting at 5:10 and he tells you this isn't some poker bot but an intelligent system designed for incomplete information scenarios.

You said the system was taught winning poker strategies - LIE

He explicitly says in the video WE HAVE NOT PROGRAMMED THE STRATEGY FOR POKER!

You can't accept that you're wrong and you're just making up your own facts. Again I ask:

SHOW ME WHERE Tuomas Sandholm SAID THEY TAUGHT IT WINNING POKER STRATEGIES AS YOU SAID!

Stop with the long diatribes that are meaningless because you're trying to obfuscate the fact that you don't know what you're talking about.




posted on Mar, 3 2017 @ 10:20 PM
a reply to: neoholographic

Read this if you can:
www.cs.cmu.edu...

But I know you can't.

It describes, in detail, how the statistical processing for end-game strategies was developed, and how it was specifically tailored for Poker. They use statistical modeling based on Nash equilibria and a set of strategies developed for overcoming the old end-game strategy of "action translation". They then ran multiple models against Small and Large Game No-Limit Texas Hold'em Poker to verify the validity of the algorithms. Then they chose the algorithm that best won AT POKER. IMAGINE THAT!

This answers most of my questions, although there is a small disconnect between the mathematical algorithms in this paper and the computer algorithms in the other (primarily related to bucket size and how the reduced-game pre-computed solutions were used as part of the end-game... but that's all far too complicated for you). However, that might not be that big of a deal.
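To unpack the jargon for other readers: "action translation" is the problem of mapping a real opponent bet size onto one of the few bet sizes modeled in an abstraction. Here is a toy sketch; the bet sizes and the log-distance rule below are made-up illustrations, not the actual mapping from the paper:

```python
import math

# Hypothetical abstraction: only these four bet sizes are modeled.
ABSTRACT_BETS = [100, 300, 900, 2700]

def translate(bet):
    # Map a real bet onto the abstract bet closest in log-space,
    # since ratios between bets matter more than absolute differences.
    return min(ABSTRACT_BETS, key=lambda b: abs(math.log(bet / b)))

print(translate(500))   # 300 (closer to 500 than 900 is, in log-space)
print(translate(2000))  # 2700
```

Real systems use more sophisticated mappings (and the paper's point is that naive translation like this can be exploited), but this is the basic shape of the problem.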


Stop with the long diatribes that are meaningless because you're trying to obfuscate the fact that you don't know what you're talking about.


Stop being a child. You were just schooled on everything related to this topic. Tuck in your tail and run away.



posted on Mar, 3 2017 @ 11:47 PM
a reply to: Protector

LOL, you actually don't know what you're talking about. It's funny and sad because you're just lying to yourself. Any time someone posts a PDF and says "go fish," they have no answers.

You said it's poker specific - LIE

“The algorithms we used are not poker specific,” Sandholm explains. “They take as input the rules of the game and output strategy.”

You said it's a Poker Bot - LIE

Look at the video starting at 5:10 and he tells you this isn't some poker bot but an intelligent system designed for incomplete information scenarios.

You said the system was taught winning poker strategies - LIE

He explicitly says in the video WE HAVE NOT PROGRAMMED THE STRATEGY FOR POKER!

You can't accept that you're wrong and you're just making up your own facts. Again I ask:

SHOW ME WHERE Tuomas Sandholm SAID THEY TAUGHT IT WINNING POKER STRATEGIES AS YOU SAID!


Did you even read the PDF? You obviously didn't and that's why you didn't quote from it. The PDF says the exact opposite of what you're saying.

Thus imperfect-information games cannot be solved via decomposition as perfect-information games can. Instead, the entire game is typically solved as a whole. This is a problem for large games, such as No-Limit Texas Hold’em—a common benchmark problem in imperfect-information game solving—which has 10^165 nodes (Johanson 2013). The standard approach to computing strategies in such large games is to first generate an abstraction of the game, which is a smaller version of the game that retains as much as possible the strategic characteristics of the original game (Sandholm 2010). This abstract game is solved (exactly or approximately) and its solution is mapped back to the original game. In extremely large games, a small abstraction typically cannot capture all the strategic complexity of the game, and therefore results in a solution that is not a Nash equilibrium when mapped back to the original game. For this reason, it seems natural to attempt to improve the strategy when a sequence farther down the game tree is reached and the remaining subtree of reachable states is small enough to be represented without any abstraction (or in a finer abstraction), even though—as explained previously—this may not lead to a Nash equilibrium. While it may not be possible to arrive at an equilibrium by analyzing subtrees independently, it may be possible to improve the strategies in those subtrees when the original (base) strategy is suboptimal, as is typically the case when abstraction is applied.

We first review prior forms of endgame solving for imperfect-information games. Then we propose a new form of endgame solving that retains the theoretical guarantees of the best prior methods while performing better in practice. Finally, we introduce a method for endgame solving to be nested as players descend the game tree, leading to substantially better performance.
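For anyone trying to follow the abstract: the core idea of computing a strategy from the rules alone can be seen in miniature with regret matching on Rock-Paper-Scissors. This is NOT Libratus' actual algorithm (Libratus uses far more sophisticated CFR-style methods plus abstraction and endgame solving); it's a toy sketch showing self-play converging toward a Nash equilibrium with no strategy programmed in:

```python
# Regret matching in self-play on Rock-Paper-Scissors. The only game-specific
# input is the payoff table (the "rules"); the near-uniform Nash equilibrium
# emerges from play, it is never programmed in.

ACTIONS = 3  # rock, paper, scissors
# PAYOFF[a][b]: payoff to the player choosing a when the opponent chooses b
PAYOFF = [[0, -1, 1],
          [1, 0, -1],
          [-1, 1, 0]]

def strategy_from_regrets(regrets):
    """Regret matching: play each action in proportion to its positive regret."""
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    if total > 0:
        return [p / total for p in positive]
    return [1.0 / ACTIONS] * ACTIONS

def train(iterations):
    # Start with a deliberate bias toward rock so the dynamics are visible.
    regrets = [1.0, 0.0, 0.0]
    strategy_sum = [0.0] * ACTIONS
    for _ in range(iterations):
        strat = strategy_from_regrets(regrets)
        for a in range(ACTIONS):
            strategy_sum[a] += strat[a]
        # Expected payoff of each action against the current (self-play) strategy.
        ev = [sum(strat[b] * PAYOFF[a][b] for b in range(ACTIONS))
              for a in range(ACTIONS)]
        realized = sum(strat[a] * ev[a] for a in range(ACTIONS))
        # Accumulate regret for not having played each alternative action.
        for a in range(ACTIONS):
            regrets[a] += ev[a] - realized
    total = sum(strategy_sum)
    return [s / total for s in strategy_sum]

average_strategy = train(100_000)
# average_strategy approaches the uniform Nash equilibrium (1/3, 1/3, 1/3)
```

The current strategy cycles rock-to-paper-to-scissors forever, but the time-averaged strategy converges toward the equilibrium; that averaging trick is the same basic principle behind the CFR family of algorithms the paper builds on.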


There's not one word about Libratus getting strategies for Poker. It's really nutty because Tuomas Sandholm explicitly says WE HAVE NOT PROGRAMMED THE STRATEGY FOR POKER!

Here's some key points from Wiki:


Libratus is an artificial intelligence computer program designed to play Poker, specifically no-limit Texas hold 'em. Libratus isn't Poker-specific; the algorithms and ideas employed by Libratus are of a very general nature and could be applied to a wide range of real-world problems.


LIBRATUS ISN'T POKER SPECIFIC!

The same thing Tuomas Sandholm said.

While Libratus was written from scratch, it is the nominal successor of Claudico. Like its predecessor, its name is a Latin expression and means 'balanced'.


Libratus was built with more than 15 million core hours of computation as compared to 2-3 million for Claudico. The computations were carried out on the new 'Bridges' supercomputer at the Pittsburgh Supercomputing Center. According to one of Libratus' creators, Professor Tuomas Sandholm, Libratus does not have a fixed built-in strategy, but an algorithm that computes the strategy.


The same thing Sandholm says in the video. He said:

WE HAVE NOT PROGRAMMED THE STRATEGY FOR POKER!


During the tournament, Libratus was competing against the players during the days. Overnight it was perfecting its strategy on its own by analysing the prior gameplay and results of the day, particularly its losses. Therefore, it was able to continuously straighten out the imperfections that the human team had discovered in their extensive analysis, resulting in a permanent arms race between the humans and Libratus.


en.wikipedia.org...

The system came up with its strategy ON ITS OWN!

You said:

Yes it was poker specific. I already thoroughly debunked this. Stop saying it. The only piece that is reusable is the basic A.I. programming that takes any dataset and optimizes it. That doesn't mean that they didn't teach it winning poker strategies.

I ask again:

SHOW ME WHERE Tuomas Sandholm SAID THEY TAUGHT IT WINNING POKER STRATEGIES AS YOU SAID!



posted on Mar, 4 2017 @ 12:23 AM
a reply to: neoholographic

You're saying the same thing yet again. You have no argument. You ignore all of the math and all of the programming because you don't understand it.

You quote pop articles, but you can't follow the actual research. And stop quoting the intro/abstract/conclusion of articles, too. That gets lame. The big part you quoted above is explained in great detail, where everything is given proper context, comparing old techniques to new ones. Why they chose particular strategies and then how and why they work in the context of Poker were all explained. I'm not going to spoon feed the research to you anymore. You don't get it. And you don't really seem to care.

At this point, I can tell that you don't even understand what you are quoting. Nearly all of those quotes have been explained by the Poker bot creators and you take everything out of context and give it your own "pop culture" spin.


SHOW ME WHERE Tuomas Sandholm SAID THEY TAUGHT IT WINNING POKER STRATEGIES AS YOU SAID!


I already did. You just can't understand it. You're in over your head. There's nothing left for me to argue. You can't keep up with the research and you don't have the domain knowledge to understand the nuance of how everything fits together. I can't transfer several decades of mathematics and programming knowledge into your brain. If you want to learn it, you have to go learn it. There is no easy path to get there.

And part of how I know you don't understand it is that it would have taken you several hours to get through that paper. You didn't really read it. So you can't understand it. It's too complicated for you. It's complicated for me, and I have a background in this stuff. But that's what new research does... it pushes the boundaries of knowledge and introduces new approaches.

So I'm done with this topic. I wish you luck in your endeavors to understand A.I. I'm glad you're interested in it. Keep with it and you'll continue to improve. I'd be happy to point you toward some resources to help you, but you probably wouldn't take my advice because of your pride. Oh well.



posted on Mar, 4 2017 @ 08:18 AM
a reply to: Protector

You haven't supported anything you have said. I asked you a simple question:

SHOW ME WHERE Tuomas Sandholm SAID THEY TAUGHT IT WINNING POKER STRATEGIES AS YOU SAID!

You tried to say you know more than the researcher who created the system, and for some reason we should listen to your nonsense.

You said it's poker specific - LIE

“The algorithms we used are not poker specific,” Sandholm explains. “They take as input the rules of the game and output strategy.”

You said it's a Poker Bot - LIE

Look at the video starting at 5:10 and he tells you this isn't some poker bot but an intelligent system designed for incomplete information scenarios.

You said the system was taught winning poker strategies - LIE

He explicitly says in the video WE HAVE NOT PROGRAMMED THE STRATEGY FOR POKER!

You can't accept that you're wrong and you're just making up your own facts. Again I ask:

SHOW ME WHERE Tuomas Sandholm SAID THEY TAUGHT IT WINNING POKER STRATEGIES AS YOU SAID!


You're under the DELUSION that the system had to be fed strategy. This is a lie:

That doesn't mean that they didn't teach it winning poker strategies.

Sandholm said WE HAVE NOT PROGRAMMED THE STRATEGY FOR POKER!

You think they taught the system how to play poker and that's just ASININE! This is why the system played billions of poker games against itself IN ORDER TO LEARN STRATEGY! This is what he said:

“The algorithms we used are not poker specific,” Sandholm explains. “They take as input the rules of the game and output strategy.”

IT TELLS YOU THIS IN THE ARTICLE:

First, the AI’s algorithms computed a strategy before the tournament by running for 15 million processor-core hours on a new supercomputer called Bridges.

spectrum.ieee.org...

THE ALGORITHM COMPUTED A STRATEGY! It wasn't given a strategy and this is why it played billions of games against itself.

Here's another article about Libratus:

“We didn’t tell Libratus how to play poker. We gave it the rules of poker and said ‘learn on your own’,” said Brown. The bot started playing randomly but over the course of playing trillions of hands was able to refine its approach and arrive at a winning strategy.


The algorithms that power Libratus aren’t specific to poker, which means the system could have a variety of applications outside of recreational games, from negotiating business deals to setting military or cybersecurity strategy and planning medical treatment – anywhere where humans are required to do strategic reasoning with imperfect information.


www.theguardian.com...
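A concrete way to see what "arrive at a winning strategy" means here: measure exploitability, i.e. how much a best-responding opponent could gain. This toy Rock-Paper-Scissors check (an illustration of the concept, not anything from Libratus) shows the uniform Nash strategy cannot be exploited while a biased one can:

```python
# PAYOFF[a][b]: payoff to the best-responder choosing a when the
# strategy being attacked plays b.
PAYOFF = [[0, -1, 1],
          [1, 0, -1],
          [-1, 1, 0]]

def exploitability(strategy):
    """Best-response payoff against a mixed strategy; 0 means it's a Nash strategy."""
    return max(sum(prob * PAYOFF[a][b] for b, prob in enumerate(strategy))
               for a in range(3))

print(exploitability([1/3, 1/3, 1/3]))    # 0.0  -> cannot be beaten on average
print(exploitability([0.5, 0.25, 0.25]))  # 0.25 -> an opponent who always plays
                                          #         paper profits 0.25 per hand
```

Self-play training drives this number toward zero without anyone feeding the system a strategy, which is exactly the point Sandholm and Brown are making.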

Why do you keep lying? Just say you're wrong after reading more information.



