originally posted by: neoholographic
...with an unlimited supply of white and black game pieces...
originally posted by: PhoenixOD
I'm not sure I'd count that as intelligence. The computer knows all the moves to all the games it's been programmed with, but the guy it's playing against doesn't. He has to use intelligence to make a move, while the computer is simply copying other human strategies.
Take out the pre-programmed moves and I'll agree it's intelligence.
originally posted by: neoholographic
originally posted by: PhoenixOD
I'm not sure I'd count that as intelligence. The computer knows all the moves to all the games it's been programmed with, but the guy it's playing against doesn't. He has to use intelligence to make a move, while the computer is simply copying other human strategies.
Take out the pre-programmed moves and I'll agree it's intelligence.
Again, this makes no sense. The computer is learning in the exact same way that people learn.
originally posted by: GetHyped
originally posted by: neoholographic
originally posted by: PhoenixOD
I'm not sure I'd count that as intelligence. The computer knows all the moves to all the games it's been programmed with, but the guy it's playing against doesn't. He has to use intelligence to make a move, while the computer is simply copying other human strategies.
Take out the pre-programmed moves and I'll agree it's intelligence.
Again, this makes no sense. The computer is learning in the exact same way that people learn.
Only if you simplify it down to the point of being highly misleading.
originally posted by: SlapMonkey
a reply to: neoholographic
Maybe the dude made a mistake in playing the game that made it easier for the computer to win?
I don't know enough about the game (or anything about the game) to determine if this was human error or computer genius.
originally posted by: SlapMonkey
a reply to: neoholographic
That's all well and good, but professionals make mistakes all of the time, and some even throw the game because of different motivations. I'm not claiming he wasn't playing at his best, nor that he threw the game in order to make the AI seem more advanced than it actually is, but I'm just saying that it's a possibility worth considering until this experiment can be repeated multiple times by multiple professional players and the AI wins a vast majority of those times.
We should probably keep an eye on the stock of the company...
originally posted by: neoholographic
That makes no sense. Where is there a shred of evidence that this guy has ever thrown a game? The computer is playing him five times and is up 2-0. Here's more:
Google's DeepMind beats Lee Se-dol again to go 2-0 up in historic Go series
Human ingenuity beats human intuition again
Google stunned the world by defeating Go legend Lee Se-dol yesterday, and it wasn't a fluke — AlphaGo, the AI program developed by Google's DeepMind unit, has just won the second game of a five-game Go match being held in Seoul, South Korea. AlphaGo prevailed in a gripping battle that saw Lee resign after hanging on in the final period of byo-yomi ("second-reading" in Japanese) overtime, which gave him fewer than 60 seconds to carry out each move.
"Yesterday I was surprised but today it's more than that — I am speechless," said Lee in the post-game press conference. "I admit that it was a very clear loss on my part. From the very beginning of the game I did not feel like there was a point that I was leading." DeepMind founder Demis Hassabis was "speechless" too. "I think it's testament to Lee Se-dol's incredible skills," he said. "We're very pleased that AlphaGo played some quite surprising and beautiful moves, according to the commentators, which was amazing to see."
www.theverge.com...
What you're saying doesn't make any sense. The game had to go into overtime, and other people who play the game were watching as the champion made great moves but was still outmaneuvered by the computer.
You're acting like the guys at DeepMind are idiots. AI beating games has been talked about for years because games involve learning and strategy. This is why the computer is playing him five times.
I'm not claiming he wasn't playing at his best, nor that he threw the game in order to make the AI seem more advanced than it actually is, but I'm just saying that it's a possibility worth considering until this experiment can be repeated multiple times by multiple professional players and the AI wins a vast majority of those times.
The theory of reinforcement learning provides a normative account [1], deeply rooted in psychological [2] and neuroscientific [3] perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems [4, 5], the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms [3]. While reinforcement learning agents have achieved some successes in a variety of domains [6-8], their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks [9-11] to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games [12]. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters.
This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
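For readers unfamiliar with the abstract's jargon: the core idea behind a Q-network is plain temporal-difference Q-learning. Below is a minimal tabular sketch on a made-up toy environment (a 5-state chain where the agent must walk right to a goal). The deep Q-network in the paper replaces this lookup table with a neural network over raw pixels and adds tricks like experience replay; every name and number here is illustrative, not from the paper.

```python
import random

# Toy "chain" environment: states 0..4; reaching state 4 yields reward 1.
N_STATES = 5
ACTIONS = (-1, +1)  # step left or step right

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action))
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy: explore sometimes, otherwise act on current estimates
            if rng.random() < epsilon:
                a = rng.choice(ACTIONS)
            else:
                best = max(q[(s, x)] for x in ACTIONS)
                a = rng.choice([x for x in ACTIONS if q[(s, x)] == best])
            nxt, r, done = step(s, a)
            # temporal-difference (Q-learning) update toward reward + discounted future value
            target = r if done else r + gamma * max(q[(nxt, x)] for x in ACTIONS)
            q[(s, a)] += alpha * (target - q[(s, a)])
            s = nxt
    return q

q = train()
# Greedy policy recovered from the learned values: step right toward the goal.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)]
```

The agent is never told the rules, only rewarded on success; the "learning" the thread is arguing about is exactly this trial-and-error value estimation, scaled up.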
The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves. Here we introduce a new approach to computer Go that uses ‘value networks’ to evaluate board positions and ‘policy networks’ to select moves. These deep neural networks are trained by a novel combination of supervised learning from human expert games, and reinforcement learning from games of self-play. Without any lookahead search, the neural networks play Go at the level of state-of-the-art Monte Carlo tree search programs that simulate thousands of random games of self-play. We also introduce a new search algorithm that combines Monte Carlo simulation with value and policy networks. Using this search algorithm, our program AlphaGo achieved a 99.8% winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0. This is the first time that a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away.
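To make the abstract's "policy network guiding search" idea concrete: during tree search, AlphaGo-style programs score each candidate move by its current value estimate plus an exploration bonus weighted by the policy network's prior. The exact formula and constants vary between papers; this is an illustrative sketch of that PUCT-style selection rule with invented numbers.

```python
import math

def puct_score(q_value, prior, parent_visits, child_visits, c_puct=1.5):
    # Exploration bonus is large for moves the policy network rates highly
    # (high prior) and shrinks as a move accumulates visits.
    u = c_puct * prior * math.sqrt(parent_visits) / (1 + child_visits)
    return q_value + u

# Three hypothetical candidate moves at one search node:
# move -> (value estimate, policy-network prior, visit count)
children = {
    "a": (0.55, 0.40, 120),  # strong value, already heavily explored
    "b": (0.50, 0.50, 10),   # policy-network favourite, barely explored
    "c": (0.20, 0.10, 5),
}
parent_visits = sum(n for _, _, n in children.values())

def score(move):
    q, prior, visits = children[move]
    return puct_score(q, prior, parent_visits, visits)

best_move = max(children, key=score)
```

Here the search picks "b": even though "a" currently looks slightly better, the prior pushes the search to spend simulations on the under-explored move the policy network likes. That interplay is what lets AlphaGo search far fewer positions than brute force would require.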
originally posted by: neoholographic
Deepmind's algorithm has learned to play 49 ATARI GAMES!
This has nothing to do with Magicians and everything to do with Science.
...
This isn't Magic or Penn and Teller this is Scientific Research.
What you're saying does an actual disservice to Scientific Research. You're making these statements in a vacuum that have no evidence to support anything you're saying.
...
It's not a possibility worth considering because there's no evidence that this player throws the games he plays, and people watching the match that went into overtime talked about the brilliant moves he was making. In order for this to be a possibility, you need to present some actual evidence to support your assertion.
originally posted by: andy06shake
The game of Go has rules that an AI can be instructed to follow. Granted, they are rather complex and this is a significant achievement, but until we can teach our AI to think outside the box, they are still somewhat distant from achieving sentience in the same manner as us biological meat sacks.
I think what's far more significant about this than the AI itself is the fact that we've reached the cap of human ingenuity. It took a "learning computer" (I use that term loosely) to figure out how to play Go at this level, rather than a programmer smart enough to program it directly. I find that far more fascinating.