
Artificial intelligence just reached a milestone that was supposed to be 10 years away

page: 2
posted on Mar, 11 2016 @ 06:12 AM
a reply to: neoholographic

The game of Go has rules that an AI can be instructed to follow. Granted, they are rather complex and this is a significant achievement, but until we can teach our AI to think outside the box, it is still somewhat distant from achieving sentience in the same manner as us biological meat sacks. Can it pass the Turing test?



posted on Mar, 11 2016 @ 06:17 AM
Now this is a breakthrough :

First silicon-based, long-lasting nuclear spin qubit created by quantum researchers

Quantum Computing, anyone? Computing at speeds that can no longer be stated in GHz, THz, etc.
edit on 11-3-2016 by Gothmog because: (no reason given)



posted on Mar, 11 2016 @ 09:10 AM
This whole Go-playing computer thing happened weeks ago.
I'm sure there are multiple threads on it by now.
It was big news then.



posted on Mar, 11 2016 @ 09:16 AM

originally posted by: neoholographic
...with an unlimited supply of white and black game pieces...


Wut?


No wonder the boxes in the store were so heavy.



posted on Mar, 11 2016 @ 10:13 AM

originally posted by: PhoenixOD
I'm not sure I'd count that as intelligence. The computer knows all the moves to all the games it's been programmed with, but the guy it is playing against doesn't. He has to use intelligence to make a move while the computer is simply copying other human strategies.

Take out the pre-programmed moves and I'll agree it's intelligence.


Again, this makes no sense. The computer is learning in the exact same way that people learn. You said the computer knows all the moves beforehand. First, humans have to learn the moves beforehand in order to play the game, and secondly, the computer is learning to play the game.

Did you read what the Researchers said or watch the actual video of them explaining this?

The Computer learns how to play by playing the game over and over again. There are no moves programmed into the computer. The Researchers explained this when they talked about the computer learning to play Atari games.

"With Deep Blue there were chess grandmasters on the development team distilling their chess knowledge into the programme and it executed it without learning anything," said Hassabis. "Ours learns from the ground up. We give it a perceptual experience and it learns from that directly. It learns and adapts from unexpected things, and programme designers don't have to know the solution themselves."

Again, these people are not idiots and this is why Google bought the company for 400 million. They know the difference between pre-programming a computer and DEEP LEARNING, where the computer learns how to do these things.

If you watch the video, the Researchers tell you, the way the computer learns how to play the game is by playing the game over and over again. This will be a huge advantage for AI.

A human learns by playing the game over and over again, but a human can only play so many games. The computer can play the game like a million times a day, as the Researcher said, which gives it a huge advantage over its human counterpart.

So there's no need to pre-program anything. This is why it's called deep learning. When I was learning how to play chess, I had to be "programmed" on the chess pieces and the moves they can make on the board. I then learned how to play the game by playing it over and over again. The computer does the same thing, but it can play the game a million times in a day while I can only play the game maybe a few times a day.
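To make the "learn by playing over and over" idea concrete, here is a minimal Python sketch. Everything in it is made up for illustration: the three "moves" and their win probabilities are arbitrary, and real systems like AlphaGo use neural networks rather than a table of counts. It only shows an agent that knows the legal moves, but no strategy, getting better purely through repeated play:

```python
import random

# Hypothetical toy: three legal "moves" with made-up win probabilities.
# The agent is told only the rules (which moves exist), never a strategy.
WIN_PROB = {0: 0.2, 1: 0.5, 2: 0.8}

def play(move):
    """Simulate one game: returns 1 for a win, 0 for a loss."""
    return 1 if random.random() < WIN_PROB[move] else 0

def win_rate(wins, plays, m):
    return wins[m] / plays[m] if plays[m] else 0.0

def learn(n_games=10_000, epsilon=0.1):
    """Learn the best move purely by playing over and over."""
    wins = [0, 0, 0]   # wins per move
    plays = [0, 0, 0]  # times each move was tried
    for _ in range(n_games):
        if random.random() < epsilon:    # occasionally try something new
            move = random.randrange(3)
        else:                            # otherwise play the best-looking move
            move = max(range(3), key=lambda m: win_rate(wins, plays, m))
        plays[move] += 1
        wins[move] += play(move)
    return max(range(3), key=lambda m: win_rate(wins, plays, m))

random.seed(0)
print(learn())  # settles on move 2, the one with the highest win probability
```

After enough simulated games the agent's win-rate estimates single out the strongest move, even though no strategy was ever programmed in; that is the core point being argued above.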
edit on 11-3-2016 by neoholographic because: (no reason given)



posted on Mar, 11 2016 @ 10:19 AM

originally posted by: neoholographic

originally posted by: PhoenixOD
I'm not sure I'd count that as intelligence. The computer knows all the moves to all the games it's been programmed with, but the guy it is playing against doesn't. He has to use intelligence to make a move while the computer is simply copying other human strategies.

Take out the pre-programmed moves and I'll agree it's intelligence.


Again, this makes no sense. The computer is learning in the exact same way that people learn.


Only if you simplify it down to the point of being highly misleading.



posted on Mar, 11 2016 @ 10:40 AM

originally posted by: GetHyped

originally posted by: neoholographic

originally posted by: PhoenixOD
I'm not sure I'd count that as intelligence. The computer knows all the moves to all the games it's been programmed with, but the guy it is playing against doesn't. He has to use intelligence to make a move while the computer is simply copying other human strategies.

Take out the pre-programmed moves and I'll agree it's intelligence.


Again, this makes no sense. The computer is learning in the exact same way that people learn.


Only if you simplify it down to the point of being highly misleading.


What?

This makes no sense based on the papers published by the Researchers in the journal Nature. They have a ton of research papers and videos; all you have to do is look them over. They base these things on reinforcement learning, which mirrors how we learn things.



posted on Mar, 11 2016 @ 10:43 AM
a reply to: neoholographic

Maybe the dude made a mistake in playing the game that made it easier for the computer to win?

I don't know enough about the game (or anything about the game) to determine if this was human error or computer genius.



posted on Mar, 11 2016 @ 11:30 AM

originally posted by: SlapMonkey
a reply to: neoholographic

Maybe the dude made a mistake in playing the game that made it easier for the computer to win?

I don't know enough about the game (or anything about the game) to determine if this was human error or computer genius.



The dude is a world master of the game. Here's more:

Lee Sedol (이세돌; born 2 March 1983) is a South Korean professional Go player of 9-dan rank.[1] As of February 2016, he ranks second in international titles (18), behind only Lee Chang-ho (21).

The Computer is playing a guy who ranks second in international titles in this game.

The Researchers explain DEEP LEARNING and exactly how this is being achieved. AI is advancing rapidly and this is why these Companies have been buying Deep Learning companies for millions of dollars.



posted on Mar, 11 2016 @ 11:40 AM
a reply to: neoholographic

That's all well and good, but professionals make mistakes all of the time, and some even throw the game because of different motivations. I'm not claiming he wasn't playing at his best, nor that he threw the game in order to make the AI seem more advanced than it actually is, but I'm just saying that it's a possibility worth considering until this experiment can be repeated multiple times by multiple professional players and the AI wins a vast majority of those times.

We should probably keep an eye on the stock of the company...


edit on 11-3-2016 by SlapMonkey because: (no reason given)



posted on Mar, 11 2016 @ 11:48 AM

originally posted by: SlapMonkey
a reply to: neoholographic

That's all well and good, but professionals make mistakes all of the time, and some even throw the game because of different motivations. I'm not claiming he wasn't playing at his best, nor that he threw the game in order to make the AI seem more advanced than it actually is, but I'm just saying that it's a possibility worth considering until this experiment can be repeated multiple times by multiple professional players and the AI wins a vast majority of those times.

We should probably keep an eye on the stock of the company...



That makes no sense. Where is there a shred of evidence that this guy has ever thrown a game? The Computer is playing him 5 times and is up 2-0. Here's more:


Google's DeepMind beats Lee Se-dol again to go 2-0 up in historic Go series

Human ingenuity beats human intuition again

Google stunned the world by defeating Go legend Lee Se-dol yesterday, and it wasn't a fluke — AlphaGo, the AI program developed by Google's DeepMind unit, has just won the second game of a five-game Go match being held in Seoul, South Korea. AlphaGo prevailed in a gripping battle that saw Lee resign after hanging on in the final period of byo-yomi ("second-reading" in Japanese) overtime, which gave him fewer than 60 seconds to carry out each move.

"Yesterday I was surprised but today it's more than that — I am speechless," said Lee in the post-game press conference. "I admit that it was a very clear loss on my part. From the very beginning of the game I did not feel like there was a point that I was leading." DeepMind founder Demis Hassabis was "speechless" too. "I think it's testament to Lee Se-dol's incredible skills," he said. "We're very pleased that AlphaGo played some quite surprising and beautiful moves, according to the commentators, which was amazing to see."


www.theverge.com...

What you're saying doesn't make any sense. The game had to go into overtime and other people who play the game were watching as the Champion made great moves but was outmaneuvered by the Computer.

You're acting like the guys at DeepMind are idiots. AI beating games has been talked about for years because games involve learning and strategy. This is why the computer is playing him 5 times.



posted on Mar, 11 2016 @ 12:21 PM

originally posted by: neoholographic
That makes no sense. Where is there a shred of evidence that this guy has ever thrown a game? The Computer is playing him 5 times and is up 2-0. Here's more:


Google's DeepMind beats Lee Se-dol again to go 2-0 up in historic Go series

Human ingenuity beats human intuition again

Google stunned the world by defeating Go legend Lee Se-dol yesterday, and it wasn't a fluke — AlphaGo, the AI program developed by Google's DeepMind unit, has just won the second game of a five-game Go match being held in Seoul, South Korea. AlphaGo prevailed in a gripping battle that saw Lee resign after hanging on in the final period of byo-yomi ("second-reading" in Japanese) overtime, which gave him fewer than 60 seconds to carry out each move.

"Yesterday I was surprised but today it's more than that — I am speechless," said Lee in the post-game press conference. "I admit that it was a very clear loss on my part. From the very beginning of the game I did not feel like there was a point that I was leading." DeepMind founder Demis Hassabis was "speechless" too. "I think it's testament to Lee Se-dol's incredible skills," he said. "We're very pleased that AlphaGo played some quite surprising and beautiful moves, according to the commentators, which was amazing to see."


www.theverge.com...

What you're saying doesn't make any sense. The game had to go into overtime and other people who play the game were watching as the Champion made great moves but was outmaneuvered by the Computer.

You're acting like the guys at DeepMind are idiots. AI beating games has been talked about for years because games involve learning and strategy. This is why the computer is playing him 5 times.


No, I'm acting like there might be another possibility than just a decade-advanced AI beating a human. I'm glad to see the computer is playing him five times; what I'd like to see is the computer play 10 players five times each.

I'm coming at this from a scientific point of view--one or two similar results do not a truth make, but 50 similar results using different controlled scenarios do point to a truth.

I don't know why that doesn't make sense to you. I was just throwing out off-the-cuff variables that CAN BE a possibility when dealing with human subjects of an experiment. It wouldn't be the first time a champ of some sort sold themselves out--but again, and to quote myself:


I'm not claiming he wasn't playing at his best, nor that he threw the game in order to make the AI seem more advanced than it actually is, but I'm just saying that it's a possibility worth considering until this experiment can be repeated multiple times by multiple professional players and the AI wins a vast majority of those times.

That approach makes perfect sense.

ETA: Keep in mind that there are some magicians who can even fool Penn and Teller--hell, there's a TV show about it. So let's not pretend that just because other professionals are watching, there can't be something less-than-honest going on.
edit on 11-3-2016 by SlapMonkey because: (no reason given)



posted on Mar, 11 2016 @ 12:53 PM
a reply to: SlapMonkey

Are you serious?

You do know that research in these areas has been going on for years. What you're saying isn't Scientific at all. Like I said, you need to read the research. You're not providing any scientific evidence to support anything you're saying. You're just making accusations in a vacuum that make no sense.

These things have been studied and researched for years and published in Scientific Journals like Nature. You said:

I'm coming at this from a scientific point of view--one or two similar results does not a truth make, but 50 similar results using different controlled scenarios does point to a truth.

No you're not, especially when you talk about Magicians and Penn and Teller. That has nothing to do with the Research or Science in any way, shape or form.

Here's a paper from 2014:

Human-level control through deep reinforcement learning


The theory of reinforcement learning provides a normative account [1], deeply rooted in psychological [2] and neuroscientific [3] perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems [4,5], the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms [3]. While reinforcement learning agents have achieved some successes in a variety of domains [6-8], their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks [9-11] to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games [12]. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.


www.nature.com...
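The reinforcement-learning loop that abstract describes can be sketched at toy scale. To be clear, this is not the paper's DQN: the DQN replaces the lookup table below with a deep neural network fed raw pixels, and the five-state "corridor" environment here is invented for illustration. But tabular Q-learning shows the same learn-from-reward mechanics:

```python
import random

# Tabular Q-learning on a made-up 5-state corridor: start at state 0,
# step left or right, reward 1.0 for reaching state 4. The paper's DQN
# swaps this table for a deep network over pixels; the update rule
# below is the same temporal-difference idea.
N_STATES = 5
ACTIONS = (-1, +1)      # step left / step right
GOAL = N_STATES - 1

def step(state, action):
    """One environment transition: (next_state, reward, done)."""
    nxt = min(max(state + action, 0), GOAL)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1):
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if random.random() < epsilon:                  # explore
                a = random.choice(ACTIONS)
            else:                                          # exploit
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            nxt, r, done = step(s, a)
            best_next = max(q[(nxt, act)] for act in ACTIONS)
            # Nudge the estimate toward reward + discounted future value
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = nxt
    return q

random.seed(0)
q = train()
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(GOAL)}
print(policy)  # the learned policy steps right (+1) from every state
```

Nobody tells the agent "go right"; it discovers that policy because rightward moves eventually lead to reward, which is exactly the learning-from-experience point the abstract makes.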

Deepmind's algorithm has learned to play 49 ATARI GAMES!

This has nothing to do with Magicians and everything to do with Science. This is why these Researchers publish these papers. Here's more:

Mastering the game of Go with deep neural networks and tree search


The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves. Here we introduce a new approach to computer Go that uses ‘value networks’ to evaluate board positions and ‘policy networks’ to select moves. These deep neural networks are trained by a novel combination of supervised learning from human expert games, and reinforcement learning from games of self-play. Without any lookahead search, the neural networks play Go at the level of state-of-the-art Monte Carlo tree search programs that simulate thousands of random games of self-play. We also introduce a new search algorithm that combines Monte Carlo simulation with value and policy networks. Using this search algorithm, our program AlphaGo achieved a 99.8% winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0. This is the first time that a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away.


www.nature.com...

This paper is from January 2016, describing work done through late 2015.
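The "simulate thousands of random games" idea this abstract mentions can also be shown in miniature. The game below is a made-up Nim variant (take 1 to 3 sticks; whoever takes the last stick wins), not Go, and AlphaGo augments this kind of search with learned policy and value networks. But the core Monte Carlo trick is the same: score each legal move by the average outcome of many random playouts.

```python
import random

# Made-up Nim variant: players alternate taking 1-3 sticks; whoever takes
# the last stick wins. Each candidate move is scored by the average result
# of many random playouts, which is the Monte Carlo part of Monte Carlo
# tree search.

def random_playout(sticks, my_turn):
    """Finish the game with uniformly random moves.
    my_turn is True when it is 'our' side to move; returns True if
    'our' side ends up taking the last stick."""
    while True:
        sticks -= random.randint(1, min(3, sticks))
        if sticks == 0:
            return my_turn        # whoever just moved took the last stick
        my_turn = not my_turn

def best_move(sticks, n_playouts=2000):
    """Pick the take (1-3) whose random playouts win most often."""
    scores = {}
    for take in range(1, min(3, sticks) + 1):
        remaining = sticks - take
        if remaining == 0:
            return take           # taking the last sticks wins outright
        wins = sum(random_playout(remaining, my_turn=False)  # opponent moves next
                   for _ in range(n_playouts))
        scores[take] = wins / n_playouts
    return max(scores, key=scores.get)

random.seed(0)
print(best_move(5))  # taking 1 (leaving 4 sticks) is the winning move
```

From 5 sticks the theoretically correct move is to take 1, leaving the opponent a losing multiple of 4, and the playout statistics recover that without any hand-coded strategy. Go's enormous search space is why AlphaGo needs neural networks to guide this sampling instead of playing out moves uniformly at random.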

The Computer beat the European Go Champion 5 games to 0 and is now playing another Champion and is beating him 2-0 and it learned how to play 49 Atari games without any instruction.

This isn't Magic or Penn and Teller, this is Scientific Research. The Researchers show you and explain what's happening. Here are some videos:





What you're saying does an actual disservice to Scientific Research. You're making these statements in a vacuum that have no evidence to support anything you're saying. You said:

I'm not claiming he wasn't playing at his best, nor that he threw the game in order to make the AI seem more advanced than it actually is, but I'm just saying that it's a possibility worth considering

It's not a possibility to consider because there's no evidence that this Player throws the games he plays and people watching the match that went into overtime talked about the brilliant moves he was making. In order for this to be a possibility, you need to present some actual evidence to support your assertion.
edit on 11-3-2016 by neoholographic because: (no reason given)



posted on Mar, 11 2016 @ 01:17 PM

originally posted by: neoholographic
Deepmind's algorithm has learned to play 49 ATARI GAMES!


So, what I was saying is that I'd like to see the machine achieve the same results 50 times (just a round number that I came up with) in total against 5 different players (again, just a number I came up with). I said that because I felt it would go a long way toward scientifically validating the results of just one or two wins.

What you've done is called me a non-scientific imbecile, for the most part, for proposing this, and then gone on to prove that Deepmind did basically the exact same thing (but without the human-element variable) to prove that their algorithm has merit.

Again, I'm not making sense for wanting such proof now that there is the variable of a human constant (which should vary between humans if we're looking for true results), yet you provide similar proof (that doesn't have a human element to it) to make the claim that their algorithm works.

Do you see the paradox you're presenting, here?


This has nothing to do with Magicians and everything to do with Science.

...

This isn't Magic or Penn and Teller this is Scientific Research.


The only reason I brought up magic (and we both know that doesn't exist) and Penn and Teller is because you used the logical fallacy of implying that shenanigans by either the designers or the participant is impossible because other experts were there watching. That proves nothing, and I used an example of experts getting fooled in their field of expertise to illustrate THAT point and nothing more. You are bastardizing that point to make me sound ignorant to science and the scientific method, and that in and of itself is irresponsible and dishonest.


What you're saying does an actual disservice to Scientific Research. You're making these statements in a vacuum that have no evidence to support anything you're saying.

...

It's not a possibility to consider because there's no evidence that this Player throws the games he plays and people watching the match that went into overtime talked about the brilliant moves he was making. In order for this to be a possibility, you need to present some actual evidence to support your assertion.


No, I'm basing my concerns on the knowledge and reality that experiments can be and are flawed all of the time.

But hopefully we both know that a lack of evidence does not equate to evidence to the contrary. Any and all good scientists rack their brains considering all possible reasons why the results of an experiment may be tainted with inaccuracies that will negate the findings. I proposed a couple of possibilities--and they are exactly that: possibilities. Whether you want to entertain that reality or not is none of my concern, to be honest, and arguing that entertaining such possibilities, no matter how improbable, is unscientific thinking is asinine behavior, IMO.

But in the end, that's all we're left with--your opinion and mine. You're welcome to yours, just don't berate me for mine when there is good reason for me to have it (whether you think so or not).

But, if you're comfortable claiming some new technology is factual and verified beyond scientific doubt based on a couple experiments against one human, that's up to you. I consider that irresponsible, but to each their own.

Have a fabulous day.
edit on 11-3-2016 by SlapMonkey because: (no reason given)



posted on Mar, 11 2016 @ 01:25 PM
a reply to: neoholographic

I think what's far more significant about this than AI, is the fact that we've reached the cap of human ingenuity. It took a "learning computer" (I use that term loosely) to figure out how to play Go, rather than a programmer smart enough to program it. I find that far more fascinating. It's no surprise that a computer can deduce the most efficient patterns and win a game....why would anyone be surprised by this? That's what computers were designed to do from the beginning.

It's quite telling that in 2016, there exists no programmer smart enough to create a program that can play Go at a high level, with his own ingenuity.

As far as the AI is concerned, all I can say is... "It's about bleeping time". To be honest, we should have already been capable of such feats. I feel like AI is severely limited, not by technology, but by our own incompetence. I want to be clear, though, that I'm speaking of AI as a purely garbage-in, garbage-out system. AI will never be anything more than a glorified abacus.
edit on 11-3-2016 by Aedaeum because: (no reason given)



posted on Mar, 11 2016 @ 01:37 PM
a reply to: SlapMonkey

Again, you haven't presented a shred of evidence to support anything you're saying. You said:

What you've done is called me a non-scientific imbecile, for the most part, for proposing this, and then gone on to prove that Deepmind did basically the exact same thing (but without the human-element variable) to prove that their algorithm has merit.

First off, this is my point. You made these statements in a vacuum that made no sense. If you would have bothered to watch the videos or read a few pages from the links I provided, you would have answered your own question. This is why I posted the links to the research and the videos.

You said:

The only reason I brought up magic (and we both know that doesn't exist) and Penn and Teller is because you used the logical fallacy of implying that shenanigans by either the designers or the participant is impossible because other experts were there watching. That proves nothing, and I used an example of experts getting fooled in their field of expertise to illustrate THAT point and nothing more. You are bastardizing that point to make me sound ignorant to science and the scientific method, and that in and of itself is irresponsible and dishonest.

This is just PURE NONSENSE.

You can't make silly statements like these in a vacuum without a shred of evidence that supports your assertion. Do you know how complicated this game is?

In order for you to be correct, the people watching would have to be idiots, the European Champion would have to be lying, and the person he's playing now would have to be lying while going into overtime to try and win the game. You would also have to say that they got the Atari games to lie as well LOL.

What you're saying doesn't make any sense and again, you haven't provided one shred of evidence to support anything you're saying. You said:

But, if you're comfortable claiming some new technology is factual and verified beyond scientific doubt based on a couple experiments against one human, that's up to you. I consider that irresponsible, but to each their own.

WHAT???

What does factual and verified beyond scientific doubt mean?

You can't be serious. Again, I've shown you the research and the videos. You haven't presented anything that backs up anything you're saying. When you say things like "factual and verified beyond scientific doubt", it just shows you don't know what you're talking about and that you haven't bothered to even read the published papers to try and understand these things.



posted on Mar, 11 2016 @ 01:42 PM
a reply to: neoholographic

Okay, but only since you say so.

I'm smart enough to know when I'm wasting my time. I think you are, too.

Best regards. (that means I'm done with this merry-go-round)



posted on Mar, 11 2016 @ 03:04 PM

originally posted by: andy06shake
The game of Go has rules that an AI can be instructed to follow. Granted, they are rather complex and this is a significant achievement, but until we can teach our AI to think outside the box, it is still somewhat distant from achieving sentience in the same manner as us biological meat sacks.

Pretty much. True sentient AI would be the system that sends a message to the programmers: "This game sucks and I refuse to play it, even though I could easily beat you all."



posted on Mar, 11 2016 @ 04:18 PM
The human brain's neural network has enormous interconnectivity, with over 1,000 trillion synaptic connections, so I wouldn't be too worried about robots just yet. But if they start organically growing neurons, then interfacing them electronically to create a cyborg, run for the hills.



posted on Mar, 11 2016 @ 04:34 PM
a reply to: Aedaeum




I think what's far more significant about this than AI, is the fact that we've reached the cap of human ingenuity. It took a "learning computer" (I use that term loosely) to figure out how to play Go, rather than a programmer smart enough to program it. I find that far more fascinating.


Exactly. Because of limitations in our thinking and limitations in our tech we have been essentially working on the wrong end of the problem for quite some time.

Watch how fast things move along now. And the public has only been made aware of a small portion of the advances.

Sometimes your tax dollars really do work. Especially those spent by DARPA. You know, those guys that *actually* did invent the internet (among many other things)?

Watch this space...



