
Artificial intelligence just reached a milestone that was supposed to be 10 years away


posted on Mar, 11 2016 @ 12:28 AM
I'm telling you, it's coming faster than most people think. When you look at the research in this area it's growing very fast and this is just the stuff available for public consumption. Remember, the internet was around in different forms for many years before the public got access to it.

The stuff Deep Mind and other companies must be doing behind closed doors has to be really amazing. Here's more:


Artificial intelligence just overcame a new hurdle: learning to play Go, a game thousands of times more complex than chess, well enough to beat the greatest human player at his own game. Twice.

AlphaGo, an artificial intelligence system developed by Google DeepMind, is two games into a six-day, five-game match with Lee Sedol, the world's best Go player. And so far, AlphaGo has won both games — meaning that if Sedol is going to triumph, he has to stage a quick comeback.

Go, a two-player game, is played on a 19×19 grid of 361 points, with an unlimited supply of white and black game pieces, called stones. Players place stones on the intersections to create "territories" by marking off parts of the board, and can capture their opponent's pieces by surrounding them. The player with the most territory wins.

Although the rules are relatively simple, the number of possible combinations is nearly infinite — there are more ways to arrange the pieces on the board than there are atoms in the universe.

The computer's victory shocked Sedol. But it's also astounded experts, who thought that teaching computers to play Go well enough to beat a champion like Sedol would take another decade. AlphaGo did it by studying millions of games, just as Google's algorithms learn to identify photos by looking at millions of similar ones.


www.vox.com...
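The "more ways to arrange the pieces than atoms in the universe" line from the article checks out with quick arithmetic. This is a rough sketch of my own: each of Go's 361 points can be empty, black, or white, which overcounts (legality rules shrink the total), but the order of magnitude stands.

```python
import math

points = 19 * 19                                 # 361 intersections on a Go board
digits = points * math.log10(3)                  # log10 of 3^361 possible colorings
print(f"3^{points} is roughly 10^{digits:.0f}")  # about 10^172
print("estimated atoms in the observable universe: about 10^80")
```

Even this loose upper bound dwarfs the roughly 10^80 atoms usually quoted for the observable universe, which is why brute-force search is hopeless for Go.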

Here's the abstract from the paper:


The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves. Here we introduce a new approach to computer Go that uses ‘value networks’ to evaluate board positions and ‘policy networks’ to select moves. These deep neural networks are trained by a novel combination of supervised learning from human expert games, and reinforcement learning from games of self-play. Without any lookahead search, the neural networks play Go at the level of state-of-the-art Monte Carlo tree search programs that simulate thousands of random games of self-play. We also introduce a new search algorithm that combines Monte Carlo simulation with value and policy networks. Using this search algorithm, our program AlphaGo achieved a 99.8% winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0. This is the first time that a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away.


www.nature.com...
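The abstract compares AlphaGo against programs that "simulate thousands of random games of self-play." Here's a minimal sketch of that rollout idea — flat Monte Carlo, my own toy example on a simple take-away game rather than Go, and far simpler than AlphaGo's full tree search with policy/value networks:

```python
import random

# Toy game: players alternately take 1-3 stones from a pile;
# whoever takes the last stone wins.

def rollout(pile, player):
    """Play random moves to the end; return the winner (0 or 1)."""
    while True:
        pile -= random.randint(1, min(3, pile))
        if pile == 0:
            return player           # this player took the last stone
        player = 1 - player

def best_move(pile, player, sims=2000):
    """Score each legal move by random-playout win rate; pick the best."""
    scores = {}
    for take in range(1, min(3, pile) + 1):
        wins = 0
        for _ in range(sims):
            if pile - take == 0:
                wins += 1           # taking the last stone wins outright
            else:
                wins += rollout(pile - take, 1 - player) == player
        scores[take] = wins / sims
    return max(scores, key=scores.get)
```

With a pile of 3, the rollouts quickly tell it to take all 3 and win on the spot. Real Monte Carlo tree search adds a tree of statistics over these rollouts, and AlphaGo adds neural networks to guide both move selection and position evaluation.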

Deep Mind and other companies in this area will change the way we live. Think of the early room-sized computers or brick-like cellphones, and then think about where artificial intelligence and quantum computers will be 20-30 years from now. You can check out a more in-depth article at The Verge.

DEEPMIND WANTS TO APPLY MACHINE LEARNING TO SMARTPHONES, HEALTHCARE, AND ROBOTS

www.theverge.com...

Here's a video about Deep Mind and the game Go.



Also, how do we know how intelligent A.I. really is? If the intelligent machine doesn't want us to know how intelligent it is, how would we know?



posted on Mar, 11 2016 @ 12:33 AM
a reply to: neoholographic



I'm telling you, it's coming faster than most people think.


Your posts are some of the very best, but I don't agree: no, it's not.

What's coming will only ever make people think, "Hey, that's a really neat and clever and well-programmed bot."

The "milestone" of achieving AI is just too steep an "ask" for human beings who do not already understand themselves well enough to deliver on it.

Simple.





posted on Mar, 11 2016 @ 12:40 AM
And yet we're still strapping smart phones to our faces (Samsung Gear VR) for "virtual reality" ...

*sigh*

Seriously? It's 2016, computers are learning to play games, and we're just strapping a smartphone to our heads and calling it "virtual reality" for over $100.

Give me some ski goggles, duct tape, and $30, and you too can have "virtual reality".



posted on Mar, 11 2016 @ 12:43 AM
Although it is rather surprising that it was able to beat a person, it isn't that spectacular to me because a human had to program how the machine played. The intelligence reflects more on the creator than the machine.

Now, if a machine played a game it knew nothing about and learned on its own to beat a human, I'd be more impressed.


posted on Mar, 11 2016 @ 12:56 AM

originally posted by: TheLotLizard
Although it is rather surprising that it was able to beat a person, it isn't that spectacular to me because a human had to program how the machine played. The intelligence reflects more on the creator than the machine.

Now, if a machine played a game it knew nothing about and learned on its own to beat a human, I'd be more impressed.


Again, this is just wrong. You have to understand how deep learning works. Nobody is telling the machine how to play; it learns how to play.

You could just as well say humans have to "program" other humans when we send them to school. The algorithm learns just like humans do, through repetition and finding out what works. The algorithm learned to play Atari games; nobody had to program the moves in.


Google-backed startup DeepMind Technologies has built an artificial intelligence agent that can learn to successfully play 49 classic Atari games by itself, with minimal input.

DQN was only given pixel and score information, but was otherwise left to its own devices to create strategies and play 49 Atari games. This is compared to much-publicised AI systems such as IBM's Watson or Deep Blue, which rely on pre-programmed information to hone their skills.

"With Deep Blue there were chess grandmasters on the development team distilling their chess knowledge into the programme and it executed it without learning anything," said Hassabis. "Ours learns from the ground up. We give it a perceptual experience and it learns from that directly. It learns and adapts from unexpected things, and programme designers don't have to know the solution themselves."

"The interesting and cool thing about AI tech is that it can actually teach you, as the creator, something new. I can't think of many other technologies that can do that."


www.wired.co.uk...

Here's a video of the algorithm learning how to play a game.



There's no programming involved. The algorithm learns how to play the game.
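The "learns from reward, not hand-coded rules" point can be made concrete with a tiny tabular Q-learning sketch. This is my own toy example, far simpler than DeepMind's DQN (which used a deep network on raw pixels), but the principle is the same: the agent is never told the goal, only scored.

```python
import random

# Tabular Q-learning on a 5-cell corridor: states 0..4, actions -1 (left)
# and +1 (right), reward 1 only for reaching the rightmost cell. Nothing
# tells the agent where the goal is; it finds "go right" by trial and error.
N, ACTIONS = 5, (-1, +1)
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1     # learning rate, discount, exploration

for _ in range(500):                  # training episodes
    s = 0
    while s != N - 1:
        if random.random() < eps:     # explore occasionally
            a = random.choice(ACTIONS)
        else:                         # otherwise act greedily on current Q
            a = max(ACTIONS, key=lambda b: Q[(s, b)])
        s2 = min(max(s + a, 0), N - 1)
        r = 1.0 if s2 == N - 1 else 0.0
        target = r + gamma * max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2

# the learned greedy policy heads right from every non-terminal state
policy = {s: max(ACTIONS, key=lambda b: Q[(s, b)]) for s in range(N - 1)}
print(policy)
```

The reward signal here plays the role of the Atari score in DQN: the only feedback the agent ever gets.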



posted on Mar, 11 2016 @ 01:05 AM
a reply to: neoholographic



There's no programming involved. The algorithm learns how to play the game.


It's a Monte Carlo algorithm based on the closest thing we can come to the rules of attachment theory.

And incidentally it's lame.

*shrug*



posted on Mar, 11 2016 @ 01:07 AM
a reply to: neoholographic


When you look at the research in this area it's growing very fast and this is just the stuff available for public consumption.


This right here is what has me simultaneously excited about the future and concerned about those at the forefront of the confidential developments within the AI tech industry. I remember years ago people would say that whatever they let us know about, they are at least 5 years ahead.

We are absolutely on the verge of change in every single aspect of our lives with our robot friends. I saw a video promoting a new AI-driven car from BMW; the AI of it is impressive enough, but what the hell kind of technology are the frame and tires made from?




posted on Mar, 11 2016 @ 01:08 AM
I'm not sure I'd count that as intelligence. The computer knows all the moves of all the games it's been programmed with, but the guy it's playing against doesn't. He has to use intelligence to make a move, while the computer is simply copying other human strategies.

Take out the pre-programmed moves and I'll agree it's intelligence.



posted on Mar, 11 2016 @ 01:12 AM
a reply to: neoholographic

What happens if these 'computers' develop the ability to communicate with others outside their own institutions? My mind immediately plugs one of these clever little clogs into one of the world's superpower war computers, and with one side that bleeds whilst the other doesn't, I can see the end of the line. I'm all for development, but only with our intelligence controlling it, not handed down to a bloodless machine holding humanity's existence in its brain.

I do think what humanity is developing is brilliant and we should congratulate ourselves, but what protocols are actually being put in place, and who is programming them in, were anything to go wrong? Children outgrow their parents, and although Bybyots makes the point that "humans do not already understand themselves well enough", AI perhaps doesn't need to understand itself, just to set goals for itself?

I just hope that the scientists involved in AI's development are not so deep into their own brilliant machines, or into competing against rival corporations' machines, that they forget to put in some safeguards to keep us all OK.



posted on Mar, 11 2016 @ 01:17 AM
a reply to: neoholographic

It's a search algorithm that does what simulated annealing already does.

AlphaGo, that is.

So what?
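For anyone unfamiliar with the comparison being made here: simulated annealing is a general-purpose stochastic optimizer. It searches, but it has no game tree and no learned evaluation, which is worth keeping in mind when weighing it against the search the Nature abstract describes. A minimal sketch, on a toy objective of my own:

```python
import math
import random

def anneal(f, x=0.0, temp=1.0, cooling=0.995, steps=5000):
    """Minimize f by random local moves, sometimes accepting a worse
    point with probability exp(-delta/temp); temp decays each step."""
    best = x
    for _ in range(steps):
        cand = x + random.uniform(-0.5, 0.5)
        delta = f(cand) - f(x)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = cand                 # accept the move
        if f(x) < f(best):
            best = x                 # track the best point seen
        temp *= cooling              # cool down: fewer uphill moves accepted
    return best

minimum = anneal(lambda x: (x - 3.0) ** 2)   # true minimum at x = 3
```

Monte Carlo tree search, by contrast, builds a tree of game states and backs up win statistics from rollouts, and AlphaGo layers learned policy and value networks on top of that, so the two techniques share randomness but little else.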




posted on Mar, 11 2016 @ 01:23 AM
a reply to: MystikMushroom

Google has plans for cardboard VR headsets for free. So buy your own tape I guess.

To the machine's credit, it is impressive. It was given ground rules and told to practice: mimicking strategies and adapting them to situational relevance. To compare it to humans, much of our intelligence is just this. Alexander the Great was using the pincer technique what, some 2,300 years ago? Hitler killed with it some seventy years ago. I don't know if the tactic is used today, but I'm sure it is.

This machine learns. And it is scary. But give credit where credit is due. We humans are close to doing in fifty years what nature took millions of years to achieve. We have not gotten there yet, and these machines may be the next step in the evolutionary line. But damn, what a credit if we pull it off.



posted on Mar, 11 2016 @ 01:47 AM
a reply to: neoholographic



a game thousands of times more complex than chess,


This could be argued. Right?

Think this through with me? K?

Winning at Go?

It requires one to understand the culture from which it comes.

Everyone knows this. The first people that will tell you this are those in business, because they thought they were playing one game (Chess) and had to abruptly adapt to the fact that, when it came to the Chinese, they were playing another (Go).

It's not about how complicated it is; it's about another kind of IQ, what has become known as EQ, or emotional intelligence. One needs social savvy to suss out Go.

"Thousands of times more complex than chess" is a meaningless metric.



P.S. It could easily be argued that Demis Hassabis beat Lee Sedol, not any Google AI.





posted on Mar, 11 2016 @ 02:10 AM
I may have missed it, but part of why this is so amazing is that Go is not a game you can memorize moves in. Players play by instinct as much as anything else.

Going 2-0 against him is really crazy, far more so than beating someone at Chess since Go is a far more complex game.



posted on Mar, 11 2016 @ 02:14 AM

originally posted by: Sillyosaurus
Mimicking strategies and adapting them to situational relevance.

This is why it's so impressive. Go players do not use strategy like that; that's more of a chess thing. Go players look at the board and react more on intuition than anything else. It's like when you look at all your options and you just feel which option is better. You can't express it. You can't put it into an equation and explain it. You just know it, even though you don't know why or how you know it.



posted on Mar, 11 2016 @ 02:35 AM
Not AI, exactly. It's programmed to look for patterns and place the next stone based on logic. And as the game progresses and there are exponentially fewer options, it becomes simpler to deduce the next placement. Remember IBM's Deep Blue beating the chess grandmaster into submission? Or Watson on Jeopardy?
Let's have true AI.





posted on Mar, 11 2016 @ 02:48 AM
a reply to: Gothmog

Chess =/= go. Like comparing a child walking to Usain Bolt.



posted on Mar, 11 2016 @ 02:52 AM

originally posted by: Bybyots
a reply to: neoholographic

It's a search algorithm that does what simulated annealing already does.

AlphaGo, that is.

So what?







My first thought was...

Oh, like scripting MUD games?



posted on Mar, 11 2016 @ 02:58 AM
Elon Musk just bought shares in the Andrex toilet paper company.




posted on Mar, 11 2016 @ 03:36 AM
a reply to: neoholographic

DeepMind has nothing to do with self-awareness and general intelligence.



posted on Mar, 11 2016 @ 06:05 AM

originally posted by: OccamsRazor04
a reply to: Gothmog

Chess =/= go. Like comparing a child walking to Usain Bolt.

But the basics of each are exactly the same. Please insert another token and try again.



