
How algorithms secretly run the world


posted on Feb, 14 2017 @ 06:40 PM

originally posted by: neoholographic
This makes no sense. You have to structure the problem for humans when they play games, and most of the time humans have instructions on how to play the game.


The rules and the problem's structure are not the same thing. Structure refers to things such as how data is arranged and iterated over. Consider a solution tree for tic tac toe: by sending inputs and measuring how the game state develops, alongside the percentage chance of each move leading to another move, I can over time generate a Markov process that fully solves the game. That's not the only way to structure moves though.
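As a toy sketch of that measure-by-playing idea (my own illustration, not a full solver), you can play lots of random games and record how often each opening move ends in a win for X:

import random
from collections import Counter

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    # returns 'X' or 'O' for a win, 'draw' for a full board, None if the game isn't over
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return 'draw' if ' ' not in board else None

def random_playout(first_move):
    board = [' '] * 9
    board[first_move] = 'X'
    player = 'O'
    while winner(board) is None:
        empties = [i for i, v in enumerate(board) if v == ' ']
        board[random.choice(empties)] = player
        player = 'O' if player == 'X' else 'X'
    return winner(board)

stats = {move: Counter(random_playout(move) for _ in range(5000)) for move in range(9)}
for move, outcomes in stats.items():
    print(move, round(outcomes['X'] / sum(outcomes.values()), 2))   # X's win rate per opening move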

As an alternative approach to the problem, I could instead label the boxes 1-9 and assign 2 bits to each one: 00=empty, 01=X, 10=O. This results in a bitstring that's 18 bits long and represents the game state. It could begin with every tile empty, giving you a game state of 000000000000000000. From there I could randomly place an X, such as 000000000100000000 (this is the center tile), then place an O for player 2, and so on. Then I can apply a genetic algorithm to this bitstring and determine which positions and sequences of moves lead to the best outcomes.
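A minimal sketch of that encoding (again, my own illustration):

import random

CODES = {' ': '00', 'X': '01', 'O': '10'}

def encode(board):                      # board is a list of 9 cells: ' ', 'X', or 'O'
    return ''.join(CODES[cell] for cell in board)

def place_randomly(board, symbol):
    empties = [i for i, cell in enumerate(board) if cell == ' ']
    board[random.choice(empties)] = symbol

board = [' '] * 9
print(encode(board))                    # 000000000000000000  (every tile empty)
board[4] = 'X'                          # X on the center tile (box 5)
print(encode(board))                    # 000000000100000000
place_randomly(board, 'O')              # player 2's move
print(encode(board))
# A genetic approach would treat these 18-bit strings (or sequences of them) as
# genomes and score them by the outcomes the moves lead to.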

These are two fundamentally different approaches to solving tic tac toe, and they require two completely different program structures, yet the rules of the game are the same in both. They also use two totally different algorithms. It's up to the programmer to decide which they want to use. AIs aren't capable of making that decision, because AIs don't recognize abstract problems. Furthermore, even if they did, it would technically be two different AIs, since they are two different functions even if whichever is chosen falls under the same executable program.



When AlphaGo won in the game of Go it was HUGE. In this area, it's obvious you don't understand what took place. The reason Elon Musk and the C.E.O. of DeepMind were so excited about this is because they thought these milestones were 5 to 10 years away.


Elon Musk is a blowhard. His companies do cool things, but the man himself is a real piece of work and not in a good way.

Rather than link me articles, why not go to the source? I've read it, I even understand it. Have you?
gogameguru.com...

That's the paper the deepmind team published on this. You can read it for yourself.

I'll admit that I wasn't aware they had managed to successfully play a 19x19 grid game until reading this, that's pretty impressive. The techniques though aren't anything new. Basically they did this by using a little bit of each technique and mixing them all together. It's not some new breakthrough. The exact mix of everything they did hadn't been done before, but it's just a small iteration on what was already there.



The fact that you try to act like this is something that's just so simple shows your ignorance in this area. If it was so simple, why aren't you creating an A.I. company and selling it for $500 million?


Because I'm still in school, learning to do the things I want to do in life. I don't want to be an AI researcher; it's interesting from the standpoint that I like to know how stuff works, and I need some knowledge in it, but I prefer doing VR/AR work. If you understand this stuff like you claim, then why aren't you doing it?



The reason AlphaGo was seen as such a milestone is because it did something very important. It made itself better without human intervention.


That's not what the milestone was. The "milestone," if you even want to call it that, was that it was making decisions based on looking only a few moves deep. Basically, by combining a few techniques they were able to reduce the search space when following a tree by orders of magnitude. This let the computer evaluate only the better-scoring moves from a given position rather than everything.

This is going to have implications in several games that are using Monte Carlo searches to determine moves, but it's not Skynet or even a step towards Skynet because there's nothing about it from the machine side that's actually intelligent. It's like counting to 100 by 2's rather than 1's. It's more efficient, but not more intelligent.
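As a rough sketch of that pruning idea (my own toy, not AlphaGo's code): score the legal moves with a cheap heuristic and only recurse into the top k of them. In AlphaGo the scoring role is played by a trained policy network and the leaf evaluation by a value network plus rollouts; here both are stand-ins on a trivial take-the-last-stone game.

def legal_moves(n):            # state: n stones left; a move takes 1-3 stones
    return [m for m in (1, 2, 3) if m <= n]

def heuristic(n, m):           # a cheap "policy" score for move m in state n
    return 1.0 if (n - m) % 4 == 0 else 0.0

def negamax(n, depth, k=2):
    if n == 0:
        return -1              # the previous player took the last stone: player to move has lost
    moves = legal_moves(n)
    if depth == 0 or not moves:
        return 0               # depth limit reached: fall back to a neutral evaluation
    # Prune: only search the k best-looking moves instead of every legal move.
    candidates = sorted(moves, key=lambda m: heuristic(n, m), reverse=True)[:k]
    return max(-negamax(n - m, depth - 1, k) for m in candidates)

print(negamax(10, depth=6))    # prints 1: the pruned search still finds the forced win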



posted on Feb, 14 2017 @ 07:23 PM
a reply to: Aazadan

You said:

It's not some new breakthrough.

Yet, just about every researcher in the field of A.I. said it was a big breakthrough and it's something they didn't think would happen until 5 to 10 years down the road.

Why should anyone believe you over the C.E.O. of DeepMind and other A.I. Researchers?

You have shown in this thread you don't understand the issue and you don't even understand how A.I. and Big Data are connected.

You keep saying it's not intelligent but you haven't even defined what you mean by intelligence. You keep making these general statements that haven't refuted anything that has been said in this thread.

Here's the definition of intelligence.

Intelligence

a (1) : the ability to learn or understand or to deal with new or trying situations : reason; also : the skilled use of reason (2) : the ability to apply knowledge to manipulate one's environment or to think abstractly as measured by objective criteria (as tests)


You made the claim that these networks aren't intelligent.

You have A.I. systems that learn how to play games without instructions.

You have A.I. systems that can manipulate their environment in order to complete a task

You have A.I. systems that take I.Q. tests

You have A.I. systems that beat Champions at Go and Poker

Just listen to what you said:

This let the computer evaluate only the better scoring moves from a given position rather than everything.

THIS IS WHAT HUMANS DO!

This is intelligence. I've noticed that you haven't answered these simple questions. You haven't responded to anything that has been said because you don't understand the issue.

The algorithm learns through trial and error like humans. The computer learns these better-scoring moves through trial and error, and just like humans it uses these better-scoring moves during each game to get a better score the next game.

Again I ask, how do you define intelligence?

Intelligence

a (1) : the ability to learn or understand or to deal with new or trying situations : reason; also : the skilled use of reason (2) : the ability to apply knowledge to manipulate one's environment or to think abstractly as measured by objective criteria (as tests)


Do you have some secret definition that nobody but you knows about?



posted on Feb, 14 2017 @ 08:14 PM

originally posted by: neoholographic
Yet, just about every researcher in the field of A.I. said it was a big breakthrough and it's something they didn't think would happen until 5 to 10 years down the road.


Timetables are rarely accurate. That's just the way things are with any computing task, from estimating the time to write a page of HTML to estimating the time to complete a challenging project. No one is good at scheduling. That's just the nature of the industry.



Why should anyone believe you over the C.E.O. of DeepMind and other A.I. Researchers?


Elon Musk isn't an AI researcher. He's concerned about Skynet, not about reality. The other AI researchers, who you can read all about in the paper I linked, said it was an advancement, but it's not some huge revolutionary breakthrough. It's a more efficient system to handle a few problems that use very deep Markov chains. They wanted to look at MTG next (another game with deep chains), and see if they can play it optimally.



You have shown in this thread you don't understand the issue and you don't even understand how A.I. and Big Data are connected.


That's because they only have minor connections. You're just throwing buzzwords at me that you read out of magazines of questionable quality. Big data is about generating, storing, and retrieving huge amounts of data, along with a bit of metadata that tells you what you have. Filtering this isn't done by AI. You can use AIs to look through subsets of this data and look for patterns, but humans are the ones who need to outline what those patterns might be first. Beyond that, AI doesn't actually want large amounts of data. The process of AI is generally one where you create a solution out of very little starting information. Other types of data manipulation, such as filtering things (which is very hard with large databases), are what big data is concerned with. The real intersection here would be a big data system being able to generate a suitable database for an AI to work off of.


You keep saying it's not intelligent but you haven't even defined what you mean by intelligence. You keep making these general statements that haven't refuted anything that has been said in this thread.


Capable of making arbitrary decisions. No AIs in existence today are even capable of making a decision, much less an arbitrary one. It's just a bunch of comparison operators, random numbers, and a predefined goal to maximize or minimize based on the input given.
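A toy illustration of that description (my own sketch): random numbers, a comparison operator, and a predefined goal to maximize. Nothing in it "decides" anything.

import random

def goal(x):                       # the predefined objective handed to the program
    return -(x - 3.2) ** 2         # maximized at x = 3.2

best = random.uniform(-10, 10)
for _ in range(10000):
    candidate = best + random.gauss(0, 0.5)   # random perturbation
    if goal(candidate) > goal(best):          # comparison operator
        best = candidate                      # keep whichever scores higher

print(round(best, 2))              # lands near 3.2; it optimized, it didn't decide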



a (1) : the ability to learn or understand or to deal with new or trying situations : reason; also : the skilled use of reason (2) : the ability to apply knowledge to manipulate one's environment or to think abstractly as measured by objective criteria (as tests)


AIs do not do this. They cannot reason, because they cannot take lessons learned in one game and apply them to another, or even to the same game from a different starting position.



posted on Feb, 14 2017 @ 08:50 PM
a reply to: Aazadan

I applaud your patience and willingness to educate. It's apparent the person that you've been debating with is ill-informed on the subject and gets the bulk of their information from trending news articles and the occasional youtube video. Thanks for your input on the subject matter; I've come to many of the same conclusions.



posted on Feb, 14 2017 @ 09:51 PM
a reply to: Aazadan

You said:

but it's not some huge revolutionary breakthrough.

Yes it was, and this is why people in the field of physics are so excited. Again, these are real-world problems that will be solved by a system like AlphaGo. It's ASININE to try to act like this is just something simple. It's easy to say this in a vacuum on a message board, but again, you're not providing a shred of evidence to refute what has been said.


AI learns to solve quantum state of many particles at once

The same type of artificial intelligence that mastered the ancient game of Go could help wrestle with the amazing complexity of quantum systems containing billions of particles.

Google’s AlphaGo artificial neural network made headlines last year when it bested a world champion at Go. After marvelling at this feat, Giuseppe Carleo of ETH Zurich in Switzerland thought it might be possible to build a similar machine-learning tool to crack one of the knottiest problems in quantum physics.

Now, he has built just such a neural network – which could turn out to be a game changer in understanding quantum systems.

Go is far more complex than chess, in that the number of possible positions on a Go board could exceed the number of atoms in the universe. That’s why an approach based on brute-force calculation, while effective for chess, just doesn’t work for Go.

In that sense, Go resembles a classic problem in quantum physics: how to describe a quantum system that consists of many billions of atoms, all of which interact with each other according to complicated equations.

“It’s like having a machine learning how to crack quantum mechanics, all by itself,” Carleo says. “I like saying that we have a machine dreaming of Schrödinger’s cat.”


Link

Anyone who knows anything about physics knows this isn't just something simple.

But the weird rules of quantum mechanics mean we can’t know a quantum particle’s precise location at every point in time. Many quantum particles also have a property called “spin”, which can be either up or down. The number of spin-based states that a group of just 100 such particles could inhabit is almost a million trillion trillion (10^30).

The current record for simulating such a system, using our most powerful supercomputers, is 48 spins. Carleo estimates that even if we could turn the entire planet into a giant hard drive, we would still only be able to do these calculations for 100 spins at most.


Link
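For what it's worth, the spin-state count in that quote is easy to sanity-check (a quick sketch of the arithmetic, not from the article): 100 particles, each with spin up or down, give 2^100 configurations.

states = 2 ** 100
print(states)                   # 1267650600228229401496703205376
print(f"{float(states):.2e}")   # ~1.27e+30, on the order of a million trillion trillion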

This is a huge step towards strong artificial intelligence.

You have Artificial Intelligence that outperforms humans on I.Q. tests.

Artificially Intelligent Computer Outperforms Humans on IQ Test

The deep learning machine can reach the intelligence level between people with bachelor degrees and people with master degrees



The test contains three categories of questions: logic questions (patterns in sequences of images); mathematical questions (patterns in sequences of numbers); and verbal reasoning questions (questions dealing with analogies, classifications, synonyms and antonyms). Computers have never been too successful at solving problems belonging to the final category, verbal reasoning, but the machine built for this study actually outperformed the average human on these questions.

The researchers had the deep learning machine and 200 human subjects at Amazon’s Mechanical Turk crowdsourcing facility answer the same verbal questions. The result: their system performed better than the average human.


observer.com...

Yes, A.I.'s are intelligent. Wikipedia classifies A.I. as a form of Intelligence.


Artificial intelligence is intelligence in machines. It is commonly implemented in computer systems using program software.

Artificial intelligence (or AI) is both the intelligence of machines and the branch of computer science which aims to create it, through "the study and design of intelligent agents"[30] or "rational agents", where an intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success.[31] Achievements in artificial intelligence include constrained and well-defined problems such as games, crossword-solving and optical character recognition and a few more general problems such as autonomous cars.[32] General intelligence or strong AI has not yet been achieved and is a long-term goal of AI research.


en.wikipedia.org...

Like I said earlier, you don't understand this subject because you're equating all intelligence with human intelligence and that's just asinine.

This is why Wiki has a separate article for human intelligence and an article for intelligence in general.

en.wikipedia.org...

Human intelligence takes into account consciousness and self awareness. Again, you're confused because you don't know how Science looks at intelligence vs human intelligence.



posted on Feb, 15 2017 @ 10:35 AM
a reply to: neoholographic

Other than being a jerk to Aazadan, you are missing several fundamental issues with your own argument.

Big Data is known as a "buzzword" in the computer science industry. Academics love to use this buzzword because it makes them look good to those who are investing in research--so it's about the money. And it's the same story in industry. If you want people to invest in your company, you say you're doing NEW, BIG THINGS WITH BIG DATA so that idiotic, uninformed investors will throw money at you. And people keep doing it because it works! Before Big Data was a buzzword, people used big data all the time. It's just a name given to a larger-than-average database, or the contents of several databases with related information. The only thing that's changed is that there is more data today than yesterday.

Same with "The Cloud". It's just a data center with more than one machine, just like what existed before people used "The Cloud" as a buzzword. If you connect 2 computers together in one of your closets, and can access those computers from the internet, you just created a "cloud".

When you use buzzwords in an argument, it is proof that you don't understand the industry... because these terms are only used with people who you're attempting to "sell something" to. People in the industry either say the company name or the technology used, so "Amazon AWS" or "Hadoop" or "Redis". These examples are all completely different technologies that store data in completely different ways.

Big Data is used in reference to AI to prove to investors that you have a sufficient amount of data to train your AI, because AIs need a lot of data to make any type of meaningful connection in a practical setting... plus you can ask for more money.

AlphaGo isn't really that impressive to me. Don't get me wrong, it isn't "unimpressive", but it isn't highly impressive either. A Poker bot that learns to bluff is far more impressive to me than a computer playing "Go". But even bluffing doesn't necessarily show "emergent properties" related to intelligence, because the machine was trained to bluff.

The only AI that I've been significantly impressed with is one designed for designing circuit boards. It was a program created over a decade ago that would figure out more optimal layouts for PCBs (Printed Circuit Boards). After researchers had a prototype up and running for an incredibly long time, it seemed to come up with more optimal layouts than some of the professional engineers. And it even started lumping elements together in ways they had never seen before. To me, that's one of the only examples I've heard of where "emergent properties" (a phenomenon we link with evolution) came out of a computer program--where it didn't just beat a human in speed, but created new structures that were superior to AND DIFFERENT from the original human designs.

Here is why this version of AI is impressive to me: If a robot could be trained to understand its own circuitry, over time it could learn to upgrade itself with such a program. THAT seems like a little shining beacon for intelligent AI, but it's just one component of what we would consider to be intelligence.

You quoted:


Go is far more complex than chess, in that the number of possible positions on a Go board could exceed the number of atoms in the universe. That’s why an approach based on brute-force calculation, while effective for chess, just doesn’t work for Go.


This is just plain misleading. Chess and Go are both in the same complexity class, called "EXPTIME-complete". But Go is a 19x19 board, whereas Chess is an 8x8 board. So Go is just a "bigger version" of a problem that is of "equal complexity" to solve. Therefore, Go is NOT "more complex than chess". Go is simply a game of Chess that requires more memory. Chess already requires more memory than all of the RAM on planet Earth, so it doesn't really deviate from the same restrictions as Go. To be clear, Go IS A HARDER GAME because it is bigger, but "complex" has a strict definition, and Go is not more "complex". What does this practically mean? You take the exact same setup you used to solve Chess, add more hardware, then play Go. There was no additional "leap of logic" from Chess to Go.
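To put rough numbers on the size difference (a crude back-of-the-envelope of my own that ignores legality rules entirely, not a complexity argument): each Go point is empty, black, or white, and each chess square is empty or holds one of 12 piece types.

from math import log10

go_bound    = 3 ** (19 * 19)    # naive bound on 19x19 Go board states
chess_bound = 13 ** (8 * 8)     # naive bound on 8x8 chess board states
atoms       = 10 ** 80          # rough count of atoms in the observable universe

print(f"Go    ~ 10^{log10(go_bound):.0f}")     # ~10^172
print(f"Chess ~ 10^{log10(chess_bound):.0f}")  # ~10^71
print(go_bound > atoms, chess_bound > atoms)   # True False: Go's naive bound dwarfs the atom count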

Quantum mechanics would be a fantastic problem for modern AI. QM is mostly statistical calculations, which AIs excel at. I wouldn't bet that an AI would unlock the secrets of the universe, but it would free up researchers to do something other than brute force calculations, which is a waste of a physicist's time.

Also, in a way, you are right about intelligence being used more broadly than just "human intelligence". However, computer science developed a "gold standard" for measuring intelligence many, many years ago. Alan Turing created the "Turing Test": en.wikipedia.org...

So while many people talk broadly about intelligence in relation to machines, there is actually a standard that has been around since 1950 for classifying a machine as intelligent, and that standard implies human intelligence. You'd probably be hard-pressed to find many people who don't equate machine intelligence with human intelligence... but you're right that the two are incorrectly treated as one and the same.



You have A.I. systems that learn how to play games without instructions.
...
You have A.I. systems that take I.Q. tests


AI systems do NOT learn how to play without instructions. They do have initial conditions programmed in and are given the ability to move freely. At that point, they are allowed to play independently. They are also forced to vary their movements, often methodically, otherwise they might decide to play an infinite game by never moving.

If you taught a human all of the questions on an IQ test before the IQ test, the human would do better, too. And machines are pretty limited as to what questions they can answer. For example, if you show a picture of a piano to a human and ask "What's missing?", the human can break down the construction of a piano in their mind and make a methodical decision about the piano's design expectation vs its design reality. A computer can't say "the black keys are missing", because it would only see a picture that it classifies as a valid piano with a 96% probability. Could you train a computer to understand pianos on that level? I assume you could, but it wouldn't be very good at many other things, as the training time (and hardware) needed for a computer to learn a piano would not be trivially small--quite the opposite.
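A tiny toy of what "classifies it as a valid piano with a 96% probability" means mechanically (my own made-up numbers, not a real model): the network emits scores, a softmax turns them into probabilities, and a probability is all you get back.

import math

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

labels = ["piano", "organ", "harpsichord"]
logits = [5.1, 1.8, 1.9]                      # made-up network outputs for one image
for label, p in zip(labels, softmax(logits)):
    print(f"{label}: {p:.2f}")                # piano: ~0.93, with no notion of missing keys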

AI has an IQ of a 4 year old: www.bbc.com...
Proof that the age of 4 is a bad time to give a child an IQ test, as it doesn't correctly correlate with their IQ: infoproc.blogspot.it...

In short, BUSTED!



posted on Feb, 15 2017 @ 03:38 PM
a reply to: Protector

This is Fantasy Island stuff. You're worse than the last poster who didn't know the difference between general intelligence and human intelligence. You both talk about buzzwords and you're living in the world of alternative facts.

Big Data isn't a buzzword, it describes a reality that the world of science is facing with the real growth of information.

What is big data?

Every day, we create 2.5 quintillion bytes of data — so much that 90% of the data in the world today has been created in the last two years alone. This data comes from everywhere: sensors used to gather climate information, posts to social media sites, digital pictures and videos, purchase transaction records, and cell phone GPS signals to name a few. This data is big data.


www-01.ibm.com...

This isn't a buzzword, it's the real growth of data that will grow even faster as the internet of things begins to saturate the market with smart homes and self-driving cars.

Humans can't make sense of all this data, so we need machine intelligence. This big data helps us with things like medical records, looking into new cures, and identifying the activity of terrorists. For instance with Twitter, terrorist activity can get lost in the data because there are around 500 million tweets per day and 200 billion tweets per year, so you need machine intelligence going over the data in order to make connections.

Again, this isn't a buzzword, this is the real world.

I could live on Fantasy Island and say words like electromagnetism or gravity are just buzzwords but I would look like an idiot so I won't say anything like that.

It's also crazy how you and the other guy try to belittle AlphaGo. That's just asinine. There's a reason everyone from Elon Musk to Demis Hassabis, the C.E.O. of DeepMind, hailed this as a huge advancement in A.I.

This is because the algorithm made newer and better versions of itself. The whole idea behind A.I. and the intelligence explosion is that intelligent algorithms will quickly become superintelligent once they're able to create better versions of themselves. This is exactly what happened with AlphaGo.

You have artificial intelligence that can write machine learning software.

AI Software Learns to Make AI Software


Progress in artificial intelligence causes some people to worry that software will take jobs such as driving trucks away from humans. Now leading researchers are finding that they can make software that can learn to do one of the trickiest parts of their own jobs—the task of designing machine-learning software.

In one experiment, researchers at the Google Brain artificial intelligence research group had software design a machine-learning system to take a test used to benchmark software that processes language. What it came up with surpassed previously published results from software designed by humans.

In recent months several other groups have also reported progress on getting learning software to make learning software. They include researchers at the nonprofit research institute OpenAI (which was cofounded by Elon Musk), MIT, the University of California, Berkeley, and Google’s other artificial intelligence research group, DeepMind.


www.technologyreview.com...

Again, this is the whole point behind A.I. and what could be an intelligence explosion. You have a set of intelligent algorithms that can replicate themselves, and the new version will be better than the old one. This is what happened with AlphaGo, and it's why A.I. researchers were so excited about what happened.

Also, like I said, scientists are using AlphaGo to solve EXTREMELY COMPLEX problems in physics. They have designed a neural net based on AlphaGo that could solve the quantum state of many particles at once.

THIS IS HUGE!

It could transport us 50-100 years into the future in terms of technology and scientific understanding.

A many-particle system of just 100 particles can have a million trillion trillion possible spin states. Our most powerful computers can only simulate systems of 48 spins. The AlphaGo-style neural net will solve the quantum state of a many-particle system and will learn enough to make connections and give us insights into these multi-particle systems, which will just be HUGE.

Link AlphaGo



posted on Feb, 15 2017 @ 07:00 PM

originally posted by: neoholographic
Yes it was, and this is why people in the field of physics are so excited. Again, these are real-world problems that will be solved by a system like AlphaGo. It's ASININE to try to act like this is just something simple. It's easy to say this in a vacuum on a message board, but again, you're not providing a shred of evidence to refute what has been said.


No, it wasn't. Some new AI technique would be a breakthrough. Simply combining what is already there is not a breakthrough. It's an improvement, but not all improvements are breakthroughs.


This is a huge steps towards strong artificial intelligence.


Care to link something substantial rather than a popsci article? Those articles aren't meant to be technically accurate; they're meant to sell the science and get people excited about it, not teach you about the gritty reality of what AI is and how it works. Companies want to sell you the idea that Cortana is a sentient being who knows how to assist you. It's an illusion though.

Here, if you're really interested in AI, not just the illusion of what these companies are selling but the actual specifics of how it functions, its limitations, and the strengths/weaknesses of various approaches, here are some resources:
www.amazon.com...
www.amazon.com...
www.amazon.com...=sr_1_7?ie=UTF8&qid=1487205247&sr=8-7&keywords=book+artificial+intelligence

Those are books I've used in my classes; they're all excellent.

Here's an entire semester's worth of lectures you can sit through.
ai.berkeley.edu...
Here's the associated slides
ai.berkeley.edu...

To throw some words back at you from earlier. Perhaps it's you that should do some reading. I just provided you with more than enough sources to get started. Best of all, you don't even need to know how to code to just listen to the lectures since it's all just video and theory.



You have Artificial Intelligence that outperforms humans on an I.Q. tests.


Which means what exactly? IQ tests have been completely discredited as any sort of measure of intelligence.


originally posted by: neoholographic
Big Data isn't a buzzword, it describes a reality that the world of science is facing with the real growth of information.


No. Big Data is 100% a buzzword. A couple of years ago it was just called database management. You see, as databases grow larger, it becomes a bigger challenge to store, find, and retrieve information in an acceptable time span. Big data revolves around the idea that you have ginormous databases and need custom hardware/software solutions to make unwieldy amounts of data usable. For example, one of the problems they try to address is the fact that it's virtually impossible for a computer to write 4 TB of data with a 0% error rate. So they set up arrays of drives to check each byte, and ensure accuracy through majority rule.
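A toy sketch of that majority-rule idea (my own illustration, not any particular storage product): keep several copies of each byte and trust whatever value most of the copies agree on.

from collections import Counter

def majority_byte(copies):
    # copies: the same byte as read back from several drives
    value, votes = Counter(copies).most_common(1)[0]
    return value

print(majority_byte([0x4F, 0x4F, 0x4E]))   # 79 (0x4F): the single corrupted copy is outvoted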

The reason big data is a buzzword is that most companies describe themselves as big data if they're working with any sort of data gathering/processing at all. In reality there's only a handful of companies in the world that are truly working with huge data sets. For example, the way Google or even just a subsidiary like Youtube has to store its data is a lot different from how funnycatvideos.com has to store theirs, even though both are in the web video sector, and both will likely call themselves big data.


This is because the algorithm made newer and better versions of itself. The whole idea behind A.I. and the intelligence explosion is that intelligent algorithms will quickly become super intelligent when it's able to create better versions of itself. This is exactly what happened with AlphaGo.


It didn't make newer and better versions of itself. It changed the weighting of various factors to make certain actions more or less likely to occur. The basic algorithm never changed.



posted on Feb, 16 2017 @ 02:44 AM
a reply to: Aazadan

You and the other guy live in a fantasy world where real science is just a buzzword. It's asinine and makes no sense. Everything that challenges and refutes what you're saying is just a buzzword. Don't you see how silly that sounds? You live in a world of alternative facts.

Artificial Intelligence does well on an I.Q. test.

You: I.Q. tests are meaningless.

These systems are clearly intelligent and human intelligence is something different than general intelligence.

You: Intelligence is just a buzzword

Big Data is a huge part of our world and Data Scientist are helping to change the world everyday.

You: Big Data is just a buzzword

AlphaGo is a breakthrough that is hailed by AI Researchers around the world.

You: I don't know much about this field but it's just a simple algorithm that anyone can do, yet Google paid over $500 million for DeepMind instead of just getting an M.I.T. student to write these simple algorithms.

AlphaGo gets better at playing Go by playing 13 million games against itself, and newer versions of itself beat older versions, according to Demis Hassabis, the C.E.O. of DeepMind.

You: It didn't make better versions of itself, and you should believe me, even though I've said I don't know much about A.I. research, instead of the guy who helped design the system and whose company got over $500 million.

You or the other guy said the number of possible configurations in the board game Go doesn't add up to more than the number of atoms in the universe.

Here's a video where Demis says this and also talks about creating general purpose algorithm.



Here's more about Demis:

At the age of 13 Hassabis reached the rank of chess master, and was the second-highest-rated player in the world under 14 at the time – beaten only by the Hungarian chess grandmaster and strongest female chess player in history, Judit Polgár.

In 1999, aged 23, he won the Mind Sports Olympiad – an annual international multi-disciplined competition for games of mental skill. He won it a record five times before retiring from competitive play in 2003.

Hassabis received his PhD in cognitive neuroscience from University College London in 2009. He continued his neuroscience and AI research at the Gatsby Computational Neuroscience Unit at UCL as a Wellcome Trust research fellow. He was also a visiting researcher jointly at MIT and Harvard.


www.theguardian.com...

In your mind though, I should believe your alternative reality when you say you don't know much about this research instead of the guy who sold his company to Google and is one of the leading Researchers in this area.

You're living in a fantasy world where everything real that challenges your beliefs is called a buzzword.



posted on Feb, 16 2017 @ 08:33 AM
a reply to: neoholographic

You still haven't linked a single scholarly article on the subject, or provided any credentials at all to say you understand the field you're going on about. All you're linking is entertainment articles with no substance.

Also yes, I'm not a professional AI researcher; by definition I am not an expert in the field (to work entry level in AI research, you typically need a Masters in CS focused on ML; to be an expert you need decades of experience and a PhD). However, I'm not totally clueless either, as I understand both the math and the techniques involved.

And even if you want to dismiss that, go back to page 1. Riffrafter, I think it was, spoke up as well and told you that reality is different from how your articles are presenting it to you.



posted on Feb, 16 2017 @ 11:31 AM
a reply to: Aazadan

More Fantasy Island stuff.

Every article I posted has comments from the scientists that carried out the research and links to the research. What this shows me is that you didn't even bother to read the research and studies that support every article I posted.

I posted videos of the actual research, plus the C.E.O. of the company that carried out the research agreeing with what I'm saying. The problem is, you try to relegate any evidence that doesn't agree with your belief to something meaningless, or call it a buzzword. This is called lying to yourself.

When you're presented with facts that goes against your belief, you don't try to refute the evidence but you try to blindly dismiss the evidence. This is just asinine.

When I presented evidence of A.I. and I.Q. tests, you couldn't refute the evidence so you said I.Q. tests are meaningless. So the Researchers who carried out the Research are just idiots because I.Q. tests are meaningless. Again, this is just asinine. You can't refute the evidence so you try to dismiss it out of hand. That's called lying to yourself.

When I talked about A.I. and the growth of Big Data, you didn't try to refute any evidence presented; you said Big Data was just a buzzword. Again, this is just lying to yourself. Since you can't refute the evidence, you want to try to dismiss it by calling it just a buzzword. It's a High School debate tactic that's just silly.

Most people listen to facts and then adjust their beliefs accordingly. There are others though who say FACTS BE DAMNED and I'm going to bury my head in the sand and believe what I want to believe. Sadly you fall into the latter category.



posted on Feb, 16 2017 @ 01:39 PM
a reply to: neoholographic

Whatever. You win. Go, embrace ignorance. Fear the impending machine uprising and all those dirty algorithms.

Meanwhile, I'm going to continue to live in the real world, and actually understand how things function, building my own toys that utilize these concepts.

Shame you'll never be able to do more than read about others' accomplishments, because when people try to teach you, you tell them they're wrong.



posted on Feb, 17 2017 @ 12:03 AM
a reply to: Aazadan

Your signature does state, "Every madman is convinced of his own rationality."

Maybe neoholographic is a neural network learning how to argue on forums.

I found the "papers" (arXiv articles):

Regarding:

Otkrist Gupta, a researcher at the MIT Media Lab, believes that will change. He and MIT colleagues plan to open-source the software behind their own experiments, in which learning software designed deep-learning systems that matched human-crafted ones on standard tests for object recognition.
- www.technologyreview.com...


From: arxiv.org...

During the entire training process (starting at ε = 1.0), we maintain a replay dictionary which stores (i) the network topology and (ii) prediction performance on a validation set, for all of the sampled models. If a model that has already been trained is re-sampled, it is not re-trained, but instead the previously found validation accuracy is presented to the agent. After each model is sampled and trained, the agent randomly samples 100 models from the replay dictionary and applies the Q-value update defined in Equation 3 for all transitions in each sampled sequence. The Q-value update is applied to the transitions in temporally reversed order, which has been shown to speed up Q-value convergence (Lin, 1993).


LOL! They store the result in a dictionary, then use that value on the next training run. Wow! What an advancement! hahahahha. ... yes, I know this wasn't the purpose of the paper... just funny.
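In essence that "replay dictionary" is just memoization. A minimal sketch of the idea (my simplification, not the paper's code; train_and_validate is a stand-in name for the expensive training run):

replay = {}   # network topology -> validation accuracy seen previously

def validation_accuracy(topology, train_and_validate):
    key = tuple(topology)                    # e.g. a sequence of layer descriptions
    if key not in replay:                    # only train topologies we haven't seen before
        replay[key] = train_and_validate(topology)
    return replay[key]

dummy_train = lambda topo: print("training", topo) or 0.91   # stand-in for days of GPU time
print(validation_accuracy(["conv3x3", "pool", "dense10"], dummy_train))  # trains, returns 0.91
print(validation_accuracy(["conv3x3", "pool", "dense10"], dummy_train))  # cache hit, no retraining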

Actually, they took lots of little steps to speed up training across their network topologies:

During the model exploration phase, we trained each network topology with a quick and aggressive training scheme. For each experiment, we created a validation set by randomly taking 5,000 samples from the training set such that the resulting class distributions were unchanged. For every network, a dropout layer was added after every two layers. The i-th dropout layer, out of a total n dropout layers, had a dropout probability of i/2n. Each model was trained for a total of 20 epochs with the Adam optimizer (Kingma & Ba, 2014) with β1 = 0.9, β2 = 0.999, ε = 10^−8. The batch size was set to 128, and the initial learning rate was set to 0.001. If the model failed to perform better than a random predictor after the first epoch, we reduced the learning rate by a factor of 0.4 and restarted training, for a maximum of 5 restarts. For models that started learning (i.e., performed better than a random predictor), we reduced the learning rate by a factor of 0.2 every 5 epochs. All weights were initialized with Xavier initialization (Glorot & Bengio, 2010). Our experiments using Caffe (Jia et al., 2014) took 8-10 days to complete for each dataset with a hardware setup consisting of 10 NVIDIA GPUs.
After the agent completed the ε schedule (Table 2), we selected the top ten models that were found over the course of exploration. These models were then finetuned using a much longer training schedule, and only the top five were used for ensembling. We now provide details of the datasets and the finetuning process.
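A plain-Python restatement of that schedule as I read it (a sketch, not the authors' code):

def dropout_probability(i, n):
    # dropout probability of the i-th dropout layer out of n total (1-indexed): i/2n
    return i / (2 * n)

def learning_rate(epoch, initial=0.001, drop=0.2, every=5):
    # reduce the learning rate by a factor of 'drop' every 'every' epochs
    return initial * (drop ** (epoch // every))

print([round(dropout_probability(i, 4), 3) for i in range(1, 5)])      # [0.125, 0.25, 0.375, 0.5]
print([round(learning_rate(e), 6) for e in (0, 5, 10)])                # [0.001, 0.0002, 4e-05]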


So they gather pre-existing AI network topologies for solving popular CNN (Convolutional Neural Network--for visual data) problems. Then they walk the layers of those CNNs, mixing and matching, until one of the topologies shows "learning". Later, the most efficient one is selected, and the final result tends to be a network topology that is EQUAL to the best-performing topology among the pre-existing ones.

I guess that's interesting, but the reason why they match/equal the older human-designed AI network topologies is because the AI is walking the human-created ones and looking for a more optimal solution by mixing and matching them. I get the idea of "using optimal solutions to create more optimal solutions", but I don't see an advancement in AI here. It really seems like a new approach for selecting the most ideal solution from a bucket of ideal solutions across many similar domains (in this case, visual-data-focused CNNs).

No wonder the chairman of Google said their AI research is still in its infancy: www.itpro.co.uk...

And secondly,

One set of experiments from Google’s DeepMind group suggests that what researchers are terming “learning to learn” could also help lessen the problem of machine-learning software needing to consume vast amounts of data on a specific task in order to perform it well.
- www.technologyreview.com...


This one shows more potential, at first glance... arxiv.org...


The key result, which emerges naturally from the setup rather than being specially engineered, is that the recurrent network dynamics learn to implement a second RL procedure, independent from and potentially very different from the algorithm used to train the network weights. Critically, this learned RL algorithm is tuned to the shared structure of the training tasks. In this sense, the learned algorithm builds in domain-appropriate biases, which can allow it to operate with greater efficiency than a general-purpose algorithm.


So the AI determines a more optimal Reinforcement Learning (RL) approach than its own initial general conditions--that is, tweaks its own variables to bias itself away from its initial setup to optimize for a specific domain. That could be an advantage in an ideal scenario. Although, it also sounds like it'd have greater problems with any input data (training set) outside of its optimized domain. Meaning, the general-purpose algorithm would probably be more optimal in those cases.

This sounds like "AI task specialization". Maybe an AI for a factory robot would be able to train itself much faster for particular factory tasks, since they are largely within the same domain (I assume).

IN SUMMARY, the AI programs are not programming themselves, yet... they are still just updating their variables. ALSO, they appear to still be brute forcing optimization problems.



posted on Feb, 17 2017 @ 12:33 AM

originally posted by: Protector
So the AI determines a more optimal Reinforcement Learning (RL) approach than its own initial general conditions--that is, tweaks its own variables to bias itself away from its initial setup to optimize for a specific domain. That could be an advantage in an ideal scenario. Although, it also sounds like it'd have greater problems with any input data (training set) outside of its optimized domain. Meaning, the general-purpose algorithm would probably be more optimal in those cases.


I tried doing something similar with my AI. It can take a list of cards and build an MTG deck out of them, then play it. Like any strategy game though, there's something of a slider between taking aggressive and defensive lines, and how that plays into your strategy. For a while I was trying to find a way to dynamically build decklists from a pool of cards, and then adjust the optimal strategy based on that specific list. I had to give up; after a while I realized I had no way to solve the problem, because as soon as one variable changes (my deck list, my strategy, opponent deck list, opponent strategy) all previous assumptions fall apart and it has to start from scratch. Someone could probably do it; it's on the shortlist of tasks they're going to try to solve with DeepMind next. That someone won't be me though.



posted on Feb, 17 2017 @ 03:00 AM
a reply to: Protector

A long winded post filled with nothing.

You haven't refuted anything I've said about intelligence.

You haven't refuted anything I've said about Big Data.

You haven't refuted anything I've said. All you have done is post things out of context. Let's look at the abstract you quoted from, which you failed to post. The reason you failed to post it is because you want to pull things out of context. In the abstract the researchers talk about areas of advancement and areas where they need improvement.

What you just did was isolate areas where improvement may be needed which is just a lie by omission.


In recent years deep reinforcement learning (RL) systems have attained superhuman performance in a number of challenging task domains. However, a major limitation of such applications is their demand for massive amounts of training data. A critical present objective is thus to develop deep RL methods that can adapt rapidly to new tasks. In the present work we introduce a novel approach to this challenge, which we refer to as deep meta-reinforcement learning. Previous work has shown that recurrent networks can support meta-learning in a fully supervised context. We extend this approach to the RL setting. What emerges is a system that is trained using one RL algorithm, but whose recurrent dynamics implement a second, quite separate RL procedure. This second, learned RL algorithm can differ from the original one in arbitrary ways. Importantly, because it is learned, it is configured to exploit structure in the training domain. We unpack these points in a series of seven proof-of-concept experiments, each of which examines a key aspect of deep meta-RL. We consider prospects for extending and scaling up the approach, and also point out some potentially important implications for neuroscience.


This makes everything you quoted just look dishonest. This is why you need to post the Abstract because the Abstract tells you the goals the Researchers are trying to obtain.

First off, they immediately support what I'm saying. They say deep reinforcement learning has obtained superhuman performance in a number of challenging domains.

BOOM!

There's a reason people like you quote things out of context and don't list things like the Abstract.

So in certain tasks, deep learning has had superhuman performance when you look at areas like Atari or Go. Exactly what I have been saying. Next we get to the purpose of this paper and these experiments.

However, a major limitation of such applications is their demand for massive amounts of training data. A critical present objective is thus to develop deep RL methods that can adapt rapidly to new tasks.

Again, nothing you said has refuted anything I have been saying. It says the limitation in these areas is the need for massive amounts of training data. This can use a lot of computing power, as pointed out in the article you quoted from.


Bengio says the more potent computing power now available, and the advent of a technique called deep learning, which has sparked recent excitement about AI, are what’s making the approach work. But he notes that so far it requires such extreme computing power that it’s not yet practical to think about lightening the load, or partially replacing, machine-learning experts.

Google Brain’s researchers describe using 800 high-powered graphics processors to power software that came up with designs for image recognition systems that rivaled the best designed by humans.


www.technologyreview.com...

Again, a lie by omission. What you have quoted doesn't refute anything. The researchers are simply saying that deep learning requires the use of massive amounts of data, which takes up a lot of computing power, vs. humans, who still use data to learn but can adapt more quickly while using very little data.

This is what he means by "learning to learn." The system learns, it's just not as efficient as humans. Again, this is exactly what I have been saying. I never said these systems have reached human-level intelligence, but they are intelligent and they learn.

This could also be remedied by quantum computing. For instance, AlphaGo played 13 million games against itself. It improved, but it took 13 million games. I remember an interview with a professional poker player who did the same thing. He said he played thousands of games against himself when he was learning how to play poker.

In this instance, both learned; it just took the human poker player fewer hands to adapt, while the computer took 13 million games, which took up more processing power.

So if we had a powerful quantum computer, the machine learning system would be just as efficient as the human brain, because a quantum brain would use very little processing power to play 13 million games, whereas today our most powerful supercomputers can just look at 48 spins of the wave function of 100 particles, which could inhabit a million trillion trillion spin states.

LET ME SAY THAT AGAIN!

This just illustrates what this paper is talking about. It takes a lot of power to process these things at once or in parallel, and reaching even 48 spins takes our most powerful supercomputers. This shows the power of AlphaGo and of quantum computing. AlphaGo is being used to make connections among a million trillion trillion different spin states that particles can inhabit. This is HUGE, and the paper also talks about how this can be used in neuroscience, not just in a factory.

Another lie by omission that you told.

Now, let's go to the solution. Since we don't have a quantum computer to carry out these tasks, researchers are doing their job and looking for other ways these LEARNING SYSTEMS can learn faster and adapt quicker.

So they INTRODUCED a new approach to tackle this problem. Again, another lie by omission. This isn't refuting anything that I have said, it's just introducing a new approach called deep meta-reinforcement learning. Now, let's look at the conclusion:


CONCLUSION

A current challenge in artificial intelligence is to design agents that can adapt rapidly to new tasks by leveraging knowledge acquired through previous experience with related activities. In the present work we have reported initial explorations of what we believe is one promising avenue toward this goal. Deep meta-RL involves a combination of three ingredients: (1) Use of a deep RL algorithm to train a recurrent neural network, (2) a training set that includes a series of interrelated tasks, (3) network input that includes the action selected and reward received in the previous time interval. The key result, which emerges naturally from the setup rather than being specially engineered, is that the recurrent network dynamics learn to implement a second RL procedure, independent from and potentially very different from the algorithm used to train the network weights. Critically, this learned RL algorithm is tuned to the shared structure of the training tasks. In this sense, the learned algorithm builds in domain-appropriate biases, which can allow it to operate with greater efficiency than a general-purpose algorithm.


Cont.



posted on Feb, 17 2017 @ 03:05 AM

This bias effect was particularly evident in the results of our experiments involving dependent bandits (sections 3.1.2 and 3.1.3), where the system learned to take advantage of the task’s covariance structure; and in our study of Harlow’s animal learning task (section 3.2.2), where the recurrent network learned to exploit the task’s structure in order to display one-shot learning with complex novel stimuli.


Just lies by omission all over the place. This study did show improvement using deep meta-reinforcement learning, and it learned how to adapt faster from a smaller data set. This is the point of the study. Your whole post is a mishmash of nonsense that takes things out of context.

So using 5,000 samples vs. playing 13 million games is an improvement using this technique, which again is the point of the study, but I don't think you understand this, and that's why you didn't post the abstract and just SELECTIVELY quoted areas out of context, in a way that has nothing to do with the purpose of the study.



posted on Feb, 19 2017 @ 04:24 PM
a reply to: neoholographic

No one here is claiming that computers can't do things better than humans, but what you're failing to grasp is that it's not "intelligence" in the practical sense of the word. We're talking about a machine that brute-forces its way through problems, while humans have the ability to abstract information from experience onto different problems to solve them with a much higher efficiency. A human being can play a first person shooter and apply that knowledge to play any other first person shooter game with significantly increased efficiency; a computer cannot. A computer can only solve a problem it was specifically designed to look for or structured to find.

Just to give you an idea: if you teach a human being that a "high score" is the goal, I guarantee he won't have to iterate over the game very long (depending on the complexity of the game) to figure out an efficient method for scoring high. The reason for this is that humans have what is called abstract thought, ingenuity, and experience. Every other game they've played and every other circumstance they've been in (learned from) helps them better understand the game. A computer doesn't understand anything on that level. It sees 1's and 0's and tries to make sense of them based on the algorithms being employed, like the smart people in this thread have been telling you.

Brute-forcing billions of tries at a game is not "learning", it's trial and error. If it was able to actually learn from this experience, then every other game it played in the genre would (like a human being) take only a few tries to find an efficient path to scoring high. Now the one thing a computer can do that a human can't do (in a timely manner), is attempt to try every permutation for a game environment until they reach the absolute perfect method for beating the game with the highest score. For a lot of games, it could take years for a human to master a "perfect run" or "perfect performance". This is why it's better to use AI for tasks that would take humans too long, like sorting through petabytes of data, looking for specific patterns.

This is not "intelligence", this is cultivated brute-force.

I want to give a shout out to Aazadan and Protector for their insight on AI; it was very illuminating. Aazadan, I was wondering if I might be able to pick your brain in PM? I have a project I'm working on that I'd love to have your advice on, though I'm afraid you might not be coming back to this thread haha, for obvious reasons.



posted on Feb, 19 2017 @ 06:14 PM
a reply to: Aedaeum

Your reply comes from your total misunderstanding of these issues. These things are advancing rapidly. You don't understand intelligence in any way. Scientifically speaking, human intelligence isn't the only level of intelligence. That's why people talk about strong A.I. or human level A.I.

Where in this thread have I said these systems have reached human-level intelligence? We're moving in that direction, and that's why people in this area of research were so excited about AlphaGo. Human-level intelligence isn't the only intelligence, scientifically speaking. This is why these systems are called intelligent systems. You said:

No one here is claiming that computers can't do things better than humans, but what you're failing to grasp is that it's not "intelligence" in the practical sense of the word.

This makes no sense at all.

I'm not talking in any subjective "practical" sense of the word. I'm talking in terms of science. These systems learn and they're intelligent; they just haven't reached human-level intelligence.

Learning by trial and error is what we do. These systems aren't programmed to learn a specific behavior. This is why you had some systems learning how to be more aggressive while other systems learned how to be more peaceful in one study. The programmers didn't tell them to become more aggressive or how to respond when they became more aggressive. They learned this from taking information from their environment.

You talk about brute force, but again that's just nonsense. The only reason this matters is because of computing power, and as I pointed out in the last published article posted, which you misrepresented, the researchers tried to figure out ways to reduce this. That doesn't really matter though, because quantum computing will throw that problem out of the window. So an intelligent machine playing 13 million games would take up very little computing power and will still learn new strategies in the game of Go.

These systems are getting better and better, here's recent articles on the systems playing Poker.

Inside Libratus, the Poker AI That Out-Bluffed the Best Humans

Libratus, the poker-playing AI, destroyed its four human rivals

Oh the humanity! Poker computer trounces humans in big step for AI

Why Poker Is a Big Deal for Artificial Intelligence


Sadly, you and the others just don't understand the research in these areas, and you're substituting your subjective, "practical" use of the term intelligence, which is meaningless, for how it's used scientifically.

AI Decisively Defeats Human Poker Players


Humanity has finally folded under the relentless pressure of an artificial intelligence named Libratus in a historic poker tournament loss. As poker pro Jason Les played his last hand and leaned back from the computer screen, he ventured a half-hearted joke about the anticlimactic ending and the lack of sparklers. Then he paused in a moment of reflection.

“120,000 hands of that,” Les said. “Jesus.”

Even more important, the victory demonstrates how AI has likely surpassed the best humans at doing strategic reasoning in “imperfect information” games such as poker. The no-limit Texas Hold’em version of poker is a good example of an imperfect information game because players must deal with the uncertainty of two hidden cards and unrestricted bet sizes. An AI that performs well at no-limit Texas Hold’em could also potentially tackle real-world problems with similar levels of uncertainty.

“The algorithms we used are not poker specific,” Sandholm explains. “They take as input the rules of the game and output strategy.”


In other words, the Libratus algorithms can take the “rules” of any imperfect-information game or scenario and then come up with its own strategy. For example, the Carnegie Mellon team hopes its AI could design drugs to counter viruses that evolve resistance to certain treatments, or perform automated business negotiations. It could also power applications in cybersecurity, military robotic systems, or finance.


spectrum.ieee.org...

THIS IS HUGE NEWS!

The system is learning and it's not poker specific. As I mentioned earlier, this is one of the goals of A.I.: to make intelligent systems that can learn across all areas. It came up with its own strategy, and the only input was the rules of poker. The programmers didn't tell it what strategy to use or which strategy to learn in a game where it has to bluff because it has imperfect information. There goes your brute force. How can it simply be calculating information from its environment when it doesn't have all of the information? The algorithms aren't even poker specific.
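For reference, Libratus is built on counterfactual regret minimization. A full CFR implementation is long, but the core "regret matching" idea fits in a few lines. The sketch below is only a toy, rock-paper-scissors against a fixed, biased opponent (the opponent's mix and the iteration count are made up, and this is nowhere near Carnegie Mellon's code), but it shows the "rules in, strategy out" idea: the only game-specific input is the payoff function, and the strategy falls out of the regret bookkeeping.

import random

ACTIONS = ["rock", "paper", "scissors"]

def payoff(mine, theirs):
    # The only game-specific knowledge: +1 if mine beats theirs, -1 if it loses, 0 on a tie.
    beats = {("rock", "scissors"), ("scissors", "paper"), ("paper", "rock")}
    if mine == theirs:
        return 0
    return 1 if (mine, theirs) in beats else -1

def strategy_from_regrets(regrets):
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    if total == 0:
        return [1.0 / len(ACTIONS)] * len(ACTIONS)  # nothing learned yet: play uniformly
    return [p / total for p in positive]

def train(iterations=100000):
    regrets = [0.0, 0.0, 0.0]
    strategy_sum = [0.0, 0.0, 0.0]
    opponent_mix = [0.5, 0.3, 0.2]  # made-up opponent who plays rock too often
    for _ in range(iterations):
        strategy = strategy_from_regrets(regrets)
        strategy_sum = [s + p for s, p in zip(strategy_sum, strategy)]
        mine = random.choices(range(3), weights=strategy)[0]
        theirs = random.choices(range(3), weights=opponent_mix)[0]
        # regret = how much better each alternative action would have done this round
        for a in range(3):
            regrets[a] += payoff(ACTIONS[a], ACTIONS[theirs]) - payoff(ACTIONS[mine], ACTIONS[theirs])
    total = sum(strategy_sum)
    return dict(zip(ACTIONS, [s / total for s in strategy_sum]))

print(train())  # the averaged strategy drifts heavily toward paper, exploiting the bias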



posted on Feb, 19 2017 @ 06:59 PM
link   
a reply to: Aazadan

One of the guys I knew in college was a champion Magic player from the Midwest. He told me that all of the best winning strategies for Magic really revolve around a "lack" of balance... an extreme strategy that maximizes domination, either in a very short game or a long game.

For instance:
1. Have a hand that does almost nothing but cripple the opponent's ability to play. They have to throw away cards to their graveyard so they can't put up a fight or build a strategy, or they can't use their mana, or their attackers are always tapped, etc. Then you just need 2 or 3 creatures (or special spell cards) in the entire deck to attack with.
2. Have creature cards with either high attack or high block that have very low mana cost. These cards allow you to create a powerful attack or defense very early on. Remember, most top players have a lot of rare, expensive cards that can do this. The attackers often can't defend and the defenders might not attack, but it's just about playing the quick game to stop your opponent's strategy--the beginning of a game is often kill quick or be killed.

In essence, these strategies are just about statistical resource management. The game completely changes with each mana card in play, so he separated out the game in that fashion. Your opening game was when you had almost no mana in play; the mid game was about building mana while either putting up a quick attack or crippling the opponent; the long game (if even necessary) was big attacks that the opponent couldn't stop. And most games didn't take very long to play. To him, every mana card was a new level to his gameplay. He knew statistically which cards would likely be in his hand as each mana entered play. That's sometimes why he'd have so few attack cards. He only needed big attackers at the very end, just to do the final blows. He said he often ended the game in 2 attacks.
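The "statistics" being described here is mostly hypergeometric probability: what are the odds a given card has shown up after N draws? A quick sketch, assuming the usual 60-card constructed deck with 4 copies of the key card (those numbers are just the standard convention, not anything from his actual decks):

from math import comb

def prob_at_least_one(copies=4, deck=60, cards_seen=7):
    """Chance of drawing at least one of `copies` identical cards in `cards_seen` draws."""
    # 1 - P(none of them show up): a straight hypergeometric calculation
    none = comb(deck - copies, cards_seen) / comb(deck, cards_seen)
    return 1 - none

# Opening hand of 7, then one extra card per turn:
for turn in range(5):
    print(f"turn {turn}: {prob_at_least_one(cards_seen=7 + turn):.1%}")
# A 4-of in a 60-card deck shows up in the opening hand roughly 40% of the time.

This is the kind of table that tells you whether 2 copies of a card are enough or whether the plan needs all 4.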

These strategies were really interesting, such as continually pulling cards out of his graveyard to use over and over again. The strategies had been developed by players all over the world to win various tournaments and defeat different deck styles. He also told me that certain cards and decks he used were banned over the years because they were an unfair advantage... winning an entire tournament with little effort.

I believe many of his strategies came from forums where people posted why a handful of cards/moves resulted in a big win or a big loss. By the time he got to a tournament, he already had everything planned out... he'd have tested the deck against people in local games, tweaked as needed, recalculated the balance of his cards, then just let the cards do all of the work.

The guy had tens of thousands of dollars' worth of cards, nearly every card made (that had value). He also said his decks technically had huge weaknesses that the opponent would never have time to exploit. Sometimes in the final round of a tournament, he'd play a guy with a deck identical to his own, and who won came down to who drew a single card before the other... and both players knew it, because it was just statistics by that point.

Good luck with your AI. Magic gameplay drastically changes depending on strategy.



posted on Feb, 19 2017 @ 09:00 PM
link   
Here's more about the poker strategy from Libratus.

First, the AI’s algorithms computed a strategy before the tournament by running for 15 million processor-core hours on a new supercomputer called Bridges.

This is just a problem of computing power. These algorithms plus quantum computing will solve it, as seen with the example of the wave function of 100 particles and researchers using AlphaGo to solve problems in these areas.

Second, the AI would perform “end-game solving” during each hand to precisely calculate how much it could afford to risk in the third and fourth betting rounds (the “turn” and “river” rounds in poker parlance). Sandholm credits the end-game solver algorithms as contributing the most to the AI victory. The poker pros noticed Libratus taking longer to compute during these rounds and realized that the AI was especially dangerous in the final rounds, but their “bet big early” counter strategy was ineffective.

This is very important, because the system wasn't programmed to use this strategy or any other strategy. It wasn't even poker specific. It learned to use this strategy.
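The actual end-game solver is far more sophisticated (it re-solves the whole subgame against ranges of hands for both players), but the flavor of the calculation is an expected value averaged over the cards you can't see. Here's a toy sketch with made-up hand strengths and an invented calling rule, just to show that imperfect information doesn't stop the math:

def ev_of_bet(my_strength, pot, bet, opponent_range):
    """Expected value of betting `bet` into `pot`, averaged over the opponent's hidden hands.

    Toy model: opponent hands are strengths 0-9, all equally likely, and the opponent
    only calls with a strength of at least 3 + bet/50 (an invented calling rule).
    """
    call_threshold = 3 + bet // 50
    ev = 0.0
    for opp in opponent_range:
        if opp < call_threshold:      # opponent folds and we pick up the pot
            ev += pot
        elif my_strength > opp:       # opponent calls and we win the pot plus the call
            ev += pot + bet
        elif my_strength < opp:       # opponent calls and we lose our bet
            ev -= bet
        # equal strengths are treated as a wash for simplicity
    return ev / len(opponent_range)

opponent_range = list(range(10))      # we only know a distribution, never the actual hand
for bet in (50, 100, 200):
    print(bet, round(ev_of_bet(my_strength=7, pot=100, bet=bet,
                               opponent_range=opponent_range), 1))
# Different bet sizes have different expected values, so a solver can pick the best one.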

Third, Libratus ran background computations during each night of the tournament so that it could fix holes in its overall strategy. That meant Libratus was steadily improving its overall level of play and minimizing the ways that its human opponents could exploit its mistakes. It even prioritized fixes based on whether or not its human opponents had noticed and exploited those holes. By comparison, the human poker pros were able to consistently exploit strategic holes in the 2015 tournament against the predecessor AI called Claudico.

So the system learned by looking at the mistakes its human opponents made, but also at the mistakes it made itself, then going back and seeing whether its opponents had exploited them. It did all of this, and it wasn't programmed with a poker-specific strategy. It learned these things.

spectrum.ieee.org...



