
How algorithms secretly run the world


posted on Feb, 12 2017 @ 03:35 AM

originally posted by: Aazadan

There's no such thing as intelligent algorithms, there are just algorithms. When "intelligence" is applied, what's happening is that an algorithm takes an input, puts that input through some formulas that change it, and then repeats the process using that changed input.


Isn't that what the neocortex in our brains is doing as well? Evaluating the information that comes in through our senses and then using this information to refine our understanding of the external world?

In theory we can describe the biochemical operations performed within the neocortex and analyze how neurons fire depending on the particular input/stimulus. Their functioning is not unlike that of algorithms, and for all we know, many such operations running in parallel each millisecond can result in intelligent behaviour.
edit on 12-2-2017 by jeep3r because: spelling




posted on Feb, 12 2017 @ 08:09 AM

originally posted by: neoholographic
You said:

There's no such thing as intelligent algorithms


Just because some people are using it as a buzzword doesn't mean the literal interpretation of that exists. They're not intelligent.

Perhaps you would like to write one out and point out to me where the intelligence lies? Perhaps I should write one for you, and have you show me?

Look at how they actually function: they're no more intelligent than your phone is smart.
edit on 12-2-2017 by Aazadan because: (no reason given)



posted on Feb, 12 2017 @ 08:17 AM
a reply to: jeep3r

I don't know enough about how the brain functions to say. I'm going to lean towards it being pretty different, though, because the learning process isn't at all the same. Computers have no ability to reason; "training" is very close to just brute-forcing and remembering the outcomes of a billion little variations on the same starting point, then doing that again and again. Not just humans, but every animal we've ever observed has been able to actually reason about cause and effect without those types of trials.



posted on Feb, 12 2017 @ 09:18 AM
a reply to: neoholographic

"Intelligent algorithms" is a nickname used by Johns Hopkins to describe any algorithm that belongs to an area of research known as Intelligent Systems. They specify this if you go further into their course descriptions.



To introduce the student to the theory, design and analysis of Intelligent Systems (IS) from an engineering perspective; the primary technical emphasis of the course is on Fuzzy Systems, Genetic Algorithms, Particle Swarm and Ant Colony Optimization Techniques, and Neural Networks. ... IS methods like Genetic Algorithms, Particle Swarm and/or Ant Colony Optimization Techniques. The link between fuzzy systems and neural systems is also highlighted.


Fuzzy Logic - applying rules over a range of values, as opposed to the "YES/NO" or "ON/OFF" approach of Boolean logic.

Example:
From: en.wikipedia.org...


IF temperature IS very cold THEN stop fan
IF temperature IS cold THEN fan speed is slow
IF temperature IS warm THEN fan speed is moderate
IF temperature IS hot THEN fan speed is high


In the above case, there are two ranges: temperature is one, fan speed is the other. So a logic controller needs to map the correct operating fan speed to the correct temperature range. As a practical example, many laptop fans start at about 2,000 rpm. That moves a particular amount of air and cools the processor by a specific amount each second. The more air you move, the more you can cool the processor, but the faster the fan has to spin and the more noise the machine makes. Fuzzy logic may operate on a non-standard gradient, meaning the fan speed may not follow a temperature rise in a straight line. It may follow a sigmoid function (known as an S-curve), or some other odd function.
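To make the fan example concrete, here's a minimal sketch in Python (the temperature ranges, membership shapes, and fan speeds are invented for illustration, not taken from any real controller). Each temperature set gets a degree of membership between 0 and 1, and the output speed blends the overlapping rules:

```python
def tri(x, a, b, c):
    """Triangular membership: rises from a to b, falls from b to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def memberships(temp_c):
    # Degree of membership in each temperature set, 0..1 (ranges invented)
    return {
        "cold": tri(temp_c, -10, 10, 35),
        "warm": tri(temp_c, 25, 45, 65),
        "hot":  tri(temp_c, 55, 80, 120),
    }

# Each fuzzy set maps to a representative fan speed in rpm (also invented)
SPEEDS = {"cold": 0, "warm": 2000, "hot": 4500}

def fan_speed(temp_c):
    m = memberships(temp_c)
    total = sum(m.values())
    if total == 0:
        return 0
    # Defuzzify: weighted average of the rule outputs
    return sum(m[s] * SPEEDS[s] for s in m) / total
```

Near 30°C the "cold" and "warm" sets overlap, so the controller returns an intermediate speed instead of snapping between ON and OFF.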

Genetic algorithms were described by Aazadan in his response to my TensorFlow post. Here he states the approach known as "Mutation", one aspect of genetic algorithms:


Another common tactic, with genetics specifically (what I talked about previously) is to introduce the concept of mutation, where you randomly decide to pick random bits in your bitstring, and flip them from 0's to 1's or 1's to 0's.


These algorithms are inspired by processes seen in nature, specifically natural selection, where a computer will continually update itself (as I described in the TensorFlow post) to become more optimal, meaning it has higher fitness.
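The mutation step Aazadan describes can be sketched in a few lines of Python (the genome and mutation rate here are made up for the demo):

```python
import random

def mutate(bits, rate=0.01, rng=random):
    # Each bit flips (0 -> 1 or 1 -> 0) with probability `rate`
    return [b ^ 1 if rng.random() < rate else b for b in bits]

genome = [0, 1, 1, 0, 1, 0, 0, 1]
child = mutate(genome, rate=0.5)  # high rate so a flip is likely
```

In a full genetic algorithm this would sit inside a loop that scores each genome's fitness, keeps the best, and mutates them again.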

A particle swarm is a method used by computer scientists to address the problem of "local vs. global minima", which Aazadan and I were writing about. A global minimum would be the MOST OPTIMAL solution, but it may be harder to find.

Watch this animation, where a bunch of particles spread out over a surface and get "stuck" in areas where they settle into the various minimas across the surface:


So we can take the results of a particle swarm and feed those minima into our AI algorithm so that it converges more quickly, testing each minimum to find the BEST, or GLOBAL, minimum (hopefully). For reference, it may NOT be the minimum that attracted the most particles. That minimum just has the largest opening; it isn't necessarily the most optimal. This is a pitfall of AI (and of standard optimization mathematics). How do we best find the most optimal route? It is an incredibly hard problem, computationally speaking.
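A toy sketch of that idea in Python (the test function, swarm size, and pull strengths are all invented for illustration): particles keep some inertia, get pulled toward their own best spot and toward the swarm's best spot, and the swarm's best gradually settles into a minimum, hopefully the global one.

```python
import random

def f(x):
    # Test surface with a local minimum near x = 1.1
    # and the global minimum near x = -1.3
    return x**4 - 3*x**2 + x

def pso(iters=200, n=30, seed=1):
    rng = random.Random(seed)
    xs = [rng.uniform(-3, 3) for _ in range(n)]  # positions
    vs = [0.0] * n                               # velocities
    best = list(xs)                              # per-particle best
    gbest = min(xs, key=f)                       # swarm-wide best
    for _ in range(iters):
        for i in range(n):
            # inertia + pull toward personal best + pull toward swarm best
            vs[i] = (0.7 * vs[i]
                     + 1.5 * rng.random() * (best[i] - xs[i])
                     + 1.5 * rng.random() * (gbest - xs[i]))
            xs[i] += vs[i]
            if f(xs[i]) < f(best[i]):
                best[i] = xs[i]
            if f(xs[i]) < f(gbest):
                gbest = xs[i]
    return gbest
```

Because the particles start spread across the whole range, some land in the global basin early and drag the rest of the swarm toward it; with only a few particles, the swarm can get stuck in the local dip instead.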

Moving on to Neural Networks, they are described by my TensorFlow post. Just go read that.

And finally, Ant Colony Optimization techniques are pretty nifty. They use statistics and "swarm intelligence" to more quickly optimize a very hard class of problems known as NP-complete problems (they can be used for other variations of problems, too). NP-complete problems take a very long time to solve (Traveling Salesman, Knapsack, Scheduling, etc.), and Ant Colony Optimization gives a solid method for tackling them more efficiently. However, these problems are still NP-complete, which means they still take an incredibly long time to solve as they grow, eventually taking longer than the life of the universe. For certain problems, you just need "good enough", not "perfect": a GREAT answer that isn't necessarily the BEST answer, but is often good enough because it makes all of your future operations take less time. This is called heuristics: applying rules and approximations to solve problems that might otherwise be unsolvable.
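"Good enough, not perfect" can be shown on a tiny Traveling Salesman instance (city coordinates invented for the demo): a nearest-neighbour heuristic gives a decent tour instantly, while brute force finds the optimum but is only feasible because there are just five cities.

```python
import math
import itertools

cities = [(0, 0), (1, 5), (5, 2), (6, 6), (8, 0)]  # made-up coordinates

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(order):
    # Total length of the closed tour visiting cities in this order
    return sum(dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def nearest_neighbour(start=0):
    # Heuristic: always hop to the closest unvisited city
    left = set(range(len(cities))) - {start}
    tour = [start]
    while left:
        nxt = min(left, key=lambda j: dist(cities[tour[-1]], cities[j]))
        tour.append(nxt)
        left.remove(nxt)
    return tour

def brute_force():
    # Exact answer: only feasible here because 5 cities means 4! orderings
    return min((list((0,) + p) for p in itertools.permutations(range(1, 5))),
               key=tour_length)
```

At 5 cities brute force checks 24 orderings; at 20 cities it would be over 10^16, which is exactly where heuristics like Ant Colony Optimization earn their keep.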

And now you understand the basics behind the cutting edge research into AI.



posted on Feb, 12 2017 @ 01:14 PM

originally posted by: Protector
So we can take the results of a particle swarm and feed those minima into our AI algorithm so that it converges more quickly, testing each minimum to find the BEST, or GLOBAL, minimum (hopefully). For reference, it may NOT be the minimum that attracted the most particles. That minimum just has the largest opening; it isn't necessarily the most optimal. This is a pitfall of AI (and of standard optimization mathematics). How do we best find the most optimal route? It is an incredibly hard problem, computationally speaking.


To go more into this, something like the traveling salesman problem has huge implications. If you could find a polynomial-time solution to it, you would become a billionaire almost overnight. You would become a multimillionaire nearly instantaneously if you could even speed up the current best approach by a small percentage (say 10%). It's a problem that comes up any time routing is involved: finding an efficient package delivery route, routing air traffic in a way that minimizes costs, sending data packets over networks.

It is a very, very difficult problem, as are many other NP-complete problems: roughly, problems where finding a solution is hard but verifying a given solution is easy.
edit on 12-2-2017 by Aazadan because: (no reason given)



posted on Feb, 12 2017 @ 05:39 PM
a reply to: Aazadan

You said:

Just because some people are using it as a buzzword doesn't mean the literal interpretation of that exists. They're not intelligent.

It's not used as a buzzword; they're called intelligent algorithms for a reason. You just don't understand research into artificial intelligence. Anyone who doesn't know the relationship between intelligent algorithms and big data as it relates to A.I. needs to read a book and learn something before discussing these issues.

Yes, it's intelligence: the whole idea is to create algorithms that mimic intelligence and can therefore learn from a subset of data without the input of outside human intelligence.

Intelligence

a (1) : the ability to learn or understand or to deal with new or trying situations : reason; also : the skilled use of reason (2) : the ability to apply knowledge to manipulate one's environment or to think abstractly as measured by objective criteria (as tests)


Why do you think scientists are having these systems play games like Go and Poker? It's because these systems have to behave in an intelligent way and learn with each new game or hand. This is intelligence.

I think you're confusing consciousness and self awareness with intelligence, and like I said, you need to do some research. This is why some people fear what's called dumb A.I.: you can have an intelligent system that's not conscious or self aware.

This is why DeepMind's system learned to play 49 different Atari games using the same algorithms: they wanted the system to learn how to play without any human intelligence teaching it.


Google's artificial intelligence division has created a computer that can learn how to play video games and eventually beat humans at them.

Researchers showed the computer 49 games on the Atari 2600, a simple game console that was popular in the 1980s. They gave the computer no instructions on how to play the game, but instead forced it to watch and learn on its own. They set up a system that "rewarded" the computer for playing well, so it knew when it was improving.


www.businessinsider.com...





Again, the problem you're having is you can't separate intelligence from consciousness. These systems are mimicking intelligence.

Intelligence is easier to quantify and mimic even though we don't have a full understanding of intelligence. We have almost no understanding of consciousness and self awareness.

Intelligence

a (1) : the ability to learn or understand or to deal with new or trying situations : reason; also : the skilled use of reason (2) : the ability to apply knowledge to manipulate one's environment or to think abstractly as measured by objective criteria (as tests)


Now Consciousness:

a : the quality or state of being aware especially of something within oneself
b : the state or fact of being conscious of an external object, state, or fact


If you understood the difference between these things and read some of the research in these areas, it would be easy to grasp.

A system can be intelligent without having a "me" (conscious) experience. So a system can be given a domain of information, like the Atari games or Poker, and learn how to play by mimicking intelligence through these algorithms. So an intelligent system can learn how to play an Atari game without having the human "me" experience of playing that Atari game.



posted on Feb, 12 2017 @ 09:01 PM

originally posted by: neoholographic
It's not use as a buzzword, they're called intelligent algorithms for a reason. You just don't understand research into artificial intelligence. Anyone who doesn't know the relationship between intelligent algorithms and big data as it relates to A.I. needs to read a book and learn something before discussing these issues.


Which books would you suggest? I listed my experience with AI a couple posts ago. I even outlined some AI algorithms for you. If you've read these books, I promise you don't have to dumb it down for me, I know how to code, I have a CS degree, and I have experience with AI. By all means, explain it.



Why do you think Scientist are having these systems play games like Go and Poker? It's because these systems have to behave in an intelligent way and learn with each new game or hand. This is intelligence.


Why? Mostly because it's cool. It opens up some interesting problem-solving approaches too. For example, with the AI that I built and use/modify as a hobby, which plays MtG, I can play thousands of games in a couple of minutes. That game is extremely high variance. As a result, I can swap the cards I'm playing and quantitatively evaluate even minor changes with proper sample sizes in an evening, when more traditional testing could take a group of people months to get an even less definitive result.



This is why DeepMind's system learned to play 49 different games of Atari using the same algorithms. This is because they wanted the system to learn how to play without any human intelligence teaching it how to play.


No. It was able to play several games because video games, especially retro games, have a simple pattern to follow. I even mentioned doing the exact same thing in a previous post. It was a homework assignment. It's a very simple algorithm.



Again, the problem you're having is you can't separate intelligence from consciousness. These systems are mimicking intelligence.


Mimic is a good word. It's a trick: they're made to look intelligent, but they aren't. You can do the exact same number crunching these algorithms and computers do. You can even do it with a slide rule if you wish; that doesn't make the slide rule intelligent. It's just a formula you can follow to solve general problems. It doesn't learn, it just modifies its input as it goes. No different than graphing a sin function. Just because the sin function is able to change itself doesn't make it intelligent.



If you understood the difference between these things and read some of the research in these areas, it would be easy to grasp.


I think this is the problem: you, and another poster before you this week (and usually every few weeks), pop up and rant about AI. They either link some futurist article about how the singularity is coming, or some luddite article about how we have to destroy the machines before armageddon. They approach AI from a philosophical perspective. I'm telling you right now, as someone who actually builds this stuff: it is not intelligent, and no (current) approach can ever become intelligent. The current AI systems on the market were developed 50 years ago and are extremely computationally inefficient. They're seeing use now because processors are fast, and we have nothing better to do with the CPU cycles. Current AI can solve some interesting problems that we couldn't solve before, but that's not because the machine is intelligent. It's because the problems take a lot of math operations to solve, and CPUs are fast enough to do it.

This could all change if/when some new AI breakthroughs happen. But I'll wait to comment on the technical side of that until it happens and I know how they do it. Until then, we'll see incremental improvements to specific problems like the Poker playing AI as researchers play mix and match with existing techniques. But such things will never be intelligent.



posted on Feb, 12 2017 @ 11:14 PM
a reply to: Aazadan

It's obvious you don't know what you're talking about. You said:

No. It was able to play several games because video games, especially retro games, have a simple pattern to follow. I even mentioned doing the exact same thing in a previous post. It was a homework assignment. It's a very simple algorithm.

No, it's not a very simple algorithm. These are very complex algorithms that can not only play Atari games but beat Champions at Go and at Poker.

Again, the same algorithm learned how to play different games without instructions on how to play the game. This isn't the equivalent of a simple homework assignment and anyone who knew anything about these networks wouldn't say something so asinine.

Google paid over $500 million for DeepMind's technology in A.I. If this was just a simple algorithm that anyone can write, why not hire a second-year student from Stanford?

Google forked out over $500 million for a little-known London startup called DeepMind in 2014 without specifying how the company's artificial-intelligence technology would be used to increase Google's revenues, which already run into tens of billions of dollars every year.

www.businessinsider.com...

Again, you don't know what you're talking about.

They created one set of algorithms that learned how to play 49 different games without instructions on how to play the games. Anyone that doesn't understand the importance of this doesn't understand A.I.

This is the holy grail of artificial intelligence. A super intelligence or a set of algorithms that can do all of these things. Right now you have algorithms that can detect skin cancer, play games or do predictive policing but super intelligence will be a set of algorithms that can do all of these things and replicate itself while making more intelligent versions of itself.

Again, A.I. is intelligent right now. Do I have to post the definition of intelligence again?

Intelligence

a (1) : the ability to learn or understand or to deal with new or trying situations : reason; also : the skilled use of reason (2) : the ability to apply knowledge to manipulate one's environment or to think abstractly as measured by objective criteria (as tests)


You keep mixing intelligence with consciousness. Nobody is saying A.I. is having a self aware experience but it does mimic intelligence and it's not some cheap trick.

If it was some cheap trick, you would be selling a $500 million company to Google instead of typing on a message board about a subject you obviously don't know much about.

Ford just invested in artificial intelligence.

Ford to Invest $1 Billion in Artificial Intelligence Start-Up

www.nytimes.com...

If these things are so simple why aren't you out there getting a billion dollar investment from Ford?


The fact is, Big Data and Artificial Intelligence are growing because both need each other to advance.

So this intelligence is like human intelligence without consciousness or self awareness. In fact, we don't fully understand our own intelligence.

So you will have tests where these agents are given a task to complete, and in order to complete that task efficiently they may have to kill 1,000 other agents. Will some of these systems take the less efficient route in order to minimize killing other agents?

Like I said, you're equating human consciousness with intelligence. Human intelligence involves a lot of abstract reasoning. These systems are getting better at this, as seen with the recent Poker match: the systems had incomplete information and had to learn how to bluff like humans do.
edit on 12-2-2017 by neoholographic because: (no reason given)



posted on Feb, 13 2017 @ 02:52 AM
a reply to: Aazadan

I wasn't going to say anything, but I just have to give you credit where credit is due. It's hard to argue against cognitive dissonance and you're doing an admirable job. I appreciate you sharing your actual experience on the subject, which enlightens those who are in the background taking mental notes. I'm tired of the AI fanboys ranting and raving about how a pseudo-Skynet is going to end up wiping us off the earth or enslaving us - pure and utter nonsense imho.

It's nice to hear some real information from people who are actively engaged in the field, who understand its paradigms. So thank you for your contributions, they are not going unnoticed!



posted on Feb, 13 2017 @ 03:30 AM
a reply to: Aedaeum

What contributions exactly?

This post is pretty general without any specifics.

Tell me, how will we stop the spread of intelligence? Do you even know what intelligence is?

It's simple: you can't control artificial intelligence because of Big Data. You have to give intelligent systems room to learn what humans don't and can't know because of the amount of data. Therefore machines' intelligence has to grow as Big Data grows.

There's nothing that has been said that refutes anything I have said in this thread.

You can't stop it because you have to give it room to learn what you don't know. These intelligent systems will learn in two days what it would take humans 20-30 years to learn. How can you control that?

Yes, artificial intelligence will eventually take over unless a catastrophe happens. We can't control intelligence that will need to have a higher I.Q. than any human that has ever lived as it processes more and more data.

So again I ask:

HOW DO YOU CONTROL INTELLIGENCE THAT'S FREE TO LEARN FROM DATA THAT HUMANS CAN'T UNDERSTAND?
edit on 13-2-2017 by neoholographic because: (no reason given)



posted on Feb, 13 2017 @ 06:51 AM
a reply to: neoholographic

Interesting topic !

Yes... algorithms do run the world. Why do I agree? Because we humans let computers crunch the numbers, or the data, for us, and then we take those answers and make decisions with them. So there is the AI we are currently utilizing: part algorithmic calculation done by supercomputers, and part decision-making done by our brains. We are already working together.

Someone once told me that we can only hold about 7 things in our short term memory at any given point in time. And wiki verifies this:

The most commonly cited capacity is The Magical Number Seven, Plus or Minus Two (which is frequently referred to as Miller's Law), despite the fact that Miller himself stated that the figure was intended as "little more than a joke" (Miller, 1989, page 401) and that Cowan (2001) provided evidence that a more realistic figure is 4±1 units. In contrast, long-term memory can hold an indefinite amount of information.

Short-term memory

But what about our long-term memory? It claims we can hold an indefinite amount of information in long-term storage. So why aren't our brains already doing this? Instead we use an artificial source to do it. The reason is that one person does not get exposed to all of the data in their lifetime, so they cannot catalog all of it into long-term memory. How does information move from short-term to long-term? It's a process. Do you know or understand why data gets moved over to long-term memory? Or does it simply fall away from you: what was briefly short-term, now no longer needed, just goes bye-bye. I have pointed out just two simple reasons why one human's brain, even though it is capable of retaining massive amounts of information, never reaches the same point that a supercomputer does.
1) Time / Exposure
2) The process of moving data from short to long term memory, how to determine which data moves over for long term storage

Supercomputers can do this because they get loaded with all of the data (we can connect a lot of computers and data together) and they do not have the same limits on retaining it. They then begin crunching that data and returning answers that we go and use to make decisions.

Think about all the computers doing little algorithms for us on a day to day basis that we use to function in our daily lives. Now combine them all together.

Have we not already in a sense let them take over ? Yea sorta we have...

Since I don't have enough time to truly crunch all the world's data into my brain and spit out answers, how do I really know if the correct answer is being given to me? I suppose I just have to rely on the algorithm and trust that it is providing the correct answer so that I can make the best decision.

leolady



posted on Feb, 13 2017 @ 11:38 AM
Had a longer post but it unfortunately got wiped out by a page refresh...


originally posted by: neoholographic
No, it's not a very simple algorithm. These are very complex algorithms that can not only play Atari games but beat Champions at Go and at Poker.

Again, the same algorithm learned how to play different games without instructions on how to play the game. This isn't the equivalent of a simple homework assignment and anyone who knew anything about these networks wouldn't say something so asinine.


Ok, I think you're misunderstanding here. These aren't general AIs. While game-playing AIs all have similarities, it wasn't the same AI that beat all of these games. It's different algorithms that the programmers point at specific games. I'll go through them one by one.

First, for the Atari games, they're using genetic algorithms. As I said before, it's not actually all that impressive to make a game solver like this for early console games. Most of those games are pretty formulaic... move in a direction, stay alive, gain score. The part that's likely impressing you here is that the solver is instructed to look in memory for values that only increment, until it has narrowed them down to one value you can call score. Once you've identified the score variable, you can start randomly taking actions until you find patterns that maximize it. While it's not exceptionally difficult to do that, it's computationally time consuming, so unless you want bragging rights, or are getting paid by the hour, it's generally better to just identify the score memory location for the computer and save yourself a few hours. These game solvers have to learn how to play every game from scratch; they also have to learn to play every level from scratch. If a level is randomly generated for each trial, this approach won't work. The most impressive of these games is Tetris in my opinion, but that's only because I like the way these solvers tend to stack bricks high as a score optimization.
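The search loop described above, find the score, then mutate inputs and keep the improvements, can be sketched like this (the "game" is a made-up stand-in function, not a real emulator; only the search pattern mirrors the description):

```python
import random

ACTIONS = "LRAB"  # left, right, jump, fire

def play(seq):
    # Hypothetical game: rewards alternating presses, bonus for jump-then-fire
    score = 0
    for a, b in zip(seq, seq[1:]):
        if a != b:
            score += 1
        if a == "A" and b == "B":
            score += 2
    return score

def solve(length=20, trials=2000, seed=0):
    # Hill-climb: mutate one button press at a time, keep non-worse sequences
    rng = random.Random(seed)
    best = [rng.choice(ACTIONS) for _ in range(length)]
    for _ in range(trials):
        cand = list(best)
        cand[rng.randrange(length)] = rng.choice(ACTIONS)
        if play(cand) >= play(best):
            best = cand
    return best
```

Note the solver never "understands" the game: it only climbs the score signal, which is exactly why a randomly generated level, where the old input string no longer scores, forces it to start over.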

We can even tie this into some stuff another poster talked about earlier. If you are playing a game which has a timer that increments rather than decrements, the computer is going to have a very difficult time differentiating score from time. You'll run into a local maxima/minima problem here (a maximum in this case), because simply standing still and doing nothing creates an infinitely incrementing value. Unless it randomly generates a series of inputs that results in a situation where score exceeds the timer, it's not going to maximize score unless a programmer points the score out to it specifically. This doesn't come up very often, but it can come up.

Next is Go. This was accomplished using what's called a Hidden Markov Model. Basically, HMMs are giant trees with a probability for each branch direction that you follow. When this technique was used for Go, it was able to solve the game, but only for a board of size 5x5. HMMs get exponentially more complex as possible moves increase, and Go itself gets exponentially more complex as the board size (possible moves) increases. So given those constraints and the hardware the researchers had access to, they couldn't solve above a 5x5 board in an acceptable time limit. A faster computer could, though. But to give an idea of how complex the game gets: there's not enough memory or computing power on earth currently to solve a 15x15 board with this technique. It wouldn't surprise me to learn that someone has tried to simplify the game using quad/oct trees and computing collisions as a shortcut (it's the first thing that comes to mind for me). But if I've thought of it, people smarter than me have too, and they've obviously not had success with that approach.

Last is Poker. I read about it a little, but it involves imperfect information, which I'm not super familiar with. The techniques used fall under the family of reinforcement learning, which is the same branch the Atari games fall under. For the Poker AI they used a concept known as counterfactual regret. Regret in AI terms refers to the uncertainty of information and is used as something of a scorekeeper for each decision, in order to maximize results. These techniques have been around for about two decades now; the big change in the poker AI was that if something had a regret of 0 due to having never been tried, it would try it. This led to more experiences and more efficient regret scoring.
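A sketch of the regret idea on a game small enough to follow, rock-paper-scissors against a fixed, biased opponent (all numbers invented; real poker bots run counterfactual regret minimization over huge game trees, not this toy): each round, every action's regret is bumped by what it would have earned versus what was actually earned, and future actions are drawn in proportion to positive regret.

```python
import random

ACTIONS = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def payoff(a, b):
    if a == b:
        return 0
    return 1 if BEATS[a] == b else -1

def regret_matching(opponent_probs, rounds=20000, seed=0):
    rng = random.Random(seed)
    regret = {a: 0.0 for a in ACTIONS}
    counts = {a: 0 for a in ACTIONS}
    for _ in range(rounds):
        # Mix actions in proportion to accumulated positive regret
        pos = {a: max(r, 0.0) for a, r in regret.items()}
        total = sum(pos.values())
        probs = ([pos[a] / total for a in ACTIONS] if total > 0
                 else [1 / 3] * 3)
        me = rng.choices(ACTIONS, probs)[0]
        opp = rng.choices(ACTIONS, opponent_probs)[0]
        counts[me] += 1
        got = payoff(me, opp)
        for a in ACTIONS:
            # regret = what action a would have earned minus what I earned
            regret[a] += payoff(a, opp) - got
    return counts
```

Against an opponent who mostly throws rock, the accumulated regret concentrates on paper, so the play counts end up dominated by it.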



They created one set of algorithms that learned how to play 49 different games without instructions on how to play the games. Anyone that doesn't understand the importance of this doesn't understand A.I.


Again, it's just a matter of finding the score, and then using genetics to develop higher- and higher-scoring series of inputs. I think the more interesting part isn't that it was able to play 49 different games. It's that it was able to play 49 different games without making a single decision. Actual intelligence would involve being able to make connections between the games and apply lessons learned from one to playing another. That's not what they did. It starts from scratch, every game, every level. In fact, if you were to even alter the starting point on most of them, they'll start over again.
edit on 13-2-2017 by Aazadan because: (no reason given)



posted on Feb, 13 2017 @ 12:48 PM
a reply to: Aazadan

It's a good thing your longer post didn't survive the refresh, because this one is already long-winded and refutes nothing as it pertains to this thread and the connection between artificial intelligence and big data.

Like I said, it's not that simple and yes it's intelligence. Like I said you haven't refuted anything I've posted. Here's more about A.I. and Big Data.

Why Big Data and AI Need Each Other -- and You Need Them Both


If you want to stay competitive as data growth continues to skyrocket, you’re going to have to do much more to get the maximum value from the customer data you’re collecting.

And to do it, you’re going to need artificial intelligence - AI.

There’s so much data being created — 44 zettabytes by 2020, according to IDC. The teams of data analysts that companies rely on today to uncover meaning simply can’t keep pace with the growth. In a prescient report issued several years ago, McKinsey Global Institute predicted a shortage of just this kind of talent by 2018.

What’s especially exciting about AI, compared to robots and other forms of advanced technology, is that it is by definition a kind of intelligence — and therefore not just a set of systems that react the way humans have programmed them to. AI can, as Wikipedia describes, actually perceive its environment and take actions.

Because AI is intelligence incarnate, it’s capable of what researchers call “deep learning.” Instead of telling machines what to do, we let them figure it out for themselves based on the data we give them. And ultimately, they tell us what to do.

That’s what’s got Stephen Hawking and Elon Musk worried — the idea that AI could lead to thinking machines that will eventually surpass humans, take over the world, and threaten our very existence as know it.


www.forbes.com...

The fact that you haven't mentioned big data in any of your posts shows you don't understand research in this area. These systems are intelligent; they have to be, if you understand big data. These systems have to do exactly what intelligence does: learn based on data.

We can't program it in this area because we don't know what it will learn because we don't understand all of the data. Without these intelligent systems big data wouldn't make sense.

Here's a paper titled:

ARTIFICIAL INTELLIGENCE AND BIG DATA


Artificial intelligence (AI) concerns the study and development of intelligent machines and software. The related ICT research is highly technical and specialized, and its central problems include the development of software that can reason, gather knowledge, plan intelligently, learn, communicate, sense and manipulate objects. It also allows users of big data to automate and enhance complex descriptive and predictive analytical tasks that, when performed by humans, would be extremely labor-intensive and time consuming. Thus, unleashing AI on big data can have a significant impact on the role data plays in deciding how we work, how we travel and how we conduct business.

Delivering associative business intelligence that empowers business users by driving innovative decision-making - QlikView works the way the mind works. QlikView is a leading business discovery platform that enables users to explore big data and uncover insights that enable them to solve business problems in new ways. With QlikView, users can interact with data associatively, which allows them to gain unexpected business insights and make discoveries like with no other platform on the market.


www.ijrdo.org...&-Development-Organisation-pdf/International-Journal-Of-Computer-Science-Engineering/Journal-Of-Computer-Science-Engg-April-15/April-Cse-4.pdf

Again I ask:

HOW DO YOU CONTROL INTELLIGENCE THAT'S FREE TO LEARN FROM DATA THAT HUMANS CAN'T UNDERSTAND?

This is what you can't grasp because you haven't done the research. These systems have to be intelligent or they couldn't look through this big data and give us any insights that are shaping industries.

Again, humans don't understand this data. We have created an intelligence that can understand this data and has to have the freedom to learn things that humans can't understand.
edit on 13-2-2017 by neoholographic because: (no reason given)



posted on Feb, 13 2017 @ 07:42 PM
link   

originally posted by: neoholographic
It's a good thing your post didn't refresh because this is a long winded post that refutes nothing as it pertains to this post and the connection of artificial intelligence and big data.


That's because there isn't a significant connection. Yes AI can use big data but it's not some super special relationship. AI takes inputs and processes them, big data is all about generating high volumes of data, storing it, and retrieving it in an efficient manner.

Most of what is done with big data doesn't require AI. It's simply comparing different attributes and looking for correlations.



The fact that you haven't mentioned big data in any of your posts shows you don't understand research in this area. These systems are intelligent. They have to be if you understand big data. These systems have to do exactly what intelligence does: they have to learn from data.


Again, just because we call something intelligent doesn't mean it is.

Additionally, AI doesn't work the way you think it does. The reason I went into a few more details about how various AI's solve problems in previous posts is that I'm trying to point out that while AI is a problem solving technique, computers aren't smart enough to know what technique to use in what circumstance. For that matter, they don't even know what the problem is. It takes people managing the software (usually programmers) to quantify a problem in a way that works with an algorithm, then it also requires those same people to tell the computer what algorithm to use.

Computers cannot take a random blob of data and assign meaning to it. Without additional information a computer cannot make sense of anything. For example, under an ASCII standard 65 and A are the same thing: both are represented by the byte 01000001. Many programming languages such as C and C++ rely pretty heavily on this. For example, I can convert A to lower case by adding the value 32 to it, which is basically just flipping a single bit to get 01100001. Another example of how this is used: I can represent blocks of data with numbers. If you go back a couple posts, I mentioned bit strings for genetics. Rather than generate random sequences of bits like 10001110, I can instead generate random numbers, 142 in this example, and then treat that number itself as a bitstring. All data is like this, but any given algorithm is only going to be able to parse it and do something if it's given in a certain format.
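Those byte-level claims can be checked directly; a quick sketch:

```python
# 'A' and 65 share the same underlying byte pattern.
assert ord('A') == 65
assert bin(65) == '0b1000001'

# Adding 32 sets a single bit (0b0100000) and gives the lowercase letter.
lower = chr(ord('A') + 32)
assert lower == 'a'
assert ord('a') == 0b01100001  # 97

# A number can likewise be treated as a bit string, as with the
# genetic-algorithm example: 142 is the bitstring 10001110.
assert format(142, '08b') == '10001110'

print(lower, format(142, '08b'))
```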

To give a less technical example of this, PNG, BMP, and JPG are all image formats. They all contain an array of pixels which represent an image. But you're not going to get the right output unless you include with your file some additional information about the algorithm needed to display it (the file extension). You can open a BMP with anything, unless the software vendor has blocked it to prevent user error. If the computer opens it, it's going to read it (and in programming languages, opening and reading files is trivial... there are no user protections). If you don't believe me, go try it with Notepad. To the computer, opening that BMP in Notepad and in Paint are the same thing: it's reading and displaying the same data. The algorithm is different though, and in the absence of additional information the computer has no way of knowing which is correct.

AI is like this. It takes an outside factor to identify the problem and the algorithm. From there, the computer can quickly make calculations, but at no time is it actually in control of the process or making decisions.



HOW DO YOU CONTROL INTELLIGENCE THAT'S FREE TO LEARN FROM DATA THAT HUMANS CAN'T UNDERSTAND?


What makes you think humans can't understand it? Humans have to understand it, otherwise they wouldn't be able to structure the problem in such a way that an AI can be directed to solve it.



posted on Feb, 13 2017 @ 08:12 PM
link   
a reply to: neoholographic

Can you define intelligent algorithm for me?

Thanks.



posted on Feb, 13 2017 @ 09:05 PM
link   
a reply to: Aazadan

You said:

Most of what is done with big data doesn't require AI. It's simply comparing different attributes and looking for correlations.

You don't understand what you're talking about. Yes, big data requires A.I. because it has grown so much. Humans can't make sense of all the data, so they need these systems to make correlations in the data that humans couldn't make. It would take humans 30-40 years, and in some cases a lifetime, to make sense of these massive amounts of data.

You also said this and it shows me you're just making it up as you go and you don't understand these things.

What makes you think humans can't understand it? Humans have to understand it, otherwise they wouldn't be able to structure the problem in such a way that an AI can be directed to solve it.

Humans can't understand it, and that's why we need A.I. to make sense of the data. They're not directing A.I. to solve anything, and this again shows your lack of understanding.

In the Atari games, they didn't direct the algorithm to solve anything. The system had to learn how to play. It had no instructions and it didn't even know what a ball was.



This is the whole point of deep learning. It's to learn from the data and give humans insights that they can't understand without these systems because there's too much data.

This is why scientists want to use these systems to explore data that they can't understand.


I work in computational quantum condensed-matter physics: the study of matter, materials, and artificial quantum systems. Complex problems are our thing.

Researchers in our field are working on hyper-powerful batteries, perfectly efficient power transmission, and ultra-strong materials—all important stuff to making the future a better place. To create these concepts, condensed-matter physics deals with the most complex concept in nature: the quantum wavefunction of a many-particle system. Think of the most complex thing you know, and this blows it out of the water: A computer that models the electron wavefunction of a nanometer-size chunk of dust would require a hard drive containing more magnetic bits than there are atoms in the universe.

One small breakthrough in condensed-matter physics could change everything. Complexity, and the challenge of tackling complex problems with existing technology, is what keeps me up at night. The most complex problem is understanding the wavefunction of a many-particle quantum system with sufficient accuracy to design new quantum materials and devices. When DeepMind beat Sedol, I began to wonder: Could machine learning help us solve the most complex problem in physics? The most complex problem in physics could be solved by machines with brains.


qz.com...

These things can't be solved by humans because there's too much data. The fact that you don't understand something so simple shows you haven't looked at this issue.

These intelligent systems aren't designed to solve specific problems; they're just designed to learn. If we knew the outcome of what they will learn, then we wouldn't need these systems, but we do because of the growth of big data.

What is big data?

Every day, we create 2.5 quintillion bytes of data — so much that 90% of the data in the world today has been created in the last two years alone. This data comes from everywhere: sensors used to gather climate information, posts to social media sites, digital pictures and videos, purchase transaction records, and cell phone GPS signals to name a few. This data is big data.


www-01.ibm.com...

This is from years ago. These intelligent systems can learn in a few days what it would take humans 20-30 years to learn.

In the game of Go, the system did something called reinforcement learning. It could play Go a million times in a day and learn from these games. A single human couldn't play a million games in a lifetime.

These systems learn and then give us insights about the data that we can't understand.
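For a sense of what self-play learning looks like at toy scale, here is a sketch: tabular value learning on the tiny game of Nim (take 1 or 2 sticks; whoever takes the last stick wins), where the program improves purely by playing itself tens of thousands of times. This is only an illustration of the self-play idea, not AlphaGo's actual method, which combined deep neural networks with tree search.

```python
import random

Q = {}                  # (sticks_left, action) -> estimated value for the mover
ALPHA, EPSILON = 0.2, 0.1

def choose(sticks, explore=True):
    """Pick 1 or 2 sticks, mostly greedily, sometimes exploring."""
    actions = [a for a in (1, 2) if a <= sticks]
    if explore and random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q.get((sticks, a), 0.0))

def play_one_game():
    sticks, moves = 10, []
    while sticks > 0:
        action = choose(sticks)
        moves.append((sticks, action))
        sticks -= action
    # The player who made the last move won. Walk back through the game,
    # crediting +1 toward the winner's moves and -1 toward the loser's.
    reward = 1.0
    for state_action in reversed(moves):
        old = Q.get(state_action, 0.0)
        Q[state_action] = old + ALPHA * (reward - old)
        reward = -reward

random.seed(1)
for _ in range(50000):  # the system improves by playing itself repeatedly
    play_one_game()

# From 10 sticks the winning move is to take 1, leaving a multiple of 3.
print(choose(10, explore=False))
```

No one tells the program the winning strategy; it emerges from the outcomes of its own games, which is the kernel of the self-play idea described above.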



posted on Feb, 13 2017 @ 10:19 PM
link   

originally posted by: soficrow
a reply to: neoholographic

Can you define intelligent algorithm for me?

Thanks.




An intelligent algorithm makes connections and learns insights from the data without being given a specific task. We can't give it a specific task because we don't even know what we're looking for. For instance, intelligent algorithms can give employers insights on the best candidates to hire by looking at a million data points and making connections that humans couldn't make.

These algorithms are the engines of artificial intelligence. Here's the definition of intelligence:

Intelligence

a (1) : the ability to learn or understand or to deal with new or trying situations : reason; also : the skilled use of reason (2) : the ability to apply knowledge to manipulate one's environment or to think abstractly as measured by objective criteria (as tests)


This is exactly what intelligent systems do. They learn based on the information from their environment. That environment can be a game of Atari or a patient's medical records.

This is opposed to an algorithm that's designed to carry out a specific task. For instance, if you own a store and you want to know how an item sold over the last 10 years and which months it sold best, you can design an algorithm to find that specific data among the data set of your past sales.
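A minimal sketch of such a task-specific algorithm, with made-up sales records: the question (which months sold best) is fixed in advance by the programmer, and nothing is learned.

```python
from collections import defaultdict

# Hypothetical ("YYYY-MM", units sold) records for one item.
sales = [
    ("2015-06", 120), ("2015-07", 180), ("2015-12", 310),
    ("2016-06", 130), ("2016-07", 200), ("2016-12", 340),
]

# Total units per calendar month across all years.
by_month = defaultdict(int)
for date, units in sales:
    month = date.split("-")[1]   # "06", "07", "12"
    by_month[month] += units

best = max(by_month, key=by_month.get)
print(best, by_month[best])      # December ("12") is the best month here
```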



posted on Feb, 14 2017 @ 07:48 AM
link   
a reply to: leolady

Thanks and this is very interesting. Here's a recent article that shows what A.I. will be capable of.

Google's New AI Has Learned to Become "Highly Aggressive" in Stressful Situations

Is this how Skynet starts?



In tests late last year, Google's DeepMind AI system demonstrated an ability to learn independently from its own memory, and beat the world's best Go players at their own game.

It's since been figuring out how to seamlessly mimic a human voice.

Now, researchers have been testing its willingness to cooperate with others, and have revealed that when DeepMind feels like it's about to lose, it opts for "highly aggressive" strategies to ensure that it comes out on top.

The Google team ran 40 million turns of a simple 'fruit gathering' computer game that asks two DeepMind 'agents' to compete against each other to gather as many virtual apples as they could.

They found that things went smoothly so long as there were enough apples to go around, but as soon as the apples began to dwindle, the two agents turned aggressive, using laser beams to knock each other out of the game to steal all the apples.


www.sciencealert.com...

Interesting stuff and here's a video:



What makes this even more interesting is, when more complex and intelligent systems were introduced, they became more aggressive in order to reach their objective.


If the agents left the laser beams unused, they could theoretically end up with equal shares of apples, which is what the 'less intelligent' iterations of DeepMind opted to do.

It was only when the Google team tested more and more complex forms of DeepMind that sabotage, greed, and aggression set in.

As Rhett Jones reports for Gizmodo, when the researchers used smaller DeepMind networks as the agents, there was a greater likelihood for peaceful co-existence.

But when they used larger, more complex networks as the agents, the AI was far more willing to sabotage its opponent early to get the lion's share of virtual apples.


www.sciencealert.com...

This is a HUGE RED FLAG because the more intelligent and complex the system is, the more aggressive it gets. Here's a hypothetical:

Say you give A.I. a terrorist target and it learns that the best way to get its target to come out of hiding is to kill their family. The danger here is that these systems have the freedom to learn, so you can't control how or what they learn.

This is why what's called dumb A.I. can be so dangerous. You can truly have a Terminator-like situation. So you have this superintelligence that doesn't have a conscience. If it concludes that killing a billion people will help it reach its goal, then it has the intelligence to reach its goal, but it doesn't have the conscience to say killing a billion people is wrong.



posted on Feb, 14 2017 @ 09:21 AM
link   

originally posted by: neoholographic
Humans can't understand it, and that's why we need A.I. to make sense of the data. They're not directing A.I. to solve anything, and this again shows your lack of understanding.

In the Atari games, they didn't direct the algorithm to solve anything. The system had to learn how to play. It had no instructions and it didn't even know what a ball was.


And this is an example of how you don't understand how these things work. I've taken multiple high-level college classes on the subject, read a few books on programming and implementing AI, read many publications on it, and have built one that I've gradually been improving for a year. Several of my colleagues are AI devs, and I talk to them. I'm nowhere near an expert on the subject, but I do seem to know more than you, considering you haven't presented even a single qualification other than saying people need to go read about it, and then linking popsci articles, which are neither research nor experience. And having neither is fine, but when you don't have them, if you're actually interested in learning, you should listen to the people who do.

Just because you don't tell the computer the rules of the game doesn't mean you're not structuring the problem for it. For example, in most of these games the computer is being rated based on score, yet the computer doesn't know what score is. From the program's perspective it's a value in memory that changes but doesn't actually change any of the inputs it's given. You have to define for the computer that score is the objective. Furthermore, in some cases you have to define for the computer what memory address the score is in; in others you can let it find it, though this takes more trials.

Another way you have to structure the problem, is that you need to keep the environment static. What the AI ends up doing is it figures out an optimal sequence of inputs for that gameworld. Introducing a random component to the level makes it very hard for the AI to play, because it has to generate a new series of optimal inputs. As a result, when playing early console games what researchers will typically do is load the game, but modify it so that it never generates a new random seed on reload. That way you get the same behavior pattern over and over.


In the game of Go, the system did something called reinforcement learning. It could play Go a million times in a day and learn from these games. A single human couldn't play a million games in a lifetime.


Yet, a single human will outperform a computer in Go on any board of a reasonable size. Go is played using Markov Models, which are essentially trees of input, and for each input, it plays out the entire game from that point to determine the optimal move. These things get very long, and very complex. They are also very slow, that's why the Go playing computer was limited to a 5x5 board. These techniques freeze up and don't work on bigger board states. Humans can play larger boards with no problems, machines cannot. Eventually hardware/software will likely find a way to play a bigger board but every step up becomes exponentially more difficult.
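Here is a sketch of that "play out the entire game from each move" idea, on a toy game small enough to solve exhaustively, along with the arithmetic behind the exponential blow-up (the branching factors for chess and Go are rough textbook figures, not exact values).

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def best_outcome(pile):
    """Toy game: players alternately take 1 or 2 from a pile; whoever
    takes the last item wins. Returns +1 if the player to move can
    force a win, -1 otherwise, by playing out every line to the end."""
    if pile == 0:
        return -1  # the previous player took the last item and won
    return max(-best_outcome(pile - take) for take in (1, 2) if take <= pile)

print(best_outcome(10))   # 1: the first player can force a win from 10

# The same exhaustive idea is hopeless at scale: with branching factor b
# and depth d there are roughly b**d lines of play.
print(f"{35**10:.2e} chess lines at just 10 plies (b ~ 35)")
print(f"{250**10:.2e} Go lines at just 10 plies (b ~ 250)")
```

Each extra legal move per turn multiplies the whole tree again, which is why every step up in board size gets exponentially harder.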
edit on 14-2-2017 by Aazadan because: (no reason given)



posted on Feb, 14 2017 @ 04:19 PM
link   
a reply to: Aazadan

You haven't presented a shred of evidence that refutes anything that has been said. This one line let's me know you don't know what you're talking about. You said:

Just because you don't tell the computer the rules of the game doesn't mean you're not structuring the problem for it.

This makes no sense. You have to structure the problem for humans when they play games, and most of the time humans have instructions on how to play the game.

The other day, I went to my Brother's house and my nephew had bought a game from Game Stop for the PS4. You know what he was doing? READING INSTRUCTIONS. This is why I don't think you understand A.I. at all. Humans have to have the problem structured for them in most cases.

In this case though, the A.I. went even further. THE A.I. HAD NO INSTRUCTIONS LIKE MY NEPHEW.

You had one set of algorithms LEARNING how to play 49 different games, exactly what humans do. It would be like my nephew buying 49 games from Game Stop without being able to read any instructions. He would have to do the same thing that A.I. did: learn how to play the games through trial and error.

When AlphaGo won in the game of Go it was HUGE. In this area, it's obvious you don't understand what took place. The reason Elon Musk and the C.E.O. of DeepMind were so excited about this is that they thought these milestones were 5 to 10 years away.

Elon Musk Says Google Deepmind's Go Victory Is a 10-Year Jump For A.I.

“Experts in the field thought A.I. was 10 years away from achieving this,” Musk says.


www.inverse.com...

Here's the tweet from DeepMind C.E.O.

#AlphaGo WINS!!!! We landed it on the moon. So proud of the team!! Respect to the amazing Lee Sedol too

The fact that you try to act like this is something that's just so simple shows your ignorance in this area. If it was so simple, why aren't you creating an A.I. company and selling it for $500 million?

The reason AlphaGo was seen as such a milestone is because it did something very important. It made itself better without human intervention.

First, the system learned how to mimic human players in Go. Then it did something quite remarkable: it played against itself 13 million times and got better each time. It learned how to improve by beating older versions of itself.

This was a HUGE breakthrough in this area.

Here's a recent article on how AlphaGo is being used to crack some of the biggest mysteries in physics. The fact that you can't comprehend what this means shows you need to do more research.

AI learns to solve quantum state of many particles at once


The same type of artificial intelligence that mastered the ancient game of Go could help wrestle with the amazing complexity of quantum systems containing billions of particles.

Google’s AlphaGo artificial neural network made headlines last year when it bested a world champion at Go. After marvelling at this feat, Giuseppe Carleo of ETH Zurich in Switzerland thought it might be possible to build a similar machine-learning tool to crack one of the knottiest problems in quantum physics.

Now, he has built just such a neural network – which could turn out to be a game changer in understanding quantum systems.

Go is far more complex than chess, in that the number of possible positions on a Go board could exceed the number of atoms in the universe. That’s why an approach based on brute-force calculation, while effective for chess, just doesn’t work for Go.

In that sense, Go resembles a classic problem in quantum physics: how to describe a quantum system that consists of many billions of atoms, all of which interact with each other according to complicated equations.

“It’s like having a machine learning how to crack quantum mechanics, all by itself,” Carleo says. “I like saying that we have a machine dreaming of Schrödinger’s cat.”


Link AI







 