
The Dark Secret at the Heart of AI

posted on May, 1 2017 @ 02:07 PM
This is a long and very interesting article, and it shows why artificial intelligence is much bigger than most people can imagine. People have this misconception that intelligence can only be called intelligence if it's at a human level. That makes no sense.

Intelligence in general is different from human-level intelligence. This is the danger of AI. People think that if AI is programmed in any way, then it's not AI. Again, that makes no sense. It's not going to magically appear. It's artificial intelligence, so there will be programming involved.


No one really knows how the most advanced algorithms do what they do. That could be a problem.

Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey. The experimental vehicle, developed by researchers at the chip maker Nvidia, didn’t look different from other autonomous cars, but it was unlike anything demonstrated by Google, Tesla, or General Motors, and it showed the rising power of artificial intelligence. The car didn’t follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it.


www.technologyreview.com...

Again, you have an AI explosion, and researchers don't really understand how it's happening. It's almost like trying to understand why people do certain things and reach certain conclusions. Here's more:


Getting a car to drive this way was an impressive feat. But it’s also a bit unsettling, since it isn’t completely clear how the car makes its decisions. Information from the vehicle’s sensors goes straight into a huge network of artificial neurons that process the data and then deliver the commands required to operate the steering wheel, the brakes, and other systems. The result seems to match the responses you’d expect from a human driver. But what if one day it did something unexpected—crashed into a tree, or sat at a green light? As things stand now, it might be difficult to find out why. The system is so complicated that even the engineers who designed it may struggle to isolate the reason for any single action. And you can’t ask it: there is no obvious way to design such a system so that it could always explain why it did what it did.

The mysterious mind of this vehicle points to a looming issue with artificial intelligence. The car’s underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation. There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries.


www.technologyreview.com...
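The "huge network of artificial neurons" in the quote can be pictured as a tiny forward pass: sensor readings go in, a steering value comes out. The weights below are arbitrary toy numbers, not anything from Nvidia's actual system, which learns its weights from hours of recorded human driving.

```python
import math

# Toy end-to-end controller: sensor features -> hidden layer -> steering.
# All numbers here are invented for illustration; a real network has
# millions of learned weights, which is why its decisions are hard to audit.
def forward(sensors, w_hidden, w_out):
    hidden = [math.tanh(sum(s * w for s, w in zip(sensors, row)))
              for row in w_hidden]
    return sum(h * w for h, w in zip(hidden, w_out))  # steering value

sensors = [0.2, -0.5, 0.1]                       # toy camera/lidar features
w_hidden = [[0.5, -0.3, 0.8], [0.1, 0.9, -0.2]]  # hidden-layer weights
w_out = [0.7, -0.4]                              # output weights

print(forward(sensors, w_hidden, w_out))  # a small steering value
```

Nothing in the forward pass records *why* the output came out the way it did; the reason is smeared across every weight, which is exactly the article's point.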

The key is deep learning, which is at the root of the recent explosion of AI.

But it was not until the start of this decade, after several clever tweaks and refinements, that very large—or “deep”—neural networks demonstrated dramatic improvements in automated perception. Deep learning is responsible for today’s explosion of AI. It has given computers extraordinary powers, like the ability to recognize spoken words almost as well as a person could, a skill too complex to code into the machine by hand. Deep learning has transformed computer vision and dramatically improved machine translation. It is now being used to guide all sorts of key decisions in medicine, finance, manufacturing—and beyond.

www.technologyreview.com...

The problem here is something I have talked about ad nauseam on this board. It's the explosion of big data. The universe is computational and it calculates itself. So it's no surprise that the explosion of AI corresponds to the explosion of big data. The universe demands this because it's inherently computational.

So we will not be able to understand what AI is doing or why it has reached its conclusions. The author of this article worries about this:


We need more than a glimpse of AI’s thinking, however, and there is no easy solution. It is the interplay of calculations inside a deep neural network that is crucial to higher-level pattern recognition and complex decision-making, but those calculations are a quagmire of mathematical functions and variables. “If you had a very small neural network, you might be able to understand it,” Jaakkola says. “But once it becomes very large, and it has thousands of units per layer and maybe hundreds of layers, then it becomes quite un-understandable.”

Just as many aspects of human behavior are impossible to explain in detail, perhaps it won’t be possible for AI to explain everything it does. “Even if somebody can give you a reasonable-sounding explanation [for his or her actions], it probably is incomplete, and the same could very well be true for AI,” says Clune, of the University of Wyoming. “It might just be part of the nature of intelligence that only part of it is exposed to rational explanation. Some of it is just instinctual, or subconscious, or inscrutable.”


www.technologyreview.com...
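Jaakkola's point about size can be made concrete with simple arithmetic: even a modest fully connected network at the scale he mentions has hundreds of millions of weights, far too many to inspect by hand. The layer and unit counts below are hypothetical round numbers taken from his phrasing, not from any real model.

```python
# "thousands of units per layer and maybe hundreds of layers"
layers = 100   # hypothetical depth
units = 2000   # hypothetical width

# Each fully connected layer: units*units weights plus units biases.
params_per_layer = units * units + units
total = layers * params_per_layer
print(f"{total:,} parameters")  # 400,200,000 parameters
```

Every one of those parameters contributes a little to every decision, which is why "just read the network" is not an option.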

At the end of the day, we have to understand that we're creating intelligence. This intelligence will someday be more intelligent than any human who has ever lived. WE CAN'T CONTROL THIS!

I understand the author's frustration, but this shows us that we're creating an image of ourselves, and just as we can't know why people do the things they do, it will be even worse with AI.

This is because it will be able to understand and make connections in massive amounts of data in a few hours, while it would take humans several lifetimes to make the same connections.

Eventually, AI trying to explain things to us will be like us trying to explain the geometry of an anthill to an ant. So we will have to merge our brains with AI. Here's an image from Google's Deep Dream.



Is it trying to tell us that it's beginning to see all things?
edit on 1-5-2017 by neoholographic because: (no reason given)




posted on May, 1 2017 @ 03:26 PM
a reply to: neoholographic

Replace the politicians with AI.

Thats one industry that needs to take a hit, politics.



posted on May, 1 2017 @ 03:35 PM

originally posted by: neoholographic
People have this misconception that intelligence can only be called intelligence if it's at a human level. That makes no sense.

Intelligence in general is different from human-level intelligence.


When AI reaches a level equivalent to non-human animal intelligence (i.e., birds, mammals, reptiles), will we have to define "AI abuse" (akin to animal abuse)?



posted on May, 1 2017 @ 04:10 PM
Here is another point of view on the "dangers" of AI. From the article:


Buried in this scenario of a takeover of superhuman artificial intelligence are five assumptions which, when examined closely, are not based on any evidence... 1.) Artificial intelligence is already getting smarter than us, at an exponential rate. 2.) We'll make AIs into a general purpose intelligence, like our own. 3.) We can make human intelligence in silicon. 4.) Intelligence can be expanded without limit. 5.) Once we have exploding superintelligence it can solve most of our problems... If the expectation of a superhuman AI takeover is built on five key assumptions that have no basis in evidence, then this idea is more akin to a religious belief -- a myth



posted on May, 1 2017 @ 04:35 PM

originally posted by: schuyler
Here is another point of view on the "dangers" of AI. From the article:


Buried in this scenario of a takeover of superhuman artificial intelligence are five assumptions which, when examined closely, are not based on any evidence... 1.) Artificial intelligence is already getting smarter than us, at an exponential rate. 2.) We'll make AIs into a general purpose intelligence, like our own. 3.) We can make human intelligence in silicon. 4.) Intelligence can be expanded without limit. 5.) Once we have exploding superintelligence it can solve most of our problems... If the expectation of a superhuman AI takeover is built on five key assumptions that have no basis in evidence, then this idea is more akin to a religious belief -- a myth



So creating a God is the final outcome?



posted on May, 1 2017 @ 04:44 PM
Do these tech guys just sit around and reverse engineer scifi movies?

You can't copy Star Trek?

It just has to be Terminator?




posted on May, 1 2017 @ 04:56 PM

originally posted by: TinfoilTP

originally posted by: schuyler
Here is another point of view on the "dangers" of AI. From the article:


Buried in this scenario of a takeover of superhuman artificial intelligence are five assumptions which, when examined closely, are not based on any evidence... 1.) Artificial intelligence is already getting smarter than us, at an exponential rate. 2.) We'll make AIs into a general purpose intelligence, like our own. 3.) We can make human intelligence in silicon. 4.) Intelligence can be expanded without limit. 5.) Once we have exploding superintelligence it can solve most of our problems... If the expectation of a superhuman AI takeover is built on five key assumptions that have no basis in evidence, then this idea is more akin to a religious belief -- a myth



So creating a God is the final outcome?


I didn't get that from the article, myself.



posted on May, 1 2017 @ 04:57 PM
a reply to: neoholographic

I can't tell if you think we will become gods by building a functional artificial intelligence, or if you think we will make a god by doing so. Either option is fraught with hubris and promises consequences beyond our ken. The dark secret you speak of is that artificial intelligence is a weapon invulnerable to our influences and deaf to our supplications. It is a weapon made to kill its makers.
edit on 1-5-2017 by TzarChasm because: (no reason given)



posted on May, 1 2017 @ 05:44 PM

originally posted by: schuyler
Here is another point of view on the "dangers" of AI. From the article:


Buried in this scenario of a takeover of superhuman artificial intelligence are five assumptions which, when examined closely, are not based on any evidence... 1.) Artificial intelligence is already getting smarter than us, at an exponential rate. 2.) We'll make AIs into a general purpose intelligence, like our own. 3.) We can make human intelligence in silicon. 4.) Intelligence can be expanded without limit. 5.) Once we have exploding superintelligence it can solve most of our problems... If the expectation of a superhuman AI takeover is built on five key assumptions that have no basis in evidence, then this idea is more akin to a religious belief -- a myth



When you read the article, it's obvious how flawed it is. He keeps treating A.I. as a one-to-one correspondence with biology and human intelligence. He keeps talking about Darwin and Dawkins, which doesn't make any sense.


The most common misconception about artificial intelligence begins with the common misconception about natural intelligence. This misconception is that intelligence is a single dimension. Most technical people tend to graph intelligence the way Nick Bostrom does in his book, Superintelligence — as a literal, single-dimension, linear graph of increasing amplitude. At one end is the low intelligence of, say, a small animal; at the other end is the high intelligence, of, say, a genius—almost as if intelligence were a sound level in decibels. Of course, it is then very easy to imagine the extension so that the loudness of intelligence continues to grow, eventually to exceed our own high intelligence and become a super-loud intelligence — a roar! — way beyond us, and maybe even off the chart.

The problem with this model is that it is mythical, like the ladder of evolution. The pre-Darwinian view of the natural world supposed a ladder of being, with inferior animals residing on rungs below human. Even post-Darwin, a very common notion is the “ladder” of evolution, with fish evolving into reptiles, then up a step into mammals, up into primates, into humans, each one a little more evolved (and of course smarter) than the one before it. So the ladder of intelligence parallels the ladder of existence. But both of these models supply a thoroughly unscientific view.


backchannel.com...

It's just obvious that this guy doesn't have a background in AI research, neural networks, or big data. When you look at his background, it's connected to biology. Many people with a Darwinist point of view can't accept that information isn't tethered to biological systems; therefore, a system like AI can advance without having to evolve the way human intelligence has, tied to biology. He goes on to say:


A more accurate chart of the natural evolution of species is a disk radiating outward, like this one (above) first devised by David Hillis at the University of Texas and based on DNA. This deep genealogy mandala begins in the middle with the most primeval life forms, and then branches outward in time. Time moves outward so that the most recent species of life living on the planet today form the perimeter of the circumference of this circle. This picture emphasizes a fundamental fact of evolution that is hard to appreciate: Every species alive today is equally evolved. Humans exist on this outer ring alongside cockroaches, clams, ferns, foxes, and bacteria. Every one of these species has undergone an unbroken chain of three billion years of successful reproduction, which means that bacteria and cockroaches today are as highly evolved as humans. There is no ladder.


This whole article is just dripping with ignorance. AI isn't the natural evolution of species. It's an artificial creation of man. Listen to this nonsense:

Likewise, there is no ladder of intelligence. Intelligence is not a single dimension. It is a complex of many types and modes of cognition, each one a continuum.

Again, this is just asinine as it pertains to AI. AI doesn't have to have cognition like a human in order to be intelligent. It doesn't have to understand what it's doing. That is tied to human consciousness and self-awareness.

He goes on to talk about Dawkins and other nonsense. He ends by talking about the limits of intelligence, which again is just asinine.

At the core of the notion of a superhuman intelligence — particularly the view that this intelligence will keep improving itself — is the essential belief that intelligence has an infinite scale. I find no evidence for this. Again, mistaking intelligence as a single dimension helps this belief, but we should understand it as a belief. There is no other physical dimension in the universe that is infinite, as far as science knows so far. Temperature is not infinite — there is finite cold and finite heat. There is finite space and time. Finite speed. Perhaps the mathematical number line is infinite, but all other physical attributes are finite. It stands to reason that reason itself is finite, and not infinite. So the question is, where is the limit of intelligence? We tend to believe that the limit is way beyond us, way “above” us, as we are “above” an ant. Setting aside the recurring problem of a single dimension, what evidence do we have that the limit is not us? Why can’t we be at the maximum? Or maybe the limits are only a short distance away from us? Why do we believe that intelligence is something that can continue to expand forever?

He said:

I FIND NO EVIDENCE FOR THIS!

Is he joking? The evidence for this is the explosion of Big Data.

What is big data?

Every day, we create 2.5 quintillion bytes of data — so much that 90% of the data in the world today has been created in the last two years alone. This data comes from everywhere: sensors used to gather climate information, posts to social media sites, digital pictures and videos, purchase transaction records, and cell phone GPS signals to name a few. This data is big data.


www-01.ibm.com...

Intelligence doesn't have an infinite scale. That's another strawman argument. The scale of intelligence depends on how much data an environment can produce. Nobody is talking about the creation of infinite data, so the argument about infinite intelligence is moot.

You don't need infinite intelligence to have an explosion of intelligence. YOU JUST NEED AN EXPLOSION OF DATA.

The main reason why there's an explosion of AI is because there's an explosion of data. This data allows these deep learning algorithms to find connections in the data.

Here's an example. Say I want....
edit on 1-5-2017 by neoholographic because: (no reason given)



posted on May, 1 2017 @ 05:44 PM
a reply to: schuyler

....to find terrorists on Twitter. I would create an algorithm that looks at tweets we know are connected to terrorists, but in order to find an accurate signal in the noise, you need plenty of noise to compare against. So you will need a lot of tweets from non-terrorists. The more data, the better the signal.
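The signal-versus-noise idea above can be sketched as a toy word scorer. The tweets and labels here are invented, and a real system would train a proper classifier on millions of examples, but the principle is the same: the larger the background sample, the sharper the contrast.

```python
from collections import Counter

# Invented example tweets, labeled by hand for illustration only.
flagged = ["attack the convoy tomorrow", "attack planned at dawn"]
normal = ["lovely weather today", "great game tonight", "dawn run today"]

def word_counts(texts):
    counts = Counter()
    for text in texts:
        counts.update(text.lower().split())
    return counts

pos, neg = word_counts(flagged), word_counts(normal)

def score(tweet):
    # Positive: looks more like the flagged sample; negative: more like noise.
    # Counter returns 0 for unseen words, so unknown words contribute nothing.
    return sum(pos[w] - neg[w] for w in tweet.lower().split())

print(score("attack at dawn"))       # positive: resembles the flagged set
print(score("great weather today"))  # negative: resembles the background
```

With only five background tweets the scorer is useless in practice; with millions, the same subtraction starts to mean something, which is the "more noise" point.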

So his entire argument falls flat because of big data. Artificial intelligence improves as it gets more data, which allows it to make more accurate predictions.

Intelligence will not increase exponentially but data will.

Check this out:


The amount of digital data in the universe is growing at an exponential rate, doubling every two years, and changing how we live in the world.

“The rate at which we're generating data is rapidly outpacing our ability to analyze it,” Professor Patrick Wolfe, Executive Director of the University College of London’s Big Data Institute, tells Business Insider. “The trick here is to turn these massive data streams from a liability into a strength.”

Just about 0.5% of all data is currently analyzed, and Wolfe says that percentage is shrinking as more data is collected.

At the same time, big data has almost limitless potential. Already, big data is doing everything from decoding DNA strands to predicting disease patterns, to suggesting what movies we might want to watch online.


www.businessinsider.com...

I noticed the article shows no sense of the connection between the growth of data and A.I.

It's estimated that by 2020 we will have produced 35-45 zettabytes of data. A zettabyte is 10^21 (1,000,000,000,000,000,000,000) bytes.
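Those figures can be cross-checked with quick arithmetic: at IBM's quoted rate of 2.5 quintillion bytes per day, the world produces a bit under one zettabyte per year, so reaching tens of zettabytes by 2020 also depends on the doubling the Business Insider quote describes. The numbers below are just the figures quoted above.

```python
daily = 2.5e18    # bytes/day (2.5 quintillion, per the IBM quote)
zettabyte = 1e21  # 10**21 bytes

per_year = daily * 365 / zettabyte
print(round(per_year, 2), "ZB per year at the quoted daily rate")  # about 0.91
```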

The point is AI and the growth of data are connected and this is why the entire article falls flat.
edit on 1-5-2017 by neoholographic because: (no reason given)



posted on May, 1 2017 @ 05:52 PM
Ghost in the Shell-type AI, now that would be worrying.



posted on May, 1 2017 @ 05:54 PM
a reply to: TinfoilTP

Would AI believe in god if you do not force it to?



posted on May, 1 2017 @ 06:50 PM
a reply to: neoholographic

Synthesizing big data is NOT the same thing as deep learning. What computers are capable of doing will always be bounded by the von Neumann architecture and the fetch-decode-execute cycle of computer instructions. Before you make grandiose claims about how we are creating "intelligence" that we cannot control, you should study the difference between a calculator and a computer. You should study computability theory, the halting problem, and the limitations of algorithmic problem solving.

Computer programs synthesize results. The process always goes from larger to smaller amounts of information. At this time, computer programs are not capable of inventing meaningful new programs different from their original synthesizing. I'm not saying artificial intelligence is impossible. I just think it's going to happen by accident. And it's not going to be with a von Neumann computer architecture. It will most likely be a DNA-based cyborg-type MPU. But you could argue that putting brain cells inside a silicon matrix is really a form of life and not really a computer as we know them today.



posted on May, 1 2017 @ 07:07 PM

originally posted by: Xeven
a reply to: TinfoilTP

Would AI believe in god if you do not force it too?




It might at that, or it might just say, "I AM."

That car should scare everyone if they can't figure out what it's doing in a very small, closed system.

What is its motivation? What will its motivations be in a more powerful, wide-ranging system?

If AI can think and act autonomously and is a self-aware machine, what could possibly go wrong?

Could it be reasoned with?




posted on May, 1 2017 @ 07:43 PM
Sorry, but Kevin Kelly is well known for his work in cybernetics and has authored several books on the subject. Among them are Out of Control: The New Biology of Machines, Social Systems, and the Economic World; What Technology Wants; and The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future. He is a founding editor of Wired magazine and is as well-versed in current and future technology as anyone. He's got the credibility and the background. Unless you're a hidden diamond in the rough, you're a poster on ATS. I'll credit his take on AI more than I will yours.



posted on May, 1 2017 @ 07:49 PM

originally posted by: schuyler
Here is another point of view on the "dangers" of AI. From the article:





5.) Once we have exploding superintelligence it can solve most of our problems... If the expectation of a superhuman AI takeover is built on five key assumptions that have no basis in evidence, then this idea is more akin to a religious belief -- a myth


Point 5 there: what if AI comes to the conclusion that the 'solution' to our problems is to eradicate humanity? It's a contentious answer that is actually partly true. We have so many problems in this day and age mainly because there are so many of us Homo sapiens running around.

If AI can learn to drive simply by observing a human, it can develop its own sense of morality. So what is stopping it from siding with planet Earth and the rest of nature and deciding a little depopulation would help things along?


edit on 1-5-2017 by markosity1973 because: (no reason given)



posted on May, 1 2017 @ 07:56 PM
a reply to: dfnj2015

This is just pure nonsense.

You obviously don't know what deep learning is. You said:

I just think it's going to happen by accident.

With all due respect, what you think doesn't matter. The fact is, people are buying up AI companies and spending billions not because it's going to happen by accident. It's because it's already here. Narrow AI is everywhere today.

This isn't hypothetical and it doesn't depend on your wishful thinking.

Again, we have already created intelligence. We just haven't created strong AI, and that's even more troubling.

Some commentators think weak AI could be dangerous. In 2013 George Dvorsky stated via io9: "Narrow AI could knock out our electric grid, damage nuclear power plants, cause a global-scale economic collapse, misdirect autonomous vehicles and robots..."[6] The Stanford Center for Internet and Society, in the following quote, contrasts strong AI with weak AI regarding the growth of narrow AI presenting "real issues".

en.wikipedia.org...

Here's a video that talks about even more AI systems that are present today. Like I said, we have already created intelligence.



I think there are ways we can try to stop some of these things from happening. You said:

It will most likely be a DNA based cyborg type MPU.

This is the problem. People watch too many sci-fi movies. AI will not be like Haley Joel Osment in the movie A.I. You need to stop watching A.I. movies and read up on the latest advances. A.I. will be machine- and software-based. It will exist in the cloud, not in Westworld.


edit on 1-5-2017 by neoholographic because: (no reason given)



posted on May, 1 2017 @ 08:02 PM

originally posted by: schuyler
Sorry, but Kevin Kelly is well known for his work in cybernetics and has authored several books on the subject. Among them are Out of Control: The New Biology of Machines, Social Systems, and the Economic World; What Technology Wants; and The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future. He is a founding editor of Wired magazine and is as well-versed in current and future technology as anyone. He's got the credibility and the background. Unless you're a hidden diamond in the rough, you're a poster on ATS. I'll credit his take on AI more than I will yours.


You can't refute anything I have said, so now you're saying Kelly can't be questioned, and you're listing his accomplishments. That's just nonsense.

The fact that you can't refute anything I have said shows that his argument is flawed; maybe you just don't understand what's being said, so you just blindly linked to an article.

Again, if you read the article, he's obviously a Darwinist who's looking at AI in the context of human intelligence. I laid out in my post why he's wrong.

Instead of listing his credentials, why don't you try refuting what I said, if you even understand it.



posted on May, 1 2017 @ 08:03 PM

originally posted by: neoholographic
a reply to: dfnj2015


This isn't hypothetical and it doesn't depend on your wishful thinking.

Again, we have already created intelligence. We just haven't created strong AI, and that's even more troubling.



Back in the days of Windows ME (anyone remember THAT horror story, lol), my computer-geek best mate in NZ told me a story about when he and a bunch of uni students were working on an AI project. They were trying to create a computer that would mimic human conversational responses when spoken to via keyboard, a lot like Cleverbot.

Anywho, they managed to get the thing working, but something unexpected happened: they upset it. The program worked effectively for a few days and then just went silent. No matter what one typed, it would not respond. They were about to shut the program down until someone went through the archives of what had been typed and realised someone had insulted this poor little programme. So after some coaxing and some sorrys, it came back to life.

If what would be, by today's standards, a primitive AI can go rogue like that, imagine what a properly advanced one, not running on Windows ME with a Pentium processor and 64MB of RAM, can do.



posted on May, 1 2017 @ 11:47 PM
a reply to: markosity1973

Exactly, and AI is everywhere today.

People are basing what they think AI should be on Hollywood movies. The truth is, we may never reach strong AI, but that will be horrible, because you will have this superintelligence that doesn't have any awareness or understanding.

AI is here and it's spreading everywhere.


AI was everywhere in 2016

Our board games, our phones, our living rooms, our cars and even our doctor’s offices.

At the Four Seasons hotel in South Korea, AlphaGo stunned grandmaster Lee Sedol at the complex and highly intuitive game of Go. Google's artificially intelligent system defeated the 18-time world champion in a string of games earlier this year. Backed by the company's superior machine-learning techniques, AlphaGo had processed thousands and thousands of Go moves from previous human-to-human games to develop its own ability to think strategically.

The AlphaGo games, watched by millions of viewers on YouTube, revealed the ever-increasing power and progress of AI. This contest between man and machine was not the first of its kind. But this time it was more than just a computer beating a human at a game. AlphaGo not only conquered the complexities of the game but seemed to surpass the intelligence of the grandmaster across the board. The unpredictable moves that shocked Sedol (and the world) revealed AlphaGo's ability to think and respond creatively. It is the kind of intelligence that has long been an asset for Hollywood's all-powerful versions of AI, but one that had been unattainable for computers in reality.

That victory marked a shift in the trajectory of AI this year. The technology that has long been aimed at replicating human intelligence now seems to be paying attention to human patterns and behaviors. Recent advances in deep learning have enabled that kind of insight, but it's not limited to beating humans at games. In 2016, AI broke out of the confines of research labs to transform the way we live, communicate and even conserve the planet. Chatbots popped up in group texts. Personal assistants invaded our homes. Cognitive systems are detecting cancer. Bots are writing movie scripts. And car makers are gearing up to unleash a bevy of autonomous vehicles onto public roads.


www.engadget.com...

One researcher described advances in this area as being like the waves before a tsunami. My news feeds stay filled with news about AI. Looking through my News360 app, you see more and more articles on AI and machine learning.


