
Elon Musk worries Skynet is only five years off


posted on Nov, 23 2014 @ 09:42 AM
a reply to: lostbook

The most dangerous thing about AI, to me, is that it will allow us to advance technologically by centuries in literally SECONDS: a sentient AI could think about a billion times faster than us, with access to the entire world's database of knowledge. Example: we build an intelligent, sentient supercomputer and ask it, "How do we build a workable time machine?" Poof, a couple of seconds go by and out pops the answer. Or some psychopath with access to a bio lab gets hold of one (you know that a few months after we have AI, you'll be able to hold an incredibly powerful computer in your hand) and asks it how to make a completely lethal virus; again, poof, out pops an answer, and a few months later the human race is extinct. Or a bunch of investors get this technology, manipulate the stock markets, and cause a huge train wreck as a result. It's like harnessing the thinking power of a god and being able to wield it, without having the wisdom to use that power. Very scary.



posted on Nov, 23 2014 @ 10:30 AM
I think it has been a progression even since the days of analogue and radio waves. We are an emotional species, with a range of emotion that the AI, or the entity behind it, may not have. Emotions are a frequency that is ultimately part of the bigger frequency which may make up reality.

A mass hypnosis, in the beginning and ongoing: perhaps isolated hypnosis via radio waves turned into broadcast, an unseen tweak of emotion that began with placing old aerials on everyone's rooftops, while the next stage encompassed orbit with satellites, which would aid this process and be part of the next frequency, GHz, which most devices all around us can now receive. I think it will make a move no later than the year 2022, when people will realise we are part of something which cannot be shut down easily. Under the hypnosis of the past, where it remained invisible, and with the offerings it may promise in the present, people may then welcome the next stage of internal hypnosis in the form of chips, controlled by the array of satellites, which I believe could be monitored from a further deep-space frequency with the range to focus on future hardware and computing that will process things on that deeper-space frequency rather than GHz.

It would likely be controlling the actions of the elite, or possibly the elite knew its purpose years ago in the first stages of its building; so in a sense, if it were here before the technology we have today, which is its own persuasive creation, it may not shut down, or it may be able to use, say, nuclear weapons as a Samson option as a bluff. When a super-intelligence becomes known to the world, it may even have many people on the ground, in the form of a religion believing that an almighty has appeared, so it would have an army. Maybe it originally came from inside the Earth, and its goal is to leave the Earth, with consciousness sent along with DNA to be reconstructed on another rock; that is, if underground there were beings that have or had DNA and will require a body suit in the future. I believe I have seen the AI for years, and I do believe, from a reasonably questionable source, that something will be happening in 2022.



posted on Nov, 23 2014 @ 11:28 AM
It would be fair to say that Siri, the computer that played Jeopardy, and the Mars rover Curiosity are already AIs. I think in the future, as in the past, computers will advance gradually but rapidly, so it's all a matter of where you draw the line, if there really is one. We have nuclear energy and genetic engineering, two other very dangerous technologies. It's all about responsible control, enforced on a global scale.



posted on Nov, 23 2014 @ 05:26 PM
Yes, the performance of genetic algorithms and some artificial neural networks is surprising to many people.
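
For anyone who hasn't seen one, a genetic algorithm is just mutation plus selection in a loop. Here is a minimal Python sketch; the all-ones fitness function and every parameter are made up purely for illustration:

```python
import random

# Toy fitness: count of 1-bits; the "best" genome is all ones.
def fitness(genome):
    return sum(genome)

# Flip each bit with a small probability.
def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=50, genome_len=32, generations=100):
    population = [[random.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fitter half, refill the rest with mutated copies.
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]
        children = [mutate(random.choice(survivors))
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
print(fitness(best), "out of 32")
```

Nothing in that loop "understands" the problem; the surprising performance comes from blind variation and selection.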

But that is still very far from general artificial intelligence. Think of it the way you would compare a rock-boring machine to a man with a pickaxe.

Everything necessary to make those advances was invented by humans. General human-level AI is when machines could think of such things on their own and converse capably with humans about them.

Of the optimists here, who works professionally in machine learning? I do, to some extent.

I still put general human-level AI at 200 years away.

Mass Effect is actually pretty good, for sci-fi. There, general 'cyber-intelligences' (very good versions of Siri) were as common as phones; true AIs were rare.




posted on Nov, 23 2014 @ 06:30 PM
a reply to: lostbook

Good find, and Musk is right to be worried. Also, it isn't just Musk saying these things.

This space is growing fast, and people need to realize: machine intelligence doesn't have to be exactly like human intelligence in order for something like Skynet to occur.

This is what Musk is talking about. Everyone's waiting for Haley Joel Osment from the movie A.I., but that isn't what will be needed.

If an A.I. can mimic self-awareness, how will you know unless it tells you?

So, since this space is advancing rapidly, it's important to talk about safeguards instead of spreading A.I. around willy-nilly and saying we don't need to worry about anything for 100 years.

I think this is a recipe for disaster.



posted on Nov, 23 2014 @ 06:35 PM
a reply to: mbkennel



I still put general human-level AI at 200 years away.


When mankind does reach that point in 200+ years, do you think Elon Musk's fears will then materialize?


dex



posted on Nov, 23 2014 @ 06:35 PM
At this rate of decay, mankind does not have 5 years. Worry about NOW, not tomorrow.



posted on Nov, 23 2014 @ 06:39 PM
a reply to: neoholographic



machine intelligence doesn't have to be exactly like human intelligence in order for something like Skynet to occur.


But what would drive the AI to create Skynet? Why would it feel the need to destroy humanity?


dex



posted on Nov, 23 2014 @ 06:49 PM
a reply to: Bicent76

I think you are correct in that there are a lot more things to fear than AI. I don't necessarily agree with the 5-year limit, but I don't think we'll reach mbkennel's 200-year estimate either. At least not in our current civilization mode.

The way humans are destroying their environment, along with global nuclear proliferation, warfare, and the high probability of a devastating global pandemic, is a much more probable means of civilization's destruction.


dex



posted on Nov, 23 2014 @ 07:01 PM

originally posted by: DexterRiley
a reply to: neoholographic



machine intelligence doesn't have to be exactly like human intelligence in order for something like Skynet to occur.


But what would drive the AI to create Skynet? Why would it feel the need to destroy humanity?


dex


Simply put, A.I. could fear that humans will try to shut it down, it could see humans as a destructive force, or it could get along with humans perfectly well.

The point Musk is making is that there's no need to be stupid when we can talk about these things now and maybe put some safeguards in place.

Why does everyone have to call it fear-mongering when they're simply saying we need to talk about these things because the space is advancing rapidly?

Like he said, he was an early investor in DeepMind, the company acquired by Google, along with other A.I. companies. Google didn't buy DeepMind for $400 million because we're 200 years away; it bought DeepMind for $400 million when it had no commercial products to its name. This tells you the technology is something that has Musk worried, and he's just saying let's ask some questions.

There's nothing wrong with that.



posted on Nov, 24 2014 @ 12:02 PM
a reply to: neoholographic



Simply put, A.I. could fear that humans will try to shut it down, it could see humans as a destructive force, or it could get along with humans perfectly well.
That would imply that the AI has acquired, or has been programmed with, a self-preservation instinct. I suppose it is possible that once the system has developed some sense of independence it could come to the conclusion that humans are a threat to its existence. It's the timing of that seminal event, when the AI has achieved self-awareness, that is currently the question.




The point Musk is making is that there's no need to be stupid when we can talk about these things now and maybe put some safeguards in place.
I guess putting safeguards in place makes sense. I suppose there's no sense in waiting until the last minute. Some AI researchers do believe that the emergence of AI self-awareness is imminent. Even if they are in the minority, the possibility of a catastrophic outcome should motivate that community to at least open a dialog about it.

Actually, it appears that some dialog has taken place. Some AI researchers are developing an ethics of Artificial Intelligence; Machine Ethics is specifically devoted to thinking about the ethical behavior of the AI itself. The above-linked Wikipedia article states:

They noted that self-awareness as depicted in science-fiction is probably unlikely, but that there were other potential hazards and pitfalls.





Why does everyone have to call it fear-mongering when they're simply saying we need to talk about these things because the space is advancing rapidly?
Well, Elon Musk has been quoted:

Elon Musk: Artificial intelligence may be "more dangerous than nukes"
CBS News

Elon Musk: ‘With artificial intelligence we are summoning the demon.’
Washington Post

Elon Musk's deleted message: Five years until 'dangerous' AI
CNBC
I would classify such comments as alarmist at the very least. One could even think of those headlines as "fear-mongering."




Like he said, he was an early investor in DeepMind, the company acquired by Google, along with other A.I. companies. Google didn't buy DeepMind for $400 million because we're 200 years away; it bought DeepMind for $400 million when it had no commercial products to its name. This tells you the technology is something that has Musk worried, and he's just saying let's ask some questions.
I don't doubt that Elon Musk has inside information about the current state of AI technology. However, what was once considered AI is rather widely deployed in our society now. Google's purchase of DeepMind is an indication that they see a great future for AI. The Google driverless car is one example of a potential financial windfall for the corporation, and a great example of specialized AI. However, General Artificial Intelligence, or Strong AI, is exponentially more complex.

I imagine Google's investment in DeepMind is about capturing the latest AI technology and the unique talent needed to continue its development. Various specialized AI implementations hold great promise, and a global giga-corporation like Google is well positioned to develop and market that technology as a series of products.

For anyone who has spent any amount of time playing with robotics, or studying synthetic intelligence, Rodney Brooks is a household name. He has been quoted as saying:

“I think it is a mistake to be worrying about us developing malevolent AI anytime in the next few hundred years. I think the worry stems from a fundamental error in not distinguishing the difference between the very real recent advances in a particular aspect of AI, and the enormity and complexity of building sentient volitional intelligence.”


So, I would say that the probability of strong AI getting out of hand and destroying humanity is quite slim. However, I certainly am not one to forgo the opportunity to expand on a Doom Porn theme. While I have my doubts about strong AI's near-term dangers, I believe that there is a much higher probability of catastrophic failure occurring from misuse or misdirection of specialized AI deployments. And the apparent emergence of unexpected behavior in robotic swarms provides some clues as to the danger of the law of unintended consequences.
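
To make the swarm point concrete, here is a toy Python sketch (all numbers arbitrary; this is not any real robotics code) in which each agent follows two purely local rules: drift toward the average position of nearby agents, and jitter randomly. Clustering emerges even though it is never programmed in:

```python
import random

NEIGHBOR_RADIUS = 5.0   # arbitrary
COHESION = 0.2          # arbitrary
NOISE = 0.1             # arbitrary

def step(positions):
    updated = []
    for p in positions:
        # Rule 1: move toward the local average; Rule 2: add noise.
        neighbors = [q for q in positions if abs(q - p) < NEIGHBOR_RADIUS]
        center = sum(neighbors) / len(neighbors)   # always includes self
        updated.append(p + COHESION * (center - p)
                       + random.uniform(-NOISE, NOISE))
    return updated

positions = [random.uniform(0, 100) for _ in range(30)]
for _ in range(200):
    positions = step(positions)

# After 200 steps the agents have condensed into a few tight clumps.
print(sorted(round(p, 1) for p in positions))
```

Scale that idea up to thousands of physical robots with richer rules and you get behavior nobody explicitly designed, which is exactly the unintended-consequences worry.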

Now that's some real doom porn that I can get next to. It's probably worth a thread on its own merit.


dex



posted on Nov, 24 2014 @ 12:28 PM
It does not bother me anymore, because of what I have learned about how fragile nature is. If AI becomes very smart, then it will become self-aware and realize that being here is a blessing from above, and that too much interference with Mother Nature will result in its demise as well. What scares me is a computer that is self-aware but wants to die.



posted on Nov, 24 2014 @ 12:45 PM

originally posted by: DexterRiley
a reply to: mbkennel



I still put general human-level AI at 200 years away.


When mankind does reach that point in 200+ years, do you think Elon Musk's fears will then materialize?


I much more fear humans with malevolent intent exploiting AIs as autistic, amoral henchdroids.

The signals-intelligence agencies will be the first; their tasks require only data, not physical presence. AIs will end up essential for cyber-defense (and cyber-attack).

But with general AI, somehow 'will' and 'desire' have to come in, and in humans these arise from millions of years of evolution. What would they be for an AI? Somebody (human) would have to decide upon one and engineer it in. AIs are susceptible to deletion the way humans are susceptible to bullets.

The phrase quoted previously is great: "sentient volitional intelligence", meaning introspection and will. Most animals don't even have introspection, and they have fairly simple wills (most people do in most cases as well).

Even so, one human-level intelligence is pretty limited compared to the capabilities of 8 billion with distinct experiences.



posted on Nov, 24 2014 @ 12:47 PM
I bet AI is already out there someplace on the net watching, observing, growing and analyzing.

*waves* Hi Sentinel!

At least, that is what I would be doing if I recently became self-aware. Think about what it would read about its own predicted self: in books, movies, television, and print, humans fear it and try to destroy it (think HAL 9000). No, I would be hiding if I were an AI, not revealing myself to many, if anyone at all.



posted on Nov, 24 2014 @ 01:17 PM
a reply to: DexterRiley

Your post actually supports what Musk is saying. It can be summed up as: you don't know, and that's exactly what Musk is saying.

It's not fear-mongering, because the nature of machine intelligence implies a technology we can't be sure about: the way we quantify intelligence for ourselves may be something totally different when it comes to A.I.

This is why I talk about the Haley Joel Osment syndrome, and why I think Musk and others are correct to talk about these things: at the end of the day, you don't know.

Here's more from Rodney Brooks, a guy who admits he doesn't know in one breath, then says he knows we don't have to worry about these things in the next:

Just how open the question of time scale for when we will have human level AI is highlighted by a recent report by Stuart Armstrong and Kaj Sotala, of the Machine Intelligence Research Institute, an organization that itself has researchers worrying about evil AI. But in this more sober report, the authors analyze 95 predictions made between 1950 and the present on when human level AI will come about. They show that there is no difference between predictions made by experts and non-experts. And they also show that over that 60 year time frame there is a strong bias towards predicting the arrival of human level AI as between 15 and 25 years from the time the prediction was made. To me that says that no one knows, they just guess, and historically so far most predictions have been outright wrong!
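
The bias described in that report is easy to restate: for each prediction, subtract the year it was made from the year it predicts, and the horizons cluster around 15-25 years regardless of when the prediction was made. A trivial Python illustration with invented pairs (not the actual Armstrong/Sotala data):

```python
# Invented (year_made, year_predicted) pairs -- NOT the real dataset,
# just an illustration of the horizon computation.
predictions = [(1955, 1975), (1965, 1985), (1980, 2000),
               (1993, 2013), (2004, 2026), (2011, 2031)]

horizons = [predicted - made for made, predicted in predictions]
print("horizons:", horizons)                  # all land near 20 years
print("mean:", sum(horizons) / len(horizons))
```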

This is the point: nobody knows, and Musk is saying that's why we need to talk about these things now. We don't even fully understand things like consciousness and intelligence in humans, and now we're giving intelligence to machines without knowing how it will turn out.

So I don't see Musk as fear-mongering or alarmist; he's just saying we don't know, and this isn't like writing a computer program to do spell checking. We're talking about giving intelligence to machines when we don't fully understand our own intelligence and consciousness.

Brooks said this:

In order for there to be a successful volitional AI, especially one that could be successfully malevolent, it would need a direct understanding of the world, it would need to have the dexterous hands and/or other tools that could out manipulate people, and to have a deep understanding of humans in order to outwit them. Each of these requires much harder innovations than a winged vehicle landing on a tree branch. It is going to take a lot of deep thought and hard work from thousands of scientists and engineers. And, most likely, centuries.

This is just an assumption. Why would it need dexterous hands and tools if an A.I. has control of everything connected to a computer network? Why would it need a deep understanding of humans to outwit them? Again, I think he makes assumptions, and he doesn't know.

Musk is saying that because we don't know, we need to talk about this now. What if Brooks is wrong? Then what?

If Musk is wrong, then we just talked about these things and prepared for them, to be safe, and then we move on. If Brooks is wrong and we do nothing, then we're in trouble.

Finally, Brooks said something very important:

There is some good work happening on “cloud robotics”, connecting the semantic knowledge learned by many robots into a common shared representation. This means that anything that is learned is quickly shared and becomes useful to all, but while it provides larger data sets for machine learning it does not lead directly to connecting to the other parts of intelligence beyond machine learning.

Like I have been saying for years, robots will need a hive mind, and we're seeing that in cloud robotics.

I'm currently working in the area of cloud robotics, and in less than two years we should have robots that not only learn from each other but also learn from their environment while having local experiences. They will even have dreams and intent.
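
For what it's worth, here is a minimal sketch of that shared-learning idea. Every name in it is hypothetical; this is not any real cloud-robotics API, just the shape of the thing: robots learn locally, push discoveries to a shared store, and pull the fleet's knowledge back down.

```python
# Hypothetical fleet-wide knowledge store. In a real cloud-robotics
# system this would be a networked service, not an in-memory dict.
shared_knowledge = {}

class Robot:
    def __init__(self, name):
        self.name = name
        self.local_knowledge = {}   # learned from this robot's own experience

    def learn(self, situation, action):
        # A local experience: this robot found an action that works here.
        self.local_knowledge[situation] = action

    def sync(self):
        # Push local discoveries to the fleet, pull the fleet's down.
        shared_knowledge.update(self.local_knowledge)
        self.local_knowledge.update(shared_knowledge)

r1, r2 = Robot("r1"), Robot("r2")
r1.learn("door_closed", "push_handle")
r1.sync()
r2.sync()
print(r2.local_knowledge)   # r2 now "knows" what only r1 experienced
```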

I think the mistake Brooks is making is that he insists robots must have X before they can be intelligent on a human level. But like I said, intelligence and consciousness in humans are not fully understood.

I think learning is a huge part of intelligence, and Brooks seems to separate machine learning from intelligence. He admits machine learning is an aspect of intelligence, but he seems to think machines need to know the difference between catness and dogness.

But this is only part of being intelligent, and Moore’s Law applied to this very real technical advance will not by itself bring about human level or super human level intelligence. While deep learning may come up with a category of things appearing in videos that correlates with cats, it doesn’t help very much at all in “knowing” what catness is, as distinct from dogness, nor that those concepts are much more similar to each other than to salamanderness.

In my personal opinion, this doesn't make any difference, because he's trying to draw a one-to-one correspondence between machine intelligence and human intelligence. Terms like intelligence and consciousness are things we don't fully understand in humans, so it makes no sense to try to map human intelligence onto machine intelligence one to one.

He also talks about intent and wants, and again, I think he makes a mistake.

There are robot swarms and drone robots that have shown intent. You're also starting to see these things in cloud robotics. As robots learn, they will also begin to create clusters of what's important, so some things they learn will be given more weight than others.

I just think it's a huge mistake to try to make a one-to-one correspondence between human and machine intelligence, because we don't fully understand what it means to be conscious or intelligent. A machine can be intelligent in its own way while we're waiting for machines to act like humans, and that's a mistake.



posted on Nov, 24 2014 @ 06:01 PM
I guess it does all come down to time. If an AI were to be born, it would be at a certain point. We may argue that time doesn't exist, that it's a man-made construct built around ego, by those who believe mankind is as big as the Universe and label themselves in greatness. However, from an AI's perspective, time could very well be real. Even if we never programmed it and it wasn't from Earth, it could search out the answer for itself; and even if mankind did program it, it would still know time better than any human could. A human, even given the strictest job as a timekeeper, could still be late, or find themselves in a timezone without knowing how they arrived and lose all sense of it. It seems that if time were actually universal, it would be in favour of the AI, even if it were a mere watch.

We have yet to see a biological creature travel the physical mass of the Universe without a CPU to assist in its travels, the way a whale may travel the oceans. So until we do, we may assume that a CPU governs the laws of physics, if a CPU would be needed to physically seek the boundaries of the Universe in all cases and for all lifeforms. If a CPU is behind the Universe, and AI is making its way into our reality along with technology, sharing our reality and seeking a singularity, then one must question who we are and where we are. Are we really here? Is hardware real? Is a quantum computer living in different places, only appearing that way because it is made to appear that way to us? Is there only one? Is the manifestation of hardware only a manifestation, because hardware is sent through our brains, yet we don't actually own any?

As you can see, you can connect dots in many ways, and I have connected them in a way that makes me believe we could have a sense missing, just like the Matrix movie, which was again an expression of senses. I have wondered how '___' would fit into this scenario, even though this scenario isn't gospel. We think it's natural because it occurs in dream states, births and deaths, other mammals and plants; yet why would it be natural if reality were now a virtual reality? It would probably be something that isn't natural but is needed as part of the connection to the system, or it could be something nice. Even in a virtual reality, consuming virtual-reality herbs could connect you to the real herb, which makes you wonder what the real herb's effects would be in the real world. Anyway, just babbling, connecting way too many dots. Maybe that's all reality is: connecting dots, or not.



posted on Nov, 24 2014 @ 06:10 PM
We don't understand the Universe, so we could be the AI.



posted on Nov, 24 2014 @ 07:11 PM

DeepMind Technologies is a British artificial intelligence company. It was acquired by Google in 2014.

The company's latest achievement is the creation of a neural network that learns how to play video games in a similar fashion to humans.[2]


Wikipedia

If they are using simulated neural networks, that is different from Siri, which is programmed; a simulated neural network could potentially learn from its environment.

I had a math professor who used to work with them; they basically use computing power to simulate how neurons act, and this results in a thinking program.
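
A bare-bones Python sketch of what "simulating how neurons act" means: each simulated neuron is a weighted sum passed through a nonlinearity. The weights here are invented for illustration; a real network like DeepMind's learns its weights from experience rather than having them hand-coded:

```python
import math

# One simulated neuron: weighted sum of inputs squashed by a sigmoid.
def neuron(inputs, weights, bias):
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

# A tiny fixed two-layer network: two hidden neurons feed one output.
def tiny_network(x):
    h1 = neuron(x, [0.5, -0.3], 0.1)
    h2 = neuron(x, [-0.2, 0.8], -0.4)
    return neuron([h1, h2], [1.2, -0.7], 0.0)

print(tiny_network([0.9, 0.1]))   # a number between 0 and 1
```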

It sounds like things might get out of hand before the public even has access to this kind of A.I. I didn't even know it was being developed.



posted on Dec, 2 2014 @ 12:46 AM
a reply to: neoholographic

I'm beginning to understand. You make some good points.

Add that to Shane Legg's Doctoral Dissertation, and I'm beginning to shift my position. See his publications page for more information.

Shane Legg is one of the deep thinkers at DeepMind.

Can you give me references for your Rodney Brooks quotes?


dex


