AI Could Lead To Third World War, Elon Musk Says


posted on Sep, 5 2017 @ 05:31 AM
a reply to: SRPrime




We're basically trying to make the world's smartest artificial scientist, one that would be able to produce what scientists produce in 100 years in a matter of weeks.


If we succeed only modestly with that goal, we will have produced a machine that can master networking protocols very fast and thoroughly. Any advanced, self-learning AI would be able to network and learn at a scary rate. The only way to control it would be to limit its processing power, but by the time we have self-learning AI that knows how to network, we would at some point thereafter have AI that knows how to create remote instances of itself, in the cloud, or remote instances of AI that it controls. The sky's the limit, literally. The Skynet scenario is plausible, but not yet.

As long as we can pull the physical plug, we can remove the danger. But some day, even pulling the plug may not be that easy. There was no plug to pull on Wall-E, and he wasn't a genius either.



posted on Sep, 5 2017 @ 06:11 AM
a reply to: Namdru

Look up zombie networks. As soon as a strong AI were unleashed on the internet, there's pretty much no turning back.



posted on Sep, 5 2017 @ 07:38 AM

originally posted by: DBCowboy
a reply to: Namdru

With such a big IQ, not sure how Musk could have missed the point.

An AI that will destroy humanity is nothing more than anthropomorphizing something with the worst of humanity's traits.

I disagree with the premise.


The premise being what, really? That humans are not capable of self-destruction? Or that artificial intelligence is not capable of being just as diabolical as its creators?

Disagreeing with those premises would be equivalent to being ignorant of human nature, IMO.



posted on Sep, 5 2017 @ 07:59 AM
a reply to: SRPrime




Elon Musk is an engineer, he's not a computer programmer; he doesn't know what he's talking about, it's outside of his field. He read too many dystopian future novels; he's been sounding the alarm about the perils of A.I. for at least 10 years now, probably more than that, realistically. We'd never allow an A.I. to just launch a preemptive strike all on its own; it would never be connected in such a way for that to even be possible, let alone plausible. It's fantasy.


I hope you're right about that last point. It's not the timeline that concerns us. It's the inevitability of sentient AI.

Various elements of AI exist already and are only getting more powerful. People in ancient India, and Europeans in the 19th century, were already describing television. Now look: we have live video in our pockets. Sentient AI, or self-learning AI, is just a breath or two away in historical time, if not already a reality. Like growing a slime mold from a bacterial culture, sentience is going to be recognized as a natural feature of self-organizing systems, including digitally based ones like AI, in time. Mark my words.

As a rule of thumb -- and this I heard from someone working in the "field", as it were -- the most privileged and powerful tech is about 20-30 years ahead of the cutting edge of consumer tech. Think about what that means, if it's true. I heard it was true of processing speed and natural-language processing in the 1970s. Now that is a genuine bit of above-top-secret info that would never be disclosed officially. Imagine if the NSA came forward and said, "We had 230MHz+ processing power per core in the mid-'70s and real-time natural-language processing ability".
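
A quick back-of-envelope check of that rule of thumb (just a sketch; the assumption that consumer chips first hit roughly that speed with the 233MHz Pentium MMX in 1997 is mine, not something from the story above):

# Rough sanity check on the "20-30 years ahead" rule of thumb.
# Assumption: consumer CPUs first reached ~230 MHz with the 233 MHz Pentium MMX in 1997.
claimed_secret_year = 1975      # "mid-'70s" per the story
consumer_parity_year = 1997     # assumed consumer reference point
implied_lead = consumer_parity_year - claimed_secret_year
print(f"Implied lead: {implied_lead} years")                                # 22 years
print(f"Inside the claimed 20-30 year window: {20 <= implied_lead <= 30}")  # True

So the numbers in that story are at least internally consistent with the rule, whether or not the story itself is true.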

They wouldn't dare, because imagine what one could extrapolate from that about what "metadata" means now. It means, basically, everything. Hello, Big Brother!

You're wrong about Elon Musk not being a computer programmer. Dude seriously kicks ass as a coder. He's an effing genius, not a "professional".



posted on Sep, 5 2017 @ 11:46 AM

originally posted by: audubon

A fair and reasonable observation, all things considered. But my broader point was that Musk is a bit of a... well... ok, a crank. So while he raises an interesting ethical point, my response (in a nutshell) is: "AI is a field that has barely advanced an inch since it was first conceived, and while Elon Musk is a rich and successful individual, does that mean we should take his personal fixations very seriously?"


In this situation?

I think that absolutely, yes, we should take "his personal fixations seriously." Not because of who he is, or what he is fixated on, but despite those things, since he has a point.

In other words, I think we should examine the topic on its own merits rather than judging it on the personality who is bringing it up. I don't like the argument from authority either way, honestly.

It'd be the same with something like human cloning. Even if Ronald McDonald was the one voicing his concerns, I think we should have some very serious conversations on the topic.

I'm actually in favor of pretty much any advancement. If only because it will be accomplished by someone, somewhere regardless of the "rules." Forcing things underground never has good results. But, perhaps the course of things can be influenced by the wisdom in open, coherent discussions.



posted on Sep, 8 2017 @ 02:00 AM

originally posted by: Namdru
When Elon Musk talks, I listen. Notice how, in the news item, he uses the expression "at gunpoint".

Elon Musk is a billionaire industrialist. In my opinion, he is an intellectually honest man. I think he is trying to tell us something important. Elon Musk of all people -- he being the most successful living applied scientist in the world, by my reckoning -- ought to know about these things. An IQ above 160, being a billionaire, and not being a dysfunctional paranoiac, will tend to do that for a guy.

That is why I think this an important news item. It makes me wonder how Elon Musk keeps his own research from prying eyes. Even Tony Stark can't keep the competition out of his home and laboratory.

AI Could Lead To Third World War, Elon Musk Says



I believe that Elon was (maybe inadvertently) exposed to some general AI. In his dealings with the government (maybe NASA or DARPA) he must have seen something that made him very uncomfortable. Elon has the brainpower to extrapolate and see connections that others simply can't see. If his IQ is really around 160 we're talking about a rare brain structure.
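
For a sense of how rare that would be, here's a rough calculation, assuming the usual IQ convention of mean 100 and standard deviation 15 (the post doesn't actually say which scale is meant):

import math

# Rough rarity estimate, assuming IQ scores ~ Normal(mean=100, sd=15).
mean, sd = 100, 15
iq = 160
z = (iq - mean) / sd                         # 4 standard deviations above the mean
tail = 0.5 * math.erfc(z / math.sqrt(2))     # fraction of people at or above IQ 160
print(f"z-score: {z:.1f}")                   # 4.0
print(f"Roughly 1 in {1 / tail:,.0f} people")   # roughly 1 in 31,574

So an IQ around 160 would put him somewhere in one-in-thirty-thousand territory, for whatever a single test number is worth.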

I take his warnings regarding general AI very, very seriously.



posted on Sep, 8 2017 @ 02:18 AM
a reply to: audubon




Yeah, but... AI doesn't exist.

At the moment, the most sophisticated artificial intelligence program in existence is capable of consistently winning a Japanese boardgame. And that is not really much of an advance on the computerised chess programs that existed 30 years ago.


That isn't entirely accurate.

Don't forget the DIA, ONI (Office of Naval Intelligence), etc.

And most importantly - DARPA.

The tech that exists for military purposes (or other purposes via a link with the military) is in some cases 10+ years ahead of what's out there for the "public" to utilize.

I work with some of it every day.











posted on Sep, 8 2017 @ 06:23 AM
a reply to: AllIsOne


I believe that Elon was (maybe inadvertently) exposed to some general AI. In his dealings with the government (maybe NASA or DARPA) he must have seen something that made him very uncomfortable. Elon has the brainpower to extrapolate and see connections that others simply can't see. If his IQ is really around 160 we're talking about a rare brain structure.

I take his warnings regarding general AI very, very seriously.


Well, I'm glad someone understands my basic point, which is: EM is not necessarily right, but he probably has good reason (i.e., something someone from NASA, DARPA or another acronym-designated agency showed him) to be nervous. That was my point. Most of us are simply not that smart, or well-informed.



posted on Sep, 16 2017 @ 09:24 AM
My IQ is 142 (according to the military), so I'm not up to par with Elon. But just because our computers are wires, circuits, and electricity doesn't mean somebody else in the universe isn't much further along.

I mean, when you really think about it, we are all emotional record players repeating the same tune over and over again until we are played out. What great intellectual capacity do humans really have? What can you make? Can you carve a processor out of a tree branch? When it comes right down to it, we are all pretty limited in our capabilities. A machine of a different sort.

And besides that, I have read a lot of NDEs (Near-Death Experiences), and from "that perspective" it's not too far from the truth. We are what we would call a soul inhabiting a temporary machine. Don't take my word for it, though. Go read a few thousand and get back to me.


originally posted by: audubon
Yeah, but...

AI doesn't exist. At the moment, the most sophisticated artificial intelligence program in existence is capable of consistently winning a Japanese boardgame. And that is not really much of an advance on the computerised chess programs that existed 30 years ago.

And Elon Musk is a bit of a fruitcake, who believes that we are living in a Matrix-style simulation and has embarked on research aimed at escaping from this simulation. (This is particularly stupid, since it unavoidably means that Mr Musk thinks that a purely digital/conceptual entity - i.e., a computer-simulated person - could exist in a non-simulated environment).

So yeah, it's an interesting topic but not one with much real-world relevance. Don't start stockpiling tinned food just yet.



posted on Sep, 16 2017 @ 09:32 AM
Destination: Void, by Frank Herbert (the guy who wrote Dune). I really liked the book, and the story is exactly what you are talking about. To a "T".



originally posted by: Namdru
a reply to: SRPrime




We're basically trying to make the world's smartest artificial scientist, one that would be able to produce what scientists produce in 100 years in a matter of weeks.


If we succeed only modestly with that goal, we will have produced a machine that can master networking protocols very fast and thoroughly. Any advanced, self-learning AI would be able to network and learn at a scary rate. The only way to control it would be to limit its processing power, but by the time we have self-learning AI that knows how to network, we would at some point thereafter have AI that knows how to create remote instances of itself, in the cloud, or remote instances of AI that it controls. The sky's the limit, literally. The Skynet scenario is plausible, but not yet.

As long as we can pull the physical plug, we can remove the danger. But some day, even pulling the plug may not be that easy. There was no plug to pull on Wall-E, and he wasn't a genius either.



posted on Sep, 16 2017 @ 09:40 AM
a reply to: SRPrime




The only people who fear A.I. are people who don't understand technology. Everything isn't connected; you can't just hack the power grid, the traffic lights, the nukes, the airplanes, the missile systems, like -- that's not real life, that's hollyweird.


Really?
A quick search disagrees:

www.bloomberg.com... ers-took-down-a-power-grid

www.wired.com...

www.express.co.uk... ain-hacked-cyber-attacks

And what about those driverless vehicles that were hacked?

If it's connected to the web it's hackable and to pretend otherwise is disingenuous at best.



