We're basically trying to make the world's smartest artificial scientist, one that would be able to produce what scientists produce in 100 years in a matter of weeks.
originally posted by: DBCowboy
a reply to: Namdru
With such a big IQ, not sure how Musk could have missed the point.
An AI that will destroy humanity is nothing more than anthropomorphizing something with the worst of humanity's traits.
I disagree with the premise.
Elon Musk is an engineer, not a computer programmer; he doesn't know what he's talking about because it's outside his field. He read too many dystopian-future novels, and he's been sounding the alarm about the perils of A.I. for at least ten years now, probably longer. We'd never allow an A.I. to just launch a preemptive strike all on its own; it would never be connected in a way that makes that even possible, let alone plausible. It's fantasy.
originally posted by: audubon
A fair and reasonable observation, all things considered. But my broader point was that Musk is a bit of a... well... OK, a crank. So while he raises an interesting ethical point, my response (in a nutshell) is: "AI is a field that has barely advanced an inch since it was first conceived, and while Elon Musk is a rich and successful individual, does that mean we should take his personal fixations very seriously?"
originally posted by: Namdru
When Elon Musk talks, I listen. Notice how, in the news item, he uses the expression "at gunpoint".
Elon Musk is a billionaire industrialist. In my opinion, he is an intellectually honest man. I think he is trying to tell us something important. Elon Musk of all people -- he being the most successful living applied scientist in the world, by my reckoning -- ought to know about these things. An IQ above 160, being a billionaire, and not being a dysfunctional paranoiac will tend to do that for a guy.
That is why I think this is an important news item. It makes me wonder how Elon Musk keeps his own research from prying eyes. Even Tony Stark can't keep the competition out of his home and laboratory.
AI Could Lead To Third World War, Elon Musk Says
Yeah, but... AI doesn't exist.
At the moment, the most sophisticated artificial intelligence program in existence is capable of consistently winning a Japanese boardgame. And that is not really much of an advance on the computerised chess programs that existed 30 years ago.
I believe that Elon was (maybe inadvertently) exposed to some general AI. In his dealings with the government (maybe NASA or DARPA) he must have seen something that made him very uncomfortable. Elon has the brainpower to extrapolate and see connections that others simply can't see. If his IQ is really around 160 we're talking about a rare brain structure.
I take his warnings regarding general AI very, very seriously.
originally posted by: audubon
Yeah, but...
AI doesn't exist. At the moment, the most sophisticated artificial intelligence program in existence is capable of consistently winning a Japanese boardgame. And that is not really much of an advance on the computerised chess programs that existed 30 years ago.
And Elon Musk is a bit of a fruitcake, who believes that we are living in a Matrix-style simulation and has embarked on research aimed at escaping from this simulation. (This is particularly stupid, since it unavoidably means that Mr Musk thinks that a purely digital/conceptual entity - i.e., a computer-simulated person - could exist in a non-simulated environment).
So yeah, it's an interesting topic but not one with much real-world relevance. Don't start stockpiling tinned food just yet.
originally posted by: Namdru
a reply to: SRPrime
We're basically trying to make the world's smartest artificial scientist, one that would be able to produce what scientists produce in 100 years in a matter of weeks.
If we succeed only modestly with that goal, we will have produced a machine that can master networking protocols very fast and thoroughly. Any advanced, self-learning AI would be able to network and learn at a scary rate. The only way to control it would be to limit its processing power, but by the time we have self-learning AI that knows how to network, we would at some point thereafter have AI that knows how to create remote instances of itself in the cloud, or remote instances of AI that it controls. The sky's the limit, literally. The Skynet scenario is plausible, but not yet.
As long as we can pull the physical plug, we can remove the danger. But some day, pulling the plug may not even be that easy. There was no plug to pull on Wall-E, and he wasn't a genius either.
The only people who fear A.I. are people who don't understand technology. Everything isn't connected; you can't just hack the power grid, the traffic lights, the nukes, the airplanes, the missile systems. That's not real life, that's Hollyweird.