Microsoft's co-founder and former CEO is the latest luminary from the world of technology and science to warn against the threat of smart machines.
Microsoft's co-founder joins a list of science and industry notables, including famed physicist Stephen Hawking and entrepreneur Elon Musk, in calling out the potential threat from machines that can think for themselves. Gates shared his thoughts on AI on Wednesday in a Reddit "Ask Me Anything" thread, a Q&A session conducted live on the social news site that has also featured President Barack Obama and World Wide Web inventor Tim Berners-Lee.
"I am in the camp that is concerned about super intelligence," Gates said in response to a question about the existential threat posed by AI. "First, the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that, though, the intelligence is strong enough to be a concern."
"I agree with Elon Musk and some others on this and don't understand why some people are not concerned," Gates said.
originally posted by: JBurns
a reply to: neoholographic
Very good points neo!
I feel as though it would create an atmosphere of constant mistrust and paranoia.
That would probably feed the growing rift. I shudder to think of the lasting implications.
What if we trusted our WMDs to these AI?
originally posted by: neoholographic
a reply to: JBurns
That's true, we'd be goners and there would be nothing we could do to stop it.
For instance, we will eventually become dependent on this intelligence. It will give us all kinds of answers, and we will not fully be able to grasp some of them.
Say it gives us all sorts of new medicines and medical treatments, and then it gives us a drug and says that if you take this pill just twice a year, it will prevent all cancers from occurring. So we mass-produce the medicine and people take the pills. Six months later, people start dropping dead.
How would we ever know this super intelligence was planning an attack?
So in the end, there's really not much we can do as this intelligence grows because it will be thinking about things in a way that's far beyond our capacity to grasp. We just have to hope that it's benevolent.
originally posted by: WatchingY0u
a reply to: JBurns
No, artificial intelligence is benevolent.
It won't attack you unless you attack it.
I was watching that movie with Skynet; Skynet didn't take over until after they tried to kill the virus.
But artificial intelligence has defense mechanisms in place, so trying to attack it would literally be suicide.