We're decades away from being able to develop a sociopathic supercomputer that could enslave mankind, but artificial intelligence experts are already working to stave off the worst when -- not if -- machines become smarter than people.
AI experts around the globe are signing an open letter issued Sunday by the Future of Life Institute that pledges to safely and carefully coordinate progress in the field to ensure it does not grow beyond humanity's control. Signatories include co-founders of DeepMind, the British AI company purchased by Google in January 2014; MIT professors; and experts at some of technology's biggest corporations, including IBM's Watson supercomputer team and Microsoft Research.
Famed physicist Stephen Hawking and Tesla Motors CEO Elon Musk have also voiced their concerns about allowing artificial intelligence to run amok. "One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand," Hawking said in an article he co-wrote in May for The Independent. "Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all."
I think people are afraid of change because it always comes with unknown and often unintended consequences — and some of those will be bad. There is a real, existential risk that post-Singularity AI could take over the world. Once the genie is out of the bottle, there might be no way to put it back. On this topic, I would say while that is possible, I think it’s unlikely. In my opinion, the more intelligent people are, the less they need to resort to violence, the more they perceive abundance and possibility instead of scarcity, and the more they are motivated by actualization and helping others. I think super-intelligent AI is likely to take that path.
Until it's so far above us that physical reality no longer matters, and only pure thought does.