Hundreds of artificial intelligence experts have urged the Canadian and Australian governments to ban “killer robots”.
An open letter addressed to Australian Prime Minister Malcolm Turnbull has been signed by 122 AI researchers, while an open letter sent to Canadian Prime Minister Justin Trudeau has 216 signatories.
“Delegating life-or-death decisions to machines crosses a fundamental moral line – no matter which side builds or uses them. Playing Russian roulette with the lives of others can never be justified merely on the basis of efficacy. This is not only a fundamental issue of human rights. The decision whether to ban or engage autonomous weapons goes to the core of our humanity.”
“These will be weapons of mass destruction. One programmer will be able to control a whole army. Every other weapon of mass destruction has been banned: chemical weapons, biological weapons, even nuclear weapons. We must add autonomous weapons to the list of weapons that are morally unacceptable to use.”
An open letter authored by five Canadian experts in artificial intelligence research urges the Prime Minister to urgently address the challenge of lethal autonomous weapons (often called “killer robots”) and to take a leading position against Autonomous Weapon Systems on the international stage at the upcoming UN meetings in Geneva.
BIG BLUE IBM has progressed further along the path to quantum computing, having built and tested two new devices well beyond its previous best 5-qubit processor. The first is a freely accessible 16-qubit processor that can be reached through the IBM Cloud. The second, a prototype commercial 17-qubit processor, is 'at least' twice as powerful as what is available to the public on the IBM Cloud today, and will form the core of the first IBM Q early-access systems.
Last week, in a stunning reveal at the 2017 International Conference on Quantum Technologies, held in Moscow, Russia, the co-founder of the Russian Quantum Center and head of the Lukin Group of the Quantum Optics Laboratory at Harvard University, Mikhail Lukin, announced that his team had successfully built a 51-qubit quantum computer.
As we approach the physical limits of Moore’s Law, the need for increasingly fast and efficient information processing isn’t going to end—or even slow. To break this down a bit: Moore’s Law hits its physical limit as transistor sizes shrink toward the quantum realm, where classical physics no longer reliably describes their behavior. As such, developing technology that operates at the quantum scale would not merely allow the linear progression of computing power to continue; it could launch exponential shifts in power and capability.
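To put the "exponential shift" in concrete terms: describing the state of an n-qubit register classically requires tracking 2^n complex amplitudes, so each added qubit doubles the state space. The sketch below (an illustrative calculation, not something from the article) shows why the jump from IBM's 5- and 17-qubit devices to Lukin's 51-qubit machine is so dramatic.

```python
# Illustrative sketch: classical cost of simulating an n-qubit register.
# Each qubit doubles the number of complex amplitudes in the state vector,
# which is the sense in which quantum capability scales exponentially.

def classical_amplitudes(n_qubits: int) -> int:
    """Number of complex amplitudes needed to describe n qubits exactly."""
    return 2 ** n_qubits

def memory_bytes(n_qubits: int, bytes_per_amplitude: int = 16) -> int:
    """Rough memory to store that state vector, assuming one complex
    number takes 16 bytes (two 64-bit floats)."""
    return classical_amplitudes(n_qubits) * bytes_per_amplitude

# Processor sizes mentioned above: IBM's 5-, 16-, 17-qubit chips and
# the 51-qubit machine announced by Lukin's team.
for n in (5, 16, 17, 51):
    print(f"{n:2d} qubits -> {classical_amplitudes(n):>16,d} amplitudes, "
          f"~{memory_bytes(n) / 2**30:,.1f} GiB to simulate")
```

On these assumed figures, 17 qubits fit in a couple of megabytes, while 51 qubits would need tens of petabytes — which is why a 51-qubit device already sits at the edge of what classical supercomputers can simulate.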
originally posted by: jeep3r
a reply to: shawmanfromny
A very relevant thread. I believe the vast majority still underestimates the gravity of what AI really means for our future. It will be awesome in some ways but also risky, if not outright devastating, in others.
It's this other side of the coin that Hawking (and many others) are increasingly addressing. There have been people developing computer viruses, so logically it follows that there will be people abusing AI for evil purposes.
And then there's another intriguing question: if these super-intelligent systems become smart enough to improve their own design, we may eventually not even be able to understand how they evolved into their new state. Will they find a way to deactivate the kill-switch or Asimov's laws?
Self-replicating ultra-intelligent nanobots will be cool, no doubt. But I'm not sure I want to imagine how they can and will be used for malevolent purposes.