Quantum mechanics is one of the weirdest fields in science. Even physicists find it tough to wrap their heads around. As Michael Merrifield of the University of Nottingham says, "If it doesn't confuse you, that really just tells you that you haven't understood it."
This makes designing experiments very tricky. However, these experiments are vital if we want to develop quantum computing and cryptography. So a team of researchers decided, since the human mind has such a hard time with quantum science, that maybe a "brain" without human preconceptions would be better at designing the experiments.
Melvin, an algorithm developed by Mario Krenn, Anton Zeilinger, and their colleagues at the University of Vienna, appears to bear this out. The research has been published in the journal Physical Review Letters.
So far, the team says, Melvin has devised experiments that humans would have been unlikely to conceive, some of which work in ways that are difficult to understand. They look very different from human-designed experiments.
"I still find it quite difficult to understand intuitively what exactly is going on," said Krenn.
The team ran Melvin through its paces with Greenberger-Horne-Zeilinger (GHZ) states, in which more than two photons are entangled (you can read more about them here if you're interested, or if you're an AI tasked with designing experiments). Melvin devised 51 experiments that resulted in entangled states, one of which delivered the GHZ state.
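For a concrete picture of the state being discussed, the GHZ state can be written down directly. Below is a minimal numpy sketch (my own illustration, not code from the paper) that builds the generalized GHZ state (|00…0⟩ + |11…1⟩ + … )/√d for n parties of local dimension d; the high-dimensional GHZ state realized via Melvin's design corresponds to three photons in three dimensions.

```python
import numpy as np

def ghz_state(n_parties: int, dim: int) -> np.ndarray:
    """State vector of (|00..0> + |11..1> + ... )/sqrt(dim), with the
    n-party basis states indexed as base-`dim` integers."""
    state = np.zeros(dim ** n_parties)
    for k in range(dim):
        # index of the basis state |k, k, ..., k> in base-dim notation
        idx = sum(k * dim ** i for i in range(n_parties))
        state[idx] = 1.0 / np.sqrt(dim)
    return state

psi = ghz_state(3, 2)   # the familiar three-qubit GHZ state
```

Setting `dim=3` gives the three-party, three-dimensional GHZ state whose first experimental implementation the paper reports.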
The AI isn't quite ready to replace humans yet. A human mind is still required to make sense of the results of Melvin's experiments. But it does raise the question: What happens when Melvin's outcomes become too weird for humans to understand?
originally posted by: Treespeaker
When computers grow cells is when I would start panicking, I still think nature wins out.
originally posted by: Restricted
I question the wisdom of allowing AI to control cryptography.
Isn't it possible that such a machine will ultimately write us right out of the equation and permanently lock us out of our own devices, which we would obviously never be able to unlock?
originally posted by: Blue Shift
originally posted by: Treespeaker
When computers grow cells is when I would start panicking, I still think nature wins out.
The two biggest impediments to conscious, living AI are: 1) giving the computer a functional buffer between its operating system and its "sensory" system that will allow it to improve its own programming on the fly to better accomplish its programmed tasks -- which logically could result in it modifying or changing those tasks, and 2) giving it control over a physical manufacturing process that will allow it to not only improve its own programming, but also "breed" physical machines that improve on its own design.
Quantum mechanics predicts a number of, at first sight, counterintuitive phenomena. It therefore remains a question whether our intuition is the best way to find new experiments. Here, we report the development of the computer algorithm Melvin which is able to find new experimental implementations for the creation and manipulation of complex quantum states. Indeed, the discovered experiments extensively use unfamiliar and asymmetric techniques which are challenging to understand intuitively. The results range from the first implementation of a high-dimensional Greenberger-Horne-Zeilinger state, to a vast variety of experiments for asymmetrically entangled quantum states—a feature that can only exist when both the number of involved parties and dimensions is larger than 2. Additionally, new types of high-dimensional transformations are found that perform cyclic operations. Melvin autonomously learns from solutions for simpler systems, which significantly speeds up the discovery rate of more complex experiments. The ability to automate the design of a quantum experiment can be applied to many quantum systems and allows the physical realization of quantum states previously thought of only on paper.
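The search strategy the abstract describes (assemble a setup from a toolbox of elements, test it, and feed successful setups back into the toolbox as reusable building blocks) can be sketched in a few lines. This is a toy skeleton under my own assumptions, not Melvin itself: the real toolbox contains optical elements such as beam splitters and holograms, and the acceptance test is a full quantum-state simulation, for which a trivial pattern-matching oracle stands in here.

```python
import random

def search(toolbox, is_target, n_trials=20_000, max_len=6, seed=1):
    """Random-search skeleton in the spirit of the abstract: draw a random
    sequence of toolbox elements, keep it if the oracle accepts it, and
    promote accepted setups back into the pool of elements, which is the
    'learning from solutions for simpler systems' step."""
    rng = random.Random(seed)
    pool = list(toolbox)
    solutions = []
    for _ in range(n_trials):
        setup = [rng.choice(pool) for _ in range(rng.randint(1, max_len))]
        if is_target(setup):
            solutions.append(setup)
            pool.append(tuple(setup))  # reusable compound element
    return solutions

# Toy oracle standing in for the quantum simulation: accept any setup
# in which a "BS" is immediately followed by a "HOLO".
def toy_oracle(setup):
    return any(a == "BS" and b == "HOLO" for a, b in zip(setup, setup[1:]))

found = search(["BS", "HOLO", "DP", "MIRROR"], toy_oracle)
```

The promotion step is what makes the search more than brute force: once a useful sub-circuit is found, later trials can place it as a single element, which is how the paper reports speeding up the discovery of more complex experiments.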
Google's acquisition of DeepMind Technologies last month was a huge deal. By snatching up the artificial intelligence company, Google signified a growing interest in deep learning. But what does this buzzword actually mean?
Deep learning is an emerging topic in artificial intelligence. A subcategory of machine learning, deep learning deals with the use of neural networks to improve things like speech recognition, computer vision, and natural language processing. It's quickly becoming one of the most sought-after fields in computer science. But how did it turn from an obscure academic topic into one of tech's most exciting fields—in under a decade?
The technological singularity is a hypothetical event in which artificial general intelligence (constituting, for example, intelligent computers, computer networks, or robots) would be capable of recursive self-improvement (progressively redesigning itself), or of autonomously building ever smarter and more powerful machines than itself, up to the point of a runaway effect—an intelligence explosion[1][2]—that yields an intelligence surpassing all current human control or understanding. Because the capabilities of such a superintelligence may be impossible for a human to comprehend, the technological singularity is the point beyond which events may become unpredictable or even unfathomable to human intelligence.
originally posted by: neoholographic
The Singularity is near!
...
What happens when Melvin's outcomes become too weird for humans to understand?
...
originally posted by: TEOTWAWKIAIFF
Until then the only AI out there in the real world are the bots in Halo!