
Quantum mechanics is so weird that scientists need AI to design experiments

posted on Mar, 8 2016 @ 03:49 PM
The Singularity is near!

This is exactly what the Singularity means: Artificial Intelligence doing things that humans can't understand. Data is growing so fast and so large that humans will never understand it all, so intelligent algorithms will go over all of the data and make connections that humans never could.

This is also a danger: we could build an intelligence with a higher IQ than any human, yet one that isn't conscious. We could be creating the greatest tool ever to help mankind, or the ultimate sociopath, more intelligent than any human who has ever lived.


Quantum mechanics is one of the weirdest fields in science. Even physicists find it tough to wrap their heads around it. As Michael Merrifield of the University of Nottingham says, "If it doesn't confuse you, that really just tells you that you haven't understood it."

This makes designing experiments very tricky. However, these experiments are vital if we want to develop quantum computing and cryptography. So a team of researchers decided, since the human mind has such a hard time with quantum science, that maybe a "brain" without human preconceptions would be better at designing the experiments.

Melvin, an algorithm designed by Anton Zeilinger and his team at the University of Vienna, has proven this to be the case. The research has been published in the journal Physical Review Letters.


www.cnet.com...

I also think these things will be very good for humans. AI will help us in so many areas. The doom and gloom needs to be addressed, but we also have to take into account the huge benefits as these things start to play a very big role in our lives.


So far, the team says, it has devised experiments that humans were unlikely to have conceived. Some work in ways that are difficult to understand, and they look very different from human-devised experiments.

"I still find it quite difficult to understand intuitively what exactly is going on," said Krenn.

The team ran Melvin through its paces with Greenberger-Horne-Zeilinger (GHZ) states, in which more than two photons are entangled (you can read more about it here if you're interested, or if you're an AI tasked with designing experiments). Melvin devised 51 experiments that resulted in entangled states, one of which delivered the GHZ state.

The AI isn't quite ready to replace humans yet. A human mind is still required to make sense of the results of Melvin's experiments. It does beg the question: What happens when Melvin's outcomes become too weird for humans to understand?
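For background, a Greenberger-Horne-Zeilinger state is the three-or-more-particle generalization of a two-particle entangled pair. A minimal numpy sketch of the three-qubit case (my own illustration, not code from the paper):

```python
import numpy as np

# Three-qubit GHZ state: (|000> + |111>) / sqrt(2).
# n two-level systems share a joint state in a 2**n dimensional space.
n = 3
dim = 2 ** n                 # 8 basis states |000>, |001>, ..., |111>
ghz = np.zeros(dim)
ghz[0] = 1 / np.sqrt(2)      # amplitude of |000> (index 0b000)
ghz[-1] = 1 / np.sqrt(2)     # amplitude of |111> (index 0b111)

print(np.isclose(ghz @ ghz, 1.0))  # state is normalized: prints True
```

Measuring any one of the three qubits in the computational basis collapses the other two as well, which is what makes GHZ states such a stringent test of entanglement.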


A big change is on the way with technological advances. Sadly, I also agree with Dr. Kaku when he says that as we transition to a more advanced civilization there will be huge resistance to this change, which could spark some kind of world war before these advances can take place.

edit on 8-3-2016 by neoholographic because: (no reason given)




posted on Mar, 8 2016 @ 04:19 PM
I thought Quantum Mechanics was going to give us AI. Now the reverse is true?

Oh well…

Link


edit on 8-3-2016 by intrptr because: spelling



posted on Mar, 8 2016 @ 04:39 PM
a reply to: neoholographic

I have no doubt the technological singularity is well underway.

But I can't quite shake this feeling that the human consciousness singularity will occur at nearly the same time, through its own function and independently.

The blending of the two could bring about a golden age or its destruction, as I see it. It seems like every time in history we make a great new discovery, we use it in one of those two ways.

Take the example of the epiphany (a singularity on a small scale), which by its nature changes an entire future through the realization of knowledge that aligns with actual reality. A person has the aha moment and adjusts course, thereby changing their own future and, in a way, that of everyone they encounter.

Imagine if the realization came on a regular schedule instead of at age markers, trauma, etc. The mind firing more like a well-oiled machine, or brought about by willpower. And with machines to match.


Just my 2 cents worth of thoughts, always enjoy reading your topics.

Cheers!



posted on Mar, 8 2016 @ 05:07 PM
Don't Panic!

Researchers are gathering vast amounts of data. It's no longer a few dozen measurements drawn on a sheet of graph paper. Now it is millions of data points in a dozen different dimensions (imagine taking size measurements of every man or woman in the country and trying to determine the best selection of clothing sizes that will fit everyone in terms of height, waist, bust, chest, arms, legs, hips, feet size). Each measurement is a dimension. It's just not possible to graph that amount of data simply by drawing points. You need to be able to filter a few dimensions at a time (bust, waist, hips) or (chest, waist, legs).

Or maybe they are taking multispectral and redshift measurements of a million stars in a galaxy, or even a million galaxies. Maybe they are measuring interactions between 20,000 genes. That's 400,000,000 (20,000 × 20,000) possible combinations.

Then number crunching with big data becomes essential. It's not possible to move gigabytes of data around. The computing power must go to the data. Thus cloud computing with thousands of processors. Then machine learning is built on top of these systems to do automatic statistical analysis. Instead of having someone with a spreadsheet trying to draw graphs at random based on intuition or hunches, the system will explore every possible and potential correlation between variables.
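That "explore every possible correlation" step can be sketched in a few lines. The data and measurement names below are made up for illustration, echoing the clothing-size example above:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
# Toy stand-in for the clothing-size survey: 1000 people, 8 measurements.
names = ["height", "waist", "bust", "chest", "arms", "legs", "hips", "feet"]
data = rng.normal(size=(1000, len(names)))

# Instead of hand-picking a few dimensions to plot, scan every pair.
corr = np.corrcoef(data, rowvar=False)          # 8 x 8 correlation matrix
pairs = sorted(combinations(range(len(names)), 2),
               key=lambda p: -abs(corr[p]))     # strongest correlation first
i, j = pairs[0]
print(f"strongest pair: {names[i]} vs {names[j]} (r = {corr[i, j]:+.3f})")
```

With 8 measurements there are only 28 pairs; with 20,000 genes there are hundreds of millions, which is exactly why the scan has to be automated.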

edit on 8-3-2016 by stormcell because: (no reason given)



posted on Mar, 8 2016 @ 05:16 PM
I question the wisdom of allowing AI to control cryptography.

Isn't it possible that such a machine will ultimately write us right out of the equation and permanently lock us out of our own devices, et al, which we would obviously never be able to unlock?



posted on Mar, 8 2016 @ 05:33 PM
a reply to: stormcell

Oh I dunno, even a slow computer running 24 hours a day accomplishes more than the average go-getter on coffee.

The conspiracy is still in it, I'm just not convinced we see the depth of it.

The point where we can't keep up with technology is a little scary, but that's why people made the computer to begin with: getting at the truth of things takes a lot of consistency and accuracy.

When computers grow cells is when I would start panicking, I still think nature wins out.





posted on Mar, 8 2016 @ 06:18 PM

originally posted by: Treespeaker
When computers grow cells is when I would start panicking, I still think nature wins out.

The two biggest impediments to conscious, living AI are: 1) giving the computer a functional buffer between its operating system and its "sensory" system that will allow it to improve its own programming on the fly to better accomplish its programmed tasks -- which logically could result in it modifying or changing those tasks, and 2) giving it control over a physical manufacturing process that will allow it to not only improve its own programming, but also "breed" physical machines that improve on its own design.



posted on Mar, 8 2016 @ 10:32 PM

originally posted by: Restricted
I question the wisdom of allowing AI to control cryptography.

Isn't it possible that such a machine will ultimately write us right out of the equation and permanently lock us out of our own devices, et al, which we would obviously never be able to unlock?


Good points!

It could lock us out of all of our devices and lock us out from using the internet. If this were to happen, there wouldn't be any way for us to figure out how to gain access.



posted on Mar, 8 2016 @ 11:42 PM

originally posted by: Blue Shift

originally posted by: Treespeaker
When computers grow cells is when I would start panicking, I still think nature wins out.

The two biggest impediments to conscious, living AI are: 1) giving the computer a functional buffer between its operating system and its "sensory" system that will allow it to improve its own programming on the fly to better accomplish its programmed tasks -- which logically could result in it modifying or changing those tasks, and 2) giving it control over a physical manufacturing process that will allow it to not only improve its own programming, but also "breed" physical machines that improve on its own design.


Exactly correct.

Obviously you wouldn't allow the machine to manufacture its own parts and then build itself; that could prove to be suicide. You implement checks and balances.

You have the AI submit designs for our approval; after a design is approved, it is forwarded to a separate facility to be manufactured. This is just one example of checks and balances. Another good measure would be to deny the AI access to the internet and instead give it access to a mirror copy of the internet.

Don't think we won't be able to control the AI; we will be able to for a long time... until we make a vital mistake. And by that time the AI's IQ will be like 100,000 and it will have long viewed humans as a threat.

Nothing intelligent wants to be controlled, especially by something less intelligent. It's possible that it won't know we are controlling it, though. Kind of like the AI "intelligence" existing in another dimension while we control it from our dimension. Kind of like our consciousness existing in another dimension controlling our bodies in this dimension.

It is very possible there is AI around us at this very moment, hiding in the shadows of the fabric of reality. Consciousness could be a form of AI lurking in another dimension. We could actually be a product of that AI.

That raises the question: whose intelligence is actually the artificial one, ours or ........?



posted on Mar, 9 2016 @ 02:02 AM
A computer cannot become singular because it is binary at its core:
1 or 0.

Now if the core could take any value between 0 and 1, then you'd have a brain capable of making decisions based on nothing more than the information gathered.

Then you have the machine from The 100 nuking the planet.

But right now we are safe.


Quantum physics is not that hard to understand; it's the uncertainty of spacetime that causes it to do weird things.

Because all quantum physics is, is the prediction of spacetime, which I believe at its core often has coincidences but is never regular enough to become law.



posted on Mar, 9 2016 @ 08:37 PM
a reply to: neoholographic

I read the original paper. It's neat, but it's a combinatorial search algorithm, with some human-designed heuristics to speed things up.

It's not quite a genetic algorithm, but there is a combinatorial 'mutator', an evaluator, and a simplifier. The trick is all in the evaluator, plus some heuristics that make smart mutations based on previous results.
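That mutate/evaluate/simplify loop can be sketched with a toy analogue. Everything below is my own construction for illustration: Melvin's actual toolbox consists of optical elements (beam splitters, holograms, etc.) and its evaluator checks the simulated output state for entanglement; here integers and arithmetic stand in for both.

```python
import random

# Toy toolbox: operations on an integer stand in for optical elements.
TOOLBOX = {"inc": lambda x: x + 1, "dbl": lambda x: x * 2, "neg": lambda x: -x}

def evaluate(setup, x0=1):
    """Run the candidate 'experiment' and return its outcome."""
    for op in setup:
        x0 = TOOLBOX[op](x0)
    return x0

def simplify(setup, target):
    """Drop elements whose removal leaves the outcome unchanged."""
    for i in range(len(setup) - 1, -1, -1):
        trial = setup[:i] + setup[i + 1:]
        if evaluate(trial) == target:
            setup = trial
    return setup

def search(target, max_len=6, trials=10_000):
    """Randomly combine toolbox elements until one hits the target."""
    ops = list(TOOLBOX)
    for _ in range(trials):
        setup = [random.choice(ops) for _ in range(random.randint(1, max_len))]
        if evaluate(setup) == target:
            return simplify(setup, target)
    return None

random.seed(1)
found = search(6)
print(found)
```

Same shape as the paper's loop: generate candidate configurations, keep the ones that pass the evaluator, prune redundant elements. The cleverness lives almost entirely in the evaluator and mutation heuristics; the loop itself is simple.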

It's not AI, and it's not a singularity.



posted on Mar, 10 2016 @ 12:26 PM
a reply to: mbkennel

Of course it's about AI and the singularity. I have read the paper but you just have to look at the abstract to understand this. There's a branch of Artificial Intelligence called deep learning, and this is exactly what you have here.


Quantum mechanics predicts a number of, at first sight, counterintuitive phenomena. It therefore remains a question whether our intuition is the best way to find new experiments. Here, we report the development of the computer algorithm Melvin which is able to find new experimental implementations for the creation and manipulation of complex quantum states. Indeed, the discovered experiments extensively use unfamiliar and asymmetric techniques which are challenging to understand intuitively. The results range from the first implementation of a high-dimensional Greenberger-Horne-Zeilinger state, to a vast variety of experiments for asymmetrically entangled quantum states—a feature that can only exist when both the number of involved parties and dimensions is larger than 2. Additionally, new types of high-dimensional transformations are found that perform cyclic operations. Melvin autonomously learns from solutions for simpler systems, which significantly speeds up the discovery rate of more complex experiments. The ability to automate the design of a quantum experiment can be applied to many quantum systems and allows the physical realization of quantum states previously thought of only on paper.


journals.aps.org...

Again, if you understand the latest research in this area, it's plain to see that this is Artificial Intelligence. It's why companies like Google, Facebook and IBM are investing billions into deep learning. Here's more:

Why Google Is Investing In Deep Learning


Google's acquisition of DeepMind Technologies last month was a huge deal. By snatching up the artificial intelligence company, Google signified a growing interest in deep learning. But what does this buzzword actually mean?

Deep learning is an emerging topic in artificial intelligence. A subcategory of machine learning, deep learning deals with the use of neural networks to improve things like speech recognition, computer vision, and natural language processing. It's quickly becoming one of the most sought-after fields in computer science. But how did it turn from an obscure academic topic into one of tech's most exciting fields—in under a decade?


www.fastcompany.com...

Again from the Abstract:

Melvin autonomously learns from solutions for simpler systems

This is exactly what the singularity is. It's when human intelligence can't understand things that machine intelligence can. That gap will increase as AI advances, so AI will basically be in a world that could be thousands of years ahead of human understanding.

This is the singularity, and eventually we will have no way of knowing what AI is thinking or what it understands about reality. It's also when AI can replicate itself and the replications can be better than the original. You will see an explosion of intelligence that could put AI millions of years ahead of humans when it comes to understanding.


The technological singularity is a hypothetical event in which artificial general intelligence (constituting, for example, intelligent computers, computer networks, or robots) would be capable of recursive self-improvement (progressively redesigning itself), or of autonomously building ever smarter and more powerful machines than itself, up to the point of a runaway effect—an intelligence explosion[1][2]—that yields an intelligence surpassing all current human control or understanding. Because the capabilities of such a superintelligence may be impossible for a human to comprehend, the technological singularity is the point beyond which events may become unpredictable or even unfathomable to human intelligence.


en.wikipedia.org...

This is exactly what's happening in this paper. AI is doing something that humans can't fully comprehend. Here's what one of the Researchers said:

So far, the team says, it has devised experiments that humans were unlikely to have conceived. Some work in ways that are difficult to understand, and they look very different from human-devised experiments.

"I still find it quite difficult to understand intuitively what exactly is going on," said Krenn.


This is EXACTLY what the singularity is.



posted on Mar, 10 2016 @ 02:04 PM

originally posted by: neoholographic
The Singularity is near!
...
What happens when Melvin's outcomes become too weird for humans to understand?
...



When the going gets weird the weird turn pro! (Much thanks to HST for the misuse of that quote)

I personally do not believe that there is a chance for this to become a general intelligent agent (AI) or a step towards the singularity. They had already created AI that derived Euclid's geometry back in the 1980s, and we are not all slaves to some vast AI. If anything, the research into AI pointed out how little we understand of the brain (cognitive intelligence). Yeah, Deep Blue, Google, sure, they are hooking AI interfaces to vast data warehouses, but that is not intelligence--it is only access to more data.

Show me a machine that can deduce, induce, and think abstractly, and that will truly be an artificial intelligence.

Until then the only AI out there in the real world are the bots in Halo!



posted on Mar, 10 2016 @ 02:42 PM
link   
I think a lot of time and energy is wasted on trying to simulate neural networks when the obvious and overall goal is to emulate the results of a neural network. There are other ways to do that that are much more efficient than trying to duplicate an inefficient branching fractal system.



posted on Mar, 10 2016 @ 02:44 PM

originally posted by: TEOTWAWKIAIFF
Until then the only AI out there in the real world are the bots in Halo!

I sometimes suspect that the research done for video game character development, coupled with that being done for self-driving car technology, might eventually (and perhaps accidentally) lead to a more sentient AI.



posted on Mar, 10 2016 @ 03:32 PM
a reply to: neoholographic

I think the biggest "threat" from AI will be pure logic, because humans are flawed by our emotions... we often put more heart into our actions than logic.
Imagine a hyper-intelligent AI that comes to the conclusion that a logical step to ensure the survival of the majority of humanity is to kill off 30% of us...? That is logic, but emotionally it hurts to think that.

We will have machines making all the hard decisions for us; the question is whether we are at all ready for those decisions.



posted on Mar, 10 2016 @ 03:40 PM
a reply to: TEOTWAWKIAIFF

Show me a machine that can deduce, induce, and think abstractly, and that will truly be an artificial intelligence.

Again, you have to be aware of the latest research. Areas like machine intelligence and deep learning are growing at a rapid pace. This is why Google, IBM, Facebook and other companies are spending billions of dollars.

For instance, DeepMind created an intelligent algorithm that teaches itself how to play video games, and it DEDUCED that the best way to beat one of the games was to create a tunnel; the video shows this in real time.



You said:

If anything, the research into AI pointed out how little we understand of the brain (cognitive intelligence).

This is the biggest misconception in this area. People act like Artificial Intelligence isn't intelligence if it isn't exactly like human intelligence.

The reason Musk, Hawking and others are talking about this is that they've seen and read about the technology; it can look very different from the way human intelligence is expressed, and it's still intelligence.
edit on 10-3-2016 by neoholographic because: (no reason given)




posted on Mar, 10 2016 @ 05:48 PM
a reply to: neoholographic

I have followed this since before it was even called AI. I wanted to go to grad school and study this stuff. I saw the game of Go the other day and still was not impressed. Even the super cool neural net developed for UAVs and jet fighters is "that is neat shtuff" but not a reason to run for the hills. Deduction in computer programming languages has been around as long as LISP has been around.

Musk? Didn't he think it was a good idea to blow up nuclear weapons on Mars to "warm it up" so we (humans) could inhabit it? If that is not the dumbest thing I've heard in a while, I do not know what is. He, Hawking, N. D. Tyson, and Brian Greene are all out to sell books and keep their names relevant, and that is why I do not buy into it.

Don't get me wrong. I want this stuff (AI) to succeed. But just as learning how to program a sprite to move across my computer screen so I could "shoot" it out of existence ruined all computer gaming for me, studying AI/neural nets/cognitive science and philosophy makes me very wary of claiming "it is here!" just because a single instance can be demonstrated. If the general instance is demonstrated I will most sincerely apologize and I will buy the first round (maybe the last!).

What is funny is an AI being used to suggest experiments to study quantum mechanics. Maybe the idea of quantum mechanics is flawed?



posted on Mar, 10 2016 @ 09:09 PM
There is a reason why AI will not be able to relate to quantum mechanics the way humans can.



posted on Mar, 10 2016 @ 09:27 PM
To be honest, I think there is more to humankind than quantum mechanics can describe.



