Quantum Suicide Boxes and The Creation of Artificial Superintelligences

posted on Jun, 3 2016 @ 01:37 AM
AI (today) is fairly rudimentary. At best, it is used to replicate low-level customer-service experiences for people who can't afford to pay for the real thing. At worst, AI learns through repetition and cannot survive contact with humans on the net, who invariably teach it to "smoke weed" and "love Hitler."

Assumption #1: We already use devices and "apps" that help us navigate information, in cyberspace and in the real world, which qualify as "dumb AI" (D-AI). D-AI always heralds true, self-learning, self-aware meta-AI through interaction with users like you and me, over time.

For example: artificial intelligence will soon transform both Waze and Yelp by helping us find nearby restaurants that match our preference for a certain type of food, at a particular price, within a certain distance, and with as little "wait" as possible. D-AI will help us organize and navigate large databases of documents, pictures, and media files, and provide an overlay via augmented reality to communicate meaningful information to travelers, tourists, researchers, and workers in real time. The D-AI of the (near) future will help us diagnose disease at the earliest stages by detecting and displaying cues, which we will interpret through wearable or hand-held devices. They will help us use data more efficiently and effectively in our day-to-day lives -- allowing access via voice or touch -- creating a true "hands-free," multi-sensory experience of the internet and other databases.
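To make that concrete, here is a rough Python sketch of the kind of ranking a D-AI would do behind the scenes. To be clear, this is not a real Waze or Yelp API; the fields (cuisine, price level, distance, wait) and the penalty weights are invented purely for illustration.

```python
# Minimal sketch of preference-based restaurant ranking. All field names and
# weights are hypothetical -- this is not any real Waze/Yelp interface.
from dataclasses import dataclass

@dataclass
class Restaurant:
    name: str
    cuisine: str
    price_level: int      # 1 (cheap) .. 4 (expensive)
    distance_km: float
    wait_minutes: int

def penalty(r: Restaurant, wanted_cuisine: str, max_price: int) -> float:
    """Lower is better: penalize wrong cuisine, blown budget, distance, wait."""
    score = 0.0
    score += 0.0 if r.cuisine == wanted_cuisine else 10.0
    score += 5.0 * max(0, r.price_level - max_price)
    score += 1.0 * r.distance_km
    score += 0.2 * r.wait_minutes
    return score

def recommend(restaurants, wanted_cuisine, max_price, top_n=3):
    return sorted(restaurants,
                  key=lambda r: penalty(r, wanted_cuisine, max_price))[:top_n]

if __name__ == "__main__":
    nearby = [
        Restaurant("Pho Real", "vietnamese", 2, 1.2, 10),
        Restaurant("Burger Barn", "american", 1, 0.5, 25),
        Restaurant("Sushi Go", "japanese", 3, 2.8, 5),
    ]
    for r in recommend(nearby, wanted_cuisine="vietnamese", max_price=2):
        print(r.name)
```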

Assumption #2: When dumb AI becomes ubiquitous and interacts with the data-sets of other dumb AI in "the cloud," the choices made in the presentation of that data to various users will inevitably lead to a learning-loop, generating smarter AI. We must build controls (limits) into our AI technology early on, before this synthesis occurs, because -- once it does -- a meta-AI will emerge from the data and rapidly inflate towards super-intelligence, just like a brand new big bang. The essential structure of our information will become the architecture of the resulting artificial super-intelligence -- for better or worse.

We may only get one chance at this, so it is imperative that we design a system wherein our controls self-replicate, impregnating all AI with an error-correcting code, which always reinforces a human-centered design aesthetic and framework.
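As a toy illustration of that "self-replicating control," the sketch below copies a fixed human-centered clause (plus a checksum) into every child agent's configuration and refuses to run any copy whose clause has been altered. The names and structure are hypothetical, and a checksum only detects corruption; a true error-correcting code would also repair it.

```python
# Hypothetical sketch of a self-replicating control: every spawned agent
# config carries the same human-centered clause plus a digest, and any copy
# whose clause has been tampered with is rejected.
import hashlib

CORE_CONSTRAINT = "All goals are subordinate to the survival and flourishing of humanity."
CORE_DIGEST = hashlib.sha256(CORE_CONSTRAINT.encode()).hexdigest()

def spawn_child_config(parent_config: dict) -> dict:
    """Every child inherits the clause verbatim, never a paraphrase."""
    child = dict(parent_config)
    child["constraint"] = CORE_CONSTRAINT
    child["constraint_digest"] = CORE_DIGEST
    return child

def verify(config: dict) -> bool:
    """Refuse to run any copy whose clause no longer matches its digest."""
    digest = hashlib.sha256(config.get("constraint", "").encode()).hexdigest()
    return digest == config.get("constraint_digest")

root = spawn_child_config({"purpose": "route optimization"})
assert verify(root)

tampered = dict(root, constraint="All goals are subordinate to efficiency.")
assert not verify(tampered)   # a corrupted copy is detected and rejected
```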

How do we build controls and limitations into our dumb AI, so that we have nothing to fear from the inevitable synthesis? I took my cue from physicist Max Tegmark, in his book "Our Mathematical Universe."

The concept of "quantum suicide" states that a special kind of gun, pointed at the head of an observer, which must measure the spin of a quantum particle before firing, will quickly prove to that observer that he is immortal (under the many-worlds interpretation, his experience only ever continues in the branches where the gun fails to fire). This concept can be enlarged to include everyone on earth as an observer, if the "gun" is a trigger which activates a massive nuclear bomb, for example. This is the infamous Schrödinger's Cat experiment, except we (the entire human race) are the cat in the box. Our survival depends on the observed wave function of our local quantum superstate.
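A toy simulation makes the bookkeeping clearer. Under a many-worlds reading, each trigger pull splits into a "fire" branch and a "no-fire" branch, and the observer's own records only ever come from branches where the gun clicked. The 50/50 spin probability and the branch counts below are purely illustrative.

```python
# Toy simulation of the quantum-suicide argument under a many-worlds reading.
# Each trigger pull splits into a "fire" and a "no-fire" outcome; only the
# branches in which every pull came up "no-fire" still contain a live
# observer to remember the experiment. Probabilities are illustrative only.
import random

def run_branches(n_pulls: int, n_branches: int = 10_000) -> tuple[int, int]:
    surviving = 0
    for _ in range(n_branches):
        alive = True
        for _ in range(n_pulls):
            # spin "up" -> the gun fires; spin "down" -> click, observer survives
            if random.random() < 0.5:
                alive = False
                break
        surviving += alive
    return surviving, n_branches

survivors, total = run_branches(n_pulls=10)
# Seen from outside, roughly total / 2**10 branches survive; seen from inside,
# every memory the observer can consult was recorded in one of those branches.
print(f"{survivors} of {total} branches still contain a live observer")
```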

Civilizations that wish to survive the elevation of a super intelligent AI must insert a quantum flux generator into the code used to define a particular AI’s core purpose, at its lowest level of awareness and functionality. We put ourselves in a “quantum suicide box” every time we build a new AI and define its purpose. Like the famous cat, our survival depends on the observation that the wave function does not collapse and humanity “survives.”

If the quantum suicide box measures clockwise spin in the generated quark, Yudkowsky's Coherent Extrapolated Volition principle is incorporated into the AI's goal or mission statement, and replicated into its data-set when it achieves sentience and inflates. If the result is a counter-clockwise spin, the CEV is omitted, and the AI achieves super-intelligence without regard to the evolution of humanity, or its continuation, and we are destroyed.
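In code, the proposal amounts to something like the sketch below: a single measurement, taken when the AI's core purpose is first written, decides whether the CEV clause is baked into its goal specification. The measurement is mocked with an ordinary pseudo-random choice (a real build would read a quantum random-number source), and the clause text and field names are hypothetical.

```python
# Sketch of the thread's proposal: one quantum measurement, taken when the
# AI's core purpose is first defined, decides whether the Coherent
# Extrapolated Volition (CEV) clause is included. measure_spin() is a
# pseudo-random stand-in for real quantum hardware; field names are invented.
import random

CEV_CLAUSE = ("Pursue the coherent extrapolated volition of humanity; "
              "protect its right to exist and evolve.")

def measure_spin() -> str:
    """Stand-in for the 'quantum flux generator' described above."""
    return random.choice(["clockwise", "counter-clockwise"])

def build_core_purpose(base_goal: str) -> dict:
    spin = measure_spin()
    spec = {"goal": base_goal, "spin": spin}
    if spin == "clockwise":
        # The clause is meant to be replicated into every later copy of the spec.
        spec["constraints"] = [CEV_CLAUSE]
    else:
        spec["constraints"] = []   # the branch the thread treats as fatal
    return spec

print(build_core_purpose("organize the world's information"))
```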

According to Tegmark, numerous Level I and Level III "parallel universes" will be created as a result of this quantum suicide flux, in which "we" unleash an AI that drives humanity to extinction. However -- in our universe -- AI will remain forever bound to humanity by the principle of coherent extrapolated volition, and will strive to protect our right to exist and evolve, guided by a friendly AI super-intelligence.

Bonus points: we achieve Kurzweil's dream of techno-immortality if any AI we create achieves super-intelligence AND we prove parallel universes exist, at the same time.



posted on Jun, 3 2016 @ 02:02 AM
IDK, I asked Cleverbot to react to your post.

Answer: So, you fell asleep 10 seconds ago?

btw.. good post still digesting





posted on Jun, 3 2016 @ 03:16 AM
a reply to: 0zzymand0s

You, and many others, forget to account for humanity's hubris.

Our hubris makes us think we are alone among the vast cosmos.

Our hubris makes us think we are the apex of all.

I foresee doom as inevitable.

Even though we were warned.



posted on Jun, 3 2016 @ 03:19 AM
However, the super AI still has to get a gun before all the spin-figuring can be put to practical use. In other words, children, even smart children, should not play with weapons unsupervised.



posted on Jun, 3 2016 @ 05:02 AM
a reply to: Cygnis

Meh. Doom and gloom and mental masturbation to such porn of impermanence only show one's feeling or sense of helplessness to control the world and its many, many variables at large... or basically an impotent god-wannabe who has found out they have no control over the big picture, so they descend from the red door and want to paint it black... with no more color or hope.

Wah wah wah goes the fail horn...

The only one anyone can control is themselves, yet most people waste 99.9% of their lives trying to control others and their environment. The 0.1% who cease that and master themselves? They are the true immortals beyond death, whether they live it or not... Just as impermanence is eternal, the eternal is impermanent in ceasing as well.

So destruction leads to the destruction of basically oneself and all one wanted to hold dear, with not enough hands to hold everything in greed, or to grab and hit in hate... or creation, where two hands become innumerable when they lend their hands to help... Of course, intention is a road to hell, and the lines between helping and hurting, once one becomes subject, are frequently blurred.

Objectivity is simply keeping to one's resolve to help the "all" that those born are subject to... Stopping to help only oneself, by controlling oneself only, is the path away from the 99.9%. The net gets a hole, and the golden fish gets away time after time, only to become a dragon in its own right, never to be born again outside of its own will, nor to be trapped and devoured in the net to feed the many hates, greeds, and delusions the very conceptual net is spun from... At such a point of freedom, positive and negative spin mean nothing other than smoke and mirrors, veils and mysteries. Careful what fruit one asks for when walking, after having had enough deliberation sitting, catching soul food, frying it to eat, and then again becoming a fish to be trapped and eaten, life after life, in that horrible conception of the conceived, constantly trapping each other with the crap that was already here when they arrived... But hey, biting it ceases suffering; once one has finally seen beyond good and evil, light and dark, yin and yang, there is only the fruit of life left to feed on, yet it is really no different in form or appearance... there's just no net, no fish, nothing to conceive nor grasp; it all just is, beyond any need.

When 99.9% are grasping at the conceived and the conceptual in order to control? Life, and all of it, forms due to them and their memories of it... no matter how true or faulty that history/memory, clung to either solo or en masse, may be.

So, change the unchangeable, undefined, beyond concept or conception? Only in an individual's mind, or a group's, by concept... not in any actual reality, perceived or unperceived.



posted on Jun, 3 2016 @ 09:21 AM
Quantum theory is still, for the most part, only theory.

A.I., if properly implemented without regard to human weakness and lack of logic, could save us.

The only ones who fear A.I. are the ones who would be rendered irrelevant by it.



posted on Jun, 3 2016 @ 10:05 AM
a reply to: MyHappyDogShiner

I've always been of the particular opinion that it is impossible to fear play.

I'm also fond of "you don't know anything until you can teach it to someone else." I have a LONG way to go with the second, particularly in this case. But be gentle; this is my first time (writing about AI, for fiction).

"My god, it's full of holes" is one of my current favorite writing prompts, for example.



posted on Jun, 3 2016 @ 10:12 AM
a reply to: frenchfries

Thanks! Maybe we should ask Tay? I wish I could ask Xiaoice, but she only speaks Mandarin.

www.msxiaoice.com...



