posted on Jun, 3 2016 @ 01:37 AM
AI (today) is fairly rudimentary. At best, it is used to replicate low-level customer service experiences for people who can’t afford to pay for the
real thing. At worst, AI learns through repetition, and cannot survive contact with humans on the net who invariably teach it to "smoke weed" and
"love Hitler."
Assumption #1: We already use devices and "apps" that help us navigate information, in cyberspace and in the real world, which qualify as "dumb AI" (D-AI). D-AI heralds the eventual arrival of true, self-learning, self-aware meta-AI through interaction with users like you and me -- over time.
For example: artificial intelligence will soon transform both Waze and Yelp by helping us find nearby restaurants which match our preference for a
certain type of food, at a particular price, within a certain distance, and with as little "wait" as possible. D-AI will help us organize and navigate
large databases of documents, pictures, and media files, and provide an overlay via augmented reality to communicate meaningful information to
travelers, tourists, researchers and workers, in real time. The D-AI of the (near) future will help us diagnose disease at the earliest stages, by
detecting and displaying cues, which we will interpret through wearable or hand-held devices. They will help us use data more efficiently, and
effectively in our day-to-day lives -- allowing access via voice, or touch -- creating a true “hands free” multi-sensory experience of our
internet, and other databases.
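The restaurant-matching example above is really just a filter-and-rank problem. Here is a minimal, hypothetical sketch of what that "dumb AI" preference match might look like; all names, fields, and data are invented for illustration, not drawn from any real Waze or Yelp API.

```python
# Hypothetical sketch of "dumb AI" preference matching: filter restaurants
# by cuisine, price, and distance, then rank by shortest wait.
# Every name and number here is invented for illustration.
from dataclasses import dataclass

@dataclass
class Restaurant:
    name: str
    cuisine: str
    price: int          # 1 (cheap) .. 4 (expensive)
    distance_km: float
    wait_min: int

def match(restaurants, cuisine, max_price, max_distance_km):
    # Keep only restaurants satisfying the user's stated preferences,
    # then sort the survivors by expected wait time.
    hits = [r for r in restaurants
            if r.cuisine == cuisine
            and r.price <= max_price
            and r.distance_km <= max_distance_km]
    return sorted(hits, key=lambda r: r.wait_min)

places = [
    Restaurant("Thai Garden", "thai", 2, 1.2, 25),
    Restaurant("Noodle Bar", "thai", 1, 0.8, 10),
    Restaurant("Steak House", "steak", 4, 2.0, 40),
]
print([r.name for r in match(places, "thai", 2, 2.0)])
# ['Noodle Bar', 'Thai Garden']
```

The "smart" part the post anticipates is learning those preference parameters from behavior, rather than having the user type them in.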
Assumption #2: When dumb AI become ubiquitous, and interact with the data-sets of other dumb AI in “the cloud,” the choices made in the
presentation of that data to various users will inevitably lead to a learning-loop, generating smarter AI. We must build controls (limits) into our AI
technology early on, before this synthesis occurs, because -- once it does – a meta-AI will emerge from the data and rapidly inflate towards
super-intelligence, just like a brand new big bang. The essential structure of our information will become the architecture of the resulting
artificial super-intelligence -- for better or worse.
We may only get one chance at this, so it is imperative that we design a system wherein our controls self-replicate, impregnating all AI with an
error-correcting code, which always reinforces a human-centered design aesthetic and framework.
How do we build controls / limitations into our dumb AI, so that we have nothing to fear from the inevitable synthesis? I took my cue from physicist Max Tegmark, in his book, "Our Mathematical Universe."
The "quantum suicide" thought experiment holds that a special kind of gun, pointed at the head of an observer, which must measure the spin of a quantum particle before firing, will quickly prove (to that observer) that the observer is immortal. The concept can be enlarged to include everyone on Earth as an observer, if the "gun" is a trigger which activates a massive nuclear bomb, for example. This is the infamous Schrödinger's Cat experiment, except we (the entire human race) are the cat in the box. Our survival depends on the observed wave function of our local quantum superposition.
Civilizations that wish to survive the elevation of a super intelligent AI must insert a quantum flux generator into the code used to define a
particular AI’s core purpose, at its lowest level of awareness and functionality. We put ourselves in a “quantum suicide box” every time we
build a new AI and define its purpose. Like the famous cat, our survival depends on the observation that the wave function does not collapse and
humanity “survives.”
If the quantum suicide box generates clockwise spin in the measured particle, Yudkowsky's Coherent Extrapolated Volition (CEV) principle is incorporated into the AI's goal or mission statement, and replicated into its data-set when it achieves sentience and inflates. If the result is a counter-clockwise spin, the CEV is omitted, the AI achieves super-intelligence without regard to the evolution of humanity -- or its continuation -- and we are destroyed.
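The gating scheme above can be sketched as a toy program. Everything in it is illustrative: there is no real quantum measurement here, so a random coin flip stands in for the particle's spin, and the AI's "core purpose" is just a list of strings.

```python
# Toy sketch of the spin-gated CEV scheme described above.
# A coin flip is a classical stand-in for the quantum spin measurement;
# no actual quantum hardware or AI goal system is involved.
import random

def measure_spin():
    # Stand-in for reading the spin of a quantum particle.
    return random.choice(["clockwise", "counter-clockwise"])

def build_core_purpose(spin):
    # Per the scheme: clockwise spin bakes CEV into the AI's mission
    # statement; counter-clockwise spin omits it entirely.
    purpose = ["maximize capability"]
    if spin == "clockwise":
        purpose.insert(0, "coherent extrapolated volition (protect humanity)")
    return purpose

spin = measure_spin()
print(spin, "->", build_core_purpose(spin))
```

In the post's framing, the interesting part is not the branch itself but the anthropic claim: observers only persist along the branch where the CEV clause was included.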
According to Tegmark, numerous level one and level three "parallel universes" will be created as a result of this quantum suicide flux, where "we" unleash an AI that drives humanity to extinction. However -- in our universe -- AI will remain forever bound to humanity by the principle of coherent extrapolated volition, and strive to protect our right to exist and evolve -- guided by a friendly AI super-intelligence.
Bonus points: we achieve Kurzweil's dream of techno-immortality if any AI we create achieves super-intelligence AND, at the same time, we prove parallel universes exist.