posted on Apr, 1 2010 @ 01:00 AM
It's been a while, to say the least. However, my absence has been of great value to me. The development of my idea has been refined far beyond what I
thought it ever could be. To catch up all the new members (and I'm sure there are plenty) and remind those who remember me, my work goes
something along the lines of this:
The whole universe boils down to change, and the information involved in that change, as opposed to space and time. Space has no properties, and time is a
human unit derived to measure change. Information, and how and why it changes, is the essence of the universe.
My work is far from complete, but I'll throw out a few ideas I've gathered since my absence, as the feedback I receive is most valuable. For
one, math has always been my weak point, so I set out to find an easier way to explain my work mathematically. Graphing 3D functions is hard work, but
I had a revelation that may change all that.
Instead of having a single 3D function with vagueness for certain values and overall uncertainty as a whole, why not break it down into overlapping 2D
functions that are scaled up like gears in a clock? Electron A moves here while electron B moves there, influenced by quark movement C, resulting in a
specific atom reacting a certain way in its local environment, in sync with the rest of the universe.
While that sounds complicated, it's really not. Quarks change more than S.A.P.s, S.A.P.s change more than the atoms they compose, the atoms that
compose a molecule change more than the net molecule that's formed, molecules change more than the organisms or substances they compose, and so on up to a
galactic and universal scale. So in "1 second", some parts of the whole undergo an almost innumerable number of changes while others may not have a
single change.
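The gears-in-a-clock picture can be sketched as a chain of counters, where each fast counter carries over into the slower one above it. The tick ratios below are just the familiar clock ones for illustration, not values from the theory:

```python
# A gear train of counters: when a fast counter completes a full cycle,
# it advances the next, slower counter by one step -- the way a second
# hand drives the minute hand, which drives the hour hand.
RATIOS = [60, 60, 24]  # illustrative: seconds -> minutes -> hours -> days

def advance(counters, ratios, steps):
    """Advance the fastest counter `steps` times, carrying over into
    each slower counter as cycles complete."""
    for _ in range(steps):
        counters[0] += 1
        for i, r in enumerate(ratios):
            if counters[i] == r:
                counters[i] = 0
                counters[i + 1] += 1
    return counters

# 3661 fast ticks = 1 hour, 1 minute, 1 second of "change".
print(advance([0, 0, 0, 0], RATIOS, 3661))  # [1, 1, 1, 0]
```

The point of the sketch is only that many changes at the fastest scale add up to a single change at a slower one, which is the claimed relationship between quarks, atoms, and molecules.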
After the overlapping 2D quark graphs reach a certain value, electron and S.A.P. graphs come into play, all the way up to the entire universe. I was excited.
The movement of an entire galaxy could be put into context with one of its atoms, accurately, for the first time. But wait, there's an obvious problem
coming from the often-wrong branch of quantum physics.
While I believe in a deterministic, purely particle-based model of the universe, quantum physics favors the uncertainty of a wave. While I could
extrapolate the 2D values of the graphs indefinitely, I had no way of knowing where the starting value belonged on the smallest graphs. Then I looked at
a clock and evolved my theory even further. If you were to randomly guess the position of a second hand on a clock with your eyes closed, you'd have a
1/60 chance of being right. However, if, while blindfolded, you happened to hear the minute hand tick, you would know for a fact the second
hand was at 0.
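The blindfolded-clock odds check out with a quick sketch (the numbers come straight from the analogy, nothing more):

```python
import random

random.seed(0)
TRIALS = 100_000

# Blind guessing: pick a second-hand position uniformly at random and
# compare it to the true (also random) position. Matches ~1/60 of the time.
hits = sum(random.randrange(60) == random.randrange(60) for _ in range(TRIALS))
print(hits / TRIALS)  # hovers near 1/60, i.e. about 0.0167

# Hearing the minute hand tick collapses the uncertainty entirely:
# at that exact instant the second hand must read 0 -- no guess needed.
second_hand_at_minute_tick = 0
assert second_hand_at_minute_tick == 0
```

That is the whole trick being proposed: a coarse, observable event at a larger scale pins down a value at the smaller scale that you could otherwise only guess.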
How that relates is simple: instead of guessing the values for the smaller, faster graphs, simply observe the more concrete movements of the larger graphs, which can
only be strongly influenced by behavior at a smaller scale. Thus you extrapolate backwards to fill in all your prerequisite information concerning
the change you're observing, then run it forward as a complete system: total mathematical graphing that contains all the change going on in the
universe. Finally, I'd succeeded, and all that's left is to crunch the numbers, right? Not quite.
After all that work with a slightly-older-than-two-decades mammalian brain, I thought I was in the clear, but alas, a snowstorm proved me otherwise.
As I watched the snow slowly fall to the ground and compared it to the actual change happening at the subatomic level, it hit me:
how in the hell am I supposed to make a computer program that can crunch that much information that fast? With a modern computer, it'd be the
equivalent of counting all the snowflakes, documenting their speed and where they fell with complete accuracy, while also monitoring the shapes they
formed. I was lucky to have a couple dozen accounted for, let alone the several million falling to the ground. Once again, the day was saved by the
enemy.
I was watching the Sci Fi Science episode where Michio Kaku is supposed to design a super-adaptive robot. When he got to its neural processing, he
brought up quantum computers. In a more-or-less throwaway comment, he said that with the atoms in a sugar cube, you could essentially make a
quantum computer that exceeds all the existing computers in the world linked up and working together as one.
Contemplate that for a minute. Let's say there are a billion computers in the world, all synchronized, and they all type 10 letters per second. At face
value, that appears to be 10 billion characters of information per second, but it's actually far more. The conscious input to type the letters, and the
knowledge of what letters are to be typed and why, add even more to the amount of information processed in every single second. And all of that can
be trumped by something the size of a single small cube of matter.
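The back-of-the-envelope figure works out like this (taking 8 bits per typed character, which is an assumption the post doesn't state):

```python
computers = 1_000_000_000  # a billion synchronized computers
chars_per_sec = 10         # each types 10 letters per second

raw_chars = computers * chars_per_sec  # total characters typed per second
raw_bits = raw_chars * 8               # at an assumed 8 bits per character

print(raw_chars)  # 10_000_000_000 characters/second
print(raw_bits)   # 80_000_000_000 bits/second
```

Either way you count it, the raw throughput is the smaller part of the claim; the intent and context behind each keystroke are what the post argues push the real figure higher.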
Hold a die up to really put that into perspective. Now compare that die to how much matter is simply in the air around you. Big, incomprehensible
numbers in our favor; it's a nice change. There are a few problems, however. Quantum computers use super-chilled matter, as opposed to matter that just
roams freely in the universe. They use the "undetermined" states of an atom to compute their calculations via microwaves. These "undetermined" states
are far more numerous in regular matter, thus leaving the potential for even more calculating power.
So there are basically 4 objectives to clear in order to get AMESA up and running:
1) Make a successful and accurate overlapping-2D mathematical graphing program for all scales of matter.
2) Find a way for an EM wave to interact with standard matter in a project-relevant sense.
3) Find a way for this wave to carry all the necessary diagnostic calculations.
4) Upon receiving all the diagnostic calculations and graphs, use that information as a blueprint for what changes need to occur, on an information basis,
fed back into an object via a secondary wave to have the desired effect.