OK, so this thread is going to be a doozy! There are some premises, which you can research yourself, that I will first outline.
1) Our universe exists as a simulation.
2) Because we have not yet detected any asteroids/comets/etc., I am assuming that we'll be in the clear in that dept., and won't be addressing such
scenarios.
Now I will reconcile a few theories about the universe.
Big Bang Singularity/Big Crunch Reconciled
Think about what we're told about the big bang: that everything spawned from an infinitesimally small point, a singularity. We are asked to accept that this point had no real preconditions or precursors, even though everything else we know about the nature of reality tells us this should not be possible. The reason the big bang was possible is that we exist within a simulation.
The next thing I want to explain is the fabric of space: the all-encompassing nothing. True space is more of a concept than something tangible, similar to the concept of zero (you can't have something of nothing). What is space, really? It is simply the domain in which all tangible quarks exist. If two quarks were very close to one another, the domain of space would be small; two distant quarks exist in "greater space" because the enclosing domain (space) is larger. The reason space appears to be expanding is simply that all that is tangible (the set of all quarks) has continued to drift apart since the big bang. Space is really no different in our reality than the empty world space within the video games we enjoy. Likewise, in video games, every managed point always exists within some calculable enclosing domain (space).
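To make the "enclosing domain" idea concrete, here's a rough sketch in C++ (the Quark type and its field names are just my illustration, obviously not anything we could actually know): space is simply the smallest box enclosing every tracked quark, and it "expands" as matter drifts apart.

#include <algorithm>
#include <vector>

struct Quark { double x, y, z; };

// "Space" as the smallest axis-aligned box enclosing all tangible matter.
struct Domain { double minX, maxX, minY, maxY, minZ, maxZ; };

Domain enclosingSpace(const std::vector<Quark>& quarks)
{
    // Assumes at least one quark exists (something rather than nothing).
    Domain d = { quarks[0].x, quarks[0].x, quarks[0].y, quarks[0].y,
                 quarks[0].z, quarks[0].z };
    for (const Quark& q : quarks) {
        d.minX = std::min(d.minX, q.x); d.maxX = std::max(d.maxX, q.x);
        d.minY = std::min(d.minY, q.y); d.maxY = std::max(d.maxY, q.y);
        d.minZ = std::min(d.minZ, q.z); d.maxZ = std::max(d.maxZ, q.z);
    }
    return d; // as quarks drift apart, this enclosing domain grows
}

Two quarks close together give a tiny domain; two distant quarks give "greater space," exactly as described above.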
How did this all start? For the programmers out there, basically like this:
for (Quark& q : quarks) {
    // Move every quark to the origin: one single point of all matter.
    q.setPosition(0.0, 0.0, 0.0);
}
Elegant, eh? Instant big-bang singularity. I think it's likely that an advanced simulation would loop with code like that, running it again at the end of each universe's life duration (measured in Planck time,
en.wikipedia.org... ).
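Here's a hedged sketch of that outer loop. The helper names (bigBang, stepOnePlanckTick) and maxTicks are entirely my own inventions for illustration:

#include <vector>

struct Quark {
    double x, y, z;
    void setPosition(double nx, double ny, double nz) { x = nx; y = ny; z = nz; }
};

// Hypothetical: collapse all matter back to a single point.
void bigBang(std::vector<Quark>& quarks)
{
    for (Quark& q : quarks) {
        q.setPosition(0.0, 0.0, 0.0);
    }
}

// Hypothetical: advance the physics by one fixed Planck-time tick.
void stepOnePlanckTick(std::vector<Quark>& quarks) { /* physics here */ }

// Run each universe for its full life duration, then loop back around.
void runForever(std::vector<Quark>& quarks, unsigned long long maxTicks)
{
    while (true) {
        bigBang(quarks);
        for (unsigned long long t = 0; t < maxTicks; ++t)
            stepOnePlanckTick(quarks);
    }
}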
More on Matter
Matter cannot be created or destroyed during the runtime of the simulation. The simulation can interact with every quark (either directly, or indirectly through observers within the simulation), hence every quark is always tracked (loosely speaking). There is technically a limit on the size of our universe. Candidate sizes can be calculated based on things I will go into below (I'm not doing this right now though, it's not the primary point of the thread): our world space precision (Planck lengths,
en.wikipedia.org... ) and
the native word size of the hardware on which our simulation runs. Because matter cannot be destroyed during the runtime of our simulation, quarks/etc. would most likely rebound off the "edge" of space if space reached its maximum possible size before the sim ends. This is only one possible scenario, of course, but it is a simple one that would not muck with the integrity of anything away from the edges of the simulation space. An analogy for this is bullets bouncing off the edges of the screen in 80s-era arcade games.
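A hedged sketch of that rebound, handled per axis (MAX_EXTENT is my made-up stand-in for whatever maximum size the hardware actually imposes):

// Reflect one coordinate off the edge of a finite space, arcade-style.
// MAX_EXTENT is a hypothetical limit, in Planck lengths.
const double MAX_EXTENT = 1.0e61;

void bounceAxis(double& pos, double& vel)
{
    if (pos > MAX_EXTENT) {
        pos = 2.0 * MAX_EXTENT - pos;  // fold back inside the edge
        vel = -vel;                    // and reverse direction
    } else if (pos < -MAX_EXTENT) {
        pos = -2.0 * MAX_EXTENT - pos;
        vel = -vel;
    }
}

Everything far from the edges never notices a thing, which is the appeal of this scenario.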
Accelerating Space Expansion Problem
There are still many mysteries about why space appears to be expanding at an accelerating rate. There are many possible reasons, two of which I will quickly explain here.
1) Our local space sits like the innermost Matryoshka doll (nested doll):
i.imgur.com... , and only a subset of all tangible matter is reset with each consecutive big bang. That leaves outlying shells of matter far beyond our current happenings, all pulling the innards outward at an accelerating rate (accelerating as the innards move out and get closer to the outlying matter). If this were true, true space would encompass all of the matter in all remnant shells, but our local space (the space observable by us, a subset of true space) would appear to be expanding in an accelerating fashion.
2) Space wraps, much like bullets wrap in the old-school arcade game Asteroids, and matter on one side of space is actually pulling at matter on what appears to be the other side of space (hard to visualize, I know -- the exact shape of space is still a mystery), causing space to expand at an accelerating rate.
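For the programmers, scenario 2 is just toroidal wrapping, per coordinate, something like this (width is hypothetical; nobody knows the real extent or shape of space):

#include <cmath>

// Asteroids-style wrap: leave one edge, re-enter from the opposite edge.
double wrap(double pos, double width)
{
    pos = std::fmod(pos, width);
    if (pos < 0.0)
        pos += width; // fmod can return negatives; fold back into [0, width)
    return pos;
}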
The accelerating universe bit isn't crucial to this thread, but I wanted to address it lightly.
In the Matryoshka dolls scenario, the big-bang code for the universe could be something more along the lines of the following, which would introduce some nice variation across iterations:
#include <cstdlib>
#include <ctime>

srand(time(NULL)); // re-seed the dice once per big-bang cycle

for (Quark& q : quarks) {
    int a = (rand() % 4) + 1;
    int b = (rand() % 4) + 1;
    int c = (rand() % 4) + 1;
    // Only quarks that win this roll collapse into the new singularity;
    // the rest stay put as the outlying shells.
    if ((a % b) == c) {
        q.setPosition(0.0, 0.0, 0.0);
    }
}
This would result in each consecutive big bang resetting a different, varying subset of all matter.
OK, So Why Isn't Our Local Universe (Subset of True Universe) Ending in 2012?
The nature of the simulation behind our universe is as follows:
1) A fixed time step (one Planck time unit per tick, as linked above) for elegance and for reasons related to relativity.
2) I believe the simulation creates a new big bang when an elapsed_time variable maxes out; a minimal sketch of that trigger follows this list.
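A minimal sketch of the overflow trigger, assuming an unsigned binary counter (the 64-bit width here is only for illustration; I argue below that the real one must be wider):

#include <cstdint>

uint64_t elapsed_time = 0; // Planck ticks since the last big bang

// Returns true exactly when the counter wraps past its max back to
// zero -- the moment the sim would kick off the next big bang.
bool advanceOneTick()
{
    ++elapsed_time;
    return elapsed_time == 0;
}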
Let's determine the amount of time that has passed since the last big bang, in Planck time units.
13.75ish billion years * 365.242 days/year * 23.934 hours/day * 60 min/hr * 60 s/min
= (4.32 * 10^17) seconds since the big bang
(4.32 * 10^17) seconds * (1.851851852 * 10^43) Planck time units/second
= (8.01323 * 10^60) Planck time ticks since the most recent big bang.
Depending on how you calculate this you might even get something closer to 10^62. Only the exponent really matters here; the coefficient is negligible by comparison.
The point is that we are talking about something on the order of 10^60ish Planck time units since the big bang.
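If you want to sanity-check that arithmetic yourself:

#include <cstdio>

int main()
{
    // Figures from above: 13.75ish billion years, converted to seconds,
    // then to Planck time units (~1.85 * 10^43 per second).
    double seconds = 13.75e9 * 365.242 * 23.934 * 60.0 * 60.0;
    double ticks   = seconds * 1.851851852e43;
    printf("%.3e seconds, %.3e Planck ticks\n", seconds, ticks);
    // prints roughly 4.327e+17 seconds, 8.013e+60 Planck ticks
    return 0;
}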
If the simulation stores the elapsed Planck time units in a binary counter (efficient computers are binary, not base 10) that is 192 bits wide (random example), we would have had a reset after 2^192 ticks, best case, and 2^192 can only represent an elapsed time on the order of 10^57. You need around 200 bits to count to where we are today (something * 10^60ish Planck time units in). If we assume that the hardware our universe simulation runs on is 256-bit native (native hardware sizes are always powers of two, so if we have already blown past 192 bits, the next size up would be 256 bits), that leaves roughly 56 bits left over (256 - 200). Those spare bits could extend the plain count, or, in a floating-point representation, extend the exponent or the coefficient (the 8.01323 in 8.01323 * 10^60). Either way, our simulation has a lot longer to run before elapsed_time hits its max value.
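Again, easy to check (doubles lose precision at these magnitudes, but we only care about the exponents):

#include <cmath>
#include <cstdio>

int main()
{
    double ticks = 8.01323e60;  // elapsed Planck ticks from above
    printf("bits needed: %.1f\n", std::log2(ticks));      // ~202.3
    printf("2^192 max:   %.3e\n", std::pow(2.0, 192.0));  // ~6.3e57, too small
    printf("2^256 max:   %.3e\n", std::pow(2.0, 256.0));  // ~1.2e77, ample headroom
    return 0;
}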
Basically, the universe would run either a very large multiple longer than it already has (2^56 is about 7 * 10^16), or countless ORDERS of magnitude longer (longer than we can even fathom).