You have read about entropy, but you don't understand what you're talking about, so you say these asinine things WITHOUT A SHRED OF EVIDENCE to support anything you're saying.
Entropy as information content
Entropy is defined in the context of a probabilistic model. Independent fair coin flips have an entropy of 1 bit per flip. A source that always generates a long string of B's has an entropy of 0, since the next character will always be a 'B'.
The entropy rate of a data source means the average number of bits per symbol needed to encode it. Shannon's experiments with human predictors show an information rate between 0.6 and 1.3 bits per character in English;[10] the PPM compression algorithm can achieve a compression ratio of 1.5 bits per character in English text.
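The numbers in that quote follow directly from the definition: Shannon entropy is just -Σ p·log2(p) over the source's symbol probabilities. A minimal sketch (the three distributions are the ones from the quote above):

```python
import math

def entropy_bits(probs):
    """Shannon entropy H = -sum(p * log2(p)), in bits per symbol."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy_bits([0.5, 0.5]))   # fair coin flip -> 1.0 bit per flip
print(entropy_bits([1.0]))        # a source that only ever emits 'B' -> 0.0 bits
print(entropy_bits([1/26] * 26))  # 26 equally likely letters -> log2(26), about 4.70 bits
```

Shannon's 0.6 to 1.3 bits-per-character estimate for English is well below the 4.70-bit uniform figure because English letters are neither equiprobable nor independent.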
There are debates today as to why the universe began in a low-entropy state.
You have it mixed up. You think a high level of entropy means there's a high level of complexity and information. This is just EMBARRASSINGLY WRONG. A system with a higher level of complexity and information is in a low-entropy state, as it's further away from equilibrium.
So there is absolutely nothing invalid about saying that a human has a higher level of entropy than an elementary particle. Compared to the universe as a whole we may not have a high level of entropy, but relative to quantum-scale objects our level of entropy is almost infinitely larger.
First, everything in the video supports what I've said. You're just all over the place trying to find something coherent in the noise of your nonsense.
At the end of the day, you said:
The high entropy system acts as the observer and when it interacts with the lower entropy system it makes a measurement which forces that system to take on a more rigid definition (aka forces the wave function to collapse randomly), leading to a diminishing of the superposition effect as both systems merge into a single system.
No, it's not incorrect, because a system in a low entropy state is further away from equilibrium. This is just common sense when looking at statistical thermodynamics. A new deck of cards is in a lower entropy state vs. a deck of cards after a game of 52 pickup. So yes, a system further away from equilibrium is in a lower state of entropy, and that's why I quoted Schrödinger from his book What is Life?
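The card analogy can be put in numbers with Boltzmann-style state counting (S proportional to log W, where W is the number of arrangements compatible with the macrostate). This is only an illustrative counting sketch, not a thermodynamic calculation:

```python
import math

# "Factory order" picks out exactly one arrangement; "shuffled" is
# compatible with any of the 52! possible orderings of the deck.
W_new      = 1
W_shuffled = math.factorial(52)

print(math.log2(W_new))       # 0.0 bits: the low-entropy macrostate
print(math.log2(W_shuffled))  # about 225.6 bits: the high-entropy macrostate
```

The new deck's macrostate corresponds to a single microstate, which is exactly why it sits at the low-entropy end.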
You have debates about collapse from M.I.T. to Harvard, and I'm supposed to accept someone on a message board who doesn't provide any evidence and who says the silliest things, like COLLAPSE CAN BE EASILY EXPLAINED.
A shuffled deck isn't more complex than an unshuffled deck. A shuffled deck contains more available arrangements, but it takes outside energy to put the shuffled deck in an arrangement that increases complexity.
Take the words "I went to the park."
Then you have this string of letters: abcd...z.
The words "I went to the park" have a lower state of entropy than a-z. A-z contains more information because there are more available states that the letters a-z can be in, but it takes energy to arrange the letters a-z in order to produce a novel, taking the string of letters a-z and putting them in a lower state of entropy.
You have it mixed up. You're equating available arrangements with complexity. Just because a system has more ways it can be arranged doesn't mean it's more complex. In fact, it means it's more random until it's arranged in a more ordered way and that takes energy.
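One crude way to put numbers on the two strings is per-character Shannon entropy computed from each string's own letter frequencies. This is a toy measure (it ignores the inter-character correlations that Shannon's 0.6 to 1.3 bit estimate accounts for), but it shows the direction of the comparison:

```python
import math
from collections import Counter

def per_char_entropy(s):
    """Bits per character under the string's own letter frequencies."""
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in Counter(s).values())

print(per_char_entropy("I went to the park"))          # about 3.35 bits/char (repeated letters)
print(per_char_entropy("abcdefghijklmnopqrstuvwxyz"))  # log2(26), about 4.70 bits/char
```

By this toy measure the sentence, with its repeated t's, e's, and spaces, does sit below the maximally spread-out alphabet string.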
Here's one of my favorite videos on Boltzmann brains that explains this beautifully.
In algorithmic information theory (a subfield of computer science and mathematics), the Kolmogorov complexity (also known as descriptive complexity, Kolmogorov–Chaitin complexity, algorithmic entropy, or program-size complexity) of an object, such as a piece of text, is a measure of the computability resources needed to specify the object. It is named after Andrey Kolmogorov, who first published on the subject in 1963.[1][2]
For example, consider the following two strings of 32 lowercase letters and digits:
abababababababababababababababab
4c1j5b2p0cv4w1x8rx2y39umgw5q85s7
The first string has a short English-language description, namely "ab 16 times", which consists of 11 characters. The second one has no obvious simple description (using the same character set) other than writing down the string itself, which has 32 characters.
More formally, the complexity of a string is the length of the shortest possible description of the string in some fixed universal description language (the sensitivity of complexity relative to the choice of description language is discussed below). It can be shown that the Kolmogorov complexity of any string cannot be more than a few bytes larger than the length of the string itself. Strings, like the abab example above, whose Kolmogorov complexity is small relative to the string's size are not considered to be complex.
It can be shown[14] that for the output of Markov information sources, Kolmogorov complexity is related to the entropy of the information source. More precisely, the Kolmogorov complexity of the output of a Markov information source, normalized by the length of the output, converges almost surely (as the length of the output goes to infinity) to the entropy of the source.
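Kolmogorov complexity itself is uncomputable, but compressed length gives a computable upper-bound proxy, and that proxy is already enough to separate the two quoted 32-character strings. A sketch using Python's zlib:

```python
import zlib

s1 = b"abababababababababababababababab"   # "ab 16 times": a short description exists
s2 = b"4c1j5b2p0cv4w1x8rx2y39umgw5q85s7"   # no obvious description shorter than itself

# Compressed size upper-bounds description length (plus fixed decompressor
# overhead); the regular string compresses, the patternless one barely does.
print(len(zlib.compress(s1)))
print(len(zlib.compress(s2)))
```

The exact byte counts depend on the compressor's header overhead, but the regular string always comes out well below the patternless one.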
Dembski's proposed test is based on the Kolmogorov complexity of a pattern T that is exhibited by an event E that has occurred. Mathematically, E is a subset of Ω, the pattern T specifies a set of outcomes in Ω and E is a subset of T. Quoting Dembski[16]
Thus, the event E might be a die toss that lands six and T might be the composite event consisting of all die tosses that land on an even face.
Kolmogorov complexity provides a measure of the computational resources needed to specify a pattern (such as a DNA sequence or a sequence of alphabetic characters).[17] Given a pattern T, the number of other patterns that may have Kolmogorov complexity no larger than that of T is denoted by φ(T). The number φ(T) thus provides a ranking of patterns from the simplest to the most complex. For example, for a pattern T which describes the bacterial flagellum, Dembski claims to obtain the upper bound φ(T) ≤ 10^20.
Kolmogorov complexity is related to the entropy of the information source.
This simply supports everything I'm saying and has nothing to do with what you have said.
Kolmogorov complexity has nothing to do with your claim or a system's entropy, like the human body's. In fact, most people say you can't apply it to the human body, and there's ZERO EVIDENCE that it applies to a higher entropy object interacting with a lower entropy object and collapsing the wave function.
The soundness of Dembski's concept of specified complexity and the validity of arguments based on this concept are widely disputed. A frequent criticism (see Elsberry and Shallit) is that Dembski has used the terms "complexity", "information" and "improbability" interchangeably. These numbers measure properties of things of different types: Complexity measures how hard it is to describe an object (such as a bitstring), information measures how close to uniform a random probability distribution is and improbability measures how unlikely an event is given a probability distribution.
When Dembski's mathematical claims on specified complexity are interpreted to make them meaningful and conform to minimal standards of mathematical usage, they usually turn out to be false.[citation needed] Dembski often sidesteps these criticisms by responding that he is not "in the business of offering a strict mathematical proof for the inability of material mechanisms to generate specified complexity".[21] On page 150 of No Free Lunch he claims he can demonstrate his thesis mathematically: "In this section I will present an in-principle mathematical argument for why natural causes are incapable of generating complex specified information." Others have pointed out that a crucial calculation on page 297 of No Free Lunch is off by a factor of approximately 10^65.[22]
But if you have scientific evidence, like published papers or lab experiments, to support this claim, I'm sure every physicist in the world, myself included, will want to see it so they can stop wasting time debating the issue.
Everything you said makes no sense.
Show me where most physicists say wave function collapse can be explained by decoherence.
Why decoherence solves the measurement problem (2013)
The solution of the quantum measurement problem, entirely within conventional quantum physics, has been published on at least four occasions (Scully, Shea, & McCullen, 1978; Scully, Englert, & Schwinger, 1989; Rinner & Werner, 2008; Hobson, 2013). A similar solution has been presented by (Dieks, 1989; Dieks, 1994; Lombardi & Dieks), who propose it as a fundamental postulate that amounts to a new "modal interpretation" of quantum physics. Yet many articles in this and other journals continue to treat measurement as an unsolved fundamental problem whose resolution requires either exotic interpretations or fundamental alterations of quantum theory. For example, Adler (2003) has published an article titled "Why decoherence has not solved the measurement problem," despite the fact that, as will be reviewed below, decoherence has solved the measurement problem.
Measurement in Quantum Mechanics: Decoherence and the Pointer Basis (2012)
What, then, is the final verdict of the decoherence theory? Has it resolved the conceptual problems of quantum mechanics? There are many who believe that the conceptual problems of quantum mechanics are still unresolved and decoherence does not answer many issues. At the end of the day we can say that the decoherence explanation takes away some of the mystery from the idea of wave function collapse and provides a conventional mechanism to explain the appearance of a classical world. Many physicists find this a satisfactory explanation, and there is no doubt that the experiments discussed clearly show how decoherence washes away quantum coherences, providing fairly convincing evidence for explaining the absence of Schrödinger's cats in the real world. For all practical purposes, the decoherence explanation finds favour as a satisfactory settlement of the quantum measurement problem.
Decoherence, einselection, and the quantum origins of the classical (2003)
The manner in which states of some quantum systems become effectively classical is of great significance for the foundations of quantum physics, as well as for problems of practical interest such as quantum engineering. In the past two decades it has become increasingly clear that many (perhaps all) of the symptoms of classicality can be induced in quantum systems by their environments. Thus decoherence is caused by the interaction in which the environment in effect monitors certain observables of the system, destroying coherence between the pointer states corresponding to their eigenvalues.
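The mechanism that quote describes can be caricatured in a few lines: start a qubit in an equal superposition and let environmental "monitoring" exponentially damp the off-diagonal (coherence) terms of its density matrix in the pointer basis. This is a standard phenomenological dephasing toy model, not a derivation from any specific environment:

```python
import numpy as np

psi = np.array([1.0, 1.0]) / np.sqrt(2)   # the superposition (|0> + |1>)/sqrt(2)
rho = np.outer(psi, psi)                  # density matrix [[0.5, 0.5], [0.5, 0.5]]

def dephase(rho, t, tau=1.0):
    """Damp the coherences by exp(-t/tau); populations are untouched."""
    out = rho.copy()
    out[0, 1] *= np.exp(-t / tau)
    out[1, 0] *= np.exp(-t / tau)
    return out

# After many dephasing times the off-diagonals are ~0 while the diagonals
# stay at 0.5 each: an effectively classical mixture in the pointer basis.
print(dephase(rho, 10.0))
```

Note that this only shows how interference terms become unobservable; whether that by itself "solves" the measurement problem is exactly what the quoted authors disagree about.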
Show me the SCIENTIFIC EVIDENCE that says Kolmogorov complexity has anything to do with high entropy objects interacting with low entropy objects and forcing the wave function to collapse.
Physicists from Griffith University have demonstrated “spooky action at a distance” for the first time with no efficiency loophole by splitting a single photon between two laboratories and experimentally testing if the choice of measurement in one lab really causes a change in the local quantum state in the other lab.
Physicists snatch a peep into quantum paradox
Measurement-induced collapse of quantum wavefunction captured in slow motion.