Specific gravity, as it is the ratio of either densities or weights, is a dimensionless quantity. As an expression of relative mass or weight of equal volumes of sample and reference the specific gravity of the reference (water) is 1 (or 1000 in British brewing) if and only if the reference and sample temperatures are the same (see below). Substances with a specific gravity of 1 are neutrally buoyant, those with SG greater than one are denser than water, and so (ignoring surface tension effects) will sink in it, and those with an SG of less than one are less dense than water, and so will float. In scientific work the relationship of mass to volume is usually expressed directly in terms of the density (mass per unit volume) of the substance under study. It is in industry where specific gravity finds wide application, often for historical reasons.
Relative density can also help quantify the buoyancy of a substance in a fluid, or determine the density of an unknown substance from the known density of another. Relative density is often used by geologists and mineralogists to help determine the mineral content of a rock or other sample. Gemologists use it as an aid in the identification of gemstones. Water is preferred as the reference because measurements are then easy to carry out in the field (see below for examples of measurement methods).
As the principal use of specific gravity measurements in industry is the determination of the concentrations of substances in aqueous solutions, and these are found in tables of SG vs. concentration, it is extremely important that the analyst enter the table with the correct form of specific gravity.
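The ratio and buoyancy rules above can be sketched in a few lines of code; the densities used below are rough illustrative textbook figures, not authoritative reference data.

```python
# Specific gravity: ratio of a sample's density to that of the reference
# (water, ~1000 kg/m^3 near 4 degrees C). Dimensionless.

WATER_DENSITY = 1000.0  # kg/m^3, approximate reference value

def specific_gravity(sample_density):
    """Return SG = rho_sample / rho_water."""
    return sample_density / WATER_DENSITY

def buoyancy(sample_density):
    """Classify behaviour in water, ignoring surface tension effects."""
    sg = specific_gravity(sample_density)
    if sg > 1:
        return "sinks"
    if sg < 1:
        return "floats"
    return "neutrally buoyant"

# Illustrative densities in kg/m^3 (rough figures):
print(buoyancy(19300))  # gold: SG ~ 19.3, sinks
print(buoyancy(917))    # ice:  SG ~ 0.917, floats
```

The same ratio run in reverse recovers an unknown density from a measured SG: rho_sample = SG × rho_water.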
In analytical mathematics, Euler's identity, named for the Swiss mathematician Leonhard Euler, is the equality

e^(iπ) + 1 = 0,

where
e is Euler's number, the base of natural logarithms,
i is the imaginary unit, which satisfies i² = −1, and
π is pi, the ratio of the circumference of a circle to its diameter.
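The identity can be verified numerically with Python's standard `cmath` module; in floating point the sum is not exactly zero, only zero to machine precision.

```python
import cmath

# Euler's identity: e^(i*pi) + 1 = 0
result = cmath.exp(1j * cmath.pi) + 1

# exp(i*pi) is evaluated in floating point, so a tiny imaginary
# residual (on the order of 1e-16) remains instead of an exact zero.
print(abs(result))
assert abs(result) < 1e-12
```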
It occurs to me that in all of Einstein's field equations there is no mention of the medium density/expansion, or of the fact that the medium density in the solar system is part of the system. When we look at what forces are interacting with our planets, we are excluding a repelling force called medium density expansion.
Finding the Virtual Velocity of Light, Solving the Mystery of the Failed Michelson-Morley Experiment
You pose a true and very disturbing problem, particularly when China's vastly growing needs for oil, gas, etc. are factored in. The demand and competition for cheap oil is steadily increasing, while the supply of CHEAP oil is steadily declining.
Sadly, the scientific "leadership" toward solutions has been rather totally misdirected.
Our policy makers and high government officials have already been advised that all that can be done is to:
(1) vigorously pursue finding and drilling and exploiting more sources of oil and gas,
(2) develop and force the use of fuel cells and a "hydrogen" economy (never mind all the problems that cannot be solved prior to 2020 or even 2050),
(3) open up the building of new nuclear reactors again,
(4) continue to fund hot fusion research, even though, in the nearly half century it has been ongoing, it has not added a single extra watt to the power line, and will not do so for at least the next 50 years.
That's basically the "sage energy advice" given to our leaders from the scientific community.
The standard electrical engineering and classical Maxwell-Heaviside electrodynamics model does assume that all EM fields and potentials are generated freely from their associated source charges. But those models do not even model the active vacuum and its exchange, or local curved spacetime.
So the model being used for electrical power engineering is so inane as to boggle the mind: it implicitly assumes that every EM field, every EM potential, and every joule of EM energy in the universe is and has been freely created out of nothing at all, by those source charges in the universe.
In short, the REAL "perpetual working machine with no energy input" advocates unwittingly are -- guess who -- the electrical engineering departments, professors, and textbooks, as well as the classical Maxwell-Heaviside electrodynamicists and their textbooks.
This "source charge problem", more than a century old, has also been rather deliberately "scrubbed out" of the texts, so that students will not know the problem exists, and will not know they have been taught a model that implicitly assumes the total falsification of energy conservation by every charge in the universe.
You can simply separate some charge quickly and, with instruments set up in advance, watch the associated fields and potentials appear and spread radially outward at light speed. Also, once a potential or field suddenly "reads" at a point reached in that outflow of energy, it remains and IS SUSTAINED thereafter.
This shows that a continuous flow of real EM energy pours from that charge in all directions, continuously, from the moment of formation or separation of the charge.
Yet none of our instruments will or can detect any OBSERVABLE energy input to the source charge.
Either we have to totally surrender the conservation of energy law, as being violated by every charge, EM field, EM potential, and joule of EM energy in the universe, or we have to find a NONOBSERVABLE (virtual state) energy input to that source charge.
If we can find such a virtual state continuous energy input to the charge, then we also have to find the exact method or process whereby the disordered virtual energy absorbed by the charge from the vacuum, is coherently integrated into real, observable EM energy that is emitted as real photons outgoing in all directions.
We of course already published the solution to that source charge problem, as well as the exact coherent integration mechanism. The source charge also does exhibit the continuous production of negative entropy, and all EM energy in every circuit and device is indeed extracted directly from the local vacuum by the asymmetry of the source charge in its exchange with the vacuum flux.
Oddly, all the energy in a circuit or device comes directly from the local active vacuum exchange, NOT from cranking the shaft of the generator or dissipating chemical energy in a battery.
It turns out that the fundamental problems with the old electrodynamics are:
(1) the assumption of an inert local vacuum (falsified in physics for some 70 or 80 years, and PARTICULARLY since the discovery of broken symmetry in 1957 and the prompt award of the Nobel Prize to Lee and Yang in December of that same year, 1957),
(2) the assumption of a flat local spacetime (falsified by general relativity since 1916),
(3) failure to recognize that gauge freedom means clearly that the potential energy of the system can be freely changed at any time, which in turn implies that this already freely collected regauging energy can then be simply dissipated into the loads to power them freely,
(4) incorporation of the diabolical Lorentz symmetrical regauging just to make the equations amenable to solution, when the forced symmetry arbitrarily discards all Maxwellian systems which could and would exhibit use of excess energy from the vacuum,
(5) the ubiquitous use of the standard closed current loop circuit, which self-enforces symmetrical regauging in the system and thus enforces COP less than 1.0;
(6) still assuming the old material ether, falsified more than 100 years ago by the Michelson-Morley experiments. This latter is still incorporated by the totally invalid assumption of force fields in the vacuum;
(7) acquiring from the older mechanics the notion that a separate mass-free force operates in space on a mass to "forcibly move" it. Indeed, a mass-free entity does act on the mass, but the notion of a "mass-free force" is an oxymoron. Mass is a component of force, by the definition F = d/dt(mv); both expansion terms have mass as a COMPONENT. This is known and stated by many leading physicists (Feynman, Wheeler, etc.) but continues to be ignored, and the equations are not changed accordingly;
(8) continued use of the 1872 Klein geometry and group theoretic methods, by which all negative entropy processes are excluded a priori and arbitrarily. The result is that broken symmetry at a given level loses the "information" and "ordering" at that level, and reduces the overall group symmetry. That excludes negative entropy, and continues to lead to the total farce of the present second law of thermodynamics, which has several limited violations experimentally proven, certain areas which do not obey the second law are also known, and the source charge violates it for any size level desired and for any length of time desired. What is required is the Leyton geometry and group symmetry methods. Under Leyton rules, a broken symmetry at a given level does not lose the information on ordering at that level, and it also generates a new symmetry at the next higher level -- a beautiful negentropic process!
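The product-rule point made in item (7), that F = d/dt(mv) = (dm/dt)v + m(dv/dt) carries mass in both expansion terms, can be checked numerically; the m(t) and v(t) below are arbitrary illustrative functions, not physical data.

```python
# Product-rule check for F = d/dt(m*v): both terms contain the mass m.
# m(t) and v(t) are made-up smooth functions chosen only for illustration.

def m(t):  # time-varying mass (e.g. a body shedding mass)
    return 100.0 - 2.0 * t

def v(t):  # time-varying velocity
    return 5.0 * t + 0.5 * t**2

def derivative(f, t, h=1e-6):
    """Central finite-difference approximation to df/dt."""
    return (f(t + h) - f(t - h)) / (2 * h)

t = 3.0
lhs = derivative(lambda s: m(s) * v(s), t)               # d/dt (m v)
rhs = derivative(m, t) * v(t) + m(t) * derivative(v, t)  # (dm/dt) v + m (dv/dt)

print(lhs, rhs)  # the two sides agree to within finite-difference error
assert abs(lhs - rhs) < 1e-4
```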
I attach a short little fact paper on the Leyton geometry and its implications.
If we can just get the scientific community into Leyton's geometry and methods, and working with his hierarchies of symmetry, then the energy crisis can be solved fairly quickly. One can take all the energy one wishes from the seething local vacuum, once the methods are known and utilized.
Anyway, that is where it is all leading.
LEYTON’S HIERARCHIES OF SYMMETRY: SOLUTION TO THE MAJOR ASYMMETRY PROBLEM OF THERMODYNAMICS.
© T. E. Bearden, August 22, 2003; updated Dec. 7, 2003
THE PROBLEM: THERMODYNAMICS HAS A TEMPORAL ASYMMETRY PROBLEM, RECOGNIZED FOR A CENTURY, BECAUSE THE SECOND LAW EXCLUDES NEGATIVE ENTROPY PROCESSES AND NATURE DOES NOT.
· Starting with some controlled available system energy, the second law provides that, in subsequent interactions, the entropy S of a system can only remain the same or increase. Or, ΔS ≥ 0, once the subsequent interactions start.
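A minimal numerical illustration of this inequality is irreversible heat flow between two reservoirs; the heat value and temperatures below are hypothetical, chosen only for the arithmetic.

```python
# Entropy change when heat Q flows irreversibly from a hot reservoir to a
# cold one: dS_total = Q/T_cold - Q/T_hot, which is >= 0 whenever
# T_hot >= T_cold, as the second law statement above requires.

def total_entropy_change(q, t_hot, t_cold):
    """q in joules, temperatures in kelvin (illustrative values)."""
    return q / t_cold - q / t_hot

ds = total_entropy_change(q=100.0, t_hot=400.0, t_cold=300.0)
print(ds)  # 100/300 - 100/400 = 0.0833... J/K, positive
assert ds >= 0
```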
The recognized major problem in thermodynamics arises from the present Second Law. As Price states:
"A century or so ago, Ludwig Boltzmann and other physicists attempted to explain the temporal asymmetry of the second law of thermodynamics. …the hard-won lesson of that endeavor—a lesson still commonly misunderstood—was that the real puzzle of thermodynamics is not why entropy increases with time, but why it was ever so low in the first place."
This problem particularly arises in prevailing notions of the origin of the universe, whether “big bang” or “steady whimper”. A great deal of organization and energy came from somewhere or somehow, in a relatively short time cosmologically, to initially generate enormous negative entropy shortly after the beginning.
If the energy of our observable universe somehow came from “outside” it (thus saving energy conservation), then it represented “loss” of available energy (positive entropy) to that outside source, and “negative entropy” to our gaining universe.
This suggests a possible clue to the solution: Look for a lower or “outside” broken symmetry generating a higher negative entropy (higher symmetry) across an interface between the outside source and our observable universe. We will find precisely this required characteristic in Leyton’s geometry and in his hierarchies of symmetry, as well as in the broken symmetry of particle physics.
Our observable state universe is separated by a quantum threshold interface from its associated virtual state vacuum. The vacuum has extraordinary virtual energy density and continuously exchanges energy with the observable state. This exchange in fact generates all observable forces of nature, in the modern physics view.
We are thus focused directly upon the disordered virtual energy of the vacuum, and some required process to coherently integrate virtual vacuum energy into observable energy, crossing the quantum threshold boundary—a negative entropy process.
Price also states:
"…the major task of an account of thermodynamic asymmetry is to explain why the universe as we find it is so far from thermodynamic equilibrium, and was even more so in the past."
A theoretical process for producing negative entropy will of course solve the problem, if a physical system continuously producing negative entropy by that process can also be exhibited experimentally. Leyton provides the process, and every charge in the universe is already just such a required physical system obeying it.
We have given the exact mechanism by which the charge coherently integrates virtual photon energy absorbed from the vacuum into observable photons, which are re-emitted in all directions as real EM energy without any observable EM energy input. The charge continuously consumes positive entropy of the virtual state vacuum, and produces negative entropy at the next higher level, the observable state.
FACTS BEARING ON THE PROBLEM: CHARACTERISTICS OF THE SECOND LAW AND ITS EXPERIMENTAL FALSIFICATION.
· The Second Law is statistical, so it need not apply where only a few entities are involved and statistical analysis is inapplicable. This is the well-known “small number” violation of the Second Law.
· The Second Law is also violated in statistical fluctuations, where the usual entropic reactions may run backwards and produce negative entropy for a time.
· Wang et al. have experimentally shown such fluctuation violations in chemical solutions at the cubic-micron level and for up to two seconds. In water, e.g., a cubic micron contains some 30 billion ions and molecules. In that ensemble, negative entropy reactions can occur for at least two seconds and sometimes longer.
· Evans and Rondoni showed that systems continuously producing negative entropy are possible in theory. Startled, they felt that real physical systems could not exhibit such behavior. However, every charge does it, as we have pointed out.
· Our proposed solution to the source charge problem is based on the accepted quantum field theory view of the charge and its vacuum polarization as a special dipolar ensemble. The bare charge in the middle is surrounded by virtual charges of opposite sign in the polarized vacuum. Both charges are infinite, but their difference is finite and is the textbook value of the “classical charge”: what the external observer sees or measures of the internal bare charge through its external screening charge.
o The ensemble exhibits the known broken symmetry of opposite charges.
o The charge ensemble thus continuously absorbs virtual photons from the seething vacuum, and pours out real photons in all directions, establishing and continuously replenishing its fields and potentials, spreading outward at light speed.
o The fields and potentials are deterministically ordered as a function of radial distance.
· This process produces ordered macroscopic energy from the vacuum’s disordered virtual energy flux. Thermodynamically the charge is a nonequilibrium steady state (NESS) system, continuously fed by vacuum energy, and continuously performing work to transduce the form of the absorbed virtual energy into emitted observable energy. It is also a deterministic process, since the emitted photon energy intensity is ordered with respect to radial distance from the source charge.
· Every charge continuously produces negative entropy, as shown theoretically possible by Evans and Rondoni for deterministic NESS systems.
· Simply regauging a system—ubiquitously permitted and used in gauge field theory and by every electrodynamicist—also totally violates the second law at any size level and for any period of time. This follows since the potential—and therefore the ordered, available potential energy—of an EM system can be freely changed at will.
o As an example, voltage amplification without current flow is work-free and involves only energy transfer in the same form. We strongly stress that work is the changing of the form of some energy, not the changing of its magnitude.
o The present second law is falsified by the gauge freedom axiom of gauge field theory. Regauging is a negative entropy operation, where the system’s available potential energy can be freely increased (or decreased) at will.
· The Second Law is already an approximate and “very leaky” law at best, and in some cases it is demonstrably wrong. In the case of regauging, the freely altered potential of the system may extend outward in space to infinity. Hence regauging can apply to any macroscopic size and time duration desired. This is a total violation of the second law of thermodynamics, and it also is experimentally verified.
· Since the Second Law excludes negative entropy, and negative entropy processes and systems do experimentally exist, we conclude that the present form of the Second Law is experimentally falsified.
· The task, then, is to correct the Second Law to permit the production of either positive or negative entropy. For this general correction, we require Leyton’s object-oriented geometry and his hierarchy of symmetries.
MORE FACTS: LEYTON’S NEW OBJECT-ORIENTED GEOMETRY AND HIERARCHIES OF SYMMETRY VS. THE OLDER KLEIN GEOMETRY.
· Since 1872, much of physics and most of thermodynamics have been based on Felix Klein’s geometry [9,10] and his group theoretic methods. Leyton’s object-oriented geometry and advanced group theoretic methods include Klein geometry as a subset.
· In Klein’s geometry and more limited group methods, a broken symmetry at a given level loses the symmetry information and reduces the overall group symmetry.
· In Leyton’s geometry and with his more advanced group methods, a broken symmetry at a given level generates a new symmetry at the next higher level. The information of the lower level is also retained and not lost. Hence in Leyton’s approach, a broken symmetry retains the symmetry information of that level, creates an additional higher symmetry, and increases the overall group symmetry.
· This automatic generation of a higher symmetry by a lower level broken symmetry is the Leyton effect. At the new higher level, symmetry can then be broken to again generate a yet higher level symmetry. And so on. Hence the Leyton effect generates a hierarchy of symmetries, increased and knit together by broken symmetries.
· The Leyton effect is a general negative entropy process. It converts disordered energy at one level into ordered energy at the next level.
· The Leyton effect and resulting hierarchies of symmetry, knit together by negative entropy processes, falsify the present Second Law of thermodynamics. Hence the Second Law must be revised to include the Leyton effect (negative entropy, negative entropy processes, and negative entropy-producing systems).
THE RESULTING SOLUTION: ADOPT LEYTON’S MORE ADVANCED OBJECT-ORIENTED GEOMETRY, APPLY LEYTON’S HIERARCHIES OF SYMMETRY, AND REWRITE THE SECOND LAW OF THERMODYNAMICS.
· The present Second Law of thermodynamics can be stated as:
“Given some available controlled order (available controlled energy), this initial controlled order will either remain the same or be progressively disordered and decontrolled over time by subsequent entropic interactions.” Or, simply put, dS/dt ≥ 0.
· In accord with Leyton’s geometry and methods, the revised Second Law can be stated as:
"First a negative entropy interaction occurs to produce some controlled order (available controlled energy). Then that initial available controlled order will either remain the same or be progressively disordered and decontrolled by subsequent entropic interactions over time, unless additional negative entropy interactions occur and intervene." Or, simply put, −∞ ≤ dS/dt ≤ +∞.
· The revised Second Law is now consistent with experiment [6,7,13], with the source charge solution, with the theoretical proof that negative-entropy-producing systems can exist, and with Leyton’s geometry and hierarchies of symmetry. It is inconsistent with Klein geometry [9,10] and Klein group methods, but these are only a subset of the Leyton geometry and thus cannot limit the Leyton effect and Leyton’s hierarchies of symmetry.
· In addition, the new second law statement resolves the asymmetry problem of thermodynamics [1,4], given that negative entropy processes occurred at or during the formation of the present universe. And the evidence supports it.
IMPLICATIONS: WE ARGUE THAT:
· This adoption of Leyton’s geometry and his group theoretic methods heralds a new revolution in physics, electrodynamics, thermodynamics, and chemistry.
· The coming revolution will be as profound as was the prediction of broken symmetry by Lee and Yang, and its quick experimental proof by Wu et al.
· Leyton’s geometry and methods have already been very successfully applied to robotics and pattern recognition. They work in many cases where the Klein geometry and methods fail.
· The Leyton change now thermodynamically prescribes EM power systems such as the source charge, which freely extract useful and observable EM energy from the virtual energy of the vacuum.
· The new approach thus leads directly to a great new re-examination of the present theory of electrical power systems. EM systems that violate the present thermodynamics are quite possible and several areas where the processes violate the second law are already known [6,7,13,14]. The Bohren-type experiment [14,15] involving negative resonance absorption of the medium provides a well-known and experimentally proven, replicable process in which 18 times as much energy is re-emitted by the absorbing medium as is directly input to it by Poynting energy flow calculations. The excess input energy, of course, comes from the local active vacuum environment.
· The new approach leads toward the rapid development of negentropic engineering, contrasted to the present highly wasteful and polluting positive entropy engineering universally used in electrical power systems because of the present limited Second Law of thermodynamics and the use of Klein geometry and group theoretic methods.
1. Huw Price, Time's Arrow and Archimedes' Point, Oxford University Press, 1996; paperback 1997, p. 78.
2. Boltzmann’s suggestion was that the world is simply a product of a chance fluctuation into a state of very low entropy. Little or no real progress has been made on the problem since then.
3. Michael Leyton, A Generative Theory of Shape, Springer-Verlag, Berlin, 2001.
4. Price, ibid., p. 36.
5. See (a) T. E. Bearden, "Giant Negentropy from the Common Dipole," Proc. Congr. 2000, St. Petersburg, Russia, Vol. 1, July 2000, p. 86-98; also published in J. New Energy, 5(1), Summer 2000, p. 11-23. (b) — Fact Sheet, “The Source Charge Problem: Its Solution and Implications,” Aug. 18, 2003. In this fact sheet, we give the exact physical mechanism that coherently integrates absorbed totally disordered virtual photon energy into real observable photons. The energy of each absorbed virtual photon is changed to a differential change of the mass m of the absorbing charge(s). Iterative changes thus algebraically sum since mass is unitary. When the mass differential reaches sufficient magnitude to constitute the energy for an observable photon, the zitterbewegung of the vacuum causes expulsion of an observable photon, decaying the mass back down to base level to start the process again. We have nominated the source charge (using this mechanism) as the first known physical system that continuously produces negative entropy, along the lines theoretically predicted by Evans and Rondoni. We have also nominated the coherent summation by the mass of the charge as the first known physical mechanism continuously producing negative entropy and thus falsifying the present form of the Second Law. See also (c) M. W. Evans, T. E. Bearden, and A. Labounsky, "The Most General Form of the Vector Potential in Electrodynamics," Found. Phys. Lett., 15(3), June 2002, p. 245-261.
6. A most useful and rigorous transient fluctuation theorem is given by D. J. Evans and D. J. Searles, "Equilibrium microstates which generate second law violating steady states," Phys. Rev. E, Vol. 50, 1994, p. 1645-1648.
7. G. M. Wang, E. M. Sevick, Emil Mittag, Debra J. Searles, and Denis J. Evans, "Experimental Demonstration of Violations of the Second Law of Thermodynamics for Small Systems and Short Time Scales," Phys. Rev. Lett., 89(5), 29 July 2002, 050601.
8. D. J. Evans and Lamberto Rondoni, "Comments on the Entropy of Nonequilibrium Steady States," J. Stat. Phys., 109(3-4), Nov. 2002, p. 895-920.
9. Felix Klein, "Vergleichende Betrachtungen über neuere geometrische Forschungen," 1872. Klein's Erlanger program was initiated in 1872 to describe geometric structures in terms of their automorphism groups. It has driven much of the physics development in the twentieth century.
10. Also see I. M. Yaglom, Felix Klein and Sophus Lie: Evolution of the Idea of Symmetry in the Nineteenth Century, Birkhäuser, Boston, MA, 1988.
11. (a) T. D. Lee, "Question of Parity Conservation in Weak Interactions," Phys. Rev., 104(1), Oct. 1, 1956, p. 254-259; (errata in Phys. Rev. 106(6), June 15, 1957, p. 1371); (b) T. D. Lee, Reinhard Oehme, and C. N. Yang, "Remarks on Possible Noninvariance under Time Reversal and Charge Conjugation," Phys. Rev., 106(2), 1957, p. 340-345.
12. C. S. Wu et al., "Experimental Test of Parity Conservation in Beta Decay," Phys. Rev., Vol. 105, 1957, p. 1413.
13. Dilip Kondepudi and Ilya Prigogine, Modern Thermodynamics: From Heat Engines to Dissipative Structures, Wiley, Chichester, 1998, reprinted 1999 with corrections, p. 459. Present thermodynamics is violated in rarefied media where local equilibrium fails, in strong gradients (about which little is known, either theoretically or experimentally), and in long-lasting memory effects occurring in materials and in many nonequilibrium processes. Other violations due to fluctuations are shown by Wang et al. (cited above).
14. (a) Craig F. Bohren, “How can a particle absorb more than the light incident on it?” Am. J. Phys., 51(4), Apr. 1983, p. 323-327. Metallic particles at ultraviolet frequencies are one class of such particles, and insulating particles at infrared frequencies are another. See also (b) H. Paul and R. Fischer, “Comment on ‘How can a particle absorb more than the light incident on it?’,” Am. J. Phys., ibid. Scientists in the area just use a change of reaction cross section. EM field and potential intensities are defined in terms of a “static unit point charge’s” scattering. If the same charge is resonant, it scatters more energy, much like a strongly churning rock on the bottom of a river displaces more water than does the same rock fixed on the bottom. Thermodynamically, that gives COP > 1.0—in fact, COP = 18.
15. Scientists in the area do not speak of the thermodynamic coefficient of performance of their experiments. Instead, they speak of the change in reaction cross section. EM field and potential intensities are defined in terms of a “static unit point charge’s” reaction cross section and resulting diverting of energy from the energy flows comprising the field or potential. If the same charge is resonant at the input light energy frequencies, its reaction cross section changes and it “collects” or diverts more energy than when static. The situation is roughly analogous to the diverting of a river’s flow around a rock that is being churned violently back and forth at right angles to the flow, as compared to the diversion of water flow around the same rock when the rock is fixed stationary. Thermodynamically, the free 18-fold increase in reaction cross section gives COP > 1.0—in fact, in that case COP = 18.
16. A close colleague and I are just filing a rather formidable patent application for possibly the first practical negative entropy engineering application of electromagnetic circuits. The circuitry is employed in an entirely different manner from that prescribed in the textbooks. The invention converts seemingly ordinary components and sections into true negative resistors freely receiving excess EM energy from their active external vacuum environment. This results in the output of much more EM energy than the operator himself inputs—by evoking free asymmetrical regauging which increases the potential energy circulating in the circuit and then dissipated in the loads to power them. Conservation of energy is obeyed at all times, since the excess energy is freely input from the active environment, much like a common home heat pump process. Negative entropy processes are quite real in nature, and they can also be evoked and utilized in otherwise somewhat ordinary electrical circuits. In short, Leyton’s epochal work has very practical application, and we expect to see it vigorously applied to electrical power systems in the future.
Einstein's Relativity Error
The physical sciences in 1873 seemed to once again take on an air of stability as James Clerk Maxwell published his 'Treatise on Electricity and Magnetism.'
In this work, he discussed electricity, magnetism, and electromagnetism as functions of waves in a fluid space (ether). His theory held popular support until 1887, when the two U.S. physicists Albert A. Michelson and Edward W. Morley performed their historic experiment with light. Their experiment (the Michelson-Morley experiment) was designed to use light as a means to determine whether space were a 'fluid', as Maxwell's equations had assumed.
The M-M test results, however, appeared to deny the existence of fluid (or ether) space. To explain the 'apparent' failure of the M-M test to detect the ether, Hendrik Lorentz and George Fitzgerald developed their now-famous 'transforms' (the Lorentz-Fitzgerald transforms, 1902), in which length contraction, mass increase, and time lag were offered as explanations for the negative test result. Note that the Lorentz-Fitzgerald transforms still treated space as an inertial fluid, one undetectable by known technology.
Einstein, who first began formulating his special theory of relativity in 1895, published it in 1905. He seized upon the Lorentz-Fitzgerald transforms and the M-M test results as evidence of a universal axiom: the velocity of light is (to the observer) the limit measurable velocity in the universe (this does not mean it is the limit velocity in the universe, however).
The discipline details
Einstein was faced with an apparent paradox as to the nature of space. It behaved like a fluid in many ways - yet in others it behaved like an abstract, ten-component Ricci tensor from the Riemannian model of the universe. The failure of the M-M test to detect an ether was the final straw. Yet, hard as he tried, Einstein failed to remove the ether from E = mc².
The following discussion should illustrate this point.
Diagram One above is a schematic of the M-M test. It was conducted on the basis that if an ether existed, the earth would be moving "through" it. Hence there would be a relative velocity between earth and the fluid of space.
It was reasoned that by splitting a beam of light (F) into two parts; sending one out and back in line with the direction of the earth's orbital path (to mirror A) from the half-silvered mirror (G) and glass plate (D), and the other out and back at right angles (to mirror B); and recombining the two beams in the interferometer (E), one should be able to detect a shift in the phases of the two beams relative to one another.
This shift could accurately be predicted by knowing the velocity of light (c) and the velocity (ve) of Earth through orbital space. Their reasoning was as follows (refer diag. 1, diag. 2a, diag. 2b):
c = velocity of light = velocity from G to B as seen by a fixed extraterrestrial observer
s = distance GA = GB
t1 = go-return time in-line (G-A-G)
t2 = go-return time at right angles (G-B-G)
t = 0.5 t2
v1 = apparent velocity from G to B by earth observer
Then the time (t1) is determined by: s/(c - ve) + s/(c + ve) = t1, which reduces to:
(Eq. 1) 2sc/(c^2 - ve^2) = t1
Also, the time (t2) is determined by first solving for (v1) in terms of (c) and (ve) using the Pythagorean theorem (c^2 = a^2 + b^2), or, in this instance: (G to B)^2 = (G to M)^2 + (M to B)^2
By substitution, c^2 = ve^2 + v1^2
(Eq. 2) v1 = (c^2 - ve^2)^0.5
Now, solving for the time (t) - which is the same over GM, GB and MB - of the GB trip by substituting s/t = v1 in (Eq. 2), one obtains:
(Eq. 3) s/t = (c^2 - ve^2)^0.5
(Eq. 3) t = s/(c^2 - ve^2)^0.5
Substituting t = 0.5 t2
gives: t2/2 = s/(c^2 - ve^2)^0.5
(Eq. 4) t2 = 2s/(c^2 - ve^2)^0.5
By comparing the ratio of the in-line go-return time (t1) to the right-angle go-return time (t2), one obtains:
(Eq. 5) t1/t2 = [2sc/(c^2 - ve^2)] / [2s/(c^2 - ve^2)^0.5]
which reduces to:
(Eq. 5) t1/t2 = (1 - ve^2/c^2)^(-0.5)
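As a quick numerical check (a sketch added here, not part of the original argument), the closed forms of (Eq. 1), (Eq. 4) and (Eq. 5) can be verified directly; the arm length s and the orbital velocity ve below are illustrative values only:

```python
import math

c = 299_792_458.0   # velocity of light, m/s
ve = 29_780.0       # illustrative Earth orbital velocity, m/s
s = 11.0            # illustrative interferometer arm length GA = GB, m

# (Eq. 1): in-line go-return time, s/(c - ve) + s/(c + ve)
t1 = s / (c - ve) + s / (c + ve)
assert math.isclose(t1, 2 * s * c / (c**2 - ve**2))

# (Eq. 4): right-angle go-return time
t2 = 2 * s / math.sqrt(c**2 - ve**2)

# (Eq. 5): the ratio reduces to (1 - ve^2/c^2)^(-0.5), slightly above 1
assert math.isclose(t1 / t2, (1 - ve**2 / c**2) ** -0.5)
```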
Now then, if the light source is at rest with respect to the ether, one sees:
(Eq. 6) ve = 0
(Eq. 7) t1/t2 = 1/(1 - 0)^0.5 = 1/1 = 1
Such a ratio as (Eq. 7) shows is exactly what each successive trial of the linear M-M test has obtained (notice: linear, not angular!). Lorentz and Fitzgerald knew there had to be an ether, so they developed their well-known transforms - an act which was in essence a way of saying: there has to be an ether; we'll adjust our observed results by a factor which will bring our hypothetical expectations and our test results into accord.
Their whole transform was based on the existence of ether space! Their transform, in essence said that length shortened, mass flattened, and time dilated as a body moved through the ether.
Einstein came along in 1905, saying the Michelson-Morley test showed the velocity of light to be a universal constant to the observer. Seizing upon this and the Lorentz-Fitzgerald transforms, Einstein was able to formulate his special relativity, which resulted in the now-famous E = mc^2, the derivation of which follows:
Starting with (Eq. 5): t1/t2 = (1 - ve^2/c^2)^(-0.5)
The Lorentz-Fitzgerald transform factor for (Eq. 5) becomes (1 - ve^2/c^2)^(-0.5) (to bring t2 = t1), giving t1/t2 an observed value of (1).
Assuming Lorentz and Fitzgerald's supposition to be correct, one should look at mass-in-motion as the observer on the mass sees it versus mass-in-motion as the universal observer sees it:
Let m1 = mass as it appears to the riding observer
Let v1 = velocity as detected by rider
Let m2 = mass as universal observer sees it
Let v2 = velocity as universal observer sees it
Then it follows (from Lorentz and Fitzgerald) that:
(Eq. 9) m1v1 ≠ m2v2
So, to equate the two products, Lorentz and Fitzgerald devised their transform factor (1 - ve^2/c^2)^(-0.5), which would bring m1v1 = m2v2 for either observer, yielding the following extension:
(Eq. 10) m1s1/t1 ≠ m2s2/t2
(Eq. 10) m1s1 ≠ m2s2
Then, by substitution of the transform factor s2 = s1(1 - ve^2/c^2)^0.5 (assuming time is the reference) into (Eq. 10), one obtains: m1s1 = m2s1(1 - ve^2/c^2)^0.5
which reduces to:
(Eq. 11) m2 = m1(1 - ve^2/c^2)^(-0.5)
To evaluate this relative change in mass, one should investigate the expanded form of the transform factor (1 - ve^2/c^2)^(-0.5) (which transforms t1 = t2). It is of the general binomial type:
(Eq. 12) (1 - b)^(-a)
Hence it can be expressed as the sum of an infinite series:
(Eq. 13) 1 + ab + a(a+1)b^2/2! + a(a+1)(a+2)b^3/3! + …
where b^2 is less than 1.
So, setting a = 0.5 and b = ve^2/c^2:
(Eq. 14) 1 + ve^2/2c^2 + 3ve^4/8c^4 + 5ve^6/16c^6 + …
For low velocities, of the order of 0.25c and less, the evaluation of (1 - ve^2/c^2)^(-0.5) is closely approximated by the first two terms of (Eq. 14):
(Eq. 15) (1 - ve^2/c^2)^(-0.5) ≈ 1 + ve^2/2c^2
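The claim that the first two terms suffice below roughly 0.25c can be tested numerically; this sketch (not in the original) compares (Eq. 15) against the exact factor:

```python
def gamma(beta):
    """Exact (1 - beta^2)^(-0.5), with beta = ve/c."""
    return (1.0 - beta**2) ** -0.5

def gamma_two_term(beta):
    """Two-term approximation 1 + beta^2/2 from (Eq. 15)."""
    return 1.0 + beta**2 / 2.0

# at ve = 0.25c the two-term form is already within about 0.2 percent
assert abs(gamma(0.25) - gamma_two_term(0.25)) / gamma(0.25) < 0.002

# at ve = 0.9c the higher-order terms of (Eq. 14) are no longer negligible
assert abs(gamma(0.9) - gamma_two_term(0.9)) / gamma(0.9) > 0.3
```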
so (Eq. 11) becomes:
(Eq. 16) m2 = m1(1 + ve^2/2c^2), where ve is less than 0.25c
Developing further: m2 = m1 + m1ve^2/2c^2
(Eq. 17) m2 - m1 = (0.5 m1ve^2)/c^2
Remembering that energy (E) is represented by:
(Eq. 18) E = 0.5mv^2 (where ve is less than 0.25c)
one can substitute (Eq. 18) into (Eq. 17), giving:
(Eq. 19) m2 - m1 = E/c^2 (assuming ve = v)
Representing the change in mass (m2 - m1) by M gives:
(Eq. 20) M = E/c^2
Or, in the more familiar form, using the general (m) for (M):
(Eq. 21) E = mc^2
(Note, however, that (Eq. 14) should be used for the greatest accuracy - especially where ve is greater than 0.25c.)
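The chain from (Eq. 16) through (Eq. 20) can likewise be sanity-checked with illustrative numbers (a sketch, assuming ve well below the 0.25c cutoff; m1 and ve are arbitrary choices):

```python
c = 299_792_458.0    # velocity of light, m/s
m1 = 1.0             # mass as seen by the riding observer, kg
ve = 0.01 * c        # well below the 0.25c cutoff

# (Eq. 16)/(Eq. 17): mass as the universal observer sees it
m2 = m1 * (1.0 + ve**2 / (2.0 * c**2))

# (Eq. 18): low-velocity kinetic energy
E = 0.5 * m1 * ve**2

# (Eq. 20): the mass difference M equals E/c^2
M = m2 - m1
assert abs(M - E / c**2) / M < 1e-9
```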
Looking at the assumption in (Eq. 19): (ve) was the term used in the beginning to represent the ether wind velocity. This means Einstein used fluid space as a basis for special relativity. His failing was in declaring the velocity of light an observable limit to the velocity of any mass, when it should only have been the limit to any observable electromagnetic wave velocity in the ether. The velocity of light is only a limit velocity in the fluid of space where it is being observed. If the energy density of space is greater or less in another part of space, then the relativistic velocity of light will pass up and down through the reference light wave velocity limit - if such exists.
Do not fall into the trap of assuming that this fluid space cannot have varying energy density. Perhaps the reader is this very moment saying that an incompressible fluid space does not allow concentrations of energy - but he is wrong - dead wrong!
When a fixed-density fluid is set in harmonic motion about a point or centre, the number of masses passing a fixed reference point per unit time can be observed as increased mass (or concentrated energy). Although the density (mass per volume) is constant, the mass-velocity product yields the illusion of more mass per volume per time. Space is an incompressible fluid of varying energy density… in this author's opinion!
The apparent absurdity of infinitely-increasing mass and infinitely-decreasing length as a mass approaches the light wave velocity is rationalized by realizing that space has inertia, and as such offers inertial resistance to the moving mass. The energy of the moving mass is transmitted in front of it into the medium of space. The resulting curl of inertial resistance increases as negative momentum, to the extent that the mass is converted to radiant energy as it meets its own reflected mass in resistance. However - to the Star Trek fans, take heart… just as man broke the sound velocity limit (sound barrier), he can also break the light velocity limit (light barrier). By projecting a high-density polarized field of resonating electrons to spoil or warp the pressure wave of the inertial curl, the hyper-light craft can slip through the warp opening before it closes, emitting the characteristics of a shock wave. Such a spoiler would be formed by using the electro-dynamic, high-energy-density electron waves which would normally precede the hyper-light craft, as a primary function of propulsion. When a similar function is executed by hypersonic aircraft, a sonic boom is formed as the inertial curl collapses on itself. In space, the light velocity equivalent to this sonic boom would be in the form of Cherenkov radiation, which is emitted as a mass crosses the light-velocity threshold, sending light tangential to the direction of travel.
Ether Existence Verified.
In 1913, the rotational version of the linear M-M experiment was successfully performed by G. Sagnac (see pp. 65-67 of The Physical Foundations of General Relativity by D. W. Sciama, Heinemann Educational Books Ltd., 48 Charles St., London W1X 8AH). In 1925, Michelson and Gale used the spinning earth as their rotational analogue of the linear M-M experiment. It also showed successfully that the velocity of light sent in the direction of spin around the perimeter of a spinning disc (or of the surface of the earth) varied from the velocity of the light sent against the spin. (Refer diagram 3 below.)
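The size of the rotational effect can be sketched with the standard first-order Sagnac formula, delta_t = 4*A*omega/c^2 (A the area enclosed by the light path, omega the angular velocity of the platform). The loop area and spin rate below are illustrative assumptions, not the values of Sagnac's actual apparatus:

```python
import math

def sagnac_delay(area_m2, omega_rad_s):
    """First-order time difference between co- and counter-rotating beams."""
    c = 299_792_458.0
    return 4.0 * area_m2 * omega_rad_s / c**2

# illustrative: a 0.1 m^2 loop spinning at 2 revolutions per second
dt = sagnac_delay(0.1, 2.0 * 2.0 * math.pi)
# dt is tiny (of the order of 1e-17 s) but detectable as a fringe shift
```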
The error of the M-M experiment is that the test results are also valid for the case where there is an ether and it, too, is moving along with the same relative velocity and orbit as Earth maintains around the Sun.
The Tea Cup Analogy can be used to explain the error.
If one stirs a cup of tea which has some small tea leaves floating on its surface (obviously before the invention of the ubiquitous tea bag!), one notices some of these tea leaves orbiting the vortex in the centre of the cup. The leaves closer to the centre travel faster than those farther from the centre (in both linear and angular velocity).
Now, one must imagine oneself greatly reduced in size and sitting upon one of these orbiting leaves. If one were to put his hands over the edge of his tea leaf on any side, would he feel any tea moving past? No! The reason is that the motion of the tea is the force that has caused the velocity of the leaf. One could not detect any motion if both oneself and the tea were travelling in the same direction at the same velocity. However, if one had arms long enough to stick a hand in the tea closer to either the centre or the rim of the cup - where the velocities were different from his own - then he would feel tea moving faster or slower than himself (respectively).
Also, if one were to spin his tea leaf at the same time as it orbits about the centre, placing his hands into the tea immediately surrounding his leaf would show inertial resistance against the spin moment of his leaf.
Solar Tea Cup
In the preceding analogy, the centre of the spinning tea (or vortex centre) represented the sun; the leaf, the earth; the tea, the ether; and the rider's hands, the light beams of the M-M test. In essence, what Michelson, Morley, Einstein and many other scientists have said is that the M-M test showed the velocity of light was not affected by the earth's orbital motion.
"Therefore," they have said, "we have one of two conclusions to draw:"
1. ) The Earth is orbiting the sun and there is no ether, or,
2. ) The Earth is not orbiting the sun and there is an ether; but since the earth is not moving through the ether, the ether "wind" cannot be detected. Obviously, this conclusion is negated by the Earth's observed heliocentric orbit.
However, their reasoning should also have incorporated a THIRD option.
3. ) The Earth is orbiting the sun… and so is the ether; therefore, no ether wind could be detected in the orbital vector immediately in the vicinity of Earth.
In other words, the test results cannot prove or disprove the existence of an ether…only whether or not the earth is moving relative to the ether!
C Not Constant
Remember, in 1913 G. Sagnac performed his version of the M-M experiment and corrected the inconclusive results which Michelson and Morley's test had obtained. In Sagnac's rotational analogue of the M-M test, the velocity of light was shown to vary. Also, in 1925, Michelson and Gale verified Sagnac's results with their own rotational analogue. Even more recently, similar verification has been made using a ring-laser system to detect the rotational velocity of the Earth relative to the ether.
Relativists Discard Evidence
By the time the ether wind was proven to exist, Einstein's theories were already winning strong support on the merits of celestial observations which closely agreed with Einstein's predicted values. As a result, the scientific community decided to explain the ether wind phenomenon as a result of Earth's spinning in its own ether blanket, which Earth was apparently dragging through space. No explanation was ever agreed upon as to the origin or extent of this ether blanket. It was simply a way to sweep a discrepancy under the carpet.
Einstein Admits Error.
In a biography written just before his death, Professor Einstein is quoted as admitting he had made a fundamental error in relativity. It was, he said, one which, when corrected, will explain how light - an obvious wave form - can be propagated across an apparently non-inertial space. Einstein also stated that the discovery of the solution to this error would probably be the result of some serendipitous discovery in the 1960s.
However, before he died, Einstein did manage to partially correct his error. With the help of the well-known Dr. Erwin Schrodinger, Dr. Einstein was able to construct a 'total theory' for existence. It was called the "Unified Field Theory". Although Dr. Einstein was able to lay the basic framework before his death, it is reasonably certain that a more readily usable version of the "Unified Field Theory" was only completed by other physicists after Einstein had died.
One of the more promising contributions toward a usable unified field theory was offered by Dr. Stanley Deser and Dr. Richard Arnowitt. They took the General Theory of Relativity which Einstein had devised and constructed a "bridge" or "creation tensor" to link the energy of nuclear fields with that of gravitational fields by covariant matrices. The basic relationship of General Relativity which they used as a basis for their system is:
Ruv - 0.5 guv R = 8(pi)k Tuv
where:
Ruv = Ricci's ten-component curvature tensor of Riemannian space
guv = the metric tensor
R = the Ricci scalar (the contracted curvature)
k = a universal constant, proportional to Newton's gravitational constant
pi = the usual constant, 3.14159…
Tuv = the components (potentials) of the energy-stress tensor
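For readability, the relationship above can also be set in conventional tensor notation (the same equation, with the usual index placement):

```latex
R_{\mu\nu} - \tfrac{1}{2} g_{\mu\nu} R = 8 \pi k \, T_{\mu\nu}
```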
Although Deser and Arnowitt's proposed equations were quite difficult to work with, it is rumored that subsequent linear variations have been developed, allowing major leaps in science and technology.
When the correctly formulated Unified Field Theory is finally released to the public, it will be recognised quite easily; for it will have explained why the proton is exactly 1836 times the gravitational mass of an electron… why there is no neutral mu-meson of mass 200… why (h) is a constant… and why hc/e^2 is always equal to (137).