She's not saying anything I haven't said 100 times already in this thread, though I think "wrong" needs to be qualified and is not so black and white in the case of Newton's model.
originally posted by: Hyperboles
a reply to: Arbitrageur
How come Prof. Ghez from UCLA is talking about Newton being wrong recently?
Maybe a better alternative to what Ghez says about Newton's model being wrong would be to call it "incomplete". Einstein was well aware of centuries of observations consistent with Newton's model, so he took care to show that his then-new theory of relativity simplifies to Newton's model in the limited case of velocities much lower than the speed of light and relatively weak gravitational fields.
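That "simplifies to Newton's model" claim is easy to check numerically. Here is an illustrative Python sketch comparing classical and relativistic kinetic energy; the test masses and speeds are my own choices, picked to show where the two answers converge and where they diverge:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def kinetic_energy_newton(m, v):
    """Classical kinetic energy (1/2) m v^2, in joules."""
    return 0.5 * m * v ** 2

def kinetic_energy_relativistic(m, v):
    """Relativistic kinetic energy (gamma - 1) m c^2, in joules."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return (gamma - 1.0) * m * C ** 2

# At 10% of light speed the classical answer is already within ~1%;
# near light speed it fails badly. At everyday speeds the two are
# indistinguishable in practice -- "incomplete", not useless.
for v in (3.0e7, 1.5e8, 2.9e8):
    ratio = kinetic_energy_newton(1.0, v) / kinetic_energy_relativistic(1.0, v)
    print(f"v = {v:.1e} m/s  ->  Newton/Einstein = {ratio:.4f}")
```

The ratio drifts smoothly away from 1 as v approaches c, which is exactly the "limited case" behavior described above.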
The basic trouble, you see, is that people think that "right" and "wrong" are absolute; that everything that isn't perfectly and completely right is totally and equally wrong.
However, I don't think that's so. It seems to me that right and wrong are fuzzy concepts, and I will devote this essay to an explanation of why I think so...
Since the refinements in theory grow smaller and smaller, even quite ancient theories must have been sufficiently right to allow advances to be made; advances that were not wiped out by subsequent refinements...
Naturally, the theories we now have might be considered wrong..., but in a much truer and subtler sense, they need only be considered incomplete.
One of the common questions or comments we get on PF is the claim that classical physics or classical mechanics (i.e. Newton's laws, etc.) is wrong because it has been superseded by Special Relativity (SR) and General Relativity (GR), and/or Quantum Mechanics (QM). Such claims are typically made either by a student who has barely learned anything about physics, or by someone who has not had a formal education in physics. There is somehow a notion that SR, GR, and QM have shown that classical physics is wrong, and so it shouldn't be used.
There is a need to debunk that idea, and it needs to be done in the clearest possible manner. This is because the misunderstanding that produces such an erroneous conclusion is not simply due to a lack of knowledge of physics, but due to something more inherent in the difference between science and our everyday world and practices. It is rooted in how people accept certain things while being unable to see how one idea can merge into another under different circumstances.
Before we deal with specific examples, let's get one FACT straightened out:
Classical physics is used in an overwhelming majority of situations in our lives. Your houses, buildings, bridges, airplanes, and physical structures were built using the classical laws. The heat engines, motors, etc. were designed based on classical thermodynamics laws. And your radio reception, antennae, TV transmitters, wi-fi signals, etc. are all based on classical electromagnetic description.
These are all FACTS, not a matter of opinion. You are welcome to check for yourself and see how many of these were done using SR, GR, or QM. Most, if not all, of these would endanger your life and the lives of your loved ones if they were not designed or described accurately. So how can one claim that classical physics is wrong, or incorrect, if they work, and work so well in such situations?
Quantum stabilised atom mirror which, despite small holes and “islands”, mostly has a smooth surface...
There is no such thing as a one-photon-thick beam of light. Photons are not solid little balls that can be lined up in a perfectly straight beam that is one photon wide. Instead, photons are quantum objects. As such, photons act somewhat like waves and somewhat like particles at the same time. When traveling through free space, photons act mostly like waves. Waves can take on a variety of beam widths. But they cannot be infinitely narrow since waves are, by definition, extended objects. The more you try to narrow down the beam width of a wave, the more it will tend to spread out as it travels due to diffraction. This is true of water waves, sound waves, and light waves. The degree to which a light beam diffracts and diverges depends on the wavelength of the light. Light beams with larger wavelengths diverge more strongly than light beams with smaller wavelengths, all else being equal. As a result, smaller-wavelength beams can be made much narrower than larger-wavelength beams. The narrowness of a light beam therefore is ultimately limited by wave diffraction, which depends on wavelength, and not by a physical width of photon particles. The way to get the narrowest beam of light possible is by using the smallest wavelength available to you and focusing the beam, and not by lining up photons (which doesn't really make sense in the first place).
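The wavelength dependence described above (shorter wavelengths allow narrower beams) follows from the far-field divergence of a diffraction-limited Gaussian beam, θ ≈ λ/(πw₀). A quick Python sketch; the 1 mm waist and the two wavelengths are assumed values chosen for illustration:

```python
import math

def divergence_half_angle(wavelength_m, waist_radius_m):
    """Far-field half-angle (radians) of a diffraction-limited Gaussian beam."""
    return wavelength_m / (math.pi * waist_radius_m)

waist = 1e-3  # 1 mm beam waist radius
for name, wl in [("green (532 nm)", 532e-9), ("CO2 IR (10.6 um)", 10.6e-6)]:
    theta = divergence_half_angle(wl, waist)
    print(f"{name}: half-angle {theta:.2e} rad")
# From the same 1 mm waist, the 10.6 um beam diverges ~20x more than
# the 532 nm beam -- "larger wavelengths diverge more strongly".
```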
Furthermore, photons are bosons, meaning that many photons can overlap in the exact same quantum state. Millions of photons can all exist at the same location in space, going the same direction, with the same polarization, the same frequency, etc. In this additional way, the notion of a "one-photon-thick" beam of light does not really make any sense. Coherent beams such as laser beams and radar beams are composed of many photons all in the same state. The number of photons in a light beam is more an indication of the beam's brightness than of the beam's width. It does make sense to talk about a beam with a brightness of 1 photon per second. This statement means that a sensor receives one photon of energy from the light beam every second (which is a very faint beam of light, but is encountered in astronomy). Furthermore, we could construct a light source that only emits one photon of light every second. But we discover that as soon as the photon travels out into free space, the single photon spreads out into a wave that has a non-zero width and acts just like a coherent beam containing trillions of photons. Therefore, even if a beam has a brightness of only one photon per second, it still travels and spreads out through space like any other light beam.
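The point that photon count measures brightness rather than width can be made concrete: even a modest laser pointer emits an enormous photon flux. A rough Python estimate; the 1 mW power and 650 nm wavelength are assumed typical values, not taken from the article:

```python
h = 6.626e-34        # Planck constant, J*s
c = 2.998e8          # speed of light, m/s
wavelength = 650e-9  # typical red laser pointer, m
power = 1e-3         # 1 mW output, J/s

energy_per_photon = h * c / wavelength        # ~3.1e-19 J per photon
photon_rate = power / energy_per_photon       # photons per second
print(f"~{photon_rate:.1e} photons/s")        # on the order of 10^15
```

A "one photon per second" beam is therefore about fifteen orders of magnitude fainter than a laser pointer, which is why it only turns up in contexts like astronomy.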
My previous link explained some limitations to making photon beams narrow, but it didn't mention uncertainty. There certainly have been experiments which seem to indicate that a certain amount of uncertainty is fundamental.
originally posted by: Archivalist
If no one is willing to tackle these questions, or to determine experimental methods to test "questions of uncertainty", how do we know those are really barriers of physics?
Light which is confined to a small aperture does indeed spread out in accordance with the uncertainty principle. This applies to laser beams as well. If you measure the laser beam at varying distances, you will find that its size continually increases at a hyperbolic rate. In fact, take your laser out to a field at night and look at it on a screen as far away as you can, and the beam will be huge!
If you tried to hit the moon with a laser, you'd want to use a BIG beam- 8 meters (about 26 feet) in radius. By the time it reached the moon, it would "only" be 16 meters across. That's the smallest spot you could make on the moon. In contrast, if you shone your handheld laser at the moon, it would expand to over 60,000 meters (~40 miles) wide!
An interesting fact to notice, however, is that the uncertainty principle only fixes the minimum amount a beam can spread out. Most beams spread out much faster than Heisenberg requires. A gaussian-shaped laser beam is one of the only types of light beams that actually spreads out by this minimum amount. That is why you don't see it spreading out unless you measure carefully, or just measure its size far away from the laser. (In addition, keep in mind that a typical laser beam is over 1000 times wider than the wavelength of light. This is reasonably large, so the beam doesn't diffract very rapidly. If your laser beam were much smaller, it would diverge faster.)
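The moon-shot numbers above can be reproduced approximately from the Gaussian beam formula w(z) = w₀√(1 + (z/z_R)²), with Rayleigh range z_R = πw₀²/λ. A Python sketch; the green wavelength, the ~2 mm handheld waist, and the 1/e² radius convention are my assumptions, so the results only roughly match the figures quoted:

```python
import math

def beam_radius(w0, z, wavelength):
    """1/e^2 radius of a Gaussian beam at distance z from a waist of radius w0."""
    rayleigh = math.pi * w0 ** 2 / wavelength  # Rayleigh range, m
    return w0 * math.sqrt(1.0 + (z / rayleigh) ** 2)

MOON = 3.84e8    # mean Earth-moon distance, m
GREEN = 532e-9   # green laser wavelength, m

big = beam_radius(8.0, MOON, GREEN)     # 8 m radius "telescope-sized" beam
small = beam_radius(2e-3, MOON, GREEN)  # ~2 mm waist handheld pointer
print(f"8 m beam:      ~{2 * big:.0f} m across at the moon")
print(f"handheld beam: ~{2 * small / 1000:.0f} km across at the moon")
```

The big beam arrives a few tens of meters wide while the handheld beam balloons to tens of kilometers, the same qualitative picture as the quoted figures.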
Earlier than 10^-36 seconds, we simply don't understand the nature of the universe. The Big Bang theory is fantastic at describing everything after that, but before it, we're a bit lost. Get this: At small enough scales, we don't even know if the word "before" even makes sense! At incredibly tiny scales (and I'm talking tinier than the tiniest thing you could possibly imagine), the quantum nature of reality rears its ugly head at full strength, rendering our neat, orderly, friendly spacetime into a broken jungle gym of loops and tangles and rusty spikes. Notions of intervals in time or space don't really apply at those scales. Who knows what's going on?
There are, of course, some ideas out there — models that attempt to describe what "ignited" or "seeded" the Big Bang, but at this stage, they're pure speculation. If these ideas can provide observational clues — for example, a special imprint on the CMB, then hooray — we can do science!
If not, they're just bedtime stories.
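For scale, the "incredibly tiny" regime where quantum effects overwhelm spacetime is conventionally identified with the Planck time and Planck length, which follow directly from the fundamental constants. A quick Python computation (constant values rounded; this is a unit estimate, not part of the quoted article):

```python
import math

hbar = 1.0546e-34  # reduced Planck constant, J*s
G = 6.674e-11      # Newton's gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s

planck_time = math.sqrt(hbar * G / c ** 5)    # ~5.4e-44 s
planck_length = math.sqrt(hbar * G / c ** 3)  # ~1.6e-35 m
print(planck_time, planck_length)
```

Both numbers sit far below the 10^-36 s frontier mentioned above, which is why "before" may not even be a meaningful word there.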
I don't think I've ever seen his model but I've seen some corkscrew youtube animations by others that never get it quite right because they use 90 degree angles instead of 60 degrees, and they don't show this:
originally posted by: ErosA433
The model you are referring to from DjSadhu is quite inaccurate for a few reasons...
the planets orbit around the sun in a plane tilted at about 60 degrees to the plane of the Milky Way.
That rise and dip is never shown in any corkscrew animation I've seen.
The sun's path around the galactic disk is quite circular as far as we can tell; we rise and dip in and out of the disk.
I'm not understanding why you think that makes the accepted definition of "orbit" inaccurate. For example here's a definition I found for orbit:
If no planet ever goes in front of the sun's forward trajectory, aka in front of ol' Sol as it flies forward (or falls into another's gravity well) through the solar system, then what is a more correct definition of 'orbit'?
originally posted by: Arbitrageur
I don't think I've ever seen his model but I've seen some corkscrew youtube animations by others that never get it quite right because they use 90 degree angles instead of 60 degrees, and they don't show this:
originally posted by: ErosA433
The model you are referring to from DjSadhu is quite inaccurate for a few reasons...
the planets orbit around the sun in a plane tilted at about 60 degrees to the plane of the Milky Way.
So I think the origins of this incorrect 90 degree angle go as far back as 1989, since you can see it in the screen of that 1989 computer model.
The animation is a video screenshot from Voyage through The Solar System version 1.20 (1989) and shows the Earth's true motion in spirals.
Those questions seem to be posed as if there's a static, non-moving situation when those events occur. But the moon moves around the Earth at 2,288 miles per hour (3,683 kilometers per hour), fast enough that it keeps falling toward the Earth but never hits it. The Earth is also falling toward the moon. If the Earth and the moon were of equal mass, they would orbit a common center of gravity (called a barycenter) halfway between them, which would look something like this:
* Why does Gaia not 'fall' into Lunar when it's between us and Sol?
* Why does Lunar not 'fall' into Gaia when it's between us and Sol?
originally posted by: Skyfox81
a reply to: Arbitrageur
How do gravity wells work in space, on a solar scale?
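The 2,288 mph figure and the Earth-moon barycenter can both be checked with a few lines of Python, using standard values for the masses, orbit radius, and sidereal period (my assumed inputs):

```python
import math

r = 3.844e8          # mean Earth-moon distance, m
T = 27.32 * 86400    # sidereal month, s
m_earth = 5.972e24   # Earth mass, kg
m_moon = 7.342e22    # moon mass, kg

v = 2 * math.pi * r / T                   # moon's mean orbital speed, m/s
r_bary = r * m_moon / (m_earth + m_moon)  # barycenter distance from Earth's center

print(f"moon's speed: {v:.0f} m/s = {v * 2.23694:.0f} mph")
print(f"barycenter:   {r_bary / 1000:.0f} km from Earth's center")
```

The speed comes out near 1,023 m/s (~2,289 mph), and because the masses are unequal the real barycenter sits roughly 4,700 km from Earth's center, inside the Earth rather than halfway between the two bodies.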
The Hubble Constant is a correlation between the distance of galaxy clusters and their recession velocities. Dark flow is a claimed deviation from that where the recession velocity of a cluster of galaxies does not align with the Hubble constant.
originally posted by: MaxNocerino8
What is Dark Flow?
24 Sep 2008 - Kashlinsky et al. (2008) have claimed a detection of a bulk flow in the motion of many distant X-ray emitting clusters of galaxies. Unfortunately this paper and the companion paper have several errors so their conclusions cannot be trusted.
Planck data also set strong constraints on the local bulk flow in volumes centred on the Local Group. There is no detection of bulk flow as measured in any comoving sphere extending to the maximum redshift covered by the cluster sample. A blind search for bulk flows in this sample has an upper limit of 254 km s^−1 (95% confidence level) dominated by CMB confusion and instrumental noise, indicating that the Universe is largely homogeneous on Gpc scales.
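The Hubble correlation described above is just v = H₀·d; a "dark flow" signal would be a residual velocity left over after subtracting that expected recession. A minimal sketch, assuming a round H₀ of 70 km/s/Mpc and a hypothetical cluster as an example:

```python
H0 = 70.0  # Hubble constant, km/s per Mpc (assumed round value)

def hubble_velocity(distance_mpc):
    """Expected recession velocity (km/s) from Hubble's law."""
    return H0 * distance_mpc

def peculiar_velocity(observed_kms, distance_mpc):
    """Residual "bulk flow" velocity after removing the Hubble flow."""
    return observed_kms - hubble_velocity(distance_mpc)

# Hypothetical cluster at 300 Mpc observed receding at 21,250 km/s:
# the 250 km/s residual is what a bulk-flow search would look for.
print(hubble_velocity(300))            # 21000.0 km/s expected
print(peculiar_velocity(21250, 300))   # 250.0 km/s residual
```

The Planck result quoted above caps any such residual at 254 km/s, consistent with no dark flow at all.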