originally posted by: Steffer
What would be the theoretical computer (processing power) required to render a 1920x1080 HD resolution screen at 1.855e+43 frames per second?
I'm very curious as to what some of the responses might be on this, even if there aren't any possible answers.
Thanks.
Proof there is no conscious perception of continuous time: this MUST BE TRUE, at least from our consciousness's point of view. Let me explain. The human brain is connected to the outside world through millions of NEURONS. Each neuron is eventually connected to some kind of sensing device, like an eyeball or an eardrum. But here's the thing: all neural signals are DIGITIZED! That's how neurons operate, by switching on and off. From this we can conclude that even if the Universe were analog, our conscious perception only sees digital.
What makes it appear realistic, and seemingly flow from the past into the future, is the resolution our Universe operates at. How many frames per second does our real Universe run at? The derivation for this is discussed below: about 1.855e+43 frames per second... amazingly fast, so fast that it's impossible for us to tell that time itself is DIGITIZED.
I liken our perception of our Universe to the frames in a CGI movie: it doesn't matter how long it takes to compute each successive frame, nor does it matter if there are pauses in between the recording of the frames. Upon playback, although it appears that time flows smoothly, in reality it is the sequence of frames that creates this perceived tempo of time. The video can be paused and continued, and the characters within the movie (within the VR of the CGI) are none the wiser... each frame is a separate state, frozen in time, just like our mini-Universe examples.
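For reference, the 1.855e+43 figure matches the inverse of the Planck time (about 5.39e-44 seconds). Here's a quick sketch of that arithmetic; note the "one frame per Planck time" idea is the poster's speculation, not established physics:

```python
# Frame rate of the Universe under the (speculative) assumption
# that one "frame" elapses per Planck time.
PLANCK_TIME_S = 5.391247e-44  # Planck time in seconds (CODATA value)

frames_per_second = 1.0 / PLANCK_TIME_S
print(f"{frames_per_second:.3e} frames per second")  # ~1.855e+43
```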
I would say the question doesn't make much sense without further explanation or context.
originally posted by: Steffer
What would be the theoretical computer (processing power) required to render a 1920x1080 HD resolution screen at 1.855e+43 frames per second?
I'm very curious as to what some of the responses might be on this, even if there aren't any possible answers.
Thanks.
Humans perceive a stable average intensity image without flicker artifacts when a television or monitor updates at a sufficiently fast rate. This rate, known as the critical flicker fusion rate, has been studied for both spatially uniform lights and spatio-temporal displays. These studies have included both stabilized and unstabilized retinal images, and report the maximum observable rate as 50–90 Hz. A separate line of research has reported that fast eye movements known as saccades allow simple modulated LEDs to be observed at very high rates. Here we show that humans perceive visual flicker artifacts at rates over 500 Hz when a display includes high frequency spatial edges. This rate is many times higher than previously reported. As a result, modern display designs which use complex spatio-temporal coding need to update much faster than conventional TVs, which traditionally presented a simple sequence of natural images.
I could really write a book on this subject in general, but obviously not for a question that you yourself call a "nonsensical and silly question". I said the human perception problem was the tip of the iceberg of the problems with this question, and you just touched on the main part of the iceberg.
originally posted by: Steffer
Yeah, I didn't think my question would make all that much sense, but I at least wanted to try.
For example, my computer is 3.4 GHz and is capable of 1080 HD video playback at about 30 frames per second.
I'm not sure what the low end specs of a computer are needed for this type of playback though.
Anyway, I was just wondering how much processing power (theoretical GHz, I guess) would be required to generate that intense frame rate on such a small scale.
So it uses 7,168 GPUs, which deliver the same performance as 50,000 CPUs. I'm not going to do exact calculations for your silly question, but if, ballpark, you want a trillion trillion trillion times more performance than these 7,168 GPUs, then you'd need on the order of a trillion trillion trillion of these monsters, each of which uses roughly the same amount of power consumed by 4,000 homes. And that's presuming you could even hook all that processing power up to a display, which might not even be possible with present technology; we also don't have display technology that can effectively handle such high frame rates. So again, the problems with this whole concept are so numerous that it's way more complex than just increasing the gigahertz of the CPU.
7,168 NVIDIA Tesla M2050 GPUs coupled with Xeon and SPARC processors. According to center officials, it would have taken 50,000 CPUs and twice as much floor space to deliver the same performance using CPUs alone. In addition to its floating point performance, the Tianhe-1A machine has continued to deliver excellent performance per watt, consuming 4.04 megawatts.
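A rough sketch of that scaling, under the loose (hypothetical) assumption that one Tianhe-1A-class machine can handle ordinary 1080p playback at 30 fps and that the required hardware scales linearly with frame rate:

```python
# Ballpark scaling for the original question, assuming one
# Tianhe-1A-class system covers a 1080p stream at 30 fps and
# hardware requirements scale linearly with frame rate.
target_fps = 1.855e43         # frame rate from the original question
baseline_fps = 30.0           # ordinary HD video playback rate
power_per_machine_mw = 4.04   # Tianhe-1A power draw in megawatts (from the quote)

machines = target_fps / baseline_fps
total_power_mw = machines * power_per_machine_mw
print(f"machines needed: {machines:.1e}")       # ~6.2e+41
print(f"total power: {total_power_mw:.1e} MW")  # ~2.5e+42 MW
```

For scale, the Sun's entire output is only about 3.8e+20 MW, so the power budget alone is more than 20 orders of magnitude beyond anything physically available.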
originally posted by: Hyperboles
Ques:
Hey what would it take to generate cosmic rays in a lab?
Based on a calculation of neural decoherence rates, we argue that the degrees of freedom of the human brain that relate to cognitive processes should be thought of as a classical rather than quantum system, i.e., that there is nothing fundamentally wrong with the current classical approach to neural network simulations. We find that the decoherence timescales (~10^-13 to 10^-20 seconds) are typically much shorter than the relevant dynamical timescales (~0.001-0.1 seconds), both for regular neuron firing and for kink-like polarization excitations in microtubules. This conclusion disagrees with suggestions by Penrose and others that the brain acts as a quantum computer, and that quantum coherence is related to consciousness in a fundamental way.
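To get a quick sense of the separation of scales in that abstract (this just restates the quoted numbers):

```python
# Ratio of dynamical to decoherence timescales from the quoted abstract.
decoherence_s = 1e-13  # longest decoherence timescale quoted, in seconds
dynamical_s = 1e-3     # shortest dynamical timescale quoted, in seconds

ratio = dynamical_s / decoherence_s
print(f"dynamics are at least {ratio:.0e}x slower than decoherence")  # 1e+10
```

In other words, any quantum coherence is gone at least ten billion times faster than the fastest neural dynamics, which is the basis of the classical-brain conclusion.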
So this is an accomplishment, but it was still done on a semiconductor chip, which doesn't occur naturally.
In quantum physics, the creation of a state of entanglement in particles any larger and more complex than photons usually requires temperatures close to absolute zero and the application of enormously powerful magnetic fields to achieve. Now scientists working at the University of Chicago (UChicago) and the Argonne National Laboratory claim to have created this entangled state at room temperature on a semiconductor chip, using atomic nuclei and the application of relatively small magnetic fields.
originally posted by: moebius
a reply to: Phantom423
You get entanglement whenever two particles interact. The problem is keeping them entangled. Interaction with the environment messes it up rather quickly.
One term is "decoherence", which can degrade or terminate entanglement, and it can result from interaction with the environment. Here are a couple of links with excerpts:
originally posted by: Phantom423
I don't understand what you mean by "interaction with the environment". If a pair of photons is entangled and they separate in opposite directions, they could be anywhere in the universe. What type of environment would cause them to untangle (or detangle - not sure of the terminology)?
Do you have a few references where I could get some detail about this? Thanks for the reply.
Since entanglement is a critical factor in quantum information, and decoherence can degrade or terminate entanglement (the latter referred to as entanglement sudden death), preserving coherence is vital to the development of quantum computing, quantum cryptography, quantum teleportation, quantum metrology and other quantum information applications.
Decoherence attempts to explain the transition from quantum to classical by analyzing the interaction of a system with a measuring device or with the environment. It is convenient to imagine a quantum mechanical particle or system of particles as an isolated system floating in empty space. This simplification may be fine in some cases but in the real world there is no such thing as an isolated system. Typically a particle in flight will collide with air molecules or will emit thermal radiation that gets absorbed by the environment. Any interaction with the environment leads to an entanglement between the particle's state and the environment's state. As the entanglement diffuses throughout the environment the total state can no longer be separated into the direct product of a particle state and an environment state. What was once a superposition of particle states becomes a superposition of particle ⊗ environment states. At this point the particle ceases to act as if it were in a quantum superposition of states, instead acting as a statistical ensemble of states.
The end result of the decoherence process is that the particle will appear to have collapsed in a manner described by the Born probability law...
Decoherence tends to happen on an extremely fast timescale in most situations. The decoherence rate depends on several factors including temperature, uncertainty in position, and number of particles surrounding the system. Temperature affects the rate of blackbody radiation; each radiated photon will interact with the environment. Uncertainty in position tends to create a wide range of interaction energies and thus a rapid spread in vector components. The number of particles in the surroundings affects the rate at which interactions can happen. The rule of thumb is that decoherence occurs when the environment gains enough information to learn something about an observable. In any case it takes only a few interactions before a system has become completely decoherent. A single collision with an air molecule is enough to cause a chain reaction of decoherence as the colliding molecule in turn collides with its neighbors.
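As a toy illustration of the "superposition becomes a statistical ensemble" language in those excerpts, here is a minimal sketch (my own, not from the linked sources) of pure dephasing acting on a single qubit: the off-diagonal density-matrix elements decay while the populations stay fixed.

```python
import numpy as np

# Single qubit prepared in the superposition (|0> + |1>)/sqrt(2).
# Pure dephasing from environmental interaction multiplies the
# off-diagonal density-matrix elements by exp(-t/T2) while leaving
# the populations alone, turning a coherent superposition into a
# 50/50 statistical mixture.
T2 = 1.0  # arbitrary decoherence timescale

def dephase(rho0, t, T2):
    rho = rho0.copy()
    decay = np.exp(-t / T2)
    rho[0, 1] *= decay
    rho[1, 0] *= decay
    return rho

rho0 = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)  # pure superposition
for t in [0.0, 1.0, 5.0]:
    rho = dephase(rho0, t, T2)
    print(f"t={t}: off-diagonal |rho01| = {abs(rho[0, 1]):.3f}")
# The coherence falls from 0.5 toward 0: the collapse-like behavior
# described in the excerpt, with no actual wavefunction collapse needed.
```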
originally posted by: mbkennel
a reply to: joelr
Hey, was I on or off target on my description of quantum fields? It's not really my specialty at all---I'm pretty sure you know more.
In classroom examples, the mass and energy of the rope are often neglected; however, like all simplifications, that doesn't mean the mass and energy of the rope are zero, but if the rope weighs, say, 5 pounds, that is fairly small compared to 500 pounds. Since kinetic energy is related to mass times velocity squared, if the mass of the rope is low compared to the block, so is its energy. So most of the kinetic energy is in the falling block and not the rope.
originally posted by: tinymind
a reply to: Arbitrageur
I am wondering about the transfer of energy through a system.
I know I can raise a block of material which weighs 500 pounds by using a block and tackle with 5 pulleys, giving me a 5:1 mechanical advantage for lifting. My question is: if I raise the block to a height of 10 feet and suddenly release it, what amount of energy is then imparted to the rope as it falls?
Obviously if you attach the 100 pound weight before the fall while the 500 pound block is still stationary, that would prevent the fall.
Does the 5:1 ratio still apply to the long end of the rope; would a 100 pound resistance stop the fall, or would a much larger force be needed to stop the block from reaching the ground?
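Here's a rough sketch of the statics and energy using the numbers from the question (my own illustration, assuming an ideal frictionless 5:1 block and tackle with a massless rope, per the simplification discussed above):

```python
# Ideal 5:1 block and tackle: frictionless pulleys, massless rope.
# Numbers taken from the question above.
block_weight_lb = 500.0
advantage = 5.0          # five supporting rope segments
drop_height_ft = 10.0

# Static balance: force needed at the free end to hold the block.
holding_force_lb = block_weight_lb / advantage
print(f"holding force: {holding_force_lb:.0f} lb")   # 100 lb

# Energy released if the block falls the full 10 ft (foot-pounds).
energy_ft_lb = block_weight_lb * drop_height_ft
print(f"energy released: {energy_ft_lb:.0f} ft-lb")  # 5000 ft-lb

# The free end of the rope travels 5x the block's distance, so it
# moves 5x as fast; a light rope still carries little kinetic energy
# because its mass is small compared to the block's.
print(f"free-end speed = {advantage:.0f}x block speed")
```

So in the ideal static case a 100 pound pull at the free end does balance the block, but once the block is already falling, stopping it before it hits the ground takes more than 100 pounds, since the extra force has to absorb the kinetic energy the block has already gained.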