IBM says it is teaming with an EU-funded consortium to lower the energy consumption of electronic devices by an order of magnitude. The group says it hopes to combine tunnel field effect transistors (TFETs) with semiconducting nanowires to create a “zero-watt PC.” And to do it in 36 months. And then to share the research so manufacturers can build gadgets that need only tiny sips of electricity when operating, and virtually nothing when in sleep mode.
"Our vision is to share this research to enable manufacturers to build the Holy Grail in electronics, a computer that utilises negligible energy when it's in sleep mode, which we call the zero-watt PC," said EPFL project coordinator Adrian Lonescu. The design could also be applied to portable electronic device processors as well, where it could potentially extend battery life.
The three-year project will explore an alternative design to the standard CMOS (complementary metal-oxide-semiconductor) designs used to build virtually all commercially available computer chips today. The new approach will use nanowire-based TFETs (tunnel field effect transistors), as an alternative to the MOSFTs (metal--oxide--semiconductor field-effect transistors) used in CMOS chips.
Originally posted by woogleuk
reply to post by rogerstigers
I think it's going to be a long time before we start seeing displays that don't require backlighting, if indeed we ever do. I think the progress to be made is in lower-powered forms of lighting, i.e. what they are doing now with LEDs, which provide something around 40% less power consumption than the CCFLs used in older LCD displays.
Better power efficiency: LCDs filter the light emitted from a backlight, allowing only a small fraction of that light through, so they cannot show true black, while an inactive OLED element produces no light and consumes no power.
Originally posted by roughycannon
Exciting times for us, with scientists making good progress toward commercial quantum computing. Just think of the power of computers and even games consoles in the next 5-10 years!
I'm not certain how useful it will be in space applications. Power is a big consideration, but so is resistance to high levels of radiation, and depending on the mission, it may not only have to withstand much higher levels of everyday radiation, but may also need to resist being fried by CMEs (coronal mass ejections).
Originally posted by Xcathdra
It would certainly open the doors wider for a permanent presence in space by reducing the size of the energy-collection components, reducing weight, and so on.
Originally posted by Arbitrageur
I'd love to see more energy efficient video cards..
Nvidia has made great strides in the energy efficiency of supercomputers, like the Nvidia-powered 2.5-petaflop machine in China that is currently the world's most powerful, but they still have a long way to go to catch up to the energy efficiency of CPU makers like Intel.
Reading AnandTech's Core i7 980X review got me thinking. CPU single-thread performance has roughly doubled over the past four years. And we have six cores instead of just two, for a total speedup in the 5-7x range. In the last two years, GPU performance has quadrupled.
The current top-of-the-line CPU (Core i7 980X) does around 100 GFLOPS at double-precision. That's for parallelized and vectorized code, mind you. Single-threaded scalar code fares far worse. Now, even the 100 GFLOPS number is close to a rounding error compared to today's top-of-the-line GPU (Radeon HD 5970) with its 928 GFLOPS at double-precision and 4640 GFLOPS at single-precision. Comparing GFLOPS per dollar, the Core i7 980X costs $999 and gets roughly 0.1 GFLOPS/$, whereas the HD 5970 costs $599 and gets 1.5 GFLOPS/$ at double precision and 7.7 GFLOPS/$ at single precision.
Anyhow, looking at number-crunching price-performance, the HD 5970 is 15x better value for doubles and 43x better value for floats compared to the 980X's 100 GFLOPS (double-precision) and 180 GFLOPS (single-precision) figures. If you want dramatic performance numbers to wow your boss with, port some single-threaded non-vectorized 3D math to the GPU: the difference in speed should be around 700x. If you've also strategically written the code in, say, Ruby, a performance boost of four orders of magnitude is not a dream!
With regard to performance-per-watt, the Core i7 980X uses 100W under load, compared to the 300W load consumption of the HD 5970. The 980X gets 1 GFLOPS/W for doubles and 1.8 GFLOPS/W for floats. The HD 5970 does 3.1 GFLOPS/W for doubles and 15.5 GFLOPS/W for floats.
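As a quick back-of-the-envelope check of those ratios, here is a minimal Python sketch that uses only the prices, wattages and GFLOPS figures quoted above (none of them independently verified):

# Back-of-the-envelope check of the GFLOPS/$ and GFLOPS/W figures quoted above.
# All inputs are the numbers stated in the post, not independent measurements.
chips = {
    # name: (double-precision GFLOPS, single-precision GFLOPS, price in USD, load power in W)
    "Core i7 980X":   (100,  180, 999, 100),
    "Radeon HD 5970": (928, 4640, 599, 300),
}

for name, (dp, sp, price, watts) in chips.items():
    print(f"{name}: {dp / price:.2f} GFLOPS/$ (double), {sp / price:.2f} GFLOPS/$ (single), "
          f"{dp / watts:.1f} GFLOPS/W (double), {sp / watts:.1f} GFLOPS/W (single)")

# Value ratios (GPU over CPU), which should come out around 15x for doubles and 43x for floats:
cpu_dp, cpu_sp, cpu_price, _ = chips["Core i7 980X"]
gpu_dp, gpu_sp, gpu_price, _ = chips["Radeon HD 5970"]
print(f"{(gpu_dp / gpu_price) / (cpu_dp / cpu_price):.0f}x better value for doubles")
print(f"{(gpu_sp / gpu_price) / (cpu_sp / cpu_price):.0f}x better value for floats")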
forum.elitebastards.com...
I think you, john_bmth and I are saying the same thing, sort of; no doubt CPUs and GPUs play different roles.
Originally posted by C0bzz
Actually, graphics cards have a completely different design than CPUs. They do not focus on serial performance, but rather on total parallel throughput. In extremely parallel workloads, graphics cards will be more than an order of magnitude faster than a CPU.
Well that certainly seems to be the case with the world's largest supercomputer in China which is why I mentioned it. But those GPUs are optimized for a supercomputing workload, and not for home use:
Originally posted by C0bzz
i.e. a GPU is vastly more efficient and faster than any CPU if the workload is optimized for it.
The system uses 7,168 NVIDIA® Tesla™ M2050 GPUs and 14,336 CPUs; it would require more than 50,000 CPUs and twice as much floor space to deliver the same performance using CPUs alone.
More importantly, a 2.507 petaflop system built entirely with CPUs would consume more than 12 megawatts. Thanks to the use of GPUs in a heterogeneous computing environment, Tianhe-1A consumes only 4.04 megawatts, making it 3 times more power efficient -- the difference in power consumption is enough to provide electricity to over 5000 homes for a year.
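A quick sanity check on that "3 times more power efficient" figure, again as a minimal Python sketch using only the numbers quoted from the press release:

# Performance per watt for Tianhe-1A versus a hypothetical CPU-only system of equal speed.
# Inputs are taken from the press release quoted above, not measured independently.
petaflops = 2.507        # sustained performance of Tianhe-1A
hybrid_mw = 4.04         # power draw of the actual CPU + GPU system, in megawatts
cpu_only_mw = 12.0       # estimated power draw of a CPU-only system of the same performance

hybrid_eff = petaflops / hybrid_mw        # roughly 0.62 petaflops per megawatt
cpu_only_eff = petaflops / cpu_only_mw    # roughly 0.21 petaflops per megawatt
print(f"{hybrid_eff / cpu_only_eff:.1f}x more performance per watt")   # prints about 3.0x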
Yes and no. Yes, they are a completely different architecture, but there's a way to make an apples-to-apples comparison if you compare the petaflops output of a supercomputer:
Originally posted by john_bmth
Graphics cards perform a very specialist role. Their architecture is completely different to a general-purpose CPU, so a comparison between the two is more like apples and oranges.