
The "Zero Watt PC"

posted on Feb, 9 2011 @ 11:14 AM
I have been doing research into the SmartGrid and stumbled across this article.

www.smartgridnews.com...


IBM says it is teaming with an EU-funded consortium to lower the energy consumption of electronic devices by an order of magnitude. The group says it hopes to combine tunnel field effect transistors (TFETs) with semiconducting nanowires to create a “zero-watt PC.” And to do it in 36 months. And then to share the research so manufacturers can build gadgets that need only tiny sips of electricity when operating, and virtually nothing when in sleep mode.


This, I think, is pretty remarkable. The article is dated October of 2010, so this places the "breakthrough" at around October of 2013. So, assuming we don't all die, ascend, turn into zombies, whatever... we should have some interesting breakthroughs in 2013 and 2014.

I dug around some more and found this: www.pcadvisor.co.uk...



"Our vision is to share this research to enable manufacturers to build the Holy Grail in electronics, a computer that utilises negligible energy when it's in sleep mode, which we call the zero-watt PC," said EPFL project coordinator Adrian Lonescu. The design could also be applied to portable electronic device processors as well, where it could potentially extend battery life.

The three-year project will explore an alternative to the standard CMOS (complementary metal-oxide-semiconductor) designs used to build virtually all commercially available computer chips today. The new approach will use nanowire-based TFETs (tunnel field effect transistors) as an alternative to the MOSFETs (metal-oxide-semiconductor field-effect transistors) used in CMOS chips.


The idea of making new devices that consume a tenth of the electricity of comparable devices today has significant implications for power distribution. This primarily affects things like computers, thermostats, and the control panels on stoves and microwaves.
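To put a rough number on that (everything below is my own back-of-the-envelope; the device count and the 5W standby figure are made-up but plausible assumptions, not from the article), a quick Python sketch:

# Back-of-the-envelope: what a 10x cut in standby draw could mean at grid scale.
# The device count and per-device standby wattage are assumptions for illustration.
devices = 100_000_000                       # assumed always-on gadgets (PCs, thermostats, panels...)
standby_watts_now = 5.0                     # assumed average standby draw per device today
standby_watts_new = standby_watts_now / 10  # the "order of magnitude" reduction being targeted

saved_watts = devices * (standby_watts_now - standby_watts_new)
saved_gwh_per_year = saved_watts * 24 * 365 / 1e9

print(f"Continuous saving: {saved_watts / 1e6:.0f} MW")    # ~450 MW
print(f"Roughly {saved_gwh_per_year:.0f} GWh per year")    # ~3,900 GWh

Even with conservative inputs, the savings land in the hundreds of megawatts of continuous demand, which is why the grid people care.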



posted on Feb, 9 2011 @ 11:24 AM
Exciting times for us. With scientists making good progress towards commercial quantum computing, just think of the power of computers and even games consoles in the next 5-10 years!



posted on Feb, 9 2011 @ 11:32 AM
reply to post by roughycannon
 


Indeed. In fact I was just conversing with a friend the other day about "super-real" video games and neuro interfaces. These sorts of advances help make things like true VR come closer to reality.

I am particularly excited by advances like this because it is bold moves that produce the happy accidents which open up whole new fields of technology.



posted on Feb, 9 2011 @ 11:36 AM
The PC may be zero watts, but the displays still suck down the juice. The computer in a mobile phone is just a fraction of what kills the battery - it's the display that uses most of the power. If they can change that, then that would be awesome.



posted on Feb, 9 2011 @ 11:37 AM
reply to post by harrytuttle
 


I agree. The Kindle made some advances here with the concept of "electronic ink", but it is not a "constant framerate" kind of display, nor is it especially fast.

Would be interesting if they could find a way to generate light patterns using ultra-low amounts of energy.




posted on Feb, 9 2011 @ 11:49 AM
reply to post by rogerstigers
 


I think it's going to be a long time before we start seeing displays that don't require backlighting, if indeed we ever do. I think the progress is to be made in lower-powered forms of backlighting, i.e. what they are doing now with LED, which draws something around 40% less power than the CCFLs used in older LCD displays.



posted on Feb, 9 2011 @ 11:53 AM
I'm totally nerdily into this kinda stuff.

I had a dual-core, hyperthreading Atom processor on a mini-ITX board that sipped about 8W under load.

It actually did fairly okay in Windows 7. I could watch some 1080p with avcore pro.

In the end I ditched it in favor of a $39 Core 2 Celeron that only uses about 20W.

I hope this "zero watt" processor is at least as good as the Atom. Guess we'll have to wait and see.



posted on Feb, 9 2011 @ 11:59 AM

Originally posted by woogleuk
reply to post by rogerstigers
 


I think it's going to be a long time before we start seeing displays that don't require backlighting, if indeed we ever do. I think the progress is to be made in lower-powered forms of backlighting, i.e. what they are doing now with LED, which draws something around 40% less power than the CCFLs used in older LCD displays.



OLED doesn't require backlighting, as the pixels emit their own light.

OLED wiki


Better power efficiency: LCDs filter the light emitted from a backlight, allowing a small fraction of light through so they cannot show true black, while an inactive OLED element does not produce light or consume power.


The new Sony PSP2 has an OLED screen on it.
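Since an OLED pixel only draws power when it's lit, the panel's draw scales roughly with how bright the picture is overall. A minimal sketch of that relationship (the wattage figures are made-up assumptions, not measured PSP2 numbers):

# Rough model: OLED panel power scales with average picture level (APL),
# since dark pixels draw almost nothing. All numbers are illustrative assumptions.
FULL_WHITE_WATTS = 1.5   # assumed draw with every pixel at full white
BASELINE_WATTS = 0.1     # assumed driver/controller overhead

def oled_power(apl):
    """apl: average picture level, 0.0 (all black) to 1.0 (all white)."""
    return BASELINE_WATTS + FULL_WHITE_WATTS * apl

for apl in (0.1, 0.35, 1.0):
    print(f"APL {apl:.2f} -> ~{oled_power(apl):.2f} W")

Which is also why, as noted further down the thread, an OLED showing a mostly white image can end up drawing more than an LCD.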



posted on Feb, 9 2011 @ 12:02 PM
"zero watt" (yawn) got one on the shelf already( sales are mysteriously slow though
Interesting post thanks.





en.wikipedia.org...




posted on Feb, 9 2011 @ 12:04 PM

Originally posted by roughycannon
Exciting times for us. With scientists making good progress towards commercial quantum computing, just think of the power of computers and even games consoles in the next 5-10 years!


Well, that's all well and good, but just think how much humanity would advance with quantum computers. New graphics cards every day? Building a better computer every day, with twice the power?
We could be heading that way, Moore's law and all.



posted on Feb, 9 2011 @ 12:10 PM
Twice the power? Try thousands of times the power. There are a lot of articles on Science Daily that talk about the advancements in quantum computing...

Science daily quantum computers



posted on Feb, 9 2011 @ 12:14 PM
reply to post by roughycannon
 


I was using LED as an example. Besides, OLED can end up using more power than an LCD depending on the image shown, and, as of yet, it isn't a very reliable technology, though it's getting there slowly. But, as this is a thread about power consumption, OLED at its lowest consumption uses about the same as LED... AFAIK.



posted on Feb, 10 2011 @ 01:00 AM
Wow... if this pans out it's going to be huge, as well as a massive cash cow. Just imagine what else this technology can be adapted to.

It would certainly open the doors wider for a permanent presence in space by reducing the size of the energy collection components, reducing weight, etc.



posted on Feb, 10 2011 @ 08:04 AM
reply to post by 46ACE
 


Those are great, they never crash, never need a hard boot, are extremely accurate, and fast as well.


Don't worry, when the EMP blast comes and takes down the power grid, the sales will go up.



posted on Feb, 10 2011 @ 08:13 AM
Unfortunately, this kind of technology would be great for tracking devices of any kind. There's very strong potential that this could dwarf or replace their RFID chip plan.



posted on Feb, 10 2011 @ 09:05 AM

Originally posted by Xcathdra
It would certainly open the doors wider for a permanent presence in space by reducing the size of the energy collection components, reducing weight, etc.
I'm not certain how useful it will be in space applications. Power is a big consideration, but so is resistance to high levels of radiation; depending on the mission, it may not only have to withstand much higher levels of everyday radiation, but may also need to resist being fried by coronal mass ejections (CMEs).

I'd love to see more energy efficient video cards. I'm appalled that most of the video cards nowadays put out so much heat that they need their own separate fans, and the fan adds to the heat and power draw problem by drawing even more power just to run itself!

I try to only buy fanless video cards.

Nvidia has made great strides in the energy efficiency of supercomputers, like the Nvidia-powered 2.5-petaflop machine in China that is currently the world's most powerful, but they still have a long way to go to catch up to the energy efficiency of CPU makers like Intel.

Looking ahead, more and more energy efficient machines will be demanded... I think the power savings on that supercomputer in China are enough to power thousands of homes (the savings compared to using all CPUs instead of Nvidia graphics chips).



posted on Feb, 11 2011 @ 11:48 AM

Originally posted by Arbitrageur
I'd love to see more energy efficient video cards..

Energy efficiency is not the same thing as power consumption. If something is energy efficient yet still consumes a large amount of electricity, then it's going to be very fast. Newer graphics cards are very energy efficient (the Nvidia GTX 4xx series aside), yet they draw a lot of power because they are fast. They also have very low idle power consumption, on the order of 20 watts, compared to the 200 to 250 watts a typical high-end video card draws under load.
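To put rough numbers on why idle draw matters (the 22h idle / 2h gaming split below is just an assumed usage pattern, not a measurement):

# Why idle power matters: a card that pulls 250 W in games but 20 W at the
# desktop still spends nearly half its energy at idle for a typical user.
# The usage split below is an assumption for illustration.
idle_watts, load_watts = 20, 250
idle_hours, load_hours = 22, 2

daily_wh = idle_watts * idle_hours + load_watts * load_hours
print(f"Daily energy: {daily_wh} Wh ({daily_wh * 365 / 1000:.0f} kWh/year)")
print(f"Share of energy used at idle: {idle_watts * idle_hours / daily_wh:.0%}")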


Nvidia has made great strides in the energy efficiency of supercomputers, like the Nvidia-powered 2.5-petaflop machine in China that is currently the world's most powerful, but they still have a long way to go to catch up to the energy efficiency of CPU makers like Intel.

Actually, graphics cards have a completely different design than CPUs. They do not focus on serial performance, but rather on total parallel throughput. In extremely parallel workloads graphics cards will be more than one order of magnitude faster than a CPU. Example: try running a 3D application in software rendering, then try hardware-accelerated rendering on the graphics card. That's why a high-end graphics card like the Radeon 6970 has a total computational power of 2.7 teraflops, which is massively higher (>10 times) than even the fastest CPUs today. The design of graphics cards also makes them very inflexible compared to a CPU, which is why, for example, we don't have Windows running on our graphics cards.




Reading AnandTech's Core i7 980X review got me thinking. CPU single-thread performance has roughly doubled over the past four years. And we have six cores instead of just two, for a total speedup in the 5-7x range. In the last two years, GPU performance has quadrupled.

The current top-of-the-line CPU (Core i7 980X) does around 100 GFLOPS at double-precision. That's for parallelized and vectorized code, mind you. Single-threaded scalar code fares far worse. Now, even the 100 GFLOPS number is close to a rounding error compared to today's top-of-the-line GPU (Radeon HD 5970) with its 928 GFLOPS at double-precision and 4640 GFLOPS at single-precision. Comparing GFLOPS per dollar, the Core i7 980X costs $999 and gets roughly 0.1 GFLOPS/$, whereas the HD 5970 costs $599 and gets 1.5 GFLOPS/$ at double precision and 7.7 GFLOPS/$ at single precision.

Anyhow, looking at number-crunching price-performance, the HD 5970 is 15x better value for doubles and 43x better value for floats compared to the 100 GFLOPS and 180 GFLOPS numbers. If you want dramatic performance numbers to wow your boss with, port some single-threaded non-vectorized 3D math to the GPU: the difference in speed should be around 700x. If you've also strategically written the code in, say, Ruby, a performance boost of four orders of magnitude is not a dream!

With regard to performance-per-watt, the Core i7 980x uses 100W under load, compared to the 300W load consumption of the HD 5970. The 980x gets 1 GFLOPS/W for doubles and 1.8 GFLOPS/W for floats. The HD 5970 does 3.1 GFLOPS/W for doubles and 15.5 GFLOPS/W for floats.

forum.elitebastards.com...


i.e. a GPU is vastly more efficient and faster than any CPU if the workload is optimized for it.
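For anyone who wants to sanity-check the quoted figures, here's a quick Python pass over them (the GFLOPS, prices and wattages are taken straight from the quote above; rounding aside, they reproduce the ratios given):

# Recompute the quoted price/performance and performance-per-watt figures.
specs = {
    # name: (double GFLOPS, single GFLOPS, price in USD, load watts), as quoted above
    "Core i7 980X":   (100,  180,  999, 100),
    "Radeon HD 5970": (928, 4640,  599, 300),
}
for name, (dp, sp, price, watts) in specs.items():
    print(f"{name}: {dp/price:.2f} DP GFLOPS/$, {sp/price:.2f} SP GFLOPS/$, "
          f"{dp/watts:.1f} DP GFLOPS/W, {sp/watts:.1f} SP GFLOPS/W")

# Value ratios quoted as "15x for doubles, 43x for floats":
print(f"{(928/599)/(100/999):.0f}x for doubles, {(4640/599)/(180/999):.0f}x for floats")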



posted on Feb, 11 2011 @ 11:57 AM
reply to post by C0bzz
 

^This. Graphics cards perform a very specialist role. Their architecture is completely different to a general-purpose CPU, so a comparison between the two is more like apples and oranges.



posted on Feb, 11 2011 @ 01:08 PM

Originally posted by C0bzz
Actually, graphics cards have a completely different design than CPUs. They do not focus on serial performance, but rather on total parallel throughput. In extremely parallel workloads graphics cards will be more than one order of magnitude faster than a CPU.
I think you, me and john_bmth are saying the same thing, sort of; no doubt they play different roles.


Originally posted by C0bzz
i.e. a GPU is vastly more efficient and faster than any CPU if the workload is optimized for it.
Well, that certainly seems to be the case with the world's largest supercomputer in China, which is why I mentioned it. But those GPUs are optimized for a supercomputing workload, not for home use:

www.nvidia.com...

pressroom.nvidia.com...


The system uses 7,168 NVIDIA® Tesla™ M2050 GPUs and 14,336 CPUs; it would require more than 50,000 CPUs and twice as much floor space to deliver the same performance using CPUs alone.

More importantly, a 2.507 petaflop system built entirely with CPUs would consume more than 12 megawatts. Thanks to the use of GPUs in a heterogeneous computing environment, Tianhe-1A consumes only 4.04 megawatts, making it 3 times more power efficient -- the difference in power consumption is enough to provide electricity to over 5000 homes for a year.



Originally posted by john_bmth
Graphics cards perform a very specialist role. Their architecture is completely different to a general-purpose CPU, so a comparison between the two is more like apples and oranges.
Yes and no. Yes, they are a completely different architecture, but there's a way to make apples-to-apples comparisons if you compare the petaflops output of a supercomputer:

You can get 2.5 petaflops either with more than 50,000 CPUs or with 7,168 GPUs plus 14,336 CPUs; that's an apples-to-apples comparison in petaflops output, right?

C0bzz, my point was that the last time I checked into the efficiency of what was available for home PCs (almost a year ago, in 2010), it was hard to find a reasonably priced Nvidia card built on a process as small as 40nm, and they had nothing smaller, while the Intel Core i3 has CPU and graphics both on the same die at 32nm. So I seriously doubt that 40nm technology on a separate video card can compete with the energy efficiency of the built-in graphics on the Core i3 chip, but if you know otherwise, enlighten me.

As far as I could tell, the Intel i3 was more efficient any way you sliced it for a home PC, partly because no graphics maker at the time was on a 32nm process like the i3. But there's no doubt the NVIDIA Tesla M2050 GPU is extremely efficient when used in supercomputers.
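Plugging the press-release numbers quoted above into a quick Python check (only the quoted figures are used; the per-home wattage is just the implied average):

# Check the quoted Tianhe-1A efficiency claim.
cpu_only_megawatts = 12.0   # quoted estimate for a CPU-only 2.507 PFLOPS system
tianhe_megawatts = 4.04     # quoted actual consumption with GPUs

print(f"Efficiency gain: {cpu_only_megawatts / tianhe_megawatts:.1f}x")  # ~3x, as claimed

saved_megawatts = cpu_only_megawatts - tianhe_megawatts
# "over 5000 homes" implies a sustained draw of roughly this much per home:
print(f"Implied average draw per home: {saved_megawatts * 1e6 / 5000:.0f} W")  # ~1.6 kW, plausible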



