
The "Zero Watt PC"

page: 2

posted on Feb, 11 2011 @ 02:01 PM

Originally posted by Arbitrageur
Yes and no. Yes, they are completely different architectures, but there's a way to make an apples-to-apples comparison if you compare the petaflops output of a supercomputer:

You can get 2.5 petaflops with either 50,000 CPUs or with 7,168 GPUs plus 14,336 CPUs. That's an apples-to-apples comparison in petaflops output, right?

Not unless you are processing the same data. A ship may have two thousand horsepower, but will it outpace a drag racer of the same power? Also, the task would have to lend itself to leveraging both architectures effectively; otherwise, would the race take place on land or at sea? Considering their differing roles and architectures, any comparative test would have such a narrow scope that the metrics would be pretty much meaningless. I'm not saying such tasks don't exist, because they do: supercomputers have been built from off-the-shelf video cards, but the tasks they perform are very specialist, and the main advantage of a CPU over a GPU is its general-purpose nature.

Once you're performing a task that doesn't lend itself to parallelization (or relies heavily on bread-and-butter things like flow control, which a CPU takes in its stride but which can choke a GPU), the unfairness swings in the other direction. It's all well and good comparing lap times, but can a drag car transport thousands of tonnes of cargo across the Atlantic?
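The drag-car analogy can be made concrete in code. Here's a minimal, hypothetical sketch (not from the thread) contrasting a workload where every element gets the same operation, which suits a GPU's wide SIMD lanes, with one where data-dependent branches make the lanes diverge:

```python
# Hypothetical sketch: two workloads with similar operation counts,
# only one of which maps well onto a GPU's wide SIMD lanes.

def data_parallel(xs):
    # Every element gets the exact same operation: ideal GPU territory.
    return [x * x + 1.0 for x in xs]

def branch_heavy(xs):
    # Each element takes a data-dependent path; on a GPU, diverging
    # lanes end up executing both branches serially, wasting hardware.
    out = []
    for x in xs:
        if x % 2 == 0:
            out.append(x // 2)
        else:
            out.append(3 * x + 1)
    return out
```

A CPU runs both loops comfortably; a GPU shines on the first and stumbles on the second, which is the asymmetry the post describes.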





posted on Feb, 11 2011 @ 03:28 PM

Originally posted by john_bmth
Not unless you are processing the same data.
Apparently they use the same benchmark, called LINPACK, for both the CPU-based and GPU-based systems:

en.wikipedia.org...


Rmax – The highest score measured using the LINPACK benchmark suite. This is the number that is used to rank the computers. Measured in trillions of floating point operations per second, i.e. Teraflops.


All benchmark tests have limitations in reflecting real-world performance across different applications running on different hardware, but it's the closest thing we have to an apples-to-apples comparison, and it appears to be a fairly standardized benchmark for supercomputers.
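For a rough sense of what Rmax means: LINPACK scores come from solving a dense linear system Ax = b, and the conventional operation count for that is about 2/3·n³ + 2n² flops, so the score is just that flop count divided by the measured time. A sketch of the arithmetic, with made-up numbers:

```python
def linpack_flops(n):
    # Conventional operation count for solving a dense n-by-n system
    # Ax = b by LU factorization, as counted by the LINPACK benchmark.
    return (2.0 / 3.0) * n ** 3 + 2.0 * n ** 2

def rmax_teraflops(n, seconds):
    # Rmax is achieved flops divided by wall-clock time, in teraflops.
    return linpack_flops(n) / seconds / 1e12

# Made-up run: a matrix of order 1,000,000 solved in 2,000 seconds
# would score roughly 333 teraflops.
print(rmax_teraflops(1_000_000, 2000.0))
```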

The fact that not all processes can be made parallel is a serious limitation for all supercomputers and it affects them to varying degrees.
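That limit is usually stated as Amdahl's law: the serial fraction of a program caps the speedup no matter how many processors you throw at it. A quick sketch, with a hypothetical parallel fraction:

```python
def amdahl_speedup(parallel_fraction, n_processors):
    # Amdahl's law: overall speedup = 1 / (serial + parallel/n).
    # The serial fraction dominates as n grows.
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_processors)

# Even a 95%-parallel program on 50,000 processors speeds up by
# less than 20x, because the 5% serial part dominates.
print(amdahl_speedup(0.95, 50_000))
```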

Here's some information about supercomputer benchmarks and an alternative to the LINPACK benchmark (and the related LAPACK library):

www.tikalon.com...



posted on Feb, 11 2011 @ 03:40 PM
reply to post by Arbitrageur
 


But with a universal benchmark you're testing the lowest common denominator: FP ops per second are only one aspect, and there's a large body of general-purpose CPU tasks that simply cannot be performed on a GPU at all. Apples and oranges are both fruit, so we can compare the attributes common to both, but it's still an apples-and-oranges comparison.



posted on Feb, 11 2011 @ 09:24 PM

Originally posted by 46ACE
"zero watt" (yawn). Got one on the shelf already (sales are mysteriously slow, though).
Interesting post, thanks.





en.wikipedia.org...




"The abacus was in use centuries before the adoption of the written modern numeral system"



posted on Feb, 13 2011 @ 02:37 PM
Yeah, this has huge implications if it's successful and affordable for the mainstream. Obviously, with less power, the voltage and amperage go down as well. The batteries used to power portable devices will shrink, leaving more room to place a more powerful CPU under the hood. If they can figure out how to translate this idea to a backlit display, then we're talking a gigantic breakthrough; these little screens are battery killers.
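The battery point is just arithmetic: runtime is capacity divided by average draw, so halving the power doubles the runtime (or lets you fit half the battery for the same runtime). A toy sketch with made-up figures:

```python
def runtime_hours(battery_wh, device_watts):
    # Runtime is battery capacity (watt-hours) divided by average
    # power draw (watts).
    return battery_wh / device_watts

# Made-up figures: a 10 Wh battery lasts 5 hours at a 2 W draw,
# and 10 hours if the draw is halved to 1 W.
print(runtime_hours(10.0, 2.0), runtime_hours(10.0, 1.0))
```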



posted on Feb, 15 2011 @ 06:23 AM
I must admit it would be really nice to lower my power bill, especially with all the recent rate rises, but how many of these 'breakthroughs' ever make it to the real world? I've seen on the news many a time "a new cancer drug that will cure cancer, ready in 2 years" or "a new fat pill, ready in 2 years"; the 2 years come and go and nothing ever comes of it.



posted on Feb, 19 2011 @ 11:09 AM

C0bzz, my point was that the last time I checked into the efficiency of what was available for home PCs (almost a year ago, in 2010), it was hard to find a reasonably priced Nvidia card with a process as small as 40nm, and they had nothing smaller,


OK, now I understand. Intel is usually a few months ahead in terms of manufacturing process: the newest graphics cards are still 40nm, whereas Intel has been at 32nm for over a year. That is, in part, because TSMC (the foundry that makes graphics-card chips) was apparently having difficulty with 32nm, so they're going straight from 40nm to 28nm, which will take somewhat longer. Still, I doubt Intel has the graphics know-how of Nvidia or AMD, so I doubt they can design an architecture as good as anything Nvidia or AMD has; it's hard to say whether the Intel GPUs integrated into some processors or the AMD/Nvidia GPUs are more efficient. I've never seen any comparisons.



As far as I could tell, the Intel i3 was more efficient any way you sliced it for a home PC, partly because no graphics maker at the time had a 32nm process like the i3's. But there's no doubt the NVIDIA® Tesla™ M2050 GPU is extremely efficient when used in supercomputers.

Graphics cards are vastly more efficient at some tasks, though, like Folding@home. The problem is, those programs are few and far between, except for graphics rendering, of course! Specialized hardware is always really fast, e.g. Intel Quick Sync.
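One way to put the efficiency comparison in rough numbers is flops per watt, the metric behind the Green500 list. A sketch using the M2050's published figures (assumed here: roughly 515 double-precision Gflops peak at a 225 W TDP):

```python
def gflops_per_watt(peak_gflops, tdp_watts):
    # Crude efficiency metric; real rankings (e.g. the Green500 list)
    # use achieved LINPACK flops per watt, not peak figures.
    return peak_gflops / tdp_watts

# Assumed published specs for the Tesla M2050: ~515 double-precision
# Gflops peak at a 225 W TDP, i.e. a bit under 2.3 Gflops per watt.
print(gflops_per_watt(515.0, 225.0))
```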



posted on Feb, 19 2011 @ 11:18 AM
reply to post by C0bzz
 


Graphics chips are designed for a single purpose, though, unlike CPUs. What amazes me is why people go out and spend hundreds on the latest and greatest DX11 cards, when I picked up an ATI 4870X2 off eBay for 120 quid, and no matter what I throw at it, at the highest details and resolutions with full AA and AF, it still runs smoothly.

My biggest issue is the 90+ degrees (C) it chugs out; seriously, I haven't had to put the heating on in the flat in the 8 months I've owned it.


