The new Top 500 supercomputers list has been announced

posted on Nov, 20 2005 @ 10:47 PM
Top500.org has announced the list of the 500 most powerful supercomputers in the world.
After many years of Japanese supremacy, the new undisputed king is IBM's Blue Gene, built for the Lawrence Livermore National Laboratory.

info: www.top500.org...

"The No. 1 position was again claimed by the BlueGene/L System, a joint development of IBM and DOE's National Nuclear Security Administration (NNSA) and installed at DOE's Lawrence Livermore National Laboratory in Livermore, Calif. BlueGene/L also occupied the No. 1 position on the last two TOP500 lists. However, the system was doubled in size during the last six months and reached a new record Linpack benchmark performance of 280.6 TFlop/s ("teraflops" or trillions of calculations per second). No other system has yet exceeded the level of 100 TFlop/s and this system is expected to remain the No. 1 Supercomputer in the world for the next few editions of the TOP500 list."

Earth Simulator (NEC, Japan) is currently number 7 and the only vector computer in the top 15.

[edit on 20-11-2005 by carcharodon]



posted on Nov, 20 2005 @ 11:28 PM
The thing they don't bother telling anybody is that these machines are HUGE! Even bigger than the vector computers from Cray. Also, many of the numbers have been synthesized in the past: most of the test programs, or the computers themselves, cannot use the whole machine at the same time; one of the two just does not scale. And most of these tests, Linpack in this case, really do not have straightforward comparisons to real-world applications.

The key is performance per processor. The Blue Gene computers from the Itty Bitty Machine company are only running around 2-3 Gflops per processor. The Cray X1E is running over 15 Gflops per processor. And if you look further down the list, a Hitachi is running at over 100 Gflops per processor. With 131,000 processors, that Hitachi would be running at over 14 petaflops (14,541,000 Gflops).
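
To make the per-processor arithmetic concrete, here is a minimal sketch in Python. The per-processor rates are this post's rough estimates, not official Linpack results, and applying each of them to a hypothetical 131,000-processor machine is the same "what if" the post makes:

```python
# Rough aggregate-performance arithmetic: total Gflops = processor count * Gflops per processor.
# The per-processor rates are the rough figures from the post above, applied
# hypothetically to a 131,000-processor machine of each kind.

def aggregate_gflops(processors: int, gflops_per_proc: float) -> float:
    return processors * gflops_per_proc

PROCESSORS = 131_000

rates = {
    "Blue Gene/L-class (~2.5 Gflops/proc)": 2.5,
    "Cray X1E-class (~15 Gflops/proc)": 15.0,
    "Hitachi-class (~111 Gflops/proc)": 111.0,
}

for name, rate in rates.items():
    total = aggregate_gflops(PROCESSORS, rate)
    print(f"{name}: {total:,.0f} Gflops (~{total / 1e6:.1f} Pflops)")
```

The last line reproduces the post's figure of roughly 14.5 petaflops for a 131,000-processor machine running at the Hitachi's per-processor rate.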

Unfortunately, the cost of high-performance processors, and of writing programs that can actually use them, is extremely high.

And one last thing: something that off-the-shelf hardware will never get you, and that is not seen in the TOP500 list, is bandwidth. It's much easier to process [insert favorite conspiracy here] with massive amounts of bandwidth, which of course is something else that is not cheap.



posted on Nov, 21 2005 @ 02:52 AM
Of course they're huge; that's why they are SUPERCOMPUTERS.

You can't throw real-world apps at these for testing because the apps would have to be written for each individual architecture (which are not the same, by the way), and being customized for each machine would skew all the test results and make them useless.

Multiprocessor systems leverage a larger number of slower processors to do more work. Dual-core desktops are more powerful than fast single-core desktops, and IBM is clearly demonstrating this point. The slower chips cost less overall in terms of processing power. Besides, scaled-up systems with really high clock frequencies would be difficult to synchronize, they are expensive, and their very architecture may preclude them from being used on a massive scale, or else it would already have been done to take the top of the list.

Bandwidth... big deal to these guys, and I don't think they use too much off-the-shelf hardware on these machines.

So stop acting like such an X-Box fanboy; these aren't desktop systems.

You can build a nice little 4-node (or bigger) super rig at home for not a lot of money.

Then you can start your own TOP 500 of home super rigs.



posted on Nov, 21 2005 @ 07:53 AM

Originally posted by CAPT PROTON
You can't throw real-world apps at these for testing because the apps would have to be written for each individual architecture (which are not the same, by the way), and being customized for each machine would skew all the test results and make them useless.

All of these computer systems have large numbers of application programmers to port different apps to their hardware. In fact, even Linpack has to be ported to each architecture. If you cannot throw real-world tests at these machines... what is the point? Eventually they are going to run real-world applications... why not real-world tests?


Bandwidth... big deal to these guys, and I don't think they use too much off-the-shelf hardware on these machines.

How many applications and games do you run where the only thing that happens is number crunching by the CPU, with no reading/writing of data to/from memory or disk? Oh yeah, DIMM [dual inline memory module: increase bandwidth], DDR [double data rate: increase bandwidth], dual channel [yes, increase bandwidth]. Off-the-shelf hardware? Most of the machines on the list do use a commodity processor. Tying your 1000 X-Boxes together via Ethernet does not make it a supercomputer.
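
For a rough sense of what DDR and dual channel buy you in peak memory bandwidth, here is a minimal sketch of the standard peak-bandwidth arithmetic; the DDR-400 figures are illustrative assumptions of mine, not numbers from this thread:

```python
# Theoretical peak memory bandwidth: transfer rate (MT/s) * bus width (bytes) * channels.
# DDR doubles the transfers per clock; dual channel doubles the effective bus width.

def peak_bandwidth_gb_per_s(megatransfers_per_s: float, bus_width_bytes: int, channels: int) -> float:
    return megatransfers_per_s * 1e6 * bus_width_bytes * channels / 1e9

# Illustrative DDR-400 example: 200 MHz clock, double data rate -> 400 MT/s, 64-bit (8-byte) bus.
single = peak_bandwidth_gb_per_s(400, 8, 1)  # ~3.2 GB/s
dual   = peak_bandwidth_gb_per_s(400, 8, 2)  # ~6.4 GB/s

print(f"single channel: {single:.1f} GB/s, dual channel: {dual:.1f} GB/s")
```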

As you increase processor count, you increase the communication required to get work to each processor; otherwise the processors would sit there idle with nothing to do. Blue Gene can move data at around 350 MB/sec between nodes, while the Cray X1E can move data at around 50 GB/sec between nodes.
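
As a quick illustration of what those interconnect rates mean in practice, here is a minimal sketch of the time it takes to ship a fixed chunk of data to a neighbouring node at the two per-node rates quoted above; the 1 GiB payload is just an assumed example size:

```python
# Time to move a fixed chunk of data node-to-node at the rates quoted above.
# Ignores latency and contention; purely an order-of-magnitude illustration.

def transfer_time_s(num_bytes: float, bytes_per_second: float) -> float:
    return num_bytes / bytes_per_second

chunk = 1 * 1024**3  # 1 GiB, an arbitrary example payload

blue_gene = transfer_time_s(chunk, 350e6)  # ~3.1 s at 350 MB/s
cray_x1e  = transfer_time_s(chunk, 50e9)   # ~0.02 s at 50 GB/s

print(f"Blue Gene: {blue_gene:.2f} s, Cray X1E: {cray_x1e:.3f} s")
```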

"If you were plowing a field, which would you rather use? Two strong oxen or 1024 chickens?" - Seymour Cray

And synchronizing clocks is the least of your worries on a large system. In fact, that is fairly easy, even when you include clock drift.



posted on Nov, 21 2005 @ 06:27 PM
The thing about Blue Gene is that its design reduces latency to a minimum. Also, since the memory, network, and processor are united, the speed at which they operate between each other is extremely fast, because it doesn't need external buses or anything.

So I guess this is a workaround for those problems. As for vector computers, that Fujitsu project is not that simple to scale to petaflops. Adding more processors won't make your machine more powerful, especially given all the hazards that vector computers have regarding infrastructure.



posted on Nov, 21 2005 @ 06:49 PM
I hear you there, carcharodon. I deal with vector stuff almost every day. The users don't like it when the kernel slows down their vector applications.

I did a little more research into the Hitachi machine; it's running a POWER5+ processor, which can have up to 8 CPU cores in it, though I found nothing as to how many cores it was actually running on.

I also looked into Blue Gene. It was designed to be an MPI machine, and it has impressively low latency for that type of stuff.
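
To give a flavour of the MPI traffic such a machine is built for, here is a minimal ping-pong latency sketch in Python. It assumes the mpi4py package and an MPI runtime, and is meant to be launched with two ranks (e.g. `mpiexec -n 2 python pingpong.py`; the file name is my own choice):

```python
# Minimal MPI ping-pong: rank 0 sends a tiny message to rank 1 and waits for the
# echo; the averaged round-trip time is a crude measure of interconnect latency.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
buf = np.zeros(1, dtype="b")  # 1-byte payload
reps = 1000

comm.Barrier()
start = MPI.Wtime()
for _ in range(reps):
    if rank == 0:
        comm.Send(buf, dest=1, tag=0)
        comm.Recv(buf, source=1, tag=0)
    elif rank == 1:
        comm.Recv(buf, source=0, tag=0)
        comm.Send(buf, dest=0, tag=0)
elapsed = MPI.Wtime() - start

if rank == 0:
    print(f"average one-way latency: {elapsed / (2 * reps) * 1e6:.1f} microseconds")
```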

The Cray X1 was theoretically supposed to scale to 63 cabinets, which would be around 4k processors, I think (I don't feel like looking it up). If someone could have afforded it, I believe the X1 could have made it to 4,000 processors, which would put the performance around 48 Tflops. Not superfast compared to Blue Gene, but so much smaller physically.


