
Breakthrough: The Secret to Making Processors 1,000 Times Faster [VIDEO]

posted on Sep, 11 2011 @ 12:18 PM
I'll admit it... I'm a technology buff. I especially like reading about innovations in computer tech. I came across something today that I found very interesting.

IBM and 3M have combined forces and come up with a novel method to create amazingly fast processors... glue them together. Not just any glue, though.

From the article:


This is not just any glue. It’s an adhesive that dissipates heat so efficiently that layer upon layer of chips can be stacked on top of each other into silicon “towers” up to 100 layers high, glued together with this special adhesive that keeps things cool. The result? Faster chips for computers, laptops, smartphones and anything else that uses microprocessors.

With IBM supplying its microprocessor and silicon expertise and 3M contributing its super-cool adhesive, the two companies aim to stack together processors, memory chips and networks into monster “skyscrapers” of silicon they say will be 1,000 times faster than today’s fastest processor.

link

They hope to have this available for servers by the end of 2013 and generally available about a year later. No info on how much these might cost. If anyone can find that, please let us know.

Here's the video (no sound):



posted on Sep, 11 2011 @ 01:02 PM
Hmmm, will they draw a thousand times more power too? Will my laptop battery now last 0.12 minutes?
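For what it's worth, the arithmetic behind the joke as a quick Python back-of-envelope (the two-hour baseline is an assumption, not something from the thread):

# Naive scaling: battery life if power draw grew with the claimed 1,000x speedup.
baseline_minutes = 120      # assumed ~2-hour battery life today (illustrative)
power_multiplier = 1000     # the claimed speedup, naively applied to power draw
print(baseline_minutes / power_multiplier, "minutes")   # -> 0.12 minutes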





edit on 11-9-2011 by dainoyfb because: I typo'd.



posted on Sep, 11 2011 @ 01:07 PM
reply to post by dainoyfb
 


Good point. I thought about that, too. Plus the heat buildup inside the box will have to be dealt with.



posted on Sep, 11 2011 @ 01:28 PM
I should put my previous comment into perspective. This is obviously a progressive development, but it will have significant limits that haven't yet been specified, especially in mobile devices.



posted on Sep, 11 2011 @ 02:06 PM
So stack 2 chips; I'm OK with my battery life going down by 1/3 or more. Heck, for that speed I won't ever unplug it. Just get an AC converter for your car, and your wall charger works in the car!

I never understood why people buy car chargers at those prices. They are stupidly cheap to make, not worth the $20+ you pay. Just get a single AC converter and you have a car charger for life.



posted on Sep, 11 2011 @ 02:35 PM
*****************************
* You have to think out of the box! *
*****************************

Popcorn maker / Cell phone

Personal espresso machine / Netbook

Portable arc welder / Laptop

Home water heater / Natural gas well frac modelling supercomputer


edit on 11-9-2011 by BRAVO949 because: (no reason given)



posted on Sep, 11 2011 @ 02:39 PM

Originally posted by dainoyfb
Hmmm, will they draw a thousand times more power too? Will my laptop battery now last 0.12 minutes?





edit on 11-9-2011 by dainoyfb because: I typo'd.


Thanks for ruining my college papers and making my keyboard sticky.

I laughed so FREAKING hard.

Star for you, and SNF for the OP.

Still giggling.



posted on Sep, 11 2011 @ 04:21 PM
reply to post by N3k9Ni
 


I have been expecting something like this for years, because it seemed like the logical evolution of the circuit board / printed circuit board / chip system.

As for the heat produced, there's one thing that, apparently, most people are not thinking about: until now the cooler has only touched the top of the chip, but with a taller chip they could make a cooling system that also uses the sides of the chip.

But that is only needed if the chips consume more power, and with this method they can make shorter (vertical, from layer to layer) connections, with less heat being produced.



posted on Sep, 11 2011 @ 06:13 PM
3D chip? Skynet, anyone? Reminds me of the 3D Terminator chip from T2. As far as it using 1,000 times more energy and creating more heat: the reason our current 2D chips put out so much heat is how they are clocked and the amount of energy put into them. With this 3D tech you could design the clocks, chips, etc. in such a way that it would actually use less power and generate less heat. The glue also dissipates heat, which will make this possible.



posted on Sep, 11 2011 @ 06:21 PM
In reality it would be chips glued together, but with a diamond-to-copper heat spreader between each die, all put into a fluid-filled heat sink/interphase connector. Stacking 100 chips into a silicon cube would deep-fry the middle processors at 100 W/cm² of heat dissipation. One good idea would be to attach RAM directly to the chip stack, allowing for much faster dynamic processing.
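To put a rough number on that deep-fry worry, here's a crude one-dimensional Python sketch: heat from each die has to cross every glue interface between it and the heat sink, so the temperature rise compounds through the stack. The per-layer power and interface resistance below are assumptions for illustration, not figures from IBM, 3M or the post:

# Crude 1-D stack model: heat from every die must cross all glue interfaces
# between it and the heat sink, so temperature rise accumulates toward the far end.
# Both numbers below are illustrative assumptions only.

layers = 100
watts_per_layer = 2.0          # assumed power of one thinned die
r_interface = 0.05             # assumed thermal resistance of one glue joint (K/W)

# Heat crossing interface i (counted up from the heat sink) is the combined power
# of all layers above it; the far end's temperature rise is the sum over interfaces.
temp_rise = sum((layers - i) * watts_per_layer * r_interface for i in range(layers))
print(f"temperature rise at the top of a {layers}-layer stack: ~{temp_rise:.0f} K")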
edit on 11-9-2011 by eywadevotee because: (no reason given)



posted on Sep, 11 2011 @ 07:24 PM
Obviously, you wouldn't stack 125 watt TDP cores onto each other.

The reason processors these days are so energy hungry is the ridiculously high switching speeds and high-leakage gates that come from ever-shrinking transistor construction and the short switching times required.

With this capability, you could use larger fabrication processes to create lower-leak gates with much less demanding clock speeds and place dozens of them on top of each other. The end result is a much cooler processor that has a hundred or more physical cores and in-package cache-ram in the range of gigabytes with almost non-existent latencies by comparison to today's applications.
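A back-of-envelope sketch of that trade-off: dynamic CMOS power scales roughly as activity x C x V^2 x f, so slower, lower-voltage dies can be stacked several high and still burn less than one hot die. Every number below is an illustrative assumption, not an IBM/3M figure:

# Rough dynamic-power comparison: one fast, hot die vs. a small stack of slower dies.
# P_dynamic ~ activity * C_eff * V^2 * f  (all values are made up but plausible).

def dynamic_power(c_eff_nf, volts, freq_ghz, activity=0.2):
    """Approximate dynamic power in watts."""
    return activity * (c_eff_nf * 1e-9) * volts ** 2 * (freq_ghz * 1e9)

one_hot_die = dynamic_power(c_eff_nf=30, volts=1.2, freq_ghz=4.0)   # ~35 W
slow_die    = dynamic_power(c_eff_nf=30, volts=0.9, freq_ghz=1.5)   # ~7 W

print(f"one fast die: {one_hot_die:.1f} W")
print(f"4-die stack:  {4 * slow_die:.1f} W")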

This will mesh nicely with the coming switch from CISC primary processing architectures to massively parallel RISC as the primary processing architecture. Already, your GPU has far more processing power than your CPU with far superior linear scaling. Underclocking GPU cores and dialing back their thermal dissipation with better SOI while stacking them with cache-ram using this type of procedure will lead to some absolutely mind boggling performance.



posted on Sep, 12 2011 @ 04:59 AM

much less demanding clock speeds

Unfortunately, a great many algorithms are not parallelizable, and that is where every Hz counts. In fact, massively parallel processing is pretty much restricted to numerical simulation, rendering (graphics) and a few search algorithms.


coming switch from CISC primary processing architectures to massively parallel RISC as the primary processing architecture

Not sure what you mean by primary processing architectures, but for its use cases massively parallel hardware has been available for some time already: DSPs and, more recently, programmable GPUs. But I don't see CPUs vanishing. You won't run an operating system on a GPU.


GPU has far more processing power than your CPU with far superior linear scaling

Only for parallelizable problems. A single processing unit running at n times the clock speed will always be faster than n processing units.
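That intuition is basically Amdahl's law. A minimal Python sketch, with the parallelizable fraction as an assumed workload parameter:

# Amdahl's law: speedup of n units vs. one unit clocked n times faster.
# 'p' (the parallelizable fraction of the work) is an assumption about the workload.

def amdahl_speedup(p, n_units):
    """Speedup over a single baseline unit when only fraction p parallelizes."""
    return 1.0 / ((1.0 - p) + p / n_units)

n = 8
for p in (0.5, 0.9, 0.99):
    print(f"p = {p:.2f}: {n} units -> {amdahl_speedup(p, n):.2f}x, "
          f"one {n}x-clocked unit -> {n}x")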



posted on Sep, 12 2011 @ 09:48 AM
Wow, that's amazing!
Gluing CPUs together... heh, I foresee Skynet in the future, like another member mentioned.



posted on Sep, 12 2011 @ 11:07 AM
I remember when they were using liquid nitrogen to cool down the most advanced processors.



posted on Sep, 12 2011 @ 03:03 PM
reply to post by moebius
 



Unfortunately, a great many algorithms are not parallelizable, and that is where every Hz counts. In fact, massively parallel processing is pretty much restricted to numerical simulation, rendering (graphics) and a few search algorithms.


This is a series-versus-parallel argument. A parallel architecture can operate on serial instructions - an arithmetic unit performs an operation and passes the result off into the cache for another (perhaps specialized) unit to operate on before updating the value in the cache. It might, technically, be slower than a single complex-instruction architecture - but you can operate on hundreds of these things at any given time.
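One way to picture that hand-off between specialized units is classic pipelining: latency per item doesn't improve, but many items are in flight at once, so throughput approaches one result per step. A toy Python sketch (the stage count and item count are arbitrary assumptions):

# Toy pipeline model: k specialized stages, n items of inherently serial per-item work.

def serial_steps(n_items, n_stages):
    """Steps if each item must clear all stages before the next one starts."""
    return n_items * n_stages

def pipelined_steps(n_items, n_stages):
    """Steps when a new item can enter the first stage every step."""
    return n_items + n_stages - 1

n, k = 1000, 5
print(f"serial:    {serial_steps(n, k)} steps")
print(f"pipelined: {pipelined_steps(n, k)} steps")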

You're also only limited by TDP - so a few high-clock wafers can be added into the mix to handle the clock-demanding operations. You would, of course, sacrifice operations and features that are not necessary when you have parallel architectures handling those.


Not sure what you mean by primary processing architectures, but for its use cases massively parallel hardware has been available for some time already: DSPs and, more recently, programmable GPUs. But I don't see CPUs vanishing. You won't run an operating system on a GPU.


CPUs won't vanish - but their role is changing. The floating point performance of a single GPU is orders of magnitude greater than that of a processor from the 'same generation' (difficult to establish market generations, there - but a top of the line CPU is only going to be spanked by a mediocre graphics card two generations prior).

The largest shift we will see is in database servers, which benefit greatly from the parallel architectures (which have only been widely commercially available for about five years with a lot of competition between proprietary and open source standards). At least at first. The gaming community will also see architectures like Bulldozer appear and standards like OpenCL and PhysX begin taking advantage of the floating-point performance of on-die GPUs (that will not be used for graphics by gamers).

My father thought it was pretty funny, before he died, that they were moving floating-point processors back onto the CPU, as he recalled when they took them off of the CPU following the 486 era.

It's also interesting how the RISC/CISC divide has evolved. CISC has become a competition of clock speeds to maintain and improve performance, whereas RISC has become a matter of parallel, low clock speeds to maintain and improve performance. It's the opposite of the original way RISC and CISC competed (RISC taking advantage of higher clock speeds to perform smaller operations faster, and CISC having lower clocks but being able to perform more complex operations in a single clock).

Both will be with us for a long time to come. It just makes no sense to design your program to run on a bottle-necked architecture. CISC will never reach above 5 GHz - but RISC can see into the billions of "stream processors" (however the architecture they use defines them) on each of four cards in each of a dozen computers networked together.

Bring on 512-bit AES block cypher encryption.



posted on Sep, 12 2011 @ 03:10 PM

Originally posted by dainoyfb
Hmmm, will they draw a thousand times more power too? Will my laptop battery now last 0.12 minutes?





edit on 11-9-2011 by dainoyfb because: I typo'd.


Chips are becoming more efficient every year.

The chip I'm typing on in my iPad draws up to half a watt.

My desktop chip is a Sandy Bridge i3, which has a TDP of 35 W but rarely goes over 15 W.

You could design a chip with a TDP under 10 W, stack five together, and be a bit faster than what's currently out.

Just need to tinker with this new tech to find the optimum TDP-to-performance ratio.
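A quick sanity check on that idea in Python, using assumed numbers rather than anything from the announcement:

# Rough power budget: a stack of hypothetical low-TDP dies vs. a single desktop chip.
tdp_per_die   = 10    # watts, the hypothetical low-power die from the post
dies_in_stack = 5
desktop_tdp   = 95    # watts, a typical quad-core desktop CPU of the era (assumption)

print(f"{dies_in_stack}-die stack: {tdp_per_die * dies_in_stack} W "
      f"vs. typical desktop chip: {desktop_tdp} W")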

As a node for supercomputing or render farms it would be a huge improvement, as all other components would be shared, i.e. less power consumption.
edit on 12-9-2011 by unityemissions because: (no reason given)



posted on Sep, 12 2011 @ 03:42 PM
This is a great step forward. It will mean every aspect of a motherboard can be integrated into one place - on top of the i7's CPU/cache/memory controller/graphics you can add networking, SATA/SCSI control, USB, Bluetooth, sound and much more. It would consume much less power, as everything would be on one chip: less power lost as heat, fewer losses over connections/tracks/resistance, and data connection speeds between components would be massively increased as they would all be in the same silicon tower.

I agree with what others say, bring back AMD!


I remember their Cyrix processors blew Intel's Pentium out of the water, and at half the cost; those were the good old days!



posted on Sep, 12 2011 @ 03:58 PM
reply to post by naycalvert
 


Cyrix processors weren't AMD, were they?

I had one and it melted. Probably why it was faster... they were pushing it too hard for the available tech of the time.



posted on Sep, 12 2011 @ 06:04 PM

Originally posted by unityemissions
Cyrix processors weren't AMD, were they?
They weren't; Cyrix was an independent company that only designed the chips.

Their chips were faster than Intel's on some things but slower on others, so the difference was not big enough.



posted on Sep, 12 2011 @ 06:34 PM
They're wasting their time and money. I've seen the answer. It is built. It's been built up to petaflop processing power and it's the size of a VCR. It's already passed a technical audit with AT&T and Verizon because of our plan...that of course I'm not at liberty to talk about yet in detail.

It's coming but not on our time schedule. It's kinda messy at the top...

Hints? No moving parts.



