
Intel Shows 48-core x86 Processor


posted on Dec, 2 2009 @ 08:34 PM
"Intel unveiled a completely new processor design today the company is dubbing the "Single-chip Cloud Computer" (but was previously codenamed Bangalore). Justin Rattner, the company's CTO, discussed the new product at a press event in Santa Clara and revealed some interesting information about the goals and design of the new CPU."
www.pcper.com...

For the computer folks out there, this thing is amazing. It can behave as 24 dual-core processors running at different operating frequencies, and will draw between 25 and 125 watts. It is a small cluster on a single chip. Imagine a unit with 1,000 or more of these.




posted on Dec, 2 2009 @ 08:43 PM
reply to post by pteridine
 


Sweet..!



Imagine a unit with 1,000 or more of these.



Actually, I can't...I'm micro when it comes to this.




From Source:


"The chip's massive parallelism gives us the ability to investigate, today, the degree of parallelism that will be needed from applications five years down the line to make the best use of emerging many-core platforms." – David Andersen, assistant professor of Computer Science at Carnegie Mellon University





Can you give me an example as to why this is awesome, as I so clearly stated in the beginning of this post...? Obviously (I think) this is something cool, but I don't get how cool...


Appreciate it...




posted on Dec, 2 2009 @ 08:50 PM
reply to post by happygolucky
 


Put it this way - the computer you are using almost definitely has either 1 or 2 cores in its processor... Just say it has 1, rated at 2.0 GHz (32-bit)... That means at full tilt it can process 32 bits of data once every cycle, and it has 2,000 million cycles a second... shoving through a max of 64,000 million bits per second (under perfect conditions)...

Simply put, the more cores the more processing power, the more processing power the more stable and more fantastic you can make the applications, or the bigger the problems you can solve...

That chip is basically a super computer from about 10 or 15 years ago on a single chip!
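For anyone who wants to check that arithmetic, here is the same back-of-the-envelope calculation as a quick Python sketch (the figures are the hypothetical single-core machine from the post above, not measurements of any real chip):

    # Peak throughput for one hypothetical 32-bit core at 2.0 GHz.
    bits_per_cycle = 32
    cycles_per_second = 2_000_000_000  # 2.0 GHz = 2,000 million cycles/s

    peak_bits_per_second = bits_per_cycle * cycles_per_second
    print(f"{peak_bits_per_second:,} bits/s")  # 64,000,000,000 = 64,000 million
    print(f"{peak_bits_per_second / 8 / 1e9:.0f} GB/s of raw data")  # 8 GB/s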

edit Forgot the million


[edit on 2/12/2009 by Now_Then]



posted on Dec, 2 2009 @ 08:57 PM
reply to post by happygolucky
 


From the energy standpoint it is exceptionally efficient. This means that the limiting factor in many systems, the availability and cost of cooling, will be lessened. Power out is also pretty much power in, so we will have either more computer for the power or require much less power for the computational assets we need.
From the parallel processing standpoint, it is also an advantage. For a program that may only need 20-50 processors at a time, such as VASP, used by molecular modelers, one chip will make a nice workstation. There is also the advantage of rapid communication between processors because of the short path between them.
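As a rough illustration of that workstation use case, here is a minimal Python sketch that farms independent tasks out across a pool of worker processes; simulate_molecule is a hypothetical stand-in for a real modeling kernel, not anything from VASP itself:

    from multiprocessing import Pool

    def simulate_molecule(seed):
        """Hypothetical stand-in for one independent modeling task."""
        total = 0
        for i in range(1_000_000):
            total += (seed * i) % 97
        return total

    if __name__ == "__main__":
        # On a many-core chip, each worker process can land on its own core.
        with Pool(processes=48) as pool:
            results = pool.map(simulate_molecule, range(48))
        print(f"Finished {len(results)} tasks")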



posted on Dec, 2 2009 @ 08:57 PM
OMG. Just. Wow!!!

My first computer was a 386DX at 25 MHz with 1 MEGABYTE of RAM and a 120 MEGABYTE hard drive!


I now have a comp that I put together myself - dual-core AMD 64-bit at 3.1 GHz. I can NOT imagine more than a quad core. Jeez!

And isn't it interesting how we stopped at about the high-3 GHz range? I know it's more about the multicore these days, but is it possible to go past 4 GHz?

Sorry. I love computers - A LOT. I just get amped up when I hear things like this.

I did read somewhere that someone (I THINK it's the US government - housing it at MIT or something) has an underground conglomerate of servers with Intel Itaniums - something like 10,000 cores all combined (I'm off on the numbers, but it was indeed over 10,000)!



posted on Dec, 2 2009 @ 08:58 PM
And a couple of years from now when this chip is on the market, Microsoft will inform you that "Windows 10" requires at least a terabyte to install. You need to upgrade!!!



posted on Dec, 2 2009 @ 09:07 PM
reply to post by impaired
 


Red Storm and Jaguar come to mind but there are many other large parallel machines.

edit to add, top 500 computers --- www.top500.org...

@ someoldguy -- No, they won't. First they will come out with VistaMe, which will require most of the processor just to load in less than 10 minutes. After everyone is desperate to be rid of VistaMe, they will come out with Windows 10 and charge for it. They may even charge by number of cores... both times. The PC-Mac commercials will be even better.



[edit on 12/2/2009 by pteridine]



posted on Dec, 2 2009 @ 09:08 PM

Originally posted by Now_Then
reply to post by happygolucky
 


Put it this way - the computer you are using almost definitely has either 1 or 2 cores in its processor... Just say it has 1, rated at 2.0 GHz (32-bit)... That means at full tilt it can process 32 bits of data once every cycle, and it has 2,000 cycles a second... shoving through a max of 64,000 bits per second (under perfect conditions)...

Simply put, the more cores the more processing power, the more processing power the more stable and more fantastic you can make the applications, or the bigger the problems you can solve...

That chip is basically a super computer from about 10 or 15 years ago on a single chip!


Clearly you don't know computers.

It's really hard to make software use all of those cores...

This will be better for server applications... but really, for a desktop user, it shouldn't be useful nowadays... since, like I said, it's hard to make an application that can work across multiple cores with everything synchronized.

Imagine that you have a lot of processors at 3 GHz... the problem is that YOU WON'T MAKE a + b = c FASTER, since it is computed on one core... to use that c in another operation, you have to wait for that core to finish... so really, you have to write software that runs what amount to different programs within it at the same time... it's hard to share data between them... so really, it's hard.
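To make that dependency point concrete, here is a small Python sketch (with made-up workloads): the two independent sums can run on separate cores, but the final addition has to wait for both, and no number of extra cores can start it any sooner:

    from concurrent.futures import ProcessPoolExecutor

    def big_sum(n):
        """Independent work that can run on its own core."""
        return sum(range(n))

    if __name__ == "__main__":
        with ProcessPoolExecutor() as pool:
            # a and b do not depend on each other, so they can run in parallel.
            fut_a = pool.submit(big_sum, 10_000_000)
            fut_b = pool.submit(big_sum, 20_000_000)
            # c depends on both results; this step is inherently sequential.
            c = fut_a.result() + fut_b.result()
        print(c)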



posted on Dec, 2 2009 @ 09:11 PM
Ok. Found some info. I hope I'm not polluting your thread. Please let me know.

All from Wikipedia:


A Cray XT5 system, Jaguar has 224,256 Opteron processor cores and operates with a version of Linux.


Actually, there are so many of these supercomputers, I'll just add one more because this is just so awesome:


On the Top500 list, Roadrunner is said to have 122,400 cores. It is important to know which core is counted.

* 13,824 Opteron cores + 116,640 Cell cores = 130,464 cores for both the computing nodes and the operation nodes.

This is a number larger than the one mentioned on Top500. It turns out that the Roadrunner only used 17 Connected Units while doing the LINPACK benchmark, and it was not counting the cores in the operations and communication nodes (they didn't run the benchmark).[14]

* 6,120 Opteron (2 cores) + 12,240 PowerXCell 8i (9 cores) = 122,400 cores




en.wikipedia.org...



posted on Dec, 2 2009 @ 09:16 PM
If you're not rendering out images, why would people need so much firepower in a single chip?



posted on Dec, 2 2009 @ 09:44 PM
So how well does it scale? How many cores it has is somewhat irrelevant unless multiple cores can share the workload of any given process (especially in the desktop environment). Otherwise you'll be sitting there with your 48 cores running an app that can access only one core, the other 47 doing nothing unless you start an additional process. Hell, even the most demanding games these days can barely work well with multiple cores due to poor scalability, and others must revert to single-core operation. Imagine a Ferrari running on 1 cylinder. I suppose this is great if you plan on running 48 different virtual machines; apart from that, whoopee.
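The worry above has a classic formalization in Amdahl's law: if only a fraction p of a program can run in parallel, the speedup on n cores is capped at 1 / ((1 - p) + p / n). A quick Python sketch with assumed values of p:

    def amdahl_speedup(p, n):
        """Upper bound on speedup when fraction p of the work parallelizes over n cores."""
        return 1 / ((1 - p) + p / n)

    # Even code that is 95% parallel tops out well short of 48x.
    for p in (0.50, 0.95, 0.99):
        print(f"p={p:.2f}: {amdahl_speedup(p, 48):.1f}x on 48 cores")
    # p=0.50: 2.0x   p=0.95: 14.3x   p=0.99: 32.7x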



[edit on 2-12-2009 by fumanchu]



posted on Dec, 2 2009 @ 09:47 PM
reply to post by pteridine
 


Sounds great but what's the catch?
How much processing power do you get out of each of those 48 cores?
This sounds great for server farms to spit out webpages for Google.
For a supercomputer like a Cray, I see poor performance.
How many of those chips will give me 2 Petaflops?
We will see.



posted on Dec, 2 2009 @ 10:09 PM

Originally posted by Now_Then
Simply put, the more cores the more processing power, the more processing power the more stable and more fantastic you can make the applications, or the bigger the problems you can solve...


That's true in theory. Unfortunately, a lot of programmers are scared of threading issues. Parallelizing algorithms for high concurrency isn't always easy either.
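For anyone wondering what "threading issues" look like in practice, here is a minimal Python sketch of the classic race condition: two threads incrementing a shared counter need a lock, or updates can silently interleave and get lost (the names here are illustrative):

    import threading

    counter = 0
    lock = threading.Lock()

    def increment_many(n):
        global counter
        for _ in range(n):
            # counter += 1 is a read-modify-write; without the lock,
            # two threads can interleave and drop increments.
            with lock:
                counter += 1

    threads = [threading.Thread(target=increment_many, args=(100_000,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)  # 200000 with the lock; often less without it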



posted on Dec, 2 2009 @ 10:09 PM

Originally posted by Faiol
Clearly you don't know computers.

It's really hard to make software use all of those cores...

This will be better for server applications... but really, for a desktop user, it shouldn't be useful nowadays... since, like I said, it's hard to make an application that can work across multiple cores with everything synchronized.


What are you on about?? I was just giving an explanation in very simple terms... I didn't say anything about desktop applications... And anyway, applications develop to meet the hardware available... Your post stinks of trolling and that's a real detriment to this site...

Edit: my maths on the clock rate was off, though.


[edit on 2/12/2009 by Now_Then]



posted on Dec, 2 2009 @ 10:11 PM
Oh hell...


DAMN TERMINATOR!! YOU SCARY!!!



posted on Dec, 2 2009 @ 10:13 PM
I'm waiting for nanoprocessors (either mechanical or quantum) to come into being in the future. But since that isn't happening within 10 years, I'll take this.

Before I got my new Dell quad core (it has 8 cores on it, I thought it was only 4 until I looked at the specs on the system), I had a single-core Vaio desktop. It was top of the line 5 years ago and I was able to do everything with it. What I noticed over time was that my CPU couldn't handle the new software anymore (the CPU was tapping out and staying at 100% utilization), and it was having difficulty trying to handle new video formats (even with the new patches).

Having more cores enables you to handle more operations at one time. Also, if you are running a program that combines multiple workloads in one (video rendering, HD graphics, variable associations with a number of inputs, etc.), then having a multiple-CORE system helps you.

Also, back in the day software couldn't keep up with hardware. Now the hardware is playing catchup with the software. As I stated at the beginning about nano-processors, that will eventually have to happen, because there is a limit to how small our hardware can make computer processors, but software has been growing exponentially and will continue as long as people can think of novel ideas.



posted on Dec, 2 2009 @ 10:19 PM
reply to post by impaired
 


Impaired, you are indeed impaired in understanding computers.

A 'computer' is a complete machine; a 'CPU' is one part of it.

Put it this way... that 200K+ core computer has, at the least, some 50,000 CPUs (assuming quad-core chips).

This ONE CPU has 48 cores.

Do the math... roughly 4,166 of these new Intel CPUs (200,000 ÷ 48) would match that 50,000-CPU behemoth.



posted on Dec, 2 2009 @ 11:37 PM
reply to post by Revolution-2012
 


If you want to attack the bottlenecks, we need to switch to an optoelectronic computer. The aliens must be laughing at our energy-hog server farms that are gobbling up megawatts of electrical power.



posted on Dec, 3 2009 @ 04:35 AM
Neat - although I am not entirely surprised. Each core is probably very slow, but I'm sure it will be extremely fast for some specific tasks.

Also, remember that newer video cards often have thousands of "cores", aka stream processors.


I know it's more about the multicore these days, but is it possible to go past 4 GHz?

Modern computers become extremely inefficient when scaling past 4 GHz. You need much more power to do it, for a limited gain. It's just more efficient to add architectural improvements and add cores.

For example, a Core 2 Duo at 1.86 GHz will be faster than a Pentium D at 3.6 GHz despite both being manufactured on the same process. Both are dual core. However, the Core 2 Duo will consume half the power. That alone was done by architectural improvements. And a Core i7 at 2.66 GHz will be faster than a Core 2 Quad at 3.2 GHz.

4 GHz is generally only possible by overclocking. It's actually pretty easy for many modern processors to achieve - like the Core i7 940 and above. Some iterations of the Core 2 Duo (and Quad), Pentium 4, and i7 can also do it. It produces a fair bit more heat though, and can often need tweaking to remain at 4 GHz after a few months as the CPU slowly degrades, like it does in any PC.
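The "much more power for limited gain" point comes from the usual dynamic power approximation for CMOS chips, P ≈ C × V² × f: higher frequency usually demands higher voltage, so power climbs much faster than clock speed. A rough Python sketch with made-up voltages:

    def dynamic_power(c, v, f):
        """Approximate CMOS dynamic power: capacitance * voltage^2 * frequency."""
        return c * v ** 2 * f

    # Hypothetical chip: going 3.0 GHz -> 4.0 GHz often needs a voltage bump too.
    base = dynamic_power(1.0, 1.20, 3.0e9)
    overclocked = dynamic_power(1.0, 1.35, 4.0e9)
    print(f"{overclocked / base:.2f}x the power for a 1.33x clock gain")  # ~1.69x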


(it has 8 cores on it, I thought it was only 4 until I looked at the specs on the system).

You likely have a Core i7 with hyperthreading. That means you have 8 logical cores, but only 4 physical cores.

That, or you bought a Skulltrail system with two quad-cores.
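If you want to check this on your own machine, here is a quick Python sketch; the third-party psutil package can separate logical from physical cores, while the standard library reports only logical ones:

    import os
    import psutil  # third-party: pip install psutil

    print("Logical cores: ", psutil.cpu_count(logical=True))   # e.g. 8 with hyperthreading
    print("Physical cores:", psutil.cpu_count(logical=False))  # e.g. 4
    print("os.cpu_count():", os.cpu_count())                   # logical count as well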

reply to post by Revolution-2012
 

Remember nvidialovetard,

ATi > Nvidia.


lol jks.

[edit on 3/12/2009 by C0bzz]



posted on Dec, 3 2009 @ 11:00 AM
reply to post by impaired
 


The primary reason our current processor frequencies are limited is the intense heat generated in the core. Basically (and someone feel free to correct me if I am wrong), a machine running above 4 GHz would need a water cooling system to even be kind of functional. Even then, it will still get very, very hot.


