
Intel Core i9 CPU Coming in 2017

posted on May, 20 2017 @ 04:33 PM
a reply to: stormcell

Oh, I know when you want to seriously take it to the max the numbers get silly, but normally that's at the business end of the scale, and at that point you know things are serious when you're paying for AC and 3-phase power supplies etc. The real fun will be when they release the next-gen Xeons, as that's when the s--- does get serious.



posted on May, 20 2017 @ 05:47 PM

originally posted by: Echo007
High end LGA2066 processors are going to be very expensive. LGA2066 is pointless if all you do is surf the web and play video games. If only 1% of the user base has 12 cores, what game developer is going to waste time programming the game to take advantage of all the extra cores? If you do video encoding or photo editing professionally, I could see going with LGA2066.

You could build a whole new system, including MOBO, RAM, CPU, GPU and PSU, for the cost of one of Intel's high-end LGA2066 CPUs.


Here's the problem with adding more cores. When software runs, it runs either in serial or in parallel. In serial, each task is taken one after the next. In parallel, multiple threads each compute their own tasks simultaneously. However, not every task can be calculated simultaneously: some core logic in just about any program must be done in serial (there are exceptions, such as decryption). Your resulting runtime with more cores is therefore serial + (parallel / cores).

So for example, if you have 2 cores and 90% of your code can be made to run in parallel, a dual-core system will complete a task in 10 + (90 / 2) = 55% of the time a single-core machine takes. That's a pretty big boost, but as the number of cores increases, the gain becomes less and less. A 4-core machine in similar circumstances will complete the task in 32.5% of the time a single-core machine would. 1 core = 100%, 2 cores = 55%, 4 cores = 32.5%. While that first extra core nearly halved your runtime, it took 2 more cores to get half as much benefit.

You can scale this concept out, too. A 12-core system with the same 10% serial portion will complete the task in 10 + (90 / 12) = 17.5% of the time. You have to jump from 4 cores to 12 in order to halve runtime again. From that point it's not even possible to halve it once more: jumping up to 60 cores only gets you to 10 + (90 / 60) = 11.5%. Going from 12 to 60 cores is only about a 1/3 decrease in runtime.
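What Aazadan is describing is Amdahl's law. A minimal sketch in Python (the function name is mine, not from the thread) that reproduces the percentages above:

```python
def runtime_fraction(serial_pct, cores):
    """Amdahl's law: remaining runtime, as a percentage of the
    single-core time, when serial_pct% of the work cannot be
    parallelized and the rest splits evenly across `cores`."""
    parallel_pct = 100 - serial_pct
    return serial_pct + parallel_pct / cores

# The 10%-serial workload from the example above.
for cores in (1, 2, 4, 12, 60):
    print(cores, runtime_fraction(10, cores))
# 1 -> 100.0, 2 -> 55.0, 4 -> 32.5, 12 -> 17.5, 60 -> 11.5
```

Note the floor: no matter how many cores you add, runtime never drops below the 10% serial portion.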

In reality, given how much of a task can typically be parallelized in home/office use, there's little to no benefit in going beyond 4 cores. Xeons are built to be server processors, an area where having many threads lets you scale to more concurrent users. That has business value for certain applications, but it's not the sort of thing you'd want to give someone as their desktop PC, even if they were running very demanding software.

But, to answer your question, since I am a game developer: CPUs have basically reached a point where more isn't better. Most of the demanding work has been moved off to GPUs. The only real game application faster CPUs have at this point is that they let you use trig operations like sin and cos a lot more freely (these are expensive operations and under normal circumstances must be used sparingly), which in turn lowers the math burden on game developers.
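To illustrate the kind of trig-dodging the poster means (a hypothetical example, not something from the thread): a field-of-view check can precompute one cosine and compare dot products, instead of calling acos for every entity every frame.

```python
import math

def in_field_of_view(forward, to_target, fov_degrees):
    """Is the 2D vector `to_target` within fov_degrees of `forward`?
    Precomputes a single cosine threshold and compares dot products,
    so there is no inverse-trig call per check."""
    cos_threshold = math.cos(math.radians(fov_degrees / 2))
    dot = forward[0] * to_target[0] + forward[1] * to_target[1]
    mag = math.hypot(*forward) * math.hypot(*to_target)
    return dot >= cos_threshold * mag

print(in_field_of_view((1, 0), (1, 0.1), 90))  # True: ~5.7 degrees off axis
print(in_field_of_view((1, 0), (0, 1), 90))    # False: 90 degrees off axis
```

In a real engine the threshold would be computed once per camera, not per call; it's inlined here to keep the sketch self-contained.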



posted on May, 20 2017 @ 08:06 PM
My first computer was a TRS-80 Color Computer I bought around 1982: a four-color display and a blazing 1 MHz Motorola 6809 processor with 4K of RAM. It supported plug-in cartridges, cassette tape storage and a 5.25" floppy disk drive. It had a nickname: Trash-80. It was a good system for its time and fun to learn on.

That computer seemed faster than today's clunky Windows systems.
edit on 20-5-2017 by eManym because: (no reason given)



posted on May, 20 2017 @ 11:54 PM

originally posted by: Aazadan
(full post quoted above; snipped)


Oh look, someone smart. I was hoping someone would make some sense.



posted on May, 21 2017 @ 01:20 AM
This 12-core processor will probably cost a thousand dollars. But thanks to frequent server-farm upgrades, you can find a nice dual-socket, ECC-memory Supermicro board on fleabay for ~$100, sometimes with memory and CPUs included. Add 2x Intel Xeon X5675 3.06GHz six-core processors at $50 each, and registered ECC DDR3 sticks at $10 per 4GB.

For ~$300 you end up with a nice 12-core computer (case, PSU, ... not included), and believe me, server-grade components, even used and abused, are far more reliable than new consumer-grade garbage.



posted on May, 23 2017 @ 04:11 AM
a reply to: Aazadan

Mostly true, but it should be pointed out more strongly that this is highly dependent on the code/application being run.
For regular applications, more cores mean very little. Often it just means more system responsiveness under bloat, which is not really what users should be going for lol.


There are some applications that can dynamically add more parallel tasks while also keeping on top of the bookkeeping tasks that have to run serially, so it isn't always the case that you get a large performance drop-off. 3D rendering applications have commonly fallen into this category, since they run Monte Carlo-like methods, which by nature are often a case of firing off many simulations across n threads with a bit of bookkeeping at the end. LuxRender, for example, was benchmarked on a 12-core machine about 6 years ago; if memory serves me correctly, the turnover in speed only started at about 9 threads, with nearly linear gains from adding threads before that (I think it was around 95%).
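The structure described above can be sketched in a few lines of Python. This toy Monte Carlo pi estimate (my own example, not LuxRender's actual code) runs its chunks serially for simplicity; the point is that the chunks share nothing, so in a real renderer each one could go to its own core, with only the final sum done as serial bookkeeping.

```python
import random

def pi_chunk(samples, seed):
    """One independent Monte Carlo chunk: count darts landing inside
    the unit quarter-circle. Chunks share no state, which is why this
    style of workload scales almost linearly with cores."""
    rng = random.Random(seed)  # per-chunk RNG, so chunks stay independent
    hits = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits

# Serial stand-in for a worker pool: the only serial "bookkeeping"
# is summing the per-chunk counts at the end.
chunks = [pi_chunk(25_000, seed) for seed in range(8)]
estimate = 4 * sum(chunks) / (8 * 25_000)
print(estimate)  # close to pi
```

Swapping the list comprehension for a `multiprocessing.Pool.map` call is the only change needed to fan the chunks out across cores.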

As said though... for your average user, and even the gaming enthusiast, this CPU won't make any difference.
edit on 23-5-2017 by ErosA433 because: (no reason given)



posted on May, 30 2017 @ 12:25 PM
EXTREME!!


Intel’s answer to AMD’s [chip...] is an 18-core, 36-thread monster microprocessor of its own, tailor-made for elite PC enthusiasts.

The Core i9 Extreme Edition i9-7980XE, what Intel calls the first teraflop desktop PC processor ever, will be priced at (gulp!) $1,999 when it ships later this year. In a slightly lower tier will be the meat of the Core i9 family: Core i9 X-series chips in 16-core, 14-core, 12-core, and 10-core versions, with prices climbing from $999 to $1,699. All of these new Skylake-based parts will offer improvements over their older Broadwell-E counterparts: 15 percent faster in single-threaded apps and 10 percent faster in multithreaded tasks, Intel says.

PCworld.com, May 30, 2017 - Intel's massive 18-core Core i9 chip starts a bloody battle for enthusiast PCs.

10-, 12-, 14-, 16-, and 18-core Extreme Editions running a cool teraflop in a desktop! That is just wrong!

More's Law: Just add a couple more cores to the CPU! If you think $2K is too much, don't worry! Intel is also adding three new Core i7s and a quad-core i5 for those of us still riding our bikes to school.

Me? I'm holding out for a cool graphene tablet with terahertz network speed that looks like a piece of transparent copy paper and powered wirelessly just by my breathing! Then we'll be cooking with bacon.



posted on May, 30 2017 @ 03:26 PM
I don't run a lot of parallel applications these days, and my i7 easily handles virtual machines when I want a bit of Windows compatibility. So it's a bit pointless for me to spend $1,200 on a processor that consumes 2-3x the electricity of my i7. Perhaps if I were interested in AI it could come in handy, but I simply don't like Intel's architecture. Intel seems to be playing for specs rather than true delivered performance. I'd be more interested if Intel created smaller CPUs like ARM's and banged thousands of them together on a single layered wafer (it was done years ago but swallowed by the military). Then we could create a HAL 9000 to destroy mankind!



posted on May, 30 2017 @ 03:32 PM
I need 2.



posted on May, 30 2017 @ 04:56 PM

originally posted by: glend
I don't run a lot of parallel applications these days, and my i7 easily handles virtual machines when I want a bit of Windows compatibility. So it's a bit pointless for me to spend $1,200 on a processor that consumes 2-3x the electricity of my i7. Perhaps if I were interested in AI it could come in handy, but I simply don't like Intel's architecture. Intel seems to be playing for specs rather than true delivered performance. I'd be more interested if Intel created smaller CPUs like ARM's and banged thousands of them together on a single layered wafer (it was done years ago but swallowed by the military). Then we could create a HAL 9000 to destroy mankind!


Like I was trying to get across earlier, we've basically hit the limit of what we can do with silicon-based CPUs. There are a few small performance gains still to be had, but for all practical purposes we've hit the speed limit. It won't be until DNA computers that we see another big performance gain in processors, barring a mathematical breakthrough on the software side, which is certainly possible.
edit on 30-5-2017 by Aazadan because: (no reason given)



posted on Jun, 7 2017 @ 09:16 PM
12 cores and 24 threads means each core can run 2 hardware threads independently, so the processor can handle 24 instruction streams at once. That's quite an improvement for business and server applications.

The problem with gaming and multithreaded processors is that the threads can't communicate fast enough to be useful in an environment that thrives on fast instruction execution.
edit on 7-6-2017 by eManym because: (no reason given)



posted on Jun, 8 2017 @ 05:25 PM

originally posted by: eManym
The problem with gaming and multithreaded processors is that the threads can't communicate fast enough to be useful in an environment that thrives on fast instruction execution.


The raw speed you execute instructions at has little impact on games. Much more often, your performance gains come from writing your code carefully and keeping everything at linear or near-linear time, avoiding exponential operations.

More often than not, this comes down to building tables of things and doing a precalculated O(1) lookup rather than recomputing something complex.
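A classic instance of that table-building, tying back to the earlier point about expensive trig (the table resolution and names here are my own choices, purely illustrative): precompute sine once up front, then every later call is an O(1) list index instead of a fresh math.sin.

```python
import math

# Precompute a 1-degree-resolution sine table once, at load time.
SIN_TABLE = [math.sin(math.radians(d)) for d in range(360)]

def fast_sin(degrees):
    """O(1) lookup of a precalculated value instead of recomputing.
    Accuracy is limited by the table's 1-degree resolution: fine for
    many game effects, not for precision work."""
    return SIN_TABLE[int(degrees) % 360]

print(fast_sin(30))   # ~0.5
print(fast_sin(390))  # same value: wraps around to 30 degrees
```

The trade is the usual one: a few kilobytes of memory and a small accuracy loss in exchange for constant, branch-free lookup cost.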




top topics



 
8
<< 1   >>

log in

join