When will the core wars stop?

Joseph F

Diamond Member
Jul 12, 2010
3,522
2
0
At what point do you think Intel and AMD will stop focusing on adding more cores and start focusing exclusively on IPC and clock-rate improvements in their processors? I think for consumer products it will probably be around 64-128 cores. For servers, probably around 256 cores.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
Shifting focus from one diminishing return to another isn't really a good idea. Manufacturers went multi-core because it's much harder to increase clock speed and per-clock performance.
 

Joseph F

Diamond Member
Jul 12, 2010
3,522
2
0
http://en.wikipedia.org/wiki/Amdahl%27s_law
This is why I ask: According to Amdahl's law, if 95% of a program can be parallelized you won't get any performance boost at all if you have more than 20 cores. And I seriously doubt that programmers could optimize many programs (save for things like video/image encoding/editing) to take advantage of more than about 128 cores.
 

Borealis7

Platinum Member
Oct 19, 2006
2,901
205
106
The way I see it, in the desktop market it won't stop in the near future (6-10 years at least), or until we get a high-end GPU integrated on the CPU. Instead of adding more CPU cores, we'll just have more GPU cores on die.
 

Soulkeeper

Diamond Member
Nov 23, 2001
6,732
155
106
"This is why I ask: According to Amdahl's law, if 95% of a program can be parallelized you won't get any performance boost at all if you have more than 20 cores."

^^ This assumes that you are only running one program.
Typical systems can have hundreds of programs running concurrently, fighting for CPU cycles.
We may never have "enough" cores for a server, for example...

I suppose the next big thing could/will be specialization of cores/modules, integration of the GPU and other stuff, etc. (insert marketing term here), rather than relying only on core count, IPC, or clock. They are tweaking each along the way.
 

Bateluer

Lifer
Jun 23, 2001
27,730
8
0
Guys, when they've maxed out the core wars, hitting diminishing returns for the cost of the silicon, there will be something else to drive up the performance of a CPU that will pit the major players against each other in an effort to earn your dollar.
 

artvscommerce

Golden Member
Jul 27, 2010
1,144
17
81
I'd be interested in the opinion of someone who has a lot of experience programming code in parallel. I could see the core counts just getting higher and higher, but I would guess that they would hit many temporary plateaus if programmers were not able to keep up.
 

Edrick

Golden Member
Feb 18, 2010
1,939
230
106
I'd be interested in the opinion of someone who has a lot of experience programming code in parallel. I could see the core counts just getting higher and higher, but I would guess that they would hit many temporary plateaus if programmers were not able to keep up.

As a developer, this is something that we discuss from time to time in our architecture meetings. From a coding point of view, it gets much harder to manage the threads as the thread count goes up. The code gets more complex and code optimizations become more important. Mismanaging threads inside the code can cause huge performance penalties (wait states) which can compound each other. So from a budget standpoint, the cost of multi-threading an application does go up.
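To make the wait-state point concrete, here is a contrived sketch (a toy illustration, not our production code or our actual stack): eight threads that funnel every update through one shared lock, so most of their time is spent queued up behind each other rather than doing work.

```haskell
-- Contrived illustration only: 8 workers all serialized on one shared MVar.
-- Build with: ghc -threaded Contention.hs ; run with: ./Contention +RTS -N8
import Control.Concurrent
import Control.Monad (forM, replicateM_)

main :: IO ()
main = do
  counter <- newMVar (0 :: Int)
  dones <- forM [1 .. 8 :: Int] $ \_ -> do
    done <- newEmptyMVar
    _ <- forkIO $ do
      replicateM_ 100000 $
        modifyMVar_ counter (return . (+ 1))  -- every increment queues on one lock
      putMVar done ()
    return done
  mapM_ takeMVar dones                        -- wait for all workers to finish
  readMVar counter >>= print                  -- 800000, but with next to no speedup
```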

For any single application, we will reach a point where adding more threading is just not feasible. The code will become so complex that junior/intermediate developers will not be able to follow it, and performance gains will diminish.

So for desktop systems, I do see a point where more cores will simply be a waste. But for servers, I do not see a limit in sight, as a lot of our servers house multiple applications.
 

formulav8

Diamond Member
Sep 18, 2000
7,004
522
126
Well, AMD has a message on their main page: "Why More Cores Matter". So AMD still has the multi-core bug.
 

A5

Diamond Member
Jun 9, 2000
4,902
5
81
Pretty much every presentation I've seen says the future lies in heterogeneous CPU architectures. Intel is already starting down this path with the Quick Sync stuff in Sandy Bridge.

While Larrabee will probably never launch as a stand-alone product, expect to see the same idea integrated on a future CPU.
 

Joseph F

Diamond Member
Jul 12, 2010
3,522
2
0
It would be pretty awesome to see a motherboard with a regular CPU socket and a Larrabee co-processor socket, IMO. (I know that's not really what you are saying; I'm just kind of rambling on after staying up all night.)
 

cantholdanymore

Senior member
Mar 20, 2011
447
0
76
The way I see it, in the desktop market it won't stop in the near future (6-10 years at least), or until we get a high-end GPU integrated on the CPU. Instead of adding more CPU cores, we'll just have more GPU cores on die.

Cell processor, anyone?
 

BD231

Lifer
Feb 26, 2001
10,568
138
106
As long as we still have to sit around and wait more than five minutes for video to encode, decode, or transcode, no matter what the task, there will never be enough cores.
 

Texashiker

Lifer
Dec 18, 2010
18,811
198
106
At what point do you think Intel and AMD will stop focusing on adding more cores and start focusing exclusively on IPC and clock-rate improvements in their processors? I think for consumer products it will probably be around 64-128 cores. For servers, probably around 256 cores.

I asked pretty much this same question in another forum and was beaten down for it.

But I am not going to do that to you.

AMD and Intel have reached the maximum clock speed with current CPU designs. It's going to take a total change in CPU technology to improve speed, something that neither company can do right now.

Even though we have some new designs, the same old barriers are still in place - the speed of light and the amount of heat generated.

I think this article is related to your question, "Has Moore's Law finally hit the wall?" - http://www.zdnet.com/blog/storage/has-moores-law-finally-hit-the-wall/1195

We are not going to see the core wars stop until there is a total redesign of CPU technology.
 

Edrick

Golden Member
Feb 18, 2010
1,939
230
106
When CPUs move off silicon onto something new (like graphene), clocks will increase rapidly. Until then, we are only going to see very moderate increases in clock speeds with die shrinks.

In February 2010, researchers at IBM reported that they had been able to create graphene transistors with an on and off rate of 100 gigahertz, far exceeding the rates of previous attempts, and exceeding the speed of silicon transistors with an equal gate length. The 240 nm graphene transistors made at IBM were made using extant silicon-manufacturing equipment, meaning that for the first time graphene transistors are a conceivable—though still fanciful—replacement for silicon.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
At what point do you think Intel and AMD will stop focusing on adding more cores and start focusing exclusively on IPC and clock-rate improvements in their processors? I think for consumer products it will probably be around 64-128 cores. For servers, probably around 256 cores.
When CPUs are made of very different materials, which are currently too expensive for mass production. Intel and AMD might be able to increase IPC more than they do now, but not by all that much for the same amount of resources it takes to add more CPU cores.

Unless we get another method of making processors that is cheap enough, the only options are more general-purpose cores and special-purpose coprocessors. If we ditch languages that use ancient memory models (if side effects don't take conscious effort to create, it's a crappy memory model), or modify the ones that can be adapted to use better models, more cores will be put to better use, and it will become easier to use those coprocessors. Even so, there's a limit, and at some point we'll be stuck, on a per-application basis.

Cell processor, anyone?
Um, no. The Cell is not fit for PCs. Quite frankly, it isn't fit for a gaming system either, and Sony ended up having to add a GPU at the last minute. Many weaker processors next to your main processor, yes, but not like the Cell.
 

Voo

Golden Member
Feb 27, 2009
1,684
0
76
"This is why I ask: According to Amdahl's law, if 95% of a program can be parallelized you won't get any performance boost at all if you have more than 20 cores."

^^ This assumes that you are only running one program.
No, that just reflects a completely wrong understanding of Amdahl's law. It says the maximum speedup is 20x (obviously you can never get rid of the 5% of the time that has to be executed by exactly one CPU), NOT that only the first 20 cores give a speedup. Those two are completely different statements. Although there are many algorithms that don't scale at all, or don't scale past a few dozen cores, Amdahl's law has nothing to do with that.
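To put rough numbers on it, here is my own quick sketch using the 95% figure from the quoted post (speedup(n) = 1 / ((1 - p) + p / n)):

```haskell
-- Back-of-envelope Amdahl's law calculation, p = 0.95 parallel fraction.
amdahl :: Double -> Double -> Double
amdahl p n = 1 / ((1 - p) + p / n)

main :: IO ()
main = mapM_ report [2, 4, 8, 16, 20, 64, 1024]
  where
    report n = putStrLn $ show (round n :: Int) ++ " cores -> "
                       ++ show (amdahl 0.95 n) ++ "x"
-- 20 cores gives ~10.3x, 64 gives ~15.4x, 1024 gives ~19.6x: the curve creeps
-- toward the 20x ceiling, but extra cores never stop helping outright.
```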

Other than that: if Intel/AMD knew what else to do with their transistors, they would surely love to do it, but there's hardly an alternative to more cores. At least for clients who don't run dozens of different programs at the same time, more cores won't do much good for long (that is, unless you spend all your time encoding videos...).
 

LiuKangBakinPie

Diamond Member
Jan 31, 2011
3,903
0
0
The x86 core wars could end by 2013. The war that is going to start, if it hasn't started already, is over the integration of I/O such as Ethernet, etc.
 

podspi

Golden Member
Jan 11, 2011
1,982
102
106
Pretty much every presentation I've seen says the future lies in heterogeneous CPU architectures. Intel is already starting down this path with the Quick Sync stuff in Sandy Bridge.

While Larrabee will probably never launch as a stand-alone product, expect to see the same idea integrated on a future CPU.

Yea, I think as transistor budgets explode we're going to see an increase in dedicated hardware for common tasks. I just hope AMD and Intel can work together enough to create standards. Otherwise...
 

Syzygies

Senior member
Mar 7, 2008
229
0
0
I'd be interested in the opinion of someone who has a lot of experience programming code in parallel. I could see the core counts just getting higher and higher, but I would guess that they would hit many temporary plateaus if programmers were not able to keep up.

I'll take all the cores I can get.

I build Linux boxes exclusively as compute servers for parallel math computations coded in Haskell.

A good read on the issues one faces with parallelism is The Art of Multiprocessor Programming. With more cores, syncing all those caches and memory traffic becomes a bottleneck. It is amazing what has to happen at the hardware level for this all to work.

Haskell, being a functional programming language, treats most memory as read-only (after one computation to set its value), so there is much less invalidating of cache memory. There's a constant efficiency loss, but I can often parallelize a program by adding a mere handful of lines of code, and it then scales linearly to all the cores I can get.
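Here is the shape of that "handful of lines", as a toy sketch rather than one of my actual compute jobs (it needs GHC's parallel package, built with -threaded and run with +RTS -N):

```haskell
-- Toy example of a one-line parallelization with Control.Parallel.Strategies.
-- Build with: ghc -threaded ParDemo.hs ; run with: ./ParDemo +RTS -N
import Control.Parallel.Strategies (parMap, rdeepseq)

-- stand-in for an expensive pure function; the real work is math, not this
expensive :: Integer -> Integer
expensive n = sum [ k * k | k <- [1 .. n] ]

main :: IO ()
main = print (sum results)
  where
    -- the one "parallel" line: evaluate each result fully (rdeepseq) on its
    -- own spark and let GHC's runtime spread the sparks across the cores
    results = parMap rdeepseq expensive [200000 .. 200200]
```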

Perhaps this applies to only 95%, with 5% holding out. That's a small job, and computer scientists prefer to think in terms of rates of growth. Make the job larger, and that 95% becomes 99.9%, and we're craving more than 20 cores.

Haskell is pretty hard to learn, but the problem of using all these cores is also hard, and Haskell offers the cleanest solution, for now. I know a couple dozen programming languages, and if there were a better choice I'd be using it. For example, Erlang messages take a huge efficiency hit; its parallelism is designed not for speed, but so an avalanche can take out a whole village of Scandinavian servers, with the sysadmins sleeping through the night while the rest of the network automatically repairs itself.

Functional programming is destined to become the norm, as we cope with multiple cores. It will take a long time, with lots of kicking and screaming, as programming languages are like religions. These aren't reborn Lispers preaching what they'd like to believe. This comes down to the need for read-only memory to avoid cache thrashing, and is begrudgingly acknowledged by people who aren't pleased by this news.

The virtual cores of the Core i7 are a second-order benefit, a scheduling convenience. It used to be that using all cores would slow down a parallel computation, as the odd other running job held back one core. Now, using 5, 6, or 7 of the 8 virtual cores of a Core i7 2600K gives pretty much the same throughput, with the effect of cleanly using all 4 physical cores with no scheduling slowdown. In other words, where I used to use 3 cores of a Q6600, I now effectively use all 4 cores of a 2600K.