Originally posted by: Viditor
Originally posted by: Kuzi
Notice how most desktop applications (not synthetic benchmarks) today don't use more than two cores, and even the ones that do use more than two only get a minor boost going from 2 to 4 cores, generally speaking of course. The more cores you have, the harder it becomes to make use of the extra ones; how many programs out right now give 4x the performance on 4 cores versus a single core?
As to Hyperthreading in Nehalem (8 threads), my guess is that the performance increase will be minimal, for two reasons. First, it will never be as effective as having 8 real cores, and second, as I mentioned above, most applications today don't get much benefit even from 4 cores. On the server side it might be very different, and my guess is that Nehalem with Hyperthreading will show its true strength there.
As to AMD, my guess is that they plan to have two quad-core dies in the same package (non-native), which is what Intel has been doing with their CPUs for a while. You know it's cheaper and easier to do than a native 8-core, whose die size would be too big even at 45nm; maybe they need to wait till 32nm to do that. Good luck AMD.
I agree completely about hyperthreading...in fact, I don't know of a good reason for Intel to be bringing it back (unless it helps with the CSI interface in some way).
As to AMD's upcoming 8- and 16-core CPUs, I don't know...
What we know so far is:
1. They are part of the Fusion project in that they will be designed for modularity. This lends credence to your prediction of an MCM, in that with all of the different variables (xCPU + xGPU = 8 cores), it would be VERY expensive to design and produce all of them monolithically...
2. On the other hand, we also know that they will utilize DC (Direct Connect) architecture and have a crossbar switch. This has always been a feature of monolithic designs only (in fact, I don't know how you could do it with an MCM). There's also the problem of how you deal with the on-die memory controller in an MCM...
BTW, while an MCM is cheaper from a yield standpoint, it's not necessarily cheaper overall (they are very expensive to design, which is why AMD doesn't have one).
My understanding is that Nehalem was designed by the recycled Prescott team in Oregon, so adding hyperthreading is something those guys would be expected to know how to do, versus the Israeli design team.
Also, hyperthreading need not incur a performance penalty just because >1 thread/core is involved. IBM and SUN both have many products in the market with >1 thread/core, and they do not incur a thread-level performance penalty. It comes down to design, of course, and tradeoffs. Just saying: assuming a priori that Nehalem's SMT will be sub-par is not an assumption I'm comfortable making.
Why do hyperthreading? Well, you do save die space and TDP on a normalized per-thread basis. If you can get 50% of the benefit of having a second thread on one core versus a dedicated second core, but at only a 15% increase in die size and a 20% increase in power consumption, then that is a win-win.
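To put rough numbers on it, here is a back-of-the-envelope calculation using the hypothetical 50%/15%/20% figures above (illustrative only, not measured data):

```python
# SMT tradeoff sketch, using the hypothetical numbers from the paragraph
# above: a second hardware thread gives 50% of the throughput of a second
# dedicated core, at +15% die area and +20% power versus one plain core.

smt_gain   = 0.50  # extra throughput from the second thread
area_cost  = 0.15  # die area increase for SMT
power_cost = 0.20  # power increase for SMT

# Option A: one SMT core        Option B: two dedicated cores
smt_perf,  smt_area,  smt_power  = 1.0 + smt_gain, 1.0 + area_cost, 1.0 + power_cost
dual_perf, dual_area, dual_power = 2.0, 2.0, 2.0

print(f"SMT core:   {smt_perf/smt_area:.2f} perf/area, {smt_perf/smt_power:.2f} perf/watt")
print(f"Dual cores: {dual_perf/dual_area:.2f} perf/area, {dual_perf/dual_power:.2f} perf/watt")
# SMT core:   1.30 perf/area, 1.25 perf/watt
# Dual cores: 1.00 perf/area, 1.00 perf/watt
```

Under those assumptions the SMT core wins on both throughput per mm² and throughput per watt, which is the whole argument.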
Granted, if you can't invoke SMT without performance-per-area or performance-per-watt paying dividends, then you shouldn't bother.
Why do it for desktop? Well, even if desktop apps don't use >2 threads, you can still make cheaper chips by selling "dual-thread" single-core parts (smaller die, less heat, etc.).
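And on the diminishing-returns point from the quoted post: Amdahl's law puts a hard ceiling on how much extra cores can buy you. A quick sketch (the parallel fractions here are made-up illustrative values, not measurements of any real app):

```python
# Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n),
# where p is the fraction of the program that runs in parallel
# and n is the number of cores.
def speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Illustrative parallel fractions only.
for p in (0.50, 0.90, 0.95):
    print(f"p={p:.2f}: 2 cores -> {speedup(p, 2):.2f}x, "
          f"4 cores -> {speedup(p, 4):.2f}x, 8 cores -> {speedup(p, 8):.2f}x")
# p=0.50: 2 cores -> 1.33x, 4 cores -> 1.60x, 8 cores -> 1.78x
# p=0.90: 2 cores -> 1.82x, 4 cores -> 3.08x, 8 cores -> 4.71x
# p=0.95: 2 cores -> 1.90x, 4 cores -> 3.48x, 8 cores -> 5.93x
```

Even a 95%-parallel workload only gets about 3.5x from 4 cores, which is why a second SMT thread only has to be cheap, not fast, to pay for itself.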
So I still see this all falling into the bucket of "if you can do SMT for a net gain, why wouldn't you?". To me this is QED on why you do SMT. The question is what AMD does in response. The arguments above remind me of the early days when the AMD camp was rallying around "monolithic quad-core will be teh uber, MCM FTL!".
Just because you can do hyperthreading badly, and can make it worse than not having it at all, doesn't mean that's a foregone conclusion. Intel showed MCM was not the end of the world; IBM and SUN have already shown SMT is not the end of the world. So what has AMD got cooking to deal with it? Hopefully not another PR campaign of "our threads are pure! None of this so-called resource sharing will taint the purity of our native thread processors."
