Originally posted by: Foxery
FSB increases do not have a significant impact on Core2 performance. Intel mostly does this to keep the multipliers low, and to push an upgrade path for motherboard and RAM makers.
Nailed it!
The desktop segment has not really needed an FSB boost since we hit 800MHz (4x200MHz).
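To put a number on that "800MHz" figure: it's a 200MHz base clock quad-pumped (four transfers per cycle). Here's a quick sketch of the theoretical peak bandwidth, assuming the standard 64-bit (8-byte) wide FSB data path; these are ideal peaks, not sustained rates.

```python
# Peak bandwidth of Intel's quad-pumped FSB: effective MT/s = 4x the base
# clock, over an assumed 64-bit (8-byte) data bus. Theoretical peaks only.

def fsb_bandwidth_gbps(base_clock_mhz, transfers_per_clock=4, bus_width_bytes=8):
    """Return peak FSB bandwidth in GB/s (1 GB = 10**9 bytes)."""
    return base_clock_mhz * 1e6 * transfers_per_clock * bus_width_bytes / 1e9

# Base clocks for the 800/1066/1333/1600 marketing labels (the 266/333
# entries are really 266.67/333.33 MHz, rounded here for readability).
for base in (200, 266, 333, 400):
    print(f"{base}MHz base -> ~{fsb_bandwidth_gbps(base):.1f} GB/s peak")
```

So the jump from 800 to 1066 buys roughly 6.4 -> 8.5 GB/s of headroom on paper, which a single dual-core with big caches rarely saturates.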
When was the last time a laptop cycle was based on increasing the FSB? You rarely upgrade a laptop, unlike desktops where the CPU can be readily swapped, so there is less "pressure" to make sure a laptop refresh brings the mobo & RAM makers along for the ride.
Originally posted by: magreen
It's true that the high latency penalty of the Core architecture is its Achilles' Heel and would have been its downfall if not for the large caches and smart prefetchers.
But increasing the FSB doesn't seem to translate into more performance in any benchmark I've seen... e.g. the e4300 (800 FSB) at 1.8GHz performed exactly like an e6300 (1066 FSB) would have at 1.8GHz. I'm not really sure why, come to think of it.
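The matched-clock comparison works because core clock = multiplier x FSB base clock, so you can hold the core clock fixed while varying the bus. A quick sanity check on the stock settings of the two chips mentioned (multipliers from their public specs; the E6300 runs slightly above 1.8GHz at stock, hence the "would have performed at 1.8GHz" hypothetical):

```python
# Core clock = multiplier x FSB base clock. Stock settings for the two
# chips discussed above; the 266.67 base is the "1066" quad-pumped FSB.

chips = {
    "E4300": {"multiplier": 9, "fsb_base_mhz": 200.0},    # 800 MT/s FSB
    "E6300": {"multiplier": 7, "fsb_base_mhz": 266.67},   # 1066 MT/s FSB
}

for name, c in chips.items():
    core_mhz = c["multiplier"] * c["fsb_base_mhz"]
    print(f"{name}: {c['multiplier']} x {c['fsb_base_mhz']:.0f}MHz "
          f"= {core_mhz:.0f}MHz core")
```

Downclock the E6300 to 1800MHz and the only remaining difference is the bus, which is exactly the variable the benchmark isolates.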
Servers and blades are a different story from the desktop market: with 2-4 sockets, memory contention becomes a real issue for performance scaling as you add more and more cores onto the back of the same FSB, even with those advanced prefetchers.
This is why, for instance, Skulltrail (2 sockets) operates with a dual-FSB architecture.
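A back-of-envelope way to see the contention problem: every core hanging off a shared FSB splits the same peak bandwidth, while a dual-FSB design like Skulltrail gives each socket its own bus. The sketch below assumes a 1333 MT/s FSB (~10.7 GB/s peak) and an ideal even split; real-world contention is worse than this.

```python
# Why multi-socket FSB systems hit a memory wall: cores on a shared bus
# divide one peak bandwidth figure, while independent per-socket FSBs
# multiply the aggregate. Assumes ~10.7 GB/s peak (1333 MT/s, 64-bit bus)
# and an ideal even split -- real contention overhead makes it worse.

FSB_PEAK_GBPS = 10.7  # assumed peak for a 1333 MT/s FSB

def per_core_share(total_cores, num_fsbs=1):
    """Ideal even split of aggregate FSB bandwidth (GB/s per core)."""
    return FSB_PEAK_GBPS * num_fsbs / total_cores

for cores in (2, 4, 8):
    one = per_core_share(cores, num_fsbs=1)
    two = per_core_share(cores, num_fsbs=2)
    print(f"{cores} cores: {one:.2f} GB/s each (shared FSB) vs "
          f"{two:.2f} GB/s each (dual FSB)")
```

At 8 cores the per-core share on a single FSB drops below 1.4 GB/s even in this ideal model, which is why the prefetchers alone can't save a multi-socket FSB design.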
Originally posted by: maximal
My understanding is that the Core architecture was always FSB-starved; Intel realized this and built the C/C2 architectures around large caches and efficient prefetchers to compensate for the FSB bottleneck. AMD, on the other hand, equally recognizing the FSB as a major bottleneck to CPU performance, decided to eliminate it altogether. Arguably AMD was the wiser but failed to capitalize on this with its overall CPU architecture. Now Intel has seen the light and is abolishing the FSB in its next architecture (Nehalem), which already seems to be breaking new ground in terms of performance (based on early previews).
AMD most certainly capitalized on it. The K7 core architecture was strong, but the IMC strategy made the K8 a product with no equal for many, many business quarters and helped propel AMD into the lucrative multi-socket server markets where those Opteron 8xxx's sold for $2k+.
Where AMD lost ground was that while they were busy tweaking the IMC end of the business (AM2, DDR->DDR2, etc.), they were not aggressive enough in improving the core side of things. This is where Intel caught up and surpassed AMD when it transitioned to Core2.
To say "Intel saw the light" implies the company's decision makers were oblivious to the option of integrating the memory controller for decades. This is obviously not the case; for both Intel and AMD it was always a "known" option for improving performance, but it has its drawbacks in terms of cost (die size), and you can rest assured some very smart people were making some very intelligent decisions (at both companies) as to when going with an IMC would provide the greatest returns to shareholder value.