Originally posted by: futuristicmonkey
What if, instead of introducing Prescott when we first saw 90nm from Intel, they had just shrunk Northwood to 90nm? Like the way AMD did? It seems to me that the way AMD did it caused excitement and anticipation in us, the consumers. I mean... when people heard of 90nm Athlons, they immediately thought of the overclocking potential, etc...
However, Intel introduced Prescott... longer pipeline... more heat... and inefficient until it reaches higher speeds, which seem to be out of reach right now. For example, the 4 GHz Prescott was cancelled (is that right?) because of crappy yields. (please correct me if needed)
What do you think could have happened if Intel had just shrunk Northwood to 90nm in the first place?
You make it sound like Intel made a conscious decision to deliberately release a hot, inefficient, longer-pipelined CPU, when that was not their intention at all.
Intel had their roadmap planned out through 2007: 130nm was called Northwood, 90nm was called Prescott, and 65nm was called Tejas. Tejas was supposed to pick up the torch (an appropriate analogy for what Prescott turned into - a heat factory) from where Prescott would end (in the 5 GHz range) and run all the way to 10 GHz by the end of 2007 or early 2008!
They were cruising along with Northwood, going from 1.6 GHz to 3 GHz+ when they did the die shrink to 90nm. Then the sh!t hit the fan (I'm not even kidding). The chip ran far hotter than expected. A die shrink normally lets you drop the voltage, which should cut power, but at 90nm leakage current spiked, and nothing Intel had built before leaked and heated up the way Prescott did.

They debuted Prescott at 2.8 GHz, which was the highest they could get it stable (this early chip being essentially what you're talking about - a 90nm "Northwood"). Since a 2.8 GHz Prescott (with the same 20-stage pipeline, 512K L2 cache, etc. as Northwood) is quite obviously slower than a 3.0/3.2 GHz Northwood, they had to find a way to get the clockspeed higher.
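To put rough numbers on the leakage problem, here's a quick back-of-envelope in Python. Every figure in it (capacitance, voltages, leakage currents) is my own ballpark assumption for illustration, not an Intel spec - the point is just that dropping the voltage cuts dynamic power, but it can't save you if leakage current balloons:

```python
# Crude CPU power model. Dynamic power scales with capacitance * voltage^2 * freq;
# leakage power is roughly voltage * leakage current.
# ALL numbers below are made-up ballpark figures, not Intel specs.

def cpu_power(cap_nf, volts, freq_ghz, leak_amps):
    dynamic = cap_nf * 1e-9 * volts**2 * freq_ghz * 1e9  # watts
    leakage = volts * leak_amps                          # watts
    return dynamic, leakage

# 130nm "Northwood-like" chip: modest leakage
dyn, leak = cpu_power(cap_nf=10, volts=1.5, freq_ghz=3.0, leak_amps=5)
print(f"130nm: {dyn:.0f} W dynamic + {leak:.0f} W leakage = {dyn+leak:.0f} W total")

# 90nm shrink: lower voltage trims dynamic power, but leakage current spikes
dyn, leak = cpu_power(cap_nf=8, volts=1.4, freq_ghz=3.0, leak_amps=30)
print(f" 90nm: {dyn:.0f} W dynamic + {leak:.0f} W leakage = {dyn+leak:.0f} W total")
```

With those made-up numbers the 90nm chip saves ~20 W of dynamic power and then gives back ~35 W in leakage - which is roughly the shape of what happened to Prescott.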
Roadmaps were drastically redrawn; Tejas was cancelled altogether.
It turned out that lengthening the pipeline was the only way to push clockspeeds past 3 GHz on this 90nm chip. Unfortunately, a longer pipeline also means a slower chip clock-for-clock: fewer instructions complete per cycle, and branch mispredictions cost more. So they upped the L2 cache to 1 MB and made a few minor optimizations to compensate, so that the newly redesigned (and 6-month delayed) Prescott now performs more or less the same as Northwood at a given clockspeed (actually roughly 2-4% slower).
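The trade-off is easy to see with a toy perf = IPC x clockspeed calculation. The IPC numbers here are invented purely for illustration (real IPC varies wildly by workload):

```python
# Toy illustration of the pipeline trade-off: performance is roughly
# IPC * clockspeed. A deeper pipeline clocks higher but completes fewer
# instructions per clock (and pays more per branch misprediction).
# The IPC figures below are invented for illustration only.

def relative_perf(ipc, clock_ghz):
    return ipc * clock_ghz

northwood = relative_perf(ipc=1.00, clock_ghz=3.2)  # 20-stage pipeline
prescott  = relative_perf(ipc=0.85, clock_ghz=3.6)  # 31-stage pipeline

print(f"Northwood 3.2 GHz: {northwood:.2f}")                     # 3.20
print(f"Prescott  3.6 GHz: {prescott:.2f}")                      # 3.06 - higher clock, still slower
print(f"Prescott break-even clock: {northwood / 0.85:.2f} GHz")  # ~3.76 GHz
```

So with those assumed numbers, the deeper pipeline has to hit nearly 3.8 GHz just to break even - exactly the kind of hole Prescott dug itself into.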
So now, even with its gigantic 31-stage pipeline, Intel is having problems breaking the 4 GHz barrier. The much vaunted "Netburst" architecture (just another name for the P4's design) is hitting the wall. If they went down to 65nm they could shatter the 4 GHz barrier, for sure, but Intel doesn't plan 6 months in advance - they plan 5 years in advance. 65nm is still a couple of years off, and with the P4 struggling this much already, how far could it take them? It's diminishing returns...
Intel appears to be moving toward an architecture closer to the Pentium M in the future - essentially going for IPC (instructions per clock) again, rather than the Netburst philosophy of highest MHz.
Dual core from both AMD and Intel shows that it's getting harder and harder to increase CPU clockspeed. And with the current design of the P4 (consuming 110 Watts at peak, or 85 Watts for the upcoming E0 stepping P4s), a dual-core P4 would be an insane power consumer (hence another reason a lower-power, "Pentium M-like" design will be necessary in the future).
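Just to make the dual-core math concrete - a crude doubling, ignoring the clock reductions and binning tricks a real design would use:

```python
# Naive dual-core power estimate: double one core's peak draw.
# Real dual-core designs clock lower and share components, so this
# overstates it - but the ballpark is still telling.
single_core_watts = 110  # current P4 peak (85 W claimed for the E0 stepping)
print(f"Naive dual-core P4: ~{2 * single_core_watts} W at peak")  # ~220 W
```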
I still have no idea where the industry will go after dual core - whether 65nm and beyond will let these chips run significantly faster, or whether we'll be moving to four cores a few years after that. Time will tell...