
If Intel just shrank Northwood to 90nm...

futuristicmonkey

Golden Member
If, instead of introducing Prescott when we first saw 90nm from Intel... what if they had just shrunk Northwood to 90nm, the way AMD did? It seems to me that the way AMD did it caused excitement and anticipation in us, the consumers. I mean, when people heard of 90nm Athlons, they immediately thought of the overclocking potential, etc...

However, Intel introduced Prescott... longer pipeline, more heat, and inefficient except at higher speeds, which seem to be limited right now. For example, the 4 GHz Prescott was cancelled (is that right?) because of crappy output. (Please correct me if needed.)

What do you think could have happened if Intel just shrank Northwood to 90nm in the first place?

<disconnect>
 
P.S. - burn that petition. Blizzard is a totally different company now; the minds behind it have since moved on, and a StarCraft sequel will most likely be a huge flop.
 
Originally posted by: AWhackWhiteBoy
i think thats exactly what they did, as well as optimize a few things in the process.

No... Prescott has a 31-stage pipeline compared to Northwood's 20 stages. That's the biggest difference I can think of.

What I meant was: what if they had put Northwood production on the 90nm fabs? Like, the core is exactly the same, but maybe with more cache. I'm under the impression that 90nm provides lower power consumption and better overclockability. Am I wrong?
 
It was supposed to yield those results, but between the fabrication process they picked and the changes they made, it just turned into a pig.

Why re-release a two-year-old processor, though? Doesn't seem like a very good idea.
 
Well, Northwood was definitely showing its age, and I am not sure the 90nm process would have dropped the temps much, if at all, so heat would still have been a factor; I suspect Intel thought longevity in a continued ramp-up was unlikely. While the vcore may have dropped a bit, the problem, I think, is that as you ramp speed the wattage increases again, and the thinner layers of material are now much more susceptible to heat. Notice how AMD's new 90nm process has an operating temp 5C lower than the 130nm Newcastles. The chips also become very sensitive to voltage and can experience power leakage.

Basically I am way over my head explaining this technical garb...

Basically, the 90nm process requires less vcore, since the smaller parts are more sensitive to voltage.

I.e., notice how the 0.18-micron Willamette chips ran at 1.75V and many were then overclocked to near 2.0V, and when Intel went down to the 0.13-micron chips they dropped the vcore to 1.5V with the 1.6A, and those chips could not handle much more than 1.8V.

The other thing, I think, is that the amps continue to climb even as the voltage drops, but overall the higher speeds of each successive generation result in a continued ramping up of watts, and thus Intel's enemy: HEAT!!!!
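The tradeoff being described can be sketched with the textbook CMOS dynamic-power relation P ≈ C·V²·f. This is a rough illustration only: the capacitance and voltage numbers below are made-up assumptions, not Intel's actual figures, and it ignores leakage (which gets worse as the process shrinks).

```python
# Rough dynamic-power sketch: P ~ C * V^2 * f.
# All numbers below are illustrative assumptions, not measured values.

def dynamic_power(c_farads, vcore, freq_hz):
    """Classic CMOS switching-power approximation: P = C * V^2 * f."""
    return c_farads * vcore ** 2 * freq_hz

# Hypothetical 130nm-class part: higher vcore, lower clock.
p_old = dynamic_power(c_farads=20e-9, vcore=1.5, freq_hz=3.0e9)

# Hypothetical 90nm shrink: vcore drops, but the clock target rises.
# (Keeping C constant is itself a simplification.)
p_new = dynamic_power(c_farads=20e-9, vcore=1.4, freq_hz=4.0e9)

print(f"130nm-ish: {p_old:.0f} W, 90nm-ish: {p_new:.0f} W")
# -> 130nm-ish: 135 W, 90nm-ish: 157 W
```

Even with a lower vcore, the quadratic voltage savings can be outrun by the frequency increase, which is exactly the "voltage drops but watts keep climbing" pattern described above.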

Intel, with its longer pipeline, ensured its ability to ramp to high speeds with limited changes in architecture from Willamette to Northwood; however, it's that continued climb in MHz that eventually led to Intel being 40-50% higher in watts of heat produced than AMD's best. AMD still has that headroom before it has to deal with too much heat.

Someone clue me in if I am way off....
 
Originally posted by: futuristicmonkey
If, instead of introducing Prescott when we first saw 90nm from Intel... what if they had just shrunk Northwood to 90nm, the way AMD did? It seems to me that the way AMD did it caused excitement and anticipation in us, the consumers. I mean, when people heard of 90nm Athlons, they immediately thought of the overclocking potential, etc...

However, Intel introduced Prescott... longer pipeline, more heat, and inefficient except at higher speeds, which seem to be limited right now. For example, the 4 GHz Prescott was cancelled (is that right?) because of crappy output. (Please correct me if needed.)

What do you think could have happened if Intel just shrank Northwood to 90nm in the first place?

<disconnect>

You make it sound like Intel made a conscious decision to deliberately release a hot, inefficient, longer-pipelined CPU, when that was not their intention at all.


Intel had their roadmap all planned out until 2007: 130nm was called Northwood, 90nm was called Prescott, and 65nm was called Tejas. Tejas was supposed to pick up the torch (an appropriate analogy for what Prescott turned into: a heat factory) from where Prescott would end (in the 5 GHz range) and run all the way to 10 GHz by the end of 2007 or early 2008!

They were cruising along with Northwood, going from 1.6 GHz to 3 GHz+, when they did the die shrink to 90nm. Then the sh!t hit the fan (I'm not even kidding). For some reason the chip ran hotter than usual (die shrinks always require lower voltage and higher current and thus produce more heat, but there had never been the kind of current leakage and heat spikes that Prescott had, at least not for Intel). They debuted Prescott at 2.8 GHz, which was the highest they could get it stable (this being essentially what you're talking about, the 90nm "Northwood"). Since a 2.8 GHz Prescott (also with a 20-stage pipeline, 512K of L2 cache, etc., the same as Northwood) is quite obviously slower than a 3.0/3.2 GHz Northwood, they had to find a way to get the clockspeed higher.

Roadmaps were drastically redrawn; Tejas was cancelled altogether.

It turned out that lengthening the pipeline was the only way to get clockspeeds past 3 GHz on this 90nm chip. Unfortunately, lengthening the pipeline also means a slower-performing chip per clock. So they upped the L2 cache to 1 MB and made a few minor CPU optimizations to compensate, so that the newly redesigned (and six-month-delayed) Prescott now performed more or less the same as Northwood at a given clockspeed (it's actually about 2-4% slower, roughly).
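The tradeoff here boils down to performance ≈ IPC × clock: a longer pipeline enables a higher clock but lowers instructions per clock. A toy sketch, with every number hypothetical (chosen only to mirror the rough "2-4% slower per clock" figure, not taken from any benchmark):

```python
# Toy model of the pipeline/cache tradeoff: performance ~ IPC * clock.
# Every figure here is hypothetical, picked only to illustrate the idea.

def performance(ipc, clock_ghz):
    """Relative performance as instructions-per-clock times clock speed."""
    return ipc * clock_ghz

northwood = performance(ipc=1.00, clock_ghz=3.2)    # baseline
longer_pipe = performance(ipc=0.90, clock_ghz=3.2)  # 31 stages alone hurt IPC...
prescott = performance(ipc=0.97, clock_ghz=3.2)     # ...1 MB L2 claws some back

print(f"Prescott vs Northwood at the same clock: {prescott / northwood:.0%}")
# -> 97%, i.e. a few percent slower per clock.
# The longer pipeline only pays off if it buys enough extra clockspeed:
print(f"Break-even clock for the lower-IPC design: {northwood / 0.97:.2f} GHz")
```

This is why the stalled clock ramp was so damaging: a design that trades IPC for frequency headroom loses outright once the frequency stops climbing.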

So now, even with its gigantic 31-stage pipeline, Intel is having problems breaking the 4 GHz barrier. The much-vaunted "NetBurst" architecture (just another codename for the P4) is hitting the wall. If they went down to 65nm, they could shatter the 4 GHz barrier for sure, but Intel doesn't plan 6 months in advance; they plan 5 years in advance. 65nm is still a couple of years off, and with the P4 struggling so much already, how far could it take them? It's diminishing returns...

Intel appears to be adopting an architecture closer to the Pentium M in the future, essentially going for IPC (instructions per clock) again over the NetBurst philosophy of highest MHz.

Dual core from both AMD and Intel shows that it's getting harder and harder to increase CPU clockspeed. And with the current design of the P4 (consuming 110 watts at peak, or 85 watts for the upcoming E0-stepping P4s), a dual-core P4 will be an insane power consumer (and hence another reason why a lower-power, "Pentium M-like" design will be necessary in the future).

I still have no idea where the industry will go after dual core: whether 65nm and beyond will allow these chips to run significantly faster, or if we will be moving to four cores a few years after that. Time will tell...
 