Was the P4 an 'engineering failure'?

MercenaryForHire

Jan 31, 2002
IMO

Willamette blew.
Northwood-A was a slight improvement.
Northwood-B was a good step up.
The Northwood-C was the pinnacle of Netburst, and I'd still consider a 3.0C a pretty powerful machine today.
Gallatin was overpriced.
Prescott was pitiful.
Cedar Mill was too-little-too-late.

- M4H
 

f4phantom2500

Platinum Member
Dec 3, 2006
The price/performance ratio was pretty sh!tty compared to what AMD was offering, and Intel only really surpassed AMD towards the end of the K7's lifespan, right around when they hit the 3 GHz mark. Even then, at least in the mainstream segment, an XP 2500+ was about half the price of a 2.4C, and most would overclock to XP 3200+ speeds or higher, which wasn't bad for $95. I bought my XP 2500+ for that price retail from the egg when they were new, so I was lucky enough to get one with an unlocked multiplier, and when I checked prices about a year later it was still $90. Granted, AMD didn't really have anything to compete with Intel's high end until the Athlon 64, but they didn't have anything that expensive either.

In terms of efficiency, it was a horrible failure compared to the K7, but the ultimate goal here was overall speed, and while Netburst could only surpass the K7 once the K7 hit its limit, it showed that it did have a higher ceiling, so in that sense it was a success. Also note that some chips, like the 1.6 (A, was it?) and the P4-Cs, could overclock like crazy.
 

Pacemaker

Golden Member
Jul 13, 2001
They tried something that had not been tried before and managed to make a working chip. They were short-sighted not to see the problems with ramping up the clock speed. Had they actually been able to get the chips to do what they thought they could (10 GHz was the estimate I was hearing for the top end of Netburst), it would have been a different story. In the end they overestimated how fast you can feasibly increase clock speed with the techniques of the day. Would I call that an engineering failure? No. It was more of a risk that didn't pan out (but was still successful for much of its product life).
 

Brunnis

Senior member
Nov 15, 2004
I consider the P4 a failure when viewing it from an engineering perspective (I'm a soon-to-be electronics engineer, so that's the view I tend to take!). Even if we set aside the fact that the chip actually worked, sold well, and made Intel a lot of money, we're still left with a design that fell horribly short of its goals. Not only did the frequency take a lot longer to ramp than initially anticipated, but the chip also couldn't come close to the high frequency potential the architecture was built around. In a way, the Netburst architecture was probably a good learning experience, and many of the lessons learned can likely be applied to new designs. I'd still not consider the actual architecture an engineering success, though.

This is a layman's opinion and I'm sure there are as many of those as there are variants of Netburst CPUs. :p
 

zsdersw

Lifer
Oct 29, 2003
Brunnis... in your scenario, Netburst isn't the engineering failure; the limitations imposed on it by the underlying process technology are. There is a difference.

 

Chesebert

Golden Member
Oct 16, 2001
As an EE and someone who has worked on certain chips, I see the P4 as an idealistic attempt to increase performance through deeper pipelining, and a way to prove certain theories in the real world. That ideal is clearly visible in the P4's architecture and would certainly make the Alpha team proud. The pitfall is that the chip does not exist in simulation but in the actual world, and the real world would not allow this idea, this computing theory, to manifest. Many of the decisions made with the P4 were made without appreciating certain real-world constraints, e.g. clock distribution, die size, etc. The other pitfall with the P4 as an architecture is that the engineers failed to realize that software engineers don't really care about the size or efficiency of their code.

In a perfect world, the P4 would run with 100 stages at 10 GHz; alas, that's unrealistic, at least right now. So we go back to the other doctrine of system design: multi-issue, wide pipelines. Not an engineering failure, just a different paradigm, and we found it doesn't work with TODAY's fabrication/validation/simulation technology. I still think that sometime down the line the P4, or at least its paradigm, will get revived.
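To make that trade-off concrete, here is a minimal back-of-the-envelope model (all numbers invented for illustration, not Intel's data): a deeper pipeline buys clock speed but pays a larger branch-mispredict penalty, so the frequency gain has to outrun the CPI loss.

```python
# Toy model: relative performance = frequency / effective CPI.
# A mispredicted branch flushes the pipeline, costing roughly one
# cycle per stage, so the penalty grows with pipeline depth.
# Every parameter below is an assumption chosen for illustration.

def perf(freq_ghz, stages, base_cpi=1.0, branch_frac=0.2, mispredict=0.08):
    cpi = base_cpi + branch_frac * mispredict * stages  # flush cost ~ depth
    return freq_ghz / cpi

shallow = perf(3.0, 20)  # hypothetical 20-stage core at 3.0 GHz
deep = perf(3.8, 31)     # hypothetical 31-stage core at 3.8 GHz
print(f"20 stages @ 3.0 GHz -> {shallow:.2f}")          # ~2.27
print(f"31 stages @ 3.8 GHz -> {deep:.2f}")             # ~2.54
print(f"speedup from +27% clock: {deep / shallow:.2f}x")  # ~1.12x
```

Under these assumptions, a 27% clock advantage nets only about 12% more throughput, and if the clock advantage never materializes, the deeper pipeline simply loses.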

m2c
 

justly

Banned
Jul 25, 2003
Yes, I do consider the P4 to be an engineering failure: a failure of engineering design, not implementation.

The failure of the P4 can be seen from many different perspectives. The first indication came at the P4's launch, when it was obvious it wasn't a balanced architecture. In some situations it benchmarked very well, and in others its performance was so abysmal that the architecture it was meant to replace could beat it by more than 15% even at a 33% clock-speed disadvantage. Even in the benchmarks where the P4 excelled, it rarely if ever enjoyed a performance delta higher than the clock delta between it and its predecessor.
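As a quick sanity check on what that worst case implies about per-clock efficiency (clocks assumed for illustration, e.g. a 1.0 GHz predecessor against a 1.5 GHz P4):

```python
# performance = IPC * clock, so the IPC ratio falls out directly
# from the deltas quoted above. Clock values are hypothetical.
p4_clock = 1.5     # GHz, hypothetical P4
old_clock = 1.0    # GHz, ~33% lower clock
perf_ratio = 1.15  # predecessor beats the P4 by 15%

ipc_ratio = perf_ratio / (old_clock / p4_clock)
print(f"predecessor IPC ~{ipc_ratio:.2f}x the P4's")  # ~1.73x
```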

Intel's decision to maintain the P3 architecture (if only in the mobile market) again signifies that the P4 was a failure, this time on size and power metrics. The transition to an even more heavily modified P3 architecture (this time under the Pentium M name) proved further that the P4 had little to offer other than MHz marketing propaganda. To further insult the P4, some motherboard manufacturers and performance-oriented end users chose to use Intel's mobile processor instead of the P4 in desktop systems.

The final proof that the P4 was a failure is the simple fact that it was discontinued, and by all indications it is no longer in active research and development (something that never happened to its predecessor).

During the time that the P4 was Intel's main consumer processor line, some good things did evolve. We saw SIMD instructions expanded, bus speeds increased, bandwidth improvements, SATA and PCI-E appear, and let's not forget Hyper-Threading. It's just too bad that most of those improvements would still have happened even if Intel had never given us the P4. If the P4 architecture did anything for the consumer, it was to hand Intel's engineers a new set of problems in making such a failed design actually compete. That accelerated research and development.

The real perplexing question is: where would Intel be now if it had avoided the P4 altogether? I suppose if we could answer that question, we really would know whether the P4 was a failure or not.

Note: I only used Intel's own products to describe why I believe the P4 design is a failure (I believe this is the best way to evaluate whether it is a failure or not).
 

secretanchitman

Diamond Member
Apr 11, 2001
Originally posted by: MercenaryForHire
IMO

Willamette blew.
Northwood-A was a slight improvement.
Northwood-B was a good step up.
The Northwood-C was the pinnacle of Netburst, and I'd still consider a 3.0C a pretty powerful machine today.
Gallatin was overpriced.
Prescott was pitiful.
Cedar Mill was too-little-too-late.

- M4H

QFT.

northwood C FTW!!!!!!!!!!!
 

Cogman

Lifer
Sep 19, 2000
For me, this is a yes-and-no answer. I think it was fairly clear that Intel was not going for a processor that ran efficiently; however, I think that was more a marketing decision than an engineering one (which has rather backfired on them, given the current situation). I can remember the days of having to explain to all my friends (some still pop up now) that in a benchmark between a 3.0 GHz processor and a 2.0 GHz processor, the 3.0 GHz part would not necessarily win every time (a very hard principle to get across: big numbers don't always mean better things). The marketing strategy at the time, pushed on a good number of tech shows selling Dells, was to tell people, "You need to upgrade now because your computer is a 1 GHz and we have a 3 GHz that will be 3x faster!!! Plus it has Hyper-Threading, which will make it work like 2 processors, making it 6 TIMES FASTER!!"

While the tech geeks were pulling out their hair and switching servers over to AMD, the average consumer loved it and had little reason to switch to an AMD system (who wants a measly 1.8 GHz when you can have 3 GHz?). Of course, it shot Intel in the foot, because they have since changed their naming convention and ignored the GHz figure altogether; now they have to convince everyone that "yeah, your 3 GHz processor actually sucks compared to our new 1.8 GHz Core 2 Duo CPU". But when you are almost the only name the consumer really knows, you can get away with crap like that.

As far as the design goes, history is quite clear that it was a poor one. As far as accomplishing its goals, it succeeded to some extent.
 

Viditor

Diamond Member
Oct 25, 1999
Originally posted by: myocardia
Got a link for that? I find it extremely hard to believe that AMD's marketshare has risen only 4.5% since Q1 2001. I mean, at that time, there were no companies (at least companies of any size, selling any/many systems) putting them into pre-built systems, and they definitely didn't have a large portion of the server market. So, I'd love to see a link proving that I'm the one doing the revisionist history. ;)

And for anyone interested in what someone who gets paid to discuss processors and computer components has to say about the P4, this is a pretty interesting read.

There are two metrics of note here, and it's important to consider both.

Marketshare - the share of total processors sold. ColdPower is quite correct: marketshare was up and down until the K8 was released. Since then it has shown a mostly steady upward movement.

Revenue share - the share of the total money spent on x86. This has been growing quite steadily, and more rapidly, for AMD over the last 5 years or so. The reason for the difference is that while AMD's marketshare has been moving slowly, their ASP (average sale price) has also been growing. Remember that prior to Opteron, AMD never had more than 0.05% marketshare in servers; they are now closer to 30%. Servers are the most profitable segment of x86.
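To see how the two metrics can diverge, here is a toy calculation (all figures invented; only the arithmetic matters):

```python
# Revenue share = units * ASP, so a rising ASP can lift revenue
# share much faster than unit marketshare moves.

def shares(units_a, asp_a, units_b, asp_b):
    unit_share = units_a / (units_a + units_b)
    rev_share = units_a * asp_a / (units_a * asp_a + units_b * asp_b)
    return unit_share, rev_share

# Hypothetical: low-ASP desktop-only era vs. after entering servers
print(shares(20, 60, 80, 150))   # ~20% of units, ~9% of revenue
print(shares(22, 130, 78, 150))  # ~22% of units, ~20% of revenue
```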
 

judasmachine

Diamond Member
Sep 15, 2002
Originally posted by: yh125td
It was Intel's push for clock speeds that makes people who don't know much about computers hesitant to buy a new 2.4 GHz E6600 to replace their old 3.06 GHz P4.

Yup, I got some head-scratching from my less geeky friends when I told them I'd dumped my 3.0C for a 1.86 C2D.
 

lopri

Elite Member
Jul 27, 2002
Originally posted by: RichUK
This is about the 50th reincarnation of this type of thread.

Do we really need to keep going over the same stuff? Is there seriously anything more we can debate over?
This plus;

- Is AMD doomed?
- Core 2 Duo vs X2?
- NV30?
- George Bush?

I don't even click on those threads. This thread I clicked 'accidentally'. ;)
 

Brunnis

Senior member
Nov 15, 2004
Originally posted by: zsdersw
Brunnis... in your scenario, Netburst isn't the engineering failure; the limitations imposed on it by the underlying process technology are. There is a difference.
Well, the architecture itself wasn't really the failure, but rather the theories and principles it relied on (and needed) to make it shine. The engineers failed to anticipate the problems that high frequencies would bring, and then designed a complex architecture around what was basically a doomed concept. Since high frequency was the thing these chips wanted and needed, I'd say that verifying that those frequencies would be achievable was very much part of the product itself.
 

RichUK

Lifer
Feb 14, 2005
Originally posted by: lopri
Originally posted by: RichUK
This is about the 50th reincarnation of this type of thread.

Do we really need to keep going over the same stuff? Is there seriously anything more we can debate over?
This plus;

- Is AMD doomed?
- Core 2 Duo vs X2?
- NV30?
- George Bush?

I don't even click on those threads. This thread I clicked 'accidentally'. ;)


LOL :D
 

velis

Senior member
Jul 28, 2005
Originally posted by: Brunnis
Well, the architecture itself wasn't really the failure, but rather the theories and principles it relied on (and needed) to make it shine. The engineers failed to anticipate the problems that high frequencies would bring and then designed a complex architecture around what was basically a doomed concept. Since high frequency was the thing wanted and needed by these chips, I'd say that verifying that those high frequencies would be possible was something very much tied to the product itself.

This is probably exactly what happened. Due to the GHz race going on at the time, the decision was made to create an architecture that would be able to scale. It actually worked pretty well up to 2.8 GHz and was a huge success, both performance- and sales-wise.

The problem was that they just couldn't make the transistors switch as fast as they wanted them to, which became painfully obvious even before Prescott. The first indication of this problem was the initial release itself, which fell back from 2.0 GHz (if I remember correctly) to 1.5 GHz. Prescott was intended to make the per-stage paths even shorter and faster, allowing it to scale past x GHz (x probably being >5), but the manufacturing process itself couldn't provide transistors that could switch that fast. That's the P4's only failure. Otherwise it was a fine design which actually did scale pretty well, just not far enough.
 

Yellowbeard

Golden Member
Sep 9, 2003
Engineering failure? No; it worked, and some of them worked very well. The engineering phase has nothing to do with marketing and sales. I think Intel rested on their laurels too long and did not pursue other technologies until it was too late. They got complacent and AMD capitalized.

At Northwood's peak, there was no desktop CPU that could come close in video rendering. If as many people had focused on the strengths of the P4 at that time, we would be calling the AMD of the era an engineering failure. If gaming had not been so popular a few years ago, AMD would have been the one with the engineering failure. So much of it is relative.

Oh, and Myocardia, you spelled prescHOTT wrong ;)
 

xtknight

Elite Member
Oct 15, 2004
Market-wise, no. Technically/architecturally, yes, it was disappointing. HT was cool, though.
 

myocardia

Diamond Member
Jun 21, 2003
Originally posted by: coldpower27
Since I have actually sourced my information now, I want to see a direct link to whoever actually said the P4 was designed to scale to 10 GHz; without a link, true or not, it cannot be used in this thread.
Google gives me 103 links to just one higher-up at Intel claiming 10 GHz for the P4 architecture. I'm sure I could find at least that many more if I knew the name of another person at Intel who talks to the press.
Originally posted by: StopSign
Wasn't Willamette the original P4?
Yeah, the P4 debuted in November 2000, with the 1.4 & 1.5 GHz Willamette.
Originally posted by: Viditor
There are two metrics of note here, and it's important to consider both.

Marketshare - the share of total processors sold. ColdPower is quite correct: marketshare was up and down until the K8 was released. Since then it has shown a mostly steady upward movement.

Revenue share - the share of the total money spent on x86. This has been growing quite steadily, and more rapidly, for AMD over the last 5 years or so. The reason for the difference is that while AMD's marketshare has been moving slowly, their ASP (average sale price) has also been growing. Remember that prior to Opteron, AMD never had more than 0.05% marketshare in servers; they are now closer to 30%. Servers are the most profitable segment of x86.
Ahh, you're totally right, Viditor; my mistake. When I said "marketshare", I was in fact referring to "revenue share". I knew you'd be along sooner or later to clarify. I just didn't think it would take you so long. ;)
Originally posted by: Yellowbeard
Oh, and Myocardia, you spelled prescHOTT wrong ;)
Those things ran so hot, they didn't deserve a "C" in their name unless it had a 100° beside it. :laugh:
 

elkinm

Platinum Member
Jun 9, 2001
Hyper-Threading kept the P4 alive, and it is in general the best little feature I have ever seen. If a program hangs with full CPU usage on a single core, it's dead: loading Task Manager takes minutes, if not longer. With Hyper-Threading you always have the extra cycles to load Task Manager and kill the hung app.

Without Hyper-Threading, the P4 would have died long ago. Having used lots of equally clocked P4s and Pentium Ds, I find the Pentium D has little to no benefit over a Hyper-Threaded P4, and if I disable Hyper-Threading or the second core, the single core is maxed out so often that the PC is nearly useless.
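That responsiveness effect is easy to demonstrate on any machine with at least two logical CPUs; a minimal sketch (hypothetical demo, timings will vary):

```python
# One process pegs a logical CPU in a busy loop, standing in for a
# hung app; we then time a small "interactive" task. With a second
# logical CPU (e.g. Hyper-Threading) the OS can run it right away
# instead of time-slicing a single core against the hog.
import multiprocessing as mp
import time

def busy_loop():
    while True:
        pass  # 100% CPU, like a hung application

if __name__ == "__main__":
    hog = mp.Process(target=busy_loop, daemon=True)
    hog.start()
    t0 = time.perf_counter()
    sum(range(10_000_000))  # stand-in for loading Task Manager
    elapsed = time.perf_counter() - t0
    print(f"{elapsed:.2f}s with {mp.cpu_count()} logical CPUs")
    hog.terminate()
```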

The P4 was decent for its time, but it was never a great success. From the start it was whipped by the Athlons, and only Hyper-Threading gave it a real edge, as mentioned above. And then came the K8.

The Pentium M/Core Duo is what the P4 should have been. And the C2D is like the P5.
 

nismotigerwvu

Golden Member
May 13, 2004
I know I don't say much here, but here are my feelings... When I think failure, I think of the 3DO or something that's so far from being competitive that it dies in the market and more than likely takes the company with it. Did the P4 suck? Yeah, for the most part one could argue that it didn't come near what we've come to expect from Intel, but it still maintained a functional share of the market. Had it not been for Intel's massive lead in manufacturing technology, I doubt that would have happened. In short: disappointment, not failure.
 

justly

Banned
Jul 25, 2003
Originally posted by: nismotigerwvu
....When I think failure, I think of the 3DO or something that's so far from being competitive that it dies in the market and more than likely takes the company with it.

Look at it in a hypothetical scenario where AMD does not exist and Netburst was designed by a company other than Intel.

Suppose VIA had introduced a Cyrix 4 based on Netburst, while Intel relied on their P3/Tualatin/Banias/Dothan/Yonah cores to compete. Even if VIA had no problem manufacturing, marketing, getting design wins, and securing motherboard support for their new processor, would Intel update their compiler to support such a radically different design from another company? Would Intel ever adopt another company's SIMD instructions?

Without the kind of support only Intel could provide, Netburst would have died long ago (and probably taken the company with it).


 

Acanthus

Lifer
Aug 28, 2001
The P4B and P4C were both great.

The P4A, P4D, and pretty much all of the dual cores were low performers.

However, had Intel priced them more aggressively, they wouldn't have lost nearly as much marketshare to AMD.
 

Fox5

Diamond Member
Jan 31, 2005
Originally posted by: Acanthus
The P4B and P4C were both great.

The P4A, P4D, and pretty much all of the dual cores were low performers.

However, had Intel priced them more aggressively, they wouldn't have lost nearly as much marketshare to AMD.

They were pretty aggressively priced; their die sizes were huge compared to AMD's chips.
 

alkemyst

No Lifer
Feb 13, 2001
P4 is a broad field.

I do know the PIII-M 1.2 originally smoked the P4 laptop designs up to 1.8 GHz.

The PIII-S 1.4 is still in heavy use in web and file servers globally.

AMD came onto the field strong and, up until recently, dominated Intel in all but enterprise applications. Intel owns the business world.

The latest batch from Intel has given AMD a run for its money, though, and a lot of Intel's influence has pushed AMD out of profitable markets (search for the well-known article reporting that Intel paid Dell up to a billion dollars not to sell any AMD systems; since that article broke, Dell has been offering a handful of AMD systems).
