Question: Intel had a 7 GHz CPU years ago


511

Diamond Member
Jul 12, 2024
3,655
3,447
106
My point was that POTUS can force Intel to revive the P4, but who will convince the POTUS that the P4 is a national treasure? That a 10+ GHz CPU would be a great American achievement?
Time to run for US President and be the first president to have a 10 GHz chip fabbed in your name.
Pentium 4 10 GHz Kavinski Edition 😛
 

DavidC1

Golden Member
Dec 29, 2023
1,749
2,828
96
Or worse, the Celeron D(isaster).
Prescott was what made Celerons reasonable; the Northwood Celeron was the real disaster. In the anemic cache range Celerons worked with, Prescott's uarch advancements made it 25% faster per clock than the Northwood Celeron.
Speaking of graphics, the i845G integrated graphics from this era has kind of a bad reputation, but I remember it as the first semi-usable integrated graphics. It could run Half-Life and Counter-Strike (barely), and I'm guessing a whole generation of gamers probably started on the i845. It seems like the rate of progress in integrated graphics accelerated rapidly from there.
I liked the 865G. The Extreme Graphics 2 in the 865G had twice the performance of the 845G, even though it kept the 1 pipe/2 TMU config. It was probably the fastest 1 pipe/2 TMU graphics ever made. They took that, added partial DX9 support, quadrupled the pipes, and the result was the GMA 900 in the 915G chipset.
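If you want rough numbers on those jumps, here's a back-of-the-envelope fillrate sketch; the clock speeds are my assumptions from memory, not verified specs:

```python
# Theoretical pixel fillrate = pixel pipes x core clock.
parts = {
    "845G (1 pipe)":     (1, 200),   # MHz, assumed from memory
    "865G (1 pipe)":     (1, 266),   # MHz, assumed from memory
    "GMA 900 (4 pipes)": (4, 333),   # MHz, assumed from memory
}
for name, (pipes, mhz) in parts.items():
    print(f"{name}: {pipes * mhz} Mpixels/s peak")
# 845G: 200, 865G: 266, GMA 900: 1332 -- the 2x jump from 845G to 865G
# can't be raw fillrate alone (~1.33x here), so efficiency/driver work had
# to carry it; GMA 900's leap is mostly the quadrupled pipes.
```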

The real ancestor of current Intel iGPUs is the GMA X3000 in the G965 chipset that came with the Core 2 generation. They skimped on the geometry unit so much that when I got a Celeron D to save money until I could afford a Core 2, the CPU upgrade pretty much doubled performance in games.
I'd love to see how this stacks up against the low power stuff from a few years later, the likes of Bay Trail and Kabini. I suspect the comparison would be pretty horrific.
Northwood performs about equal to the in-order Atom; you needed a 1.6GHz Atom to equal an 800MHz low-power Core 2. Bay Trail is much faster per clock than Northwood. Kabini was faster per clock, but Bay Trail outperformed it thanks to its clock advantage while consuming a fraction of the power. At 2.4GHz, Bay Trail is basically a 3.8GHz Prescott in a device weighing about as much as the heatsink needed to cool Prescott.
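That Prescott claim is easy to quantify from the figures above:

```python
# If a 2.4 GHz Bay Trail really matches a 3.8 GHz Prescott, the implied
# per-clock advantage is simply the clock ratio. Figures from the post above.
bay_trail_ghz = 2.4
prescott_ghz = 3.8
print(f"Implied per-clock advantage: {prescott_ghz / bay_trail_ghz:.2f}x")  # 1.58x
```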
 
Jul 27, 2020
26,939
18,525
146
At 2.4GHz, Bay Trail is basically a 3.8GHz Prescott in a device weighing about as much as the heatsink needed to cool Prescott.
For desktop use, I don't care about the weight or power use.

Forget Bay Trail.

Here's Cherry Trail begging P4 for mercy:
And the useless cherry actually enjoys a SIGNIFICANT process advantage. A 14nm shrink of the same CPU would've made the cherry cry even harder.

Despite a two-thread disadvantage, the P4 trounces the cherry in the Photo Filter MT subtest and keeps up in the text-processing subtest. This is with a thread and process disadvantage, folks!
 

DavidC1

Golden Member
Dec 29, 2023
1,749
2,828
96
Some fancy rumors I hoped would come true with Prescott. They were in a post on a website named geek.com or something; it was very active until it got shut down.

- Hyper-Threading 2
- 3-wide decode
- 16-24K uop trace cache (2-3x Northwood)
- Enhanced branch prediction

Back then there was nothing about extended pipeline stages. When I saw the Intel presentation about 31 stages, my heart sank. If they had kept the above specs without the extended pipeline, Prescott would have been significantly faster per clock. Of course, the 3-wide decode was just a rumor.
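A minimal effective-CPI sketch shows why the longer pipeline stung; every number here is an illustrative assumption, not a measured NetBurst figure:

```python
# Back-of-the-envelope effective CPI: a deeper pipeline pays more cycles
# per branch mispredict. All inputs are illustrative assumptions.

def effective_cpi(base_cpi, branch_freq, mispredict_rate, penalty_cycles):
    # stalls/inst = branches/inst * mispredicts/branch * cycles/mispredict
    return base_cpi + branch_freq * mispredict_rate * penalty_cycles

BRANCH_FREQ = 0.20  # assume ~1 in 5 instructions is a branch
MISPREDICT = 0.05   # assume a 5% mispredict rate

northwood = effective_cpi(1.0, BRANCH_FREQ, MISPREDICT, penalty_cycles=20)
prescott = effective_cpi(1.0, BRANCH_FREQ, MISPREDICT, penalty_cycles=31)

print(f"~20-stage CPI: {northwood:.2f}")                  # 1.20
print(f"~31-stage CPI: {prescott:.2f}")                   # 1.31
print(f"Per-clock loss: {prescott / northwood - 1:.1%}")  # 9.2%
```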

Also, despite Nosta's fantasy about FD-SOI, the 22nm Tri-Gate/FinFET transistor achieves most of FD-SOI's effects while improving gate control. That's because, if you look at the picture, the "Tri-Gate" fin has only one thin side contacting the substrate.

Planar vs Trigate: https://www.tel.co.jp/museum/magazine/material/150227_report04_01/img/img_report04_03.jpg
FD-SOI vs Trigate: https://www.hardware.fr/medias/photos_news/00/32/IMG0032086_1.jpg

Thus, it achieves most of the effects of SOI without needing the extremely thin layer, while offering other benefits like improved gate control. GAA completes the transition: instead of the gate wrapping the channel on three sides, it wraps it on all four.
Here's Cherry Trail begging P4 for mercy:
And the useless cherry actually enjoys a SIGNIFICANT process advantage. A 14nm shrink of the same CPU would've made the cherry cry even harder.

Despite a two-thread disadvantage, the P4 trounces the cherry in the Photo Filter MT subtest and keeps up in the text-processing subtest. This is with a thread and process disadvantage, folks!
That's a 1.92GHz Cherry Trail part versus a 3GHz Prescott, and despite the massive clock difference, the Prescott is only 12% faster. It would be closer still, because that's a combined Int/FP score and FP wasn't even fully pipelined in Atoms until Goldmont.
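Run the numbers and the per-clock gap is stark (taking the 12% figure from the comparison above at face value):

```python
# Per-clock comparison implied by the post's figures: a 1.92 GHz Cherry
# Trail losing to a 3 GHz Prescott by only 12% overall.
cherry_ghz, prescott_ghz = 1.92, 3.00
prescott_lead = 1.12  # Prescott score / Cherry Trail score (from the post)

cherry_per_clock = 1.0 / cherry_ghz
prescott_per_clock = prescott_lead / prescott_ghz
print(f"Cherry Trail per-clock advantage: "
      f"{cherry_per_clock / prescott_per_clock:.2f}x")  # ~1.40x
```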

The Atom x5 also had to fit in phones, hence it had to downclock under multi-threaded load. The P4 as a CPU is trash. I know why you like them; I often thought about the "what ifs" a lot as well.

I can put it this way too: I think you like being "green", right? Well, that Cherry Trail is 40x "greener".
 

LightningDust

Member
Sep 3, 2024
57
111
66
Sorry. You can't say that unless you worked on it and had to put out a fire caused by booting that CPU :p

Sure I can.

NetBurst was a set of bad ideas, implemented heroically, and a product of an era of Intel design that was heavily dependent on micro-benchmarking. (Ever wonder why NetBurst-family devices seem like they're designed for tiny, highly deterministic code footprints? There's your answer! Combine the dependency on trace-cache hits with having to share that tiny trace cache between two threads, and you get a picture of a truly baffling design philosophy.)
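You can put a toy number on that footprint sensitivity; the hit/miss bandwidths below are my rough characterization of the design, and the hit rates are made up:

```python
# Toy model of the NetBurst frontend: a trace-cache hit streams ~3 uops/cycle,
# but a miss falls back to the single x86 decoder at ~1 instruction/cycle
# while the trace is rebuilt. Bandwidths and hit rates are assumptions.

def frontend_bw(trace_hit_rate, hit_bw=3.0, miss_bw=1.0):
    # average sustainable uop delivery per cycle
    return trace_hit_rate * hit_bw + (1.0 - trace_hit_rate) * miss_bw

for hit_rate in (0.98, 0.90, 0.70):
    print(f"hit rate {hit_rate:.0%}: {frontend_bw(hit_rate):.2f} uops/cycle")
# 98%: 2.96, 90%: 2.80, 70%: 2.40 -- big branchy footprints (or a second
# thread evicting your traces) crater fetch bandwidth fast.
```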

The fact it worked as well as it did was damn close to miraculous, but the world was not in need of more of it.
 

zir_blazer

Golden Member
Jun 6, 2013
1,242
539
136
I don't know why you have this what-if obsession with Tejas. The only period where I remember the P4 being really good was Northwood C; the ones with the 800 MHz bus and Hyper-Threading were well above the Athlon XPs, albeit at a BIG price premium. The less said about Willamette at release the better: it couldn't consistently beat its P3 Coppermine predecessors, very similar to AMD's launch of Bulldozer vs Thuban, and even Prescott vs Northwood C. And while Northwood was OK, even if more expensive than the AXP, also remember to make a pit stop at SNDS (Sudden Northwood Death Syndrome).
For all practical purposes, the actual question here is why Intel preferred the Prescott oven over a simple Northwood shrink to 90nm. I mean, they actually DID that with the P3 Tualatin, and it made the rest of the P4 lineup look like some kind of joke. I guess things like AMD64 support would have required a major redesign anyway. Heck, they ALSO did it when they shrank 90nm Prescott into 65nm Cedar Mill. Given that they chose that route over Tejas and its shrink, we can speculate that Tejas couldn't beat what it was intended to replace.
 
Jul 27, 2020
26,939
18,525
146
Given that they chose that route over Tejas and its shrink, we can speculate that Tejas couldn't beat what it was intended to replace.
The x86 timeline we deserved was altered by the political meddling of IDC. Core should've gone down and down in power. We could've been at Apple Silicon efficiency levels by now, without the miserable Atom attempt and failure. Laptops and even phones would've had days of battery life despite running x86 code. Meanwhile, the P4 would've fulfilled the need for insane speeds.
 

Thunder 57

Diamond Member
Aug 19, 2007
3,879
6,533
136
The x86 timeline we deserved was altered by the political meddling of IDC. Core should've gone down and down in power. We could've been at Apple Silicon efficiency levels by now, without the miserable Atom attempt and failure. Laptops and even phones would've had days of battery life despite running x86 code. Meanwhile, the P4 would've fulfilled the need for insane speeds.

It was funny for a while, but you must know how wrong you are, right? Right?
 

LightningDust

Member
Sep 3, 2024
57
111
66
The damn thing hit 4.65 GHz overclocked at 65nm. Who knows how well it could have scaled with subsequent shrinks/tocks.

Dawg, POWER6 shipped in commercial volumes at 5.0GHz on 65nm, and it did it while sustaining decode, issue, and execution of four instructions per cycle, not the Pentium 4's measly single instruction.

"It overclocked to 4.6, ergo it wasn't a completely broken microarchitectural concept" is certainly a take. But hey, I'm remembering why I stopped posting here.
 
Jul 27, 2020
26,939
18,525
146
Dawg, POWER6 shipped in commercial volumes at 5.0GHz on 65nm, and it did it while sustaining decode, issue, and execution of four instructions per cycle, not the Pentium 4's measly single instruction.
So you don't believe in second chances? Intel can stupidly and criminally waste resources on extremely pathetic Atom CPUs (I have the N2840, I think, and it couldn't finish a Windows upgrade even after 24 hours; I had to forcibly shut it down and never turned it on again), but God forbid Intel spend "some" resources on improving a promising high-clocking architecture.

The atrocious N5030 is STILL selling in laptops TODAY: https://browser.geekbench.com/v6/cpu/compare/12685378?baseline=13294297

Again, a 65nm, supposedly worst-ever CPU beating, in many of the subtests, a 14nm sorry excuse for a CPU with a supposedly superior architecture and "clever" design. What do you think the result would be if the P4 were on 14nm too?

But hey, I'm remembering why I stopped posting here.

I'm really sorry, Your Majesty, that a little disagreement or opinion interferes with the peace you would otherwise have experienced carrying out your daily routine of dusting your old Itanium server (sorry, not sorry).
 

Thunder 57

Diamond Member
Aug 19, 2007
3,879
6,533
136
So you don't believe in second chances? Intel can stupidly and criminally waste resources on extremely pathetic Atom CPUs (I have the N2840, I think, and it couldn't finish a Windows upgrade even after 24 hours; I had to forcibly shut it down and never turned it on again), but God forbid Intel spend "some" resources on improving a promising high-clocking architecture.

The atrocious N5030 is STILL selling in laptops TODAY: https://browser.geekbench.com/v6/cpu/compare/12685378?baseline=13294297

Again, a 65nm, supposedly worst-ever CPU beating, in many of the subtests, a 14nm sorry excuse for a CPU with a supposedly superior architecture and "clever" design. What do you think the result would be if the P4 were on 14nm too?



I'm really sorry, Your Majesty, that a little disagreement or opinion interferes with the peace you would otherwise have experienced carrying out your daily routine of dusting your old Itanium server (sorry, not sorry).

If P4 showed any hope they wouldn't have cancelled it. Why no second chance for BD? It was a speed demon! Imagine how high it would clock on N3P /s.

Then you go on to compare a bottom-of-the-barrel Celeron Atom to Intel's high end at the time. A fairer comparison would be against a P4 Celeron with SDRAM. A 7.5W Celeron Atom is really slow? *Surprised Pikachu face*

It was not a "promising high-clocking architecture". I see you must've never gotten around to reading about NetBurst on C&C like I recommended. Seriously, anything that showed promise was pretty much rethought and brought back later; the obvious example is the crappy trace cache versus the uop caches of today.
 
Jul 27, 2020
26,939
18,525
146
If P4 showed any hope they wouldn't have cancelled it. Why no second chance for BD? It was a speed demon! Imagine how high it would clock on N3P /s.
They cancelled the P4 because of IDC, just like they stupidly cancelled Royal Core just when Intel could've used a really crazy, fat, data-crunching monster. When are you going to understand that cancellations at Intel often happen not on merit but due to internal politics? As someone wrote about the Gates era at Microsoft, you got your project greenlit or kept alive if you could shout the loudest during the meeting and intimidate everyone else into shutting up.

Bulldozer almost killed AMD. They don't have the luxury of R&D budgets, unlike Intel with the billions it used to rake in.

Once the P4 gets a modernized version fabbed, I will start championing Bulldozer's comeback. Also, I've never used an FX-9590, and no one on these forums wants to donate one to me.
 
Jul 27, 2020
26,939
18,525
146
Then you go on to compare a bottom-of-the-barrel Celeron Atom to Intel's high end at the time.
Because DavidC1 likes saying how efficient and performant Atoms are compared to P4.

I see you must've never gotten around to reading about NetBurst on C&C like I recommended.
The Pentium 965 can work with DDR3. Did they use that? I don't think so. They also don't overclock. They are unfortunately biased, like everyone else, into believing the P4 deserved to be canned instead of looking at future possibilities.
 

Thunder 57

Diamond Member
Aug 19, 2007
3,879
6,533
136
They cancelled the P4 because of IDC, just like they stupidly cancelled Royal Core just when Intel could've used a really crazy, fat, data-crunching monster. When are you going to understand that cancellations at Intel often happen not on merit but due to internal politics? As someone wrote about the Gates era at Microsoft, you got your project greenlit or kept alive if you could shout the loudest during the meeting and intimidate everyone else into shutting up.

Bulldozer almost killed AMD. They don't have the luxury of R&D budgets, unlike Intel with the billions it used to rake in.

Once the P4 gets a modernized version fabbed, I will start championing Bulldozer's comeback. Also, I've never used an FX-9590, and no one on these forums wants to donate one to me.

That's just like, your opinion, man! Also no way in hell does the P4 ever get modernized.

Because DavidC1 likes saying how efficient and performant Atoms are compared to P4.


The Pentium 965 can work with DDR3. Did they use that? I don't think so. They also don't overclock. They are unfortunately biased, like everyone else, into believing the P4 deserved to be canned instead of looking at future possibilities.

You don't overclock; you compare stock vs stock. Also, overclocking isn't going to fix a tiny L1 cache, the fact that the L1 is write-through, the fact that decode is one-wide, etc. At this point I think I'd rather bang my head against the wall than discuss this any more. You know what they say when you think you're right but everyone else tells you you're wrong: you're wrong.
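The arithmetic behind that point is simple enough to write down (the IPC figures here are purely illustrative):

```python
# Why "just overclock it" can't fix a per-clock deficit: performance is
# roughly IPC x frequency, so matching a rival takes clock in proportion
# to the IPC gap. IPC values below are purely illustrative.

def clock_to_match(rival_ipc, rival_ghz, own_ipc):
    return rival_ipc * rival_ghz / own_ipc

# Assume a rival sustains IPC 1.0 at 2.4 GHz while the P4 manages 0.6:
print(f"Clock needed just to tie: {clock_to_match(1.0, 2.4, 0.6):.1f} GHz")  # 4.0
```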