Worst CPUs ever, now with poll!


What's the worst CPU ever? Please explain your choice.

  • Intel iAPX 432

  • Intel Itanium (Merced)

  • Intel 80286

  • IBM PowerPC 970

  • IBM/Motorola PowerPC 60x

  • AMD K5

  • AMD family 15h

  • AMD family 10h



VirtualLarry

No Lifer
Aug 25, 2001
56,224
9,987
126
HP getting a first-gen Core i5 (I don't remember the exact model number) which was dual-core rather than quad.
That's... interesting.

I saw some refurb i5 machines labeled "dual-core" at Newegg. They might have been HP. I thought it was a typo. :p
 

epsilon84

Golden Member
Aug 29, 2010
1,142
927
136
Twice the IPC? That's it? P4 was hardly known for having high IPC. I would think Zen has twice the IPC of Core 2.

Pretty sure it doesn't have twice the IPC of Core 2...

http://www.cpu-world.com/benchmarks/Intel/Core_2_Duo_E8400_single.html

Skylake @ 4.2GHz (196%) still isn't quite twice as fast at ST as an E8400 @ 3.0GHz

Which by my calculations makes Skylake 40% faster than Core 2 clock for clock... which also means a first gen TR is only about 30% faster than Core 2 IPC wise...

So yeah, around twice the IPC of a P4 sounds about right.
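To make that clock-for-clock arithmetic explicit, here is the normalization spelled out as a minimal Python sketch (the 196% score and the two clock speeds are the figures quoted above; nothing else is assumed):

```python
# Per-clock (IPC-proxy) comparison from the single-thread scores quoted above.
# Scores are relative to the E8400 (= 100%); clocks are in GHz.
e8400_score, e8400_ghz = 100.0, 3.0      # Core 2 Duo E8400
skylake_score, skylake_ghz = 196.0, 4.2  # Skylake at 4.2GHz

per_ghz_c2 = e8400_score / e8400_ghz        # ~33.3 points per GHz
per_ghz_sky = skylake_score / skylake_ghz   # ~46.7 points per GHz

ratio = per_ghz_sky / per_ghz_c2
print(f"Skylake vs Core 2, clock for clock: {ratio:.2f}x")  # ~1.40x, i.e. ~40% faster
```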
 

ElFenix

Elite Member
Super Moderator
Mar 20, 2000
102,414
8,356
126

shortylickens

No Lifer
Jul 15, 2003
82,854
17,365
136
I had a Celeron back in the WinME days.
Never doing that again. Better to get a Pentium and underclock it if you wanna save battery life or reduce heat.
 

ao_ika_red

Golden Member
Aug 11, 2016
1,679
715
136
Because Kaveri was the BEST Bulldozer of the bunch, due to the balanced GPU and L2 cache available.

Carrizo failed due to the constrained L2 cache and the awful decision to go with dynamic wattage consumption, allowing the best chips to be put into constrained OEM designs (they're still screwing people over with the U tier at both Intel and AMD).

Bristol Ridge was supposed to fix it, and they only did so on the laptop version. They became decent again.
From a non-gaming perspective, XV (Excavator) is a blessing in disguise for FM2+ users. It has a much lower TDP and feels snappier than SR (Steamroller) for day-to-day tasks (browsing, office work, etc.). I once had an 860K, but ditched it for an 845 because I have a cramped room and couldn't stand the 860K's heat.
 

LightningZ71

Golden Member
Mar 10, 2017
1,627
1,898
136
Yes, but saying the P4, in general, was bad is stupid. Willamette at 1.5GHz was a decent upgrade over the P3-1000. The reason it was an awful chip is that you were forced to buy RDRAM, which was insanely expensive, and at the time we were just a few years removed from massive SDRAM prices. Northwood was great, but what people forget is that what made it great was the DDR chipset and then the 800MHz FSB with dual-channel DDR.

I remember being 14 when Intel launched their Granite Bay dual-channel DDR chipset. I worked at a pizza shop at night and worked my ass off to buy the Asus P4G8X, a DDR400 dual-channel kit, and an S478 Celeron 2GHz. It would be another 2 months before I could afford the P4 3.06HT. That Celeron managed to overclock to near 3GHz and was one of the best Intel bang-for-the-buck chips ever.

Willamette being an upgrade over the P3-1000, which ran at two-thirds the clock rate, was almost a no-brainer. That the Pentium III-S (Tualatin, 512KB L2) would run rings around the 1.4 and 1.5GHz Willamette chips in anything that wasn't strictly memory-bandwidth bound was the bigger joke. While they were not great overclockers, with the right approach you could get that FSB up to 150MHz with low-CL RAM and keep it in the game up to the 2.0GHz chips. The biggest travesty was specifically gimping compatibility with the 440BX chipset, which limited RAM amount and other things. Had Intel decided to make it competitive, it would have been reasonable to adapt it to the faster P4 FSB and better chipsets.

I ran a Pentium III-S well into the P4 era, only discarding it for a Prescott dual core at the end of its life.
 

BigDaveX

Senior member
Jun 12, 2014
440
216
116
The biggest travesty was specifically gimping compatibility with the 440BX chipset, which limited RAM amount and other things.
While Tualatin could have been treated a little better by Intel than it was, I can't really fault them for not making it compatible with a three-year-old chipset that didn't even officially support the FSB speeds Tualatin operated at. Plus, Intel did introduce the 830 chipset alongside Tualatin, which supported the same amount of memory as the 440BX; it's just that the market had moved on to the Pentium 4 by that point, so no one outside of laptop makers bothered using it.
 

Hi-Fi Man

Senior member
Oct 19, 2013
601
120
106
From a non-gaming perspective, XV (Excavator) is a blessing in disguise for FM2+ users. It has a much lower TDP and feels snappier than SR (Steamroller) for day-to-day tasks (browsing, office work, etc.). I once had an 860K, but ditched it for an 845 because I have a cramped room and couldn't stand the 860K's heat.

Only problem with the Athlon X4 845 is that it's a laptop CPU in a desktop package. A lot of people don't know this, but the PCIe interface on Carrizo was cut down more than you might think: not only does the chip have half the PCIe 3.0 lanes, but the extra four PCIe 2.0 lanes Kaveri had were also completely removed. A lot of FM2+ boards used those four PCIe 2.0 lanes for an extra PCIe slot on the board; when you install an Athlon X4 845 in those boards you lose the ability to use that slot.
 
  • Like
Reactions: dark zero
Apr 20, 2008
10,161
984
126
The worst thing about Prescott was the power consumption, and the fact that it wasn't really any faster clock for clock than Northwood; in fact it had slightly lower IPC, IIRC, because Intel lengthened the pipeline even further in a bid to reach higher clocks, which ultimately failed.

But I would say that HT was probably ahead of its time. Back then I had an Athlon XP and a P4 Northwood, and they were pretty close in benchmarks, but the actual smoothness of the system was better on the P4, and obviously multitasking, as you said.
I got on board with more threads when I 'upgraded' to a socket 939 A64 3500+ from a P4 2.8HT Northwood. I couldn't play music and Counter-Strike at the same time without hiccups fairly often, even though my FPS went up significantly. That extra thread did a lot of heavy lifting. It made me upgrade to an X2 4200+.

Use a P4 3GHz w/HT vs an A64 4000+ today with Chrome. There's no comparison.
 

aigomorla

CPU, Cases&Cooling Mod PC Gaming Mod Elite Member
Super Moderator
Sep 28, 2005
20,835
3,149
126
Prescott / Smithfield were the worst CPUs Intel has ever made.
They even admitted this, and went back a generation to the P3-derived "Dothan" arch, which later turned into the ever-glorious Core arch.

Anyone who owned a Prescott / Smithfield knew them by another name, "space heater", because they ran obnoxiously hot, required an insane amount of clock speed to get NetBurst somewhat working, and even made Hyper-Threading useless until it returned with the Nehalem arch, aka the first-generation i7s.
 

epsilon84

Golden Member
Aug 29, 2010
1,142
927
136
Prescott / Smithfield were the worst CPUs Intel has ever made.
They even admitted this, and went back a generation to the P3-derived "Dothan" arch, which later turned into the ever-glorious Core arch.

Anyone who owned a Prescott / Smithfield knew them by another name, "space heater", because they ran obnoxiously hot, required an insane amount of clock speed to get NetBurst somewhat working, and even made Hyper-Threading useless until it returned with the Nehalem arch, aka the first-generation i7s.

They indeed ran very hot, but how was HT useless on Prescott? It worked perfectly fine on my Northwood P4 and I'm pretty sure Intel didn't 'break' HT with the transition to Prescott.
 

Cogman

Lifer
Sep 19, 2000
10,277
125
106
They indeed ran very hot, but how was HT useless on Prescott? It worked perfectly fine on my Northwood P4 and I'm pretty sure Intel didn't 'break' HT with the transition to Prescott.
I think he is referring to the fact that HT would pretty substantially nerf single-threaded performance, anywhere from something like 5-20% if I'm remembering my numbers correctly. Part of that was issues with the OS (they would move threads between cores willy-nilly, which is really bad for HT) and part of that was just the fact that cache sizes at the time were minuscule, so running two threads at once had a decent chance of causing cache thrashing.

Nowadays, caches are huge, OSes are better, and more applications are working towards parallel computing. So it is a pretty easy win. That just wasn't totally true when it was first introduced.
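As a concrete footnote to the scheduler point above, here's a minimal sketch of what pinning a process to one logical core looks like with a modern OS API (Python on Linux; the choice of core 0 is purely illustrative, and the placement problem described above is exactly what pinning avoids):

```python
# Pin the current process to a single logical core so the scheduler can't
# migrate it between cores; this is the opposite of the "willy-nilly"
# thread movement described above. Linux-only API.
import os

print("Allowed cores before:", os.sched_getaffinity(0))  # 0 = current process
os.sched_setaffinity(0, {0})  # restrict execution to logical core 0
print("Allowed cores after: ", os.sched_getaffinity(0))
```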
 
  • Like
Reactions: VirtualLarry

epsilon84

Golden Member
Aug 29, 2010
1,142
927
136
I think he is referring to the fact that HT would pretty substantially nerf single-threaded performance, anywhere from something like 5-20% if I'm remembering my numbers correctly. Part of that was issues with the OS (they would move threads between cores willy-nilly, which is really bad for HT) and part of that was just the fact that cache sizes at the time were minuscule, so running two threads at once had a decent chance of causing cache thrashing.

Nowadays, caches are huge, OSes are better, and more applications are working towards parallel computing. So it is a pretty easy win. That just wasn't totally true when it was first introduced.

I didn't notice any significant ST degradation from HT with my P4 Northwood; as I said in my previous posts, it actually felt a lot smoother and more responsive when multitasking compared to the single-core/thread CPUs from that era.

It's one of the reasons why I completely skipped the A64 even though it had better ST performance. I held on to the P4 until Core 2 in 2006, which actually makes it one of my 'best' CPU purchases in terms of longevity.
 

BigDaveX

Senior member
Jun 12, 2014
440
216
116
They indeed ran very hot, but how was HT useless on Prescott? It worked perfectly fine on my Northwood P4 and I'm pretty sure Intel didn't 'break' HT with the transition to Prescott.
It wasn't useless; more the case that anything Prescott could do, Northwood (and Gallatin, if you could stomach the insane prices) could do as well or better while using less power.
 

ao_ika_red

Golden Member
Aug 11, 2016
1,679
715
136
A lot of FM2+ boards used those four PCIe 2.0 lanes for an extra PCIe slot on the board; when you install an Athlon X4 845 in those boards you lose the ability to use that slot.
Yeah, I lost my PCIE_3 slot and M.2 slot thanks to my choice. But I think I'll make do with a SATA SSD, and CFX was already a non-option for me from the day I chose to build in the FM2+ ecosystem.
 

BigDH01

Golden Member
Jul 8, 2005
1,630
82
91
The Northwoods were pretty damn good in their day. The 2.6-2.8C CPUs were champions upon release and a lot of people had them running at 3.5+ GHz. I had my 2.6C running at 3.6 and it was pretty freaking fast at the time.
 

Dribble

Platinum Member
Aug 9, 2005
2,076
611
136
Northwoods were great chips - 2.4C @ 3.2GHz here. A64s were better, but they cost a fortune. I would have loved an FX-53 or similar, as that would have given a huge boost to my UT2004 frame rates, but they were silly money; Northwoods were cheap and good enough. Mind you, CPUs mattered more in those days - we're talking about improving UT2004 32-player online frame rates from the mid 30s with my 3.2 Northwood to the high 40s with an FX-53, and that's mostly independent of graphics settings. UT2004 was a twitch shooter where fps made a huge difference. Nowadays my old i5 2500K with a reasonable o/c can still play pretty much everything at about 60fps; that's a 2012 chip in 2018 and it's still competitive.
 

slashy16

Member
Mar 24, 2017
151
59
71
Willamette being an upgrade over the P3-1000, which ran at two-thirds the clock rate, was almost a no-brainer. That the Pentium III-S (Tualatin, 512KB L2) would run rings around the 1.4 and 1.5GHz Willamette chips in anything that wasn't strictly memory-bandwidth bound was the bigger joke. While they were not great overclockers, with the right approach you could get that FSB up to 150MHz with low-CL RAM and keep it in the game up to the 2.0GHz chips. The biggest travesty was specifically gimping compatibility with the 440BX chipset, which limited RAM amount and other things. Had Intel decided to make it competitive, it would have been reasonable to adapt it to the faster P4 FSB and better chipsets.

I ran a Pentium III-S well into the P4 era, only discarding it for a Prescott dual core at the end of its life.

To this day I have no understanding of why Intel released the Tualatin. If I remember right, it was to test 130nm, and it was then used as the basis for Centrino, which was the most impactful launch Intel has ever had in mobile, but why they decided to compete with themselves made no sense. I guess maybe they wanted to offer a platform that didn't require you to buy Rambus and still get performance. If you owned a P3-1000 and wanted an upgrade, the Pentium 4 1.5GHz was a significant one. Tualatin wasn't even an option until well after the P4 launch. Thinking about it, I think the worst, or one of the worst, chips Intel ever released was the P4 mobile. I remember those things being horrible and our business scrapping nearly 50 units as soon as Centrino arrived.
 

epsilon84

Golden Member
Aug 29, 2010
1,142
927
136
Northwoods were great chips - 2.4C @ 3.2GHz here. A64s were better, but they cost a fortune. I would have loved an FX-53 or similar, as that would have given a huge boost to my UT2004 frame rates, but they were silly money; Northwoods were cheap and good enough. Mind you, CPUs mattered more in those days - we're talking about improving UT2004 32-player online frame rates from the mid 30s with my 3.2 Northwood to the high 40s with an FX-53, and that's mostly independent of graphics settings. UT2004 was a twitch shooter where fps made a huge difference. Nowadays my old i5 2500K with a reasonable o/c can still play pretty much everything at about 60fps; that's a 2012 chip in 2018 and it's still competitive.
Pretty much my sentiments about the P4 Northwood. I was playing semi-competitively back in the mid noughties and was getting fps dips into the 30s in BF2, which ultimately forced my hand into getting a C2D. That was a massive upgrade, it has to be said, though to be fair by that point the P4 was almost 4 years old.

I can't say I got the same longevity from my 2500K as you though; for me it fared about as well as the P4. I got about 4 decent years out of it, and it was actually another BF game (BF1) that made me upgrade the 2500K (to a 3770K, a drop-in upgrade), as again I was getting choppiness in big multiplayer maps.
 

whosjohnny

Junior Member
Jun 10, 2007
10
0
61
The Pentium (I) 60MHz was the worst CPU ever made. It ran on 5 volts without a fan; if you touched it, you'd get a first-degree burn, because it could fry an egg. It was the last CPU that did not officially need a fan, but, f me, I burned myself, being the curious George I was, on my friend's $6,000 brand-new computer.

Of the list, the 80286 was probably the WTF, short-lived, unnecessary chip, because the 80386 took over quickly with 32-bit processing power. The 80486 DX with the math co-processor was really something in its day... loved that beast, it ran AutoCAD beautifully.
 

BigDaveX

Senior member
Jun 12, 2014
440
216
116
To this day I have no understanding of why Intel released the Tualatin. If I remember right, it was to test 130nm, and it was then used as the basis for Centrino, which was the most impactful launch Intel has ever had in mobile, but why they decided to compete with themselves made no sense. I guess maybe they wanted to offer a platform that didn't require you to buy Rambus and still get performance. If you owned a P3-1000 and wanted an upgrade, the Pentium 4 1.5GHz was a significant one. Tualatin wasn't even an option until well after the P4 launch. Thinking about it, I think the worst, or one of the worst, chips Intel ever released was the P4 mobile. I remember those things being horrible and our business scrapping nearly 50 units as soon as Centrino arrived.

I think Tualatin was primarily intended for the Pentium III-M mobile line. Because of where battery technology was at the time, Pentium 4 used too much power to get more than a couple of hours or so of battery life in a notebook, so Intel created the Pentium III-M essentially as a stopgap until Pentium M/Centrino was ready.

IIRC, some rack server manufacturers also made use of Tualatin, since the Willamette-derived Xeons of the time had absolutely terrible power efficiency and were harder to cool.
 

LightningZ71

Golden Member
Mar 10, 2017
1,627
1,898
136
BigDaveX has it. The first-gen P4 design ran hot and didn't do so hot at typical server loads of the day. Intel needed something that provided improved performance at the lower end of the power envelope for mobile and high-density servers. The Pentium III-M and Pentium III-S were their solutions. Both were based on the same core, with up to 512KB of L2 cache running at die speed. They were specced out to 1.4GHz, I believe, with a 133MHz FSB. If you did your homework, you could get the FSB up to 150MHz and keep the core stable, which would get you most of the way to the memory throughput numbers for the quad-pumped 100MHz FSB of the early P4 chips. Coupled with the larger L2, they weren't really hamstrung by the slower memory bandwidth. On mobile, they gave much improved battery life and thermals compared to the P4-M, and only really struggled in certain FPU/SSE-intensive situations. It turned out to be a smart move, as it later evolved into Centrino/Core 1/Core 2.
 

Insert_Nickname

Diamond Member
May 6, 2012
4,971
1,691
136
I had a Celeron back in the WinME days.
Never doing that again. Better to get a Pentium and underclock it if you wanna save battery life or reduce heat.

There is your problem right there. The unmentionable OS was a dumpster fire in the stability department.

Only problem with the Athlon X4 845 is that it's a laptop CPU in a desktop package. A lot of people don't know this, but the PCIe interface on Carrizo was cut down more than you might think: not only does the chip have half the PCIe 3.0 lanes, but the extra four PCIe 2.0 lanes Kaveri had were also completely removed. A lot of FM2+ boards used those four PCIe 2.0 lanes for an extra PCIe slot on the board; when you install an Athlon X4 845 in those boards you lose the ability to use that slot.

Correct. Carrizo*1 only has 12 lanes to use: x8 for the graphics slot, x4 for the FCH, and 2 GPP lanes. But an x8 PCIe 3.0 connection isn't going to bottleneck anything; the performance penalty is only 1-2% compared to a full x16 link.

Some boards*2 also route lanes from the FCH to a slot, so for a productivity system you can run the graphics card off the FCH and put a PCIe SSD in the graphics slot.

*1 Bristol Ridge has the same limitation, which is why most AM4 boards, when paired with one, only provide an x2 PCIe connection for the primary M.2 slot.
*2 E.g. the ASRock A88M-G/3.1.
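To put numbers behind the "x8 isn't a bottleneck" point, here is the back-of-envelope link-bandwidth math as a small Python sketch (the per-lane figures are the standard spec numbers after encoding overhead; the lane widths are the ones discussed above):

```python
# Approximate one-direction PCIe bandwidth per lane, after encoding overhead:
# PCIe 2.0 ~500 MB/s per lane (8b/10b), PCIe 3.0 ~985 MB/s per lane (128b/130b).
MB_PER_LANE = {"2.0": 500.0, "3.0": 984.6}

def link_bw_gbs(gen: str, lanes: int) -> float:
    """Approximate link bandwidth in GB/s for a given generation and width."""
    return MB_PER_LANE[gen] * lanes / 1000.0

print(f"x16 PCIe 3.0: {link_bw_gbs('3.0', 16):.1f} GB/s")
print(f" x8 PCIe 3.0: {link_bw_gbs('3.0', 8):.1f} GB/s")  # still ~7.9 GB/s for the GPU
print(f" x4 PCIe 2.0: {link_bw_gbs('2.0', 4):.1f} GB/s")  # the extra lanes Carrizo dropped
```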
 

Hans Gruber

Platinum Member
Dec 23, 2006
2,083
1,059
136
I think any Celeron processor sucks, as does any AMD processor from after the Intel Core 2 Duo architecture was released up until Ryzen. I had an AMD 1800+, then a Barton 2500+, then an AMD 64, and then an AMD 64 X2 up to, I think, a 4200+ on the 939 socket. After that I had a Q6600 in either late 2007 or very early 2008 that lasted me until 2013. That Core 2 Quad had 8GB of RAM from 2009 on. My 3570K arrived in mid 2013 and now I am pondering the future of my next system.

I cannot believe how crappy computers were up until the Core 2 Duo era. It's like I am stuck in a time warp. Where have all the years gone?
 

Paratus

Lifer
Jun 4, 2004
16,600
13,272
146
Prescott / Smithfield were the worst CPUs Intel has ever made.
They even admitted this, and went back a generation to the P3-derived "Dothan" arch, which later turned into the ever-glorious Core arch.

Anyone who owned a Prescott / Smithfield knew them by another name, "space heater", because they ran obnoxiously hot, required an insane amount of clock speed to get NetBurst somewhat working, and even made Hyper-Threading useless until it returned with the Nehalem arch, aka the first-generation i7s.

Space heater was the Windows name I gave my P4 Prescott. :D

I kept the name for my i7 920. Now I need a new name for my Threadripper 1900X.