
Worst CPUs ever, now with poll!


What's the worst CPU ever? Please explain your choice.

  • Intel iAPX 432

  • Intel Itanium (Merced)

  • Intel 80286

  • IBM PowerPC 970

  • IBM/Motorola PowerPC 60x

  • AMD K5

  • AMD family 15h

  • AMD family 10h

  • Intel Raptor Lake



funboy6942

Lifer
Nov 13, 2001
15,368
418
126
I have a Bulldozer running at 4.3 GHz on air at stock volts, paired with a set of HD 6870s, and I get very respectable frame rates with my system. I don't think the CPU is a bad one in the least; I'm very happy with what I bought. I can play all my games maxed out with no lag and never under 30 fps.
 

cubby1223

Lifer
May 24, 2004
13,518
42
86
Please.

The 486SLC should rank higher on the list of worst CPUs ever than the Prescott.

How many people bought a pre-built system with a 486SLC thinking they were getting Intel 486DX performance at a lower cost, just without the Intel brand name?
 

Matt1970

Lifer
Mar 19, 2007
12,320
3
0
The worst chip ever was the Cyrix PR233, along with its slower siblings. I had a stack of them that died in customer PCs that were brought to me. They had an absolutely horrible failure rate.
 

kool kitty89

Junior Member
Jun 25, 2012
15
0
0
The worst thing about the P4, IMO, is that the first Willamettes failed to beat the PIII in many scenarios while costing way more, and everyone who adopted Socket 423 got screwed in multiple ways (1. Intel abandoned it; 2. Intel released Tualatin, which made the PIIIs even faster), all while being WAY more expensive.
There are a few more issues here too.

The P4 itself may not have been horrible, but what's worse is the wasted potential Intel forced on the P6 architecture by pushing Netburst so hard. The late-gen PIII and Celeron parts were bottlenecked by the old SDR bus; had they transitioned to S423/478, or had S370 been updated with a DDR (let alone QDR) FSB, things would have been different.
As it was, even the Tualatin parts ended up far less competitive with K7 and Netburst parts due to the aging FSB.
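To put rough numbers on that bus gap (a back-of-the-envelope Python sketch; the only extra assumption beyond the clocks discussed here is the 64-bit FSB width both platforms share):

    # Peak FSB bandwidth: 8 bytes per transfer (64-bit bus) * effective MT/s.
    def fsb_peak_gbs(bus_mhz, pumping):
        return 8 * bus_mhz * pumping / 1000.0    # GB/s

    print(fsb_peak_gbs(133, 1))   # ~1.06 GB/s: S370 PIII/Celeron, 133 MHz SDR
    print(fsb_peak_gbs(133, 2))   # ~2.13 GB/s: the hypothetical DDR'd S370 bus
    print(fsb_peak_gbs(133, 4))   # ~4.26 GB/s: P4-style QDR ("533 MHz" FSB)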

And that's just the bus/RAM, not getting into the potential for enhancement/redesign of the P6 microarchitecture, as eventually happened with the Pentium M and the subsequent Core. And when they DID extend it via the Pentium M, they made it exclusive to mobile platforms (aside from special desktop socket-M boards and Asus's adapter).


To a much, much lesser extent, some of the same comments could be made about AMD and the late-gen K6-based parts. There was probably a fair bit more potential for them as decent entry-level processors on the likes of Socket A (and a longer life in the notebook market), not to mention larger caches and possible tweaks to allow higher clock speeds.
The short pipeline may have been the limiting factor, but power distribution of its 5-layer interconnect likely also played a role, as was certainly the case for the Athlon stalling early on at 130 nm.
The relatively small and simple core logic of the K6 meant far more cache could have been packed onto the same die as contemporary K7 (or P6, let alone Netburst) parts: compare the K6-III's 118 mm2 250 nm die (with 64k/256k L1/L2) to 184 mm2 for the 250 nm Athlon (128k L1).

Bulldozer, IMO, has had a better launch than the P4, in spite of the fact that it doesn't outperform its previous gen with flying colors. At least it's a cheaper platform, has some potential, and works with the existing RAM/mobos (some of them, at least).

It's more the launch of these CPUs that I think was a fail; the P4 developed into a great workhorse with Northwood, and some of the Pentium dual cores were awesome despite the heat. I'm sure Bulldozer could develop into an OK CPU too; time will tell.
Plus, AMD isn't shoving Bulldozer down people's throats like Intel did with the P4, but is instead making the reasonable decision to continue mainstream support for their previous architectures, and doing so without artificial restrictions to make those parts less attractive (unlike PIII/Celeron vs P4, or PM vs P4 for that matter).

Similarly, the AM3+ platform supports AMD's mainstream K10-based parts and uses industry-standard DDR3 (unlike Intel's PIII-to-P4 transition).






Maybe it was popular and good value for the money. But clearly, the Pentium 4 was not the gamer's choice. In fact, AMD was much faster per clock and used less power at the same time.

That's only true for some games, though; there was a lot of back and forth between P4- and Athlon-biased games, whether from actual bias toward the P4's internal architectural features or (more commonly) from heavier memory bandwidth requirements. That's, of course, comparing contemporary mainstream P4 and Athlon parts with memory to match (i.e. not late-model 400 MT/s P4s or similar late-gen P4 systems bottlenecked with single-channel DDR against 333/400 FSB Athlons with DDR 333/400).

Serious Sam is definitely one of those Athlon-biased games too.

Of course, that's not to knock the Athlon's (K7 or K8) performance, let alone price/performance value.








Where is the good old VIA C3? Yes, I know it is a renamed Cyrix III.
Lots of people used them for low-power machines before Intel launched the Atom processor, and the C3 was horrible at everything.
The only arguments for buying a C3 were that it was passively cooled and that it was the only mini-ITX choice.
The Atom is a much better low-end/low-cost/low-power niche design though . . . the WinChip-based "Cyrix III"/C3 wasn't very good even in that role.

The die size and yields were little or no better than those of contemporary K6-2/III+ parts, or Celerons for that matter (especially compared to the 180 nm K6 parts), and performance was far worse (aside from very heavily I/O-bound operations, where the 133 MHz FSB C3 with its relatively decent front-end performed rather well; and even there, the Celeron was artificially limited by locking, and the K6 by the Socket 7 form factor).

It makes you wonder if the cancelled, enhanced Cyrix M2-based original Cyrix III ("Joshua") was really as bad as reports made it out to be (granted, I doubt it would have been much, if any, better than the K6-based parts, but that still leaves a huge margin over the actual C3 parts released).
 

pantsaregood

Senior member
Feb 13, 2011
993
37
91
There are a few more issues here too.

The P4 itself may not have been horrible, but what's worse is the wasted potential Intel forced on the P6 architecture by pushing Netburst so hard. The late-gen PIII and Celeron parts were bottlenecked by the old SDR bus; had they transitioned to S423/478, or had S370 been updated with a DDR (let alone QDR) FSB, things would have been different.
As it was, even the Tualatin parts ended up far less competitive with K7 and Netburst parts due to the aging FSB.

DDR and RDRAM didn't really benefit Tualatin much. i820, i820E, and i840 all supported RDRAM and didn't show any real performance gains. There were also a few DDR chipsets produced by VIA, I believe. Same effect there.

Banias was extremely closely related to Tualatin - more so, probably, than Prescott was to Willamette. SSE2 was implemented, and the QDR bus used for the Pentium 4 was adapted to the Pentium M. The original "Core" CPUs were a similar step forward from Dothan, followed by the evolution into Core 2.

I don't, however, know how closely (if at all) Core 2 relates to Nehalem, or how closely Nehalem relates to Sandy Bridge. Pentium Pro, Pentium II, Pentium III, Pentium M, Core, and Core 2 are clearly iterations of the same P6 architecture. I don't know if Nehalem and Sandy Bridge fit in that lineage at all.
 

Homeles

Platinum Member
Dec 9, 2011
2,580
0
0
I don't, however, know how closely (if at all) Core 2 relates to Nehalem, or how closely Nehalem relates to Sandy Bridge. Pentium Pro, Pentium II, Pentium III, Pentium M, Core, and Core 2 are clearly iterations of the same P6 architecture. I don't know if Nehalem and Sandy Bridge fit in that lineage at all.
They're definitely related. Personally, I can see the similarities between a Sandy Bridge die and P6-based processor dies all the way back to Banias. Man, has the complexity increased, though.
 

lamedude

Golden Member
Jan 14, 2011
1,230
68
91
The short pipeline may have been the limiting factor, but power distribution of its 5-layer interconnect likely also played a role, as was certainly the case for the Athlon stalling early on at 130 nm.
Looking at the 7-stage pipeline PPC 7450/G4e, I'm going to say moar layers wouldn't have been enough. Wikipedia says overclockers got 800 MHz at best on the K6-III+.
Wikipedia also says, "For a time, the K6-III was a low priority part for AMD--something to be made only when all orders for high-priced Athlons and cheap-to-produce K6-2s had been filled--and it became difficult to obtain in significant quantities." A Celeron-like K6 might have been made if that darn Athlon wasn't so successful.
 

inf64

Diamond Member
Mar 11, 2011
3,884
4,692
136
Anyway, look at it like this:

The FX-8150 performs near identically to a Core 2 Quad Q8400 - a CPU that's now several generations old. The FX-8150 is slower than a Q9750 in almost all benchmarks. A 3770K - a CPU that Bulldozer is expected to compare to - performs around twice as fast as a Q9750. It just isn't feasible for AMD to refine Bulldozer enough to gain >100% performance.

Netburst couldn't compete with K8 - absolutely no one in their right mind will deny that. Top of the line Netburst CPUs, however, were more than half as fast as the AMD CPUs they were expected to rival.

A Core i3-3240 should be a pretty decent match for an FX-8150 when it gets released. A mid-range CPU can keep up with AMD's "best." A Sempron 3800+ would have a hard time managing against a 3.46 EE, 3.73 EE, or 3.8 E.

I missed this "post" for which I'm truly sorry :\. Let's use some hard facts and not fiction,shall we.
Hard facts can be found here(Hardware.fr review of IB CPUs). In the chart there is no Q8400 nor Q9750(what model is this exactly,maybe you meant Q9650?). But there are QX9770 and Q9650 which are both faster than those 2 CPUs.So comparison is even better with these CPUs.

The FX-8150 performs near identically to a Core 2 Quad Q8400 - a CPU that's now several generations old.
The FX-8150 is slower than a Q9750 in almost all benchmarks.
3d Studio Max 2011 - Mental Ray (seconds)
FX8150-848
Q9650-1305
Q9770-1216

FX8150 is 31% faster than Q9770 and 35% faster than Q9650.

3d Studio Max 2011 - V-Ray 2.0 (seconds)
FX8150-288
Q9650-485
Q9770-455

FX8150 is 37% faster than Q9770 and 41% faster than Q9650.

Visual Studio 2010 SP1 (seconds)
FX8150-215
Q9650-321
Q9770-299

FX8150 is 28% faster than Q9770 and 33% faster than Q9650.

MinGW / GCC 4.5.2 (seconds)
FX8150-467
Q9650-687
Q9770-642

FX8150 is 28% faster than Q9770 and 32% faster than Q9650.

7-zip 9.2 (seconds)
FX8150-467
Q9650-792
Q9770-740

FX8150 is 37% faster than Q9770 and 41% faster than Q9650.

WinRAR 4.01 (seconds)
FX8150-441
Q9650-517
Q9770-490

FX8150 is 10% faster than Q9770 and 15% faster than Q9650.

StaxRip - x264 build 2085 (seconds)
FX8150-441
Q9650-623
Q9770-582

FX8150 is 25% faster than Q9770 and 30% faster than Q9650.

MainConcept Reference 2.2 H264 Pro (seconds)
FX8150-465
Q9650-782
Q9770-735

FX8150 is 37% faster than Q9770 and 41% faster than Q9650.

Adobe Lightroom 3.4 (seconds)
FX8150-331
Q9650-466
Q9770-435

FX8150 is 24% faster than Q9770 and 29% faster than Q9650.

Bibble 5.2.2 (seconds)
FX8150-265
Q9650-422
Q9770-394

FX8150 is 33% faster than Q9770 and 38% faster than Q9650.

Houdini 2.0 Pro (knodes/s)
FX8150-9542
Q9650-6785
Q9770-7290

FX8150 is 31% faster than Q9770 and 40% faster than Q9650.

Fritz Chess Benchmark 4.3 (knodes/s)
FX8150-11869
Q9650-8586
Q9770-9154

FX8150 is 30% faster than Q9770 and 38% faster than Q9650.

Crysis 2 v1.9 (fps)
FX8150-38.5
Q9650-39.8
Q9770-36.8

FX8150 is 4.6% faster than Q9770 and 3.3% slower than Q9650.

Arma II : Operation Arrowhead v1.59 (fps)

FX8150-27.7
Q9650-26.7
Q9770-25.2

FX8150 is 10% faster than Q9770 and 3.7% faster than Q9650.

Rise Of Flight (fps)

FX8150-20.6
Q9650-22.5
Q9770-21.1

FX8150 is 2.4% slower than Q9770 and 8.5% slower than Q9650.

F1 2011 (fps)

FX8150-59.7
Q9650-59.9
Q9770-55.5

FX8150 is 7.5% faster than Q9770 and roughly equal in perf. to Q9650.

Total War : Shogun 2 (fps)

FX8150-9.2
Q9650-11
Q9770-9.8

FX8150 is 6% slower than Q9770 and 16.4% slower than Q9650.

Starcraft II v1.3.6 (fps)

FX8150-6.9
Q9650-6.4
Q9770-6

FX8150 is 15% faster than Q9770 and 7.8% faster than Q9650.

Anno 1404 v1.3 (fps)

FX8150-32.3
Q9650-31.2
Q9770-28.8

FX8150 is 12% faster than Q9770 and 3.5% faster than Q9650.

As can be seen, the performance delta between the FX8150 and the fastest C2Qs in applications is huge: the FX shaves roughly 30% or more off the completion time almost across the board (WinRAR being the main exception), and the equivalent throughput speedups are larger still. In games it's faster in some titles and slower in others, depending on the game.
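For clarity, here's how those percentages fall out of the raw numbers (a quick Python sketch, not part of the hardware.fr review; for the timed tests the quoted figures are reductions in completion time, while for the rate-based tests they're straight throughput ratios):

    # Reproducing the deltas above from the raw scores.
    def time_delta(fx_s, ref_s):
        # timed tests (seconds, lower is better): % less time than the reference
        return (1 - fx_s / ref_s) * 100

    def rate_delta(fx_r, ref_r):
        # rate tests (knodes/s, fps; higher is better): % higher throughput
        return (fx_r / ref_r - 1) * 100

    print(time_delta(848, 1216))     # ~30: 3ds Max Mental Ray vs Q9770
    print(time_delta(848, 1305))     # ~35: 3ds Max Mental Ray vs Q9650
    print(rate_delta(11869, 9154))   # ~30: Fritz Chess vs Q9770
    print(rate_delta(38.5, 36.8))    # ~4.6: Crysis 2 vs Q9770
    # The same Mental Ray win expressed as a throughput speedup is larger:
    print((1216 / 848 - 1) * 100)    # ~43% more work per unit of time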

In average numbers for application workloads (the "Moyenne" page in the article):
Application average ("Moyenne applicative"):
FX8150-150.7
Q9770-108.2
Q9650-101.2

and for reference 3770K-183.7

FX8150 is massively faster than both Q9770 and Q9650 (~39% and ~49% above them, respectively). That is a massive performance delta. It's not even close to (and I quote):
pantsaregood said:
The FX-8150 performs near identically to a Core 2 Quad Q8400 - a CPU that's now several generations old. The FX-8150 is slower than a Q9750 in almost all benchmarks.
If we were to have the Q8400 in the review, it would be even uglier (it's ~13% slower than the Q9650: 2.66 GHz vs 3 GHz). So the Q8400 would score around 101.2/1.13 = ~90 points in the hardware.fr review.
So the facts tell us that the FX8150 is roughly 150.7/90 = 1.67, or ~67% faster than the poor Q8400 in desktop applications. This is, of course, nowhere near your nonsensical statement of "near identically to a Core 2 Quad Q8400", nor anywhere close to the equally untrue "slower than a Q9750 in almost all benchmarks" (since it's ~39% faster than even the 3.2 GHz QX9770).

Compared to the top IB 3770K in applications, the 3770K is 183.7/150.7 = 1.22, i.e. ~22% faster stock vs stock (putting the FX8150 ~18% behind). This is not bad at all, since we have a "slow" 32 nm FX "8-core" (which actually has 4 floating-point, 8 threaded subunits) versus a 4C/8T IB at 22 nm. A gap like that in desktop workloads is peanuts and can be closed by Vishera with little to no problem.
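Spelled out as a quick sketch (the application index is higher-is-better; the Q8400 figure is a clock-scaled estimate, not a measured score):

    # Index arithmetic from the application averages above.
    fx, q9770, q9650, ib_3770k = 150.7, 108.2, 101.2, 183.7

    print(fx / q9770 - 1)             # ~0.39: FX ~39% above the Q9770
    print(fx / q9650 - 1)             # ~0.49: FX ~49% above the Q9650
    q8400_est = q9650 / (3.0 / 2.66)  # ~89.7: Q9650 scaled down by clock
    print(fx / q8400_est - 1)         # ~0.68: FX ~68% above the estimated Q8400
    print(1 - fx / ib_3770k)          # ~0.18: FX ~18% below the stock 3770K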

As can be seen from the above, facts are facts and fanboy fiction is fanboy fiction. Let's stick to the facts, and this forum may one day be regarded as a serious tech forum, not a fanboy lounge.
 

zsdersw

Lifer
Oct 29, 2003
10,505
2
0
I'd vote for the cacheless Celerons. If they weren't so horrible, Intel wouldn't have put out the 300A just a few months later.

I don't care what you could do with overclocking on the cacheless Celerons... the "A" models were infinitely better and just as easy to overclock.
 

pantsaregood

Senior member
Feb 13, 2011
993
37
91
Actually, the P4 w/ Rambus & the 850E chipset was a very solid performer in its time. The only drawback was the higher cost.

Don't bother trying to reason; people remember Netburst as the most awful thing to ever exist.

Also, my Q8400 comparison was just wrong. Forgive me. I was trying to point out that the FX-8150 does a better job of rivaling hardware from 2008 than it does rivaling hardware from today.
 

Evadman

Administrator Emeritus / Elite Member
Feb 18, 2001
30,990
5
81
If I had to pick one of anything, it would be the P4. It was a step backwards from the PIII, and could not compete with AMD's offerings at the time. Lengthening the pipeline was the correct long term strategy, but the P4 parts were released too early when they could not compete.
 

serpretetsky

Senior member
Jan 7, 2012
642
26
101
Made an interesting wPrime comparison the other day. This is one test where a 90 nm Prescott [Intel Pentium 4 506, 2.66 GHz] totally blows next to a 130 nm Tualatin [Intel Pentium III 1400, tB1 stepping]. I had it slightly overclocked... but still, a very respectable result. Just consider that the Tualatin was released in 2001 and that Pentium 4 in 2005. And still, it looks like the original poster found no room for it in the poll? Oh dear.


I see an overclocked Pentium III losing to a stock Pentium 4. Did I misunderstand your post?
 

pantsaregood

Senior member
Feb 13, 2011
993
37
91
If I had to pick one of anything, it would be the P4. It was a step backwards from the PIII, and could not compete with AMD's offerings at the time. Lengthening the pipeline was the correct long term strategy, but the P4 parts were released too early when they could not compete.

Northwood A/B/C and Athlon XP were running dead even from 2002-2003. When Intel broke 3.2 GHz, Athlon XP could no longer keep up. Athlon 64 outperformed every Pentium 4, but AMD wasn't running the show until then.

Also, your information is incorrect. Prescott was released in early 2004, not 2005. Furthermore, you're comparing an overclocked Pentium III (running faster than the highest stock PIII) against the second-slowest Prescott "A". If you want to see a real show of progress for those three years, compare a 1.4 GHz Tualatin to a 3.8 GHz Prescott 672.
 

Magic Carpet

Diamond Member
Oct 2, 2011
3,477
234
106
I see an overclocked Pentium III losing to a stock Pentium 4. Did I misunderstand your post?

Also, your information is incorrect. Prescott was released in early 2004, not 2005. Furthermore, you're comparing an overclocked Pentium III (running faster than the highest stock PIII) against the second-slowest Prescott "A".

That Pentium III lost by ~7% in that particular test (take the overclock off and you get ~15%, which is still fine), you've got that right, but the P4 has a ~76% (90% if comparing the stock PIII) clock advantage, and not only that... the PIII is rated at about 32 W TDP versus the P4's 84 W. So in that particular test we have... a 2004/2005 chip that is 7% faster, but clocked 76% higher and drawing ~2.6x the power. It shows how inefficient the Netburst design was.

If you want to see a real show of progress for those three years, compare a 1.4 GHz Tualatin to a 3.8 GHz Prescott 672.
The 672 will still be an inefficient turd, consuming ~4x the power. I'll update my post when I get one.
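To make that concrete, a back-of-the-envelope sketch using only the figures above, normalized to the overclocked PIII (no absolute wPrime times are assumed):

    # Relative per-clock and per-watt efficiency from the numbers in this post.
    p4_perf, p3_perf = 1.07, 1.00    # the P4 506 finished ~7% faster
    p4_clk, p3_clk = 1.76, 1.00      # ~76% clock advantage for the P4
    p4_tdp, p3_tdp = 84.0, 32.0      # rated TDPs in watts

    print((p4_perf / p4_clk) / (p3_perf / p3_clk))   # ~0.61: ~39% less work per clock
    print((p4_perf / p4_tdp) / (p3_perf / p3_tdp))   # ~0.41: ~2.5x worse perf/watt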
 

serpretetsky

Senior member
Jan 7, 2012
642
26
101
That Pentium III lost by ~7% in that particular test (take the overclock off and you get ~15%, which is still fine), you've got that right, but the P4 has a ~76% clock advantage, and not only that... the PIII is rated at about 32 W TDP versus the P4's 84 W. So in that particular test we have... a 2004/2005 chip that is 7% faster, but clocked 76% higher and drawing ~2.6x the power. It shows how inefficient the Netburst design was.


The 672 will still be an inefficient turd, consuming ~4x the power. I'll update my post when I get one.
Yeah, I can see where you're coming from. I think I would try to spell it out for the readers, though.

Were that Pentium III and that Prescott priced the same when they were released?
pantsaregood said:
Also, your information is incorrect. Prescott was released in early 2004, not 2005. Furthermore, you're comparing an overclocked Pentium III (running faster than the highest stock PIII) against the second-slowest Prescott "A". If you want to see a real show of progress for those three years, compare a 1.4 GHz Tualatin to a 3.8 GHz Prescott 672.
Are you sure you weren't confusing part of my quote of Magic Carpet with Evadman? I don't think Evadman mentioned anything about the year Prescott was released, or gave any examples/comparisons in this thread.
 

kool kitty89

Junior Member
Jun 25, 2012
15
0
0
I'm seeing several anecdotal accounts of slow/poorly performing and unreliable systems that blame the CPU, especially in the Cyrix comments (and many similar examples in some older bad/worst CPU discussions).
This isn't really a valid or fair metric for a "bad" CPU, since many of those systems were cheaply/poorly/inefficiently built (or configured) pre-built systems that weren't limited primarily by the CPU but by a cluttered OS install, a crappy (or simply mismatched) motherboard, a slow hard drive, too little RAM, etc.

In the specific case of some late-gen Cyrix chips (like the MII 333 and 366), I know eMachines commonly used them along with POS PCChips motherboards and the typically bloated installs of those machines. Though, to be fair, even on a decent SS7 board those Cyrix parts were still a bit overrated at 333 and 366, even for the business/office apps they were marketed towards (Cyrix had declined heavily under National Semiconductor and become a bit desperate at that point). The 333 was really closer to a 300, and the 366 to a 333 or 350, for business apps (compared to PII/Celeron and K6 parts), and certainly much further behind in gaming performance (the 262/75 MHz variant of the 333 was more comparable to a Pentium MMX 233 for gaming, or a bit worse; kind of the polar opposite of the cacheless Celerons, which were poor for business but a good value for gaming, and great if overclocked).

The same kinds of things would apply to crappy L2-cacheless 486 machines using otherwise good CPUs . . . or L2-cacheless 486SLC machines posed as 486SX/DX-class systems (which in reality were little better than cacheless 386SX systems of the same clock speed, or worse if the other hardware was weaker . . . perhaps better in some 3D games of the time, where the on-chip cache and faster computational performance would make a difference).
AFAIK, the only L2-cacheless, 386SX bus/board based part with anything near 486-class performance was IBM's clock-doubled 66/33 MHz 486SLC2 with 16 kB of on-chip cache (to the Cyrix's 1 kB).

OTOH, the 386DX counterpart to the Cyrix SLC, the 486DLC had realistic 486 class performance. (obviously no FPU, but prior to Quake that was a total non-issue for 99% of users -ie aside from CAD and 3D workstation stuff) IBM's 486DLC pushed things a good bit further though (16k cache and clock doubled and trippled), there's the rare clock-doubled Cyrix 486 DRX-2, but that was still limited to a 1k cache and was so late it didn't much matter.




DDR and RDRAM didn't really benefit Tualatin much. i820, i820E, and i840 all supported RDRAM and didn't show any real performance gains. There were also a few DDR chipsets produced by VIA, I believe. Same effect there.
None of those changed the FSB architecture of the platform . . . it was still tied to the SDR 66/100/133 MHz (or a bit more if overclocked) FSB of the S370 platform.
RDRAM was actually worse than PC133 SDR in most cases due to its high latency on top of zero gain in peak throughput (same FSB bottleneck). DDR on S370 would be rather pointless for the same reason.

Had the Athlon/Duron been released on a Socket 7 or Socket 370 style SDR bus platform, they'd have been crippled too.


Looking at the 7-stage pipeline PPC 7450/G4e, I'm going to say moar layers wouldn't have been enough. Wikipedia says overclockers got 800 MHz at best on the K6-III+.
Yes, the existing 180 nm 2+ and III+ were only officially rated up to ~570 MHz (more commonly 450 for the III+, and 500 or 550 for the 2+) and tended to overclock consistently to ~600 MHz or a bit beyond, with more extreme cases nearing 800 MHz. Note it wouldn't be possible to go beyond 600 MHz on the default 100 MHz FSB, as 6x is the maximum multiplier, so higher speeds require overclocking the FSB, which can lead to other problems with the chipset and/or an overclocked PCI/AGP bus (800 MHz would need a 133 MHz FSB, which very few boards supported, and only by overclocking 100 MHz-rated chipsets; 120/124 MHz support was much more common but still not always stable on those chipsets, while 112/115 MHz was usually fine).
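To illustrate the multiplier/FSB ceiling (a quick sketch; the 6x multiplier cap is from the post above, and the FSB options carry the board-support caveats just described):

    # K6-2+/III+ core clock = FSB x multiplier, with the multiplier capped at 6x.
    for fsb in (100, 112, 124, 133):    # MHz; anything over 100 is an FSB overclock
        print(fsb, [int(fsb * m) for m in (5.0, 5.5, 6.0)])
    # 100 MHz tops out at 600; only a rare 133 MHz FSB board reaches ~800 (798).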

Wikipedia also says, "For a time, the K6-III was a low priority part for AMD--something to be made only when all orders for high-priced Athlons and cheap-to-produce K6-2s had been filled--and it became difficult to obtain in significant quantities." A Celeron-like K6 might have been made if that darn Athlon wasn't so successful.
Yes, hence why I said it was much less extreme than the P3/P4 situation of the same time.

That Wiki quote pertains to the original (250 nm) K6-III, which was competing with Athlon and K6-2 production for fab capacity (K6-III dies took 118 mm2, versus 81 mm2 for the K6-2 and 184 mm2 for the Athlon). It lacked the gaming prowess and marketing hype of the K7, so it was both lesser known and lower priority for AMD, but it was a very good choice for general users and as an SS7 upgrade part (significantly faster in most apps than a K6-2 at the same clock speed), and it also happened to score quite well in the server benchmarks of the time.

The K6-III was on the scene during the big CPU shortage of '99/2000, so the priority for K6-2 and K7 production was even heavier still. By the time the 180 nm K6-2+/III+ parts were out, the shortage was over, and the smaller dies also meant much better cost effectiveness. However, retail/dealer availability was limited for those (officially notebook) parts, though they were available wholesale in most cases (so dealers in the know usually had access to them).
Here's an interesting article from the perspective of general desktop office/business multitasking on the K6-III and III+.
http://redhill.net.au/c/c-e.html (apparently a very good performer for multitasking at 560/112 MHz on a FIC-503+ with 1 MB board level cache)

Had AMD pushed K6 development further, it really would only have catered to the notebook market (beyond what the SS7-based K6+ parts already did) and the entry-level office desktop niche (especially for less FPU-intensive applications) . . . probably much more popular with home builders and small dealers than with OEMs (as the K6-2+/III+ ended up historically). The Duron obviously catered better to the mainstream gaming/multimedia performance side of things.

The tiny die of the K6-based parts would have allowed a considerable increase in L2 cache beyond 256k while remaining as small as or smaller than the Duron, though even then it would have been limited to the notebook/entry-level business/office niche and perhaps low-power servers (more so if it could have been tweaked to higher clock speeds).
 

pantsaregood

Senior member
Feb 13, 2011
993
37
91
Oh yeah, I see what you're saying about the FSB architecture. That was the point I was trying to get at. I have always wondered how DDR would affect a dual 370 system, though.

Also, no one thinks Netburst was efficient. It did eventually manage to push performance, but it certainly wasn't efficient. Single core Netburst CPUs eventually began pushing 115W.

Netburst wasn't efficient, but it was competitive in performance until K8 hit. Netburst just went about increasing performance by a different metric than P6 did. It just happened to run into a wall. If Netburst had scaled to 10 GHz as intended, it would've been quite a performer. A 45nm Pentium D at 10 GHz with a 1600 MHz FSB? Yeah, sounded great until they hit that thermal wall.
 

SolMiester

Diamond Member
Dec 19, 2004
5,330
17
76
Gee, and no one wants to mention the cacheless Celerons? Shame on you, nerds :p

Though in the "oops" department, an honourable mention would go to the Pentium D and its hype as a dual core, when it was just two high-powered CPUs in the same package (sharing the FSB just like normal dual-CPU setups).

Or the Pentium bug that Intel downplayed as "pointless" to the masses.

This
 

pantsaregood

Senior member
Feb 13, 2011
993
37
91
Assuming performance scales linearly with clock speed, and that the added L2 cache, the FSB speed increase, and HT don't improve performance, a 672 should finish 32M wPrime in about 83.86 seconds. A 1.4 GHz Tualatin-S should do the same in around 138.65 seconds.

Pushing the theoretical calculation further, a 1.4 GHz Prescott should finish in about 227.62 seconds. Prescott is ~64% slower than Tualatin per clock.

SSE2, SSE3, increased bandwidth, increased cache, and HT should close that gap by a reasonable degree, however.
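For the curious, the scaling arithmetic behind those estimates as a quick Python sketch (the measured wPrime 32M times aren't quoted in this thread, so the baselines below are back-solved from the figures above rather than taken from a benchmark run):

    # Pure linear clock scaling: time * clock = a constant per architecture.
    prescott_const = 83.86 * 3.8     # ~318.7 GHz*s, from the projected 672 time
    tualatin_const = 138.65 * 1.4    # ~194.1 GHz*s, from the projected PIII time

    print(prescott_const / 2.66)     # ~119.8 s: implied Prescott 506 result
    print(prescott_const / 1.4)      # ~227.6 s: the hypothetical 1.4 GHz Prescott
    print(prescott_const / tualatin_const - 1)   # ~0.64: 64% more cycles per unit of work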
 

inf64

Diamond Member
Mar 11, 2011
3,884
4,692
136
@pantsaregood

I assume your lack of an answer to my post shows you silently agree that you made a nonsensical post about Bulldozer? Facts are facts, after all.
 

Centauri

Golden Member
Dec 10, 2002
1,631
56
91
Why on Earth is anything from the 60x family an option on this poll? Let alone all of them ignorantly grouped together as one...
 

moonbogg

Lifer
Jan 8, 2011
10,731
3,440
136
aMd family 15h/Bulldozer (the name doesn't deserve all caps anymore).

If ever there was a time when AMD needed to really step up their game in order to hang on to even the faintest memory of being competitive in the performance segment, it was with family 15h. Need I mention they failed? Being outperformed by their OWN previous CPU line in several benchmarks, when that previous line was itself already dismal, engraved the writing on the wall for AMD. This is reflected in the unanimous recommendations Intel CPUs enjoy whenever someone asks about a platform for gaming/enthusiast use.
Any further progress Intel makes will only harden the cement surrounding aMd's grave.
 