Worst CPUs ever, now with poll!


What's the worst CPU ever? Please explain your choice.

  • Intel iAPX 432

  • Intel Itanium (Merced)

  • Intel 80286

  • IBM PowerPC 970

  • IBM/Motorola PowerPC 60x

  • AMD K5

  • AMD family 15h

  • AMD family 10h

  • Intel Raptor Lake



kool kitty89

Junior Member
Jun 25, 2012
Another interesting mention for "bad" (if not "worst") x86 CPUs would be Cyrix's MediaGX design, though perhaps mostly for less obvious reasons than the actual performance limitations. (i.e. the limited bus, the cache -no L2 support-, internal performance, etc. -a general low-end embedded niche . . . too bad there wasn't a netbook market back then ;))

But the bigger picture is what it ended up leading Cyrix to:
http://redhill.net.au/c/c-8.html
A weird and innovative design, the MediaGX arrived in 1997. It was an all-in-one device combining CPU, memory controller, graphics card and PCI controller on a single chip. In its success, it destroyed an entire company.

Because it seemed to have so much potential in the low-cost market, as a set-top box component in particular, it dragged Cyrix's attention away from the main market — orthodox high-performance desktop parts — and attracted the interest of other companies, notably National Semiconductor, which bought Cyrix largely on the strength of the MediaGX design, and over the next year or so proceeded to mismanage the company into oblivion.

[Image caption: A very unusual way of mounting a CPU. Yes, it's just as thin and flat as it looks in the picture. It's a MediaGX-166 from a Compaq Presario P2200.]

The MediaGX was developed by Cyrix's second design team, the same team that had produced the 5x86, as a low-cost component for mass-market home systems. With a MediaGX-based system, the video card and sound card functions were performed on the CPU itself. This resulted in a cheap and reasonably well-performing system, but it was non-standard and rather restrictive.

The single-chip motherboard was unique to the MediaGX and couldn't be chip-upgraded to a Pentium or 6x86, and the built-in sound and graphics prevented these from being upgraded too. In short, the MediaGX was mainly of interest to brand-name manufacturers selling cheap and underpowered systems to first-time buyers through the supermarket outlets. Amstrad and Commodore were both defunct by this time, but they would have loved it.




Also, no one thinks Netburst was efficient. It did eventually manage to push performance, but it certainly wasn't efficient. Single core Netburst CPUs eventually began pushing 115W.

Netburst wasn't efficient, but it was competitive in performance until K8 hit. Netburst just went about increasing performance by a different metric than P6 did. It just happened to run into a wall. If Netburst had scaled to 10 GHz as intended, it would've been quite a performer. A 45nm Pentium D at 10 GHz with a 1600 MHz FSB? Yeah, sounded great until they hit that thermal wall.
In the Northwood era, it actually seemed to be becoming a more sensible overall competitive design too (Intel pricing aside) . . . Prescott obviously changed that, though.

I've heard lots of different reasoning on just what went wrong with Prescott, but one question that I haven't seen addressed is: why didn't Intel try a die-shrunk Northwood? Depending on the root cause of Prescott's problems, a 90 nm Northwood may have been no better, but had it scaled well in power consumption (as the Pentium M and AMD's chips did), that would have been a huge boon for the late-gen P4.

On another note on the P4/Netburst in general, here are a couple of interesting articles from a programming (and somewhat hardware-design) perspective:
http://www.azillionmonkeys.com/qed/cpujihad.shtml
http://www.emulators.com/pentium4.htm


Assuming performance scales linearly with clock speed, and that the added L2 cache, FSB speed increase, and HT don't improve performance, a 672 should finish 32M wPrime in about 83.86 seconds. A 1.4 GHz Tualatin-S should do the same in around 138.65 seconds.

Pushing the theoretical calculation further, a 1.4 GHz Prescott should finish in about 227.62 seconds, so Prescott takes roughly 64% longer per clock than Tualatin.

SSE2, SSE3, increased bandwidth, increased cache, and HT should close that gap by a reasonable degree, however.
It all depends on whether a task is I/O- or RAM-bandwidth intensive . . . programs/processes that aren't I/O-bound on a slower bus will obviously see no gains from a similar processor on a faster bus and/or with faster RAM. (i.e. Celeron 733 vs. P3 733, etc.)
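
Since the assumptions behind those numbers are easy to lose track of, here's a minimal sketch of the same linear clock-scaling arithmetic (Python; the times and clock speeds are the ones quoted above, and the "scales linearly with clock" assumption is the quoted post's own, not a measured result):

# Linear clock-scaling sketch for the 32M wPrime comparison above.
# Assumes run time scales inversely with clock speed and ignores cache,
# FSB, and HT differences (the quoted post's own stated assumptions).

p4_672_time, p4_672_clock = 83.86, 3.8      # P4 672 (Prescott 2M): seconds, GHz
tualatin_time = 138.65                      # 1.4 GHz Tualatin-S figure from the post

# Scale the P4 672 result down to the Tualatin's 1.4 GHz clock.
prescott_at_1_4 = p4_672_time * (p4_672_clock / 1.4)
print(f"Hypothetical 1.4 GHz Prescott: {prescott_at_1_4:.1f} s")                       # ~227.6 s
print(f"Extra time per clock vs. Tualatin: {prescott_at_1_4 / tualatin_time - 1:.0%}")  # ~64%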




Why on Earth is anything from the 60x family an option on this poll? Let alone all of them ignorantly grouped together as one...
Yes, there are several other odd options up there too, and many others more deserving that aren't there.
 

Anarchist420

Diamond Member
Feb 13, 2010
I'm going to say Pentium D and Bulldozer, although Bulldozer actually could be decent on a good 28nm or smaller process, provided they used lower-latency L2 and L3 caches (both about 1/5 of their current latency, with the L3 maybe boosted in clock speed to about 2.6 GHz) and kept non-turbo clock speeds at 3 GHz. Perhaps turbo mode could disable one module and run the rest at 3.3 GHz.
 

pantsaregood

Senior member
Feb 13, 2011
I saw a release benchmark from Tom's Hardware yesterday about Prescott, actually. It wasn't "awful" compared to Northwood as some people seem to think. From what I gathered, it generally raised HT performance, lost out on about 1/3 of benchmarks, won about 1/3 of benchmarks, and tied in the rest with an equivalently clocked Northwood.

In 2012, Prescott would likely outperform Northwood in almost everything due to improved multithreading support and SSE3.

Also, Pentium D was just bad. Athlon 64 came out of the gate rivaling Intel's high-end CPUs - the 3200+ (the first 754 Athlon 64 released) was beating the 3.2 GHz Emergency Edition in a fair number of benchmarks. Intel and AMD were both releasing CPUs in 200 MHz increments, but 200 MHz meant a lot more on K8 than on Netburst. Once faster units like the 3500+, 3700+, 3800+, and 4000+ began coming out, Intel had no chance of competing. Athlon 64 X2 was a true dual-core design, while Pentium D was a pair of Prescotts taped together. That wouldn't have been so bad if Prescott wasn't horribly outpaced by Venice and San Diego.

The Athlon XP 3200+ had a very generous PR rating. The 3.2 GHz Northwood beat it almost all of the time.
The Athlon 64 3200+ had an accurate rating. It was, overall, fairly even with a 3.2 GHz Prescott.
The Athlon 64 3800+ was very generous to Intel. A Prescott 570 could only keep pace when programs were well-multithreaded.

Looking back, the Athlon 64 X2 was better competition for Core 2 Duo than Pentium 4 was for Athlon 64. I'm surprised Intel didn't release Core Duo on LGA 775 instead of Pentium D - it probably would've fared better.
 

kool kitty89

Junior Member
Jun 25, 2012
I'm going to say Pentium D and Bulldozer, although Bulldozer actually could be decent on a good 28nm or smaller process, provided they used lower-latency L2 and L3 caches (both about 1/5 of their current latency, with the L3 maybe boosted in clock speed to about 2.6 GHz) and kept non-turbo clock speeds at 3 GHz. Perhaps turbo mode could disable one module and run the rest at 3.3 GHz.
It also might be interesting to see the release of a single-core (single module -dual cluster) model. Assuming the power consumption and yields scaled back proportionally (1/2 of the 2-module/"4-core" part), it might fare rather well at attaining higher clock rates with standard cooling and sane power consumption.
Plus, the issue of scheduling threads for clusters vs cores would be avoided. (so better for many typical threaded applications as well as non-threaded)
 

kool kitty89

Junior Member
Jun 25, 2012
I saw a release benchmark from Tom's Hardware yesterday about Prescott, actually. It wasn't "awful" compared to Northwood as some people seem to think. From what I gathered, it generally raised HT performance, lost out on about 1/3 of benchmarks, won about 1/3 of benchmarks, and tied in the rest with an equivalently clocked Northwood.
This is the impression I got too. However, the problem remains that Prescott runs significantly hotter under load at similar clock speeds (and performance levels), so performance per watt fell off significantly. The larger caches helped offset the longer pipeline and higher cache latency compared to Northwood.

Again, it makes me wonder whether a more directly Northwood-based P4 on a 90 nm process would have fared better. Assuming it did actually run cooler, you'd have a more power-efficient chip that would be more practical at higher clock speeds in spite of the shorter pipeline. (still much longer than any contemporary design's, too)

Plus, in terms of actual silicon cost and yields, a 90 nm die-shrunk version of the Northwood-based Pentium 4 Extreme Edition (Gallatin) should have been very close to the Prescott (1MB) die size while having 2 MB of L3 cache. A straight die-shrunk 512k L2 Northwood should have been considerably smaller still.

The Prescott core seems to have bulked up quite a bit over the Northwood. SRAM is very dense and takes up a relatively small chunk of die space per transistor compared to logic, so the increased cache size would have had relatively little impact on that increase (hence the small die-size increase from the 1M to the 2M Prescott -or various other examples where cache is the only major change in a CPU: K6-2 vs. K6-III, Celeron vs. Pentium of the same core architecture, etc). It's almost certainly added logic in the Prescott that makes it as complex and power hungry as it is. A straight die-shrink of the Northwood should have been around 65~70 mm².
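
For what it's worth, that 65~70 mm² guess is consistent with naive ideal scaling; a back-of-the-envelope sketch (Python; the ~131 mm² figure is the commonly quoted die size for the later 130 nm Northwood stepping, and ideal (linear shrink)² scaling is an assumption that real layouts never quite reach):

# Back-of-the-envelope die-area scaling: an ideal shrink scales area by
# (new_node / old_node) ** 2. Real shrinks do worse (pads, analog blocks,
# and design-rule changes don't scale), so treat the result as a floor.

northwood_130nm_area = 131.0          # mm^2, commonly quoted for the later Northwood stepping
shrink_factor = (90 / 130) ** 2       # ideal 130 nm -> 90 nm area scaling, ~0.48

print(f"Ideal 90 nm Northwood die: ~{northwood_130nm_area * shrink_factor:.0f} mm^2")  # ~63 mm^2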
 

Magic Carpet

Diamond Member
Oct 2, 2011
Some very good points above. Thanks guys. Here is more food for thought:
The original successor to the Pentium 4 was (codenamed) Tejas, which was scheduled for an early-mid-2005 release. However, it was cancelled a few months after the release of Prescott due to extremely high TDPs (a 2.8 GHz Tejas emitted 150 W of heat, compared to around 80 W for a Northwood of the same speed, and 100 W for a comparably clocked Prescott) and development on the NetBurst microarchitecture as a whole ceased, with the exception of the dual-core Pentium D and Pentium Extreme Edition and the Cedar Mill-based Pentium 4 HT.
The excessive heat and power consumption had been the main issues right from the start. Fact. Why did Intel take so long to realize that parallelism was the way to go?
 

kool kitty89

Junior Member
Jun 25, 2012
The Athlon XP 3200+ had a very generous PR rating. The 3.2 GHz Northwood beat it almost all of the time.
The Athlon 64 3200+ had an accurate rating. It was, overall, fairly even with a 3.2 GHz Prescott.
Is this true for both the 2333 MHz and 2200 MHz variants? The 2333 would obviously be more I/O-bottlenecked, but against 800 MT/s P4s, I/O-bound applications are probably going to favor the P4 by a wide margin in either case, while computationally intensive apps might fare better on the 2333 MHz Athlon. (aside from unlocking it and running it at 11x212 for a 424 MT/s bus . . . and actual core overclocking aside, obviously)

Stranger still is the change in rating for the comparable Sempron, with a 3300+ rating for the 2200 MHz Thorton.







Some very good points above. Thanks guys. Here is more food for thought:

The excessive heat and power consumption had been the main issues right from the start. Fact. Why did Intel take so long to realize that parallelism was the way to go?
Or, if they did want to continue pushing Netburst at all, perhaps take a step back to the pre-Prescott design and move forward with that, single- and/or multi-core. (though the enhanced P6 derivatives obviously had lots of promise by that point, even as single-core)

The quote on Tejas points to the Northwood=>Prescott per-clock TDP increase trend going even further. I don't really understand why Intel (apparently) pushed engineering with complete disregard for power consumption. I don't buy that Prescott's TDP issues were simply due to the thermal density increase with the die shrink. It makes far more sense that they were the direct result of changes made from Northwood to Prescott.

And on the added logic gates and die-size comment I made above, I should also have mentioned that die-shrunk chips (or smaller-process versions of chips in general) tend to consume little or no less power per clock when run at the same voltages. (a good broad example of this is comparing various old 5V CPU models -808x, 286, 386, 486, 68k, Z80, 6502, etc.- at similar models and speeds, of course, and CMOS to CMOS -NMOS parts are another story ;))

The smaller transistors won't be much (or any) more power efficient by themselves, but they will often run at lower voltages than larger circuits (and at some point MUST run at lower voltages or risk burning out), and that's where the power savings come from.
Likewise, if you compare newer revisions of the same family of chips, you sometimes see simple die-shrinks and sometimes see other additions to the hardware. The Pentium MMX added a fair bit over the Classic and ran cooler due to the drop to a 2.8V core, but if you bumped it up to the Classic's 3.3V, it would run a fair bit hotter than a Classic at the same clock speed. The MMX also mainly added 16k more cache, and while that increased the transistor count considerably, it still didn't increase power consumption or die size proportionally to the percentage increase in transistors. (again, SRAM packs transistors close and uses considerably less power than complex logic circuits of similar transistor counts -the changes from Willamette to Northwood are somewhat comparable here too, and this is also why the contemporary Celerons didn't run all that much cooler in spite of the lower gate count)
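
The voltage point is the key one: to first order, dynamic CMOS power goes as C·V²·f, so the squared voltage term dominates. A rough illustration using the Pentium MMX voltage drop mentioned above (Python; holding capacitance, clock, and switching activity constant is an assumption that isolates the voltage effect and ignores leakage and the MMX's extra transistors):

# First-order dynamic power model: P ~ alpha * C * V^2 * f.
# Holding activity, capacitance, and clock constant isolates the effect
# of the core-voltage drop alone; this is illustrative, not a TDP estimate.

def relative_dynamic_power(v_new: float, v_old: float) -> float:
    """Ratio of dynamic power at v_new vs. v_old with the same C, f, and activity."""
    return (v_new / v_old) ** 2

# Pentium Classic at 3.3 V vs. Pentium MMX at 2.8 V (voltages from the post above).
ratio = relative_dynamic_power(2.8, 3.3)
print(f"2.8 V part draws ~{ratio:.0%} of the 3.3 V part's dynamic power")  # ~72%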





Had Intel focused more on optimizing for lower power consumption in post-Northwood Netburst parts, things might have ended up different. I doubt they would have ended up good enough to surpass the Core chips that took over, but if the clock-scaling gap had remained far enough ahead and power consumption at least somewhat reasonable, Netburst might have had a longer life in the high-end performance niche. Later/smaller, more Northwood/Gallatin-like chips may very well have left the performance-per-watt gap much smaller than Prescott/Presler vs. Core 2, and much closer to AMD's contemporary chips. (remember, the 130 nm Athlon 64 didn't run too much cooler than a Northwood of similar performance -the gap to the Athlon XP was a bit wider though)
If they also could have managed to improve per-clock performance with Netburst as well, without sacrificing power efficiency or clock stability (beyond per-clock gains, obviously), the architecture might have survived to this day.

I realize I'm playing devil's advocate a bit with the Pro/con Netburst arguments, but I'm trying to look at the situation from multiple points of view.
 

pantsaregood

Senior member
Feb 13, 2011
The 2333 MHz 3200+ was probably rated more fairly. It still would've had trouble with the 3.2C, but not nearly as much as the standard 3200+.

I didn't mention the 2.33 GHz model because it was excessively rare.

Prescott offered lower performance per watt than Northwood, but at this point in time it is probably the more powerful CPU. During that period in computing, power consumption wasn't a very important metric for most people. Prior to Netburst, most CPUs weren't terribly power hungry. During the life of Netburst, it was actively acknowledged that power consumption needed to be reasonable.
 

moonbogg

Lifer
Jan 8, 2011
I should add my 486DX2 pretty much sucked because it couldn't play duke nukem without lagging on that one part where you blow the hole in the wall after shooting that fish alien with your 5 barrel machine gun.
 

kool kitty89

Junior Member
Jun 25, 2012
Prescott offered lower performance per watt than Northwood, but at this point in time is probably a more powerful CPU.
True, and an interesting academic point, but not so important from a historical standpoint: during the chip's active market life, that was a non-issue.

Plus, if lower-power Netburst-based parts did eventuate, multi-core chips would have addressed the growing multithreaded-software market too, while higher-clocked (but cooler running) and/or larger-cached (a la Gallatin) parts would have their own sets of performance (and price/performance) advantages.
Adding SSE3 support shouldn't have been mutually exclusive with that either. (though the specific hardware implementation may have changed if dictated by low-power requirements)

During that period in computing, power consumption wasn't a very important metric for most people. Prior to Netburst, most CPUs weren't terribly power hungry. During the life of Netburst, it was actively acknowledged that power consumption needed to be reasonable.
I wouldn't be so sure of that. Power consumption is always an issue, though obviously more to some than others. Aside from mobile parts, there's the issue of cooling and the power supply itself . . . for home-built and dealer-built systems, there are several significant issues there, with an increased likelihood of needing a PSU upgrade (and a more costly one at that) and/or a cooling-system upgrade (or a case upgrade if it ends up providing insufficient ventilation), so lots of hidden costs.

Way back in the late 90s this was already a significant issue addressed by some reviewers:
http://www.realworldtech.com/altcpu/subpages/cpumainboard/pr233.htm
(pointing out the massive, for the time, power consumption of the 3.2V K6-233 and Pentium 233 -the 266 and 300 were obviously even worse, comparing the 350 nm 2.8V parts, of course)
And in the context of Socket 7 boards, there was the additional issue of weak voltage regulators in some cases (not rated for such high current, or with undersized heatsinks). That's the same issue that several Cyrix CPUs were criticized for. (both the early 3.52V 6x86 -very hot running for 1996- and the late-gen 2.9V MII chips -which, in reality, were still no worse than the old K6-233, but were far more power hungry than contemporary 2.2/2.0V K6-2 and Celeron/PII parts -short of the 450+ MHz chips)



Plus, there are the very real performance issues as well . . . if a 90 nm Northwood-like chip was indeed much cooler than Prescott (or the 130 nm NW), it very well may have scaled to 4+ GHz without serious problems. (as it was, 130 nm parts hit a wall around 4 GHz in overclocks . . . Prescott parts fared better in stability, but heat made that point largely moot)
 

pantsaregood

Senior member
Feb 13, 2011
Obviously I can't be sure about it, but I doubt a 90nm Willamette/Northwood would've produced less heat than Prescott. It would've used fewer transistors, but thermal leakage would've still been an issue - that's what's happening with Ivy Bridge today.

A lot of people don't acknowledge the overclocking overhead Prescott provided, either. I'd assume most Prescotts could break 4.5 GHz with relative ease if equipped with decent cooling. My grandparents have a Celeron D Cedar Mill (a direct die-shrink of Prescott) that runs at 4.33 GHz with only a slight increase in voltage. It would easily run faster, but it's only using a stock cooler that won't mount flat.

Another important point that almost no one recognizes is that Intel actually lowered the TDP of Prescott throughout its life. Prescott 550 was a 115W part - the 650 was only 84W. Later steppings of Prescott also had heat under control a bit better.

Also, what I was implying as far as lack of concern with power consumption was limited to the desktop market. Power consumption was only an issue as far as cooling was concerned. The original Athlon Thunderbird and Pentium III 1.13 GHz are great examples of this - the general approach was to strap a bigger heatsink onto the CPU and keep ramping up clocks.

CPUs generally don't run near their thermal limits anymore. Every Sandy Bridge I've seen idles in the mid-30s and hits about 50 degrees under load unless it has increased voltage. TJMax for Sandy Bridge is apparently 98 Celsius. TCase (straight from Intel) is 72.6 C. "Strap a bigger heatsink on" certainly has a lot of room to work on modern architectures, but they're actually improving in efficiency now.

All of this said, I don't think Intel intends to get Hammer'd by AMD again. Lack of foresight on thermal issues gave AMD an opening. Judging by this thread, that opening left Intel's reputation more damaged than anything else they've done. I would've placed the 1.13 GHz Coppermine higher on the fail list, personally.
 

Homeles

Platinum Member
Dec 9, 2011
Obviously I can't be sure about it, but I doubt a 90nm Willamette/Northwood would've produced less heat than Prescott. It would've used fewer transistors, but thermal leakage would've still been an issue - that's what's happening with Ivy Bridge today.
Don't confuse heat output with temperature. Ivy Bridge generates less heat than Sandy Bridge; it just generates that heat in a significantly smaller area.
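
Putting rough numbers on that (Python sketch; the die areas and TDPs are the commonly cited figures for the quad-core desktop parts, and TDP is only a crude stand-in for real heat output under load):

# Crude power-density comparison: quad-core Sandy Bridge vs. Ivy Bridge.
# (TDP in watts, die area in mm^2) -- commonly cited figures, not measurements.

parts = {
    "Sandy Bridge 4C (i7-2600K)": (95.0, 216.0),
    "Ivy Bridge 4C (i7-3770K)":   (77.0, 160.0),
}

for name, (tdp, area) in parts.items():
    print(f"{name}: {tdp / area:.2f} W/mm^2")
# Ivy Bridge dissipates less total power but at a slightly higher W/mm^2,
# which (along with the TIM change discussed further down) feeds the higher temperatures.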
 

zsdersw

Lifer
Oct 29, 2003
Some very good points above. Thanks guys. Here is more food for thought:

The excessive heat and power consumption had been the main issues right from the start. Fact. Why did Intel take so long to realize that parallelism was the way to go?

Only a personal bias against Intel in general or Netburst in particular would lead someone to assume it took Intel "so long to realize" that Conroe and its successors were the way forward.

Intel very likely realized early on that Netburst wouldn't scale to the clock speeds necessary to compete in performance. Why else would they have made Banias for laptops? They made Banias their plan B.

They continued with Netburst for as long as they did to recoup some of their investment in it. It also takes ~5 years to go from design to boxed CPU, so they had to sell something in the meantime.
 

Cerb

Elite Member
Aug 26, 2000
The iAPX 432 predates me! Looking at its history, it's a tough decision between it and IA64. OTOH, its failings were anything but unique (well, bit-aligned VLE instructions, maybe!). Then, OTOH (Nationwide commercial in ATCU?), it spurred the creation of the i960, which Intel seemed to want to kill off for years before they were able to. Not only that, but the 432 was an abject failure: no one latched on and really tried to make it successful. Itanium partly deserves its bad rep for killing off CPUs that would have generally been superior.

The 286 helped solidify the PC. The 386 finally gave some modern features, but the 286 was good enough to keep SMBs from spending a mint on RISCs. Yeah, yeah, yeah, memory segments and all. But there were programs to manage that for you. I executed everything from a shell program that set up memory before executing the application. I don't regret our progress, but it wasn't bad for the time. Its superior competition was all really expensive.

The PPC 970 might not have been the best desktop CPU, being largely a cut-down POWER4, but it wasn't bad at all, and it performed nicely in high-throughput work (such as content creation), remaining competitive with the A64 until it neared EOL. Derivatives of it made their way into embedded products, too.

AMD 10h showed that the K8 still had some life left in it. There were launch issues with the first gen, but that's life. The Athlon II and Phenom II were basically the same CPU(s), and were quite good.

The Motorola PPCs were pretty awesome, actually. They weren't performance-competitive with x86, generally, but they were nice CPUs. Their later revisions, which became synonymous with Apple's G3 and G4, were rather small and efficient, and quite good at a wide array of workloads, provided top-of-the-line performance was not among your needs. Much like the PPC 970 "G5", they get a bad rep more because of Apple trying to use them to be different, and lying about performance numbers to make them look worth the Apple tax.

The K5 did suck, but so did many other x86-compatible CPUs. It got AMD in the door, just in time for consumer RISC to die, so it served a good purpose, even if it wasn't a CPU you wanted in your PC. The others ended up getting out of x86, and some even got out of CPUs entirely. If the K5 is looked at as an isolated CPU, it is certainly a worthy candidate.

I had one of the 6x86 Cyrix CPUs. I managed to miss out on some of the instability fun others had with them (decent PSU and mobo?), but slower Pentiums could at least run Quake. Performance rating my ass. I'd have been better off with any number of other options. Its K6 replacement rocked hard (its FPU might have been lacking, but between being decent, and twitch games supporting 3DNow!, it was great, and I was able to dump my cost savings vs. Intel into RAM and a Voodoo2).

and most people here probably won't remember (unlike most sane children i spent my 8 year old years reading pc mags like byte) the 80186 was an actual cpu that only a few companies adopted...
Intel kept producing it until '06 or '07. You can still buy them. They didn't get too much use in PC clones because they weren't fully backwards-compatible. They are still used as embedded CPUs, though.

Pentium 4 in the history of microprocessors.
At the start, RDRAM cost too much; that wasn't really Intel's fault, and we would be better off today if the RAM companies hadn't screwed everyone over just over a small royalty cost. Willamettes paired with i850E chipsets and speedy RDRAM were the kings of the hill...but you could get >80% of the performance from AMD for <1/3 the cost, and >70% of the performance from Intel's own Pentium III for similar money. Paying less for a slower Pentium 4, and/or worse RAM, gave you worse performance than going with current-gen AMD or last-gen Intel. In some cases, that's even being nice, as CPI-limited applications favored the other CPUs at any speed and bandwidth. Later on, Bartons tended to be excellent all-around gaming and CAD CPUs, even in the face of >3GHz P4 C and E chips, given the lower CPI. The P4s generally remained too mediocre throughout their lives, except for the fastest ones at certain times.

Northwoods paired with dual-channel DDR were quite nice. The only real issue there was that you could get the same performance for less money with AMD. When Intel's were the fastest, they cost more than their performance benefit (OTOH, a P4C 2.8+ w/ 865P or better was damn fast at the time, and HT could make up for the few single application performance losses).

The Pentium D, meanwhile, could actually be worse than a P4 C/E at multitasking, which is why you should want a dually to begin with! They were OK with OCs, but at stock, there was good reason AMD could charge what they did for the A64 X2.

Then, the Celerons prior to Prescott sucked the big one. Just like the cacheless Celerons, with benchmark suites not trying to simulate actual usage, they looked way better than they were in practice. But, the later P4 Celerons were actually alright.

The P4s performed, sold, and worked. None of them will make it to a list of most loved CPUs, but worst? Nah. They even had some real bright spots, like content creation. They weren't all that terrible, they were just never amazing. Remember: when they were losing to AMD, that still left them as the 2nd best general-purpose CPUs in the world.

all celerons.
<=500MHz Celerons with cache were good.
100MHz FSB Celerons were good.
Prescott Celerons were OK.
Core Celerons were pretty good.
Core 2 and Core i Celerons have not been shabby.

The cacheless and 128k P4-based ones sucked. The faster 66 MHz FSB P3-based ones were not very good for the money.

@IDC. Well certainly individuals matter. If only Motorola had seen where PCs were heading and had done anything to get the PC deal. I still think millions of people would have been grateful to have had a 68K PC and never having had to deal with a x86 CPU.
Both Motorolas make me scratch my head, these days. At some point during the time they had those CPUs, they started to veer off. It wasn't until the 2000s that they started to show major problems, but it looks like it started back in the early 90s, maybe even late 80s. I don't think we'd have 68K PCs, but further 68K and non-Power-derived PPC development would have been nice. Losing money while selling very popular phones, and a radio company divesting itself of its semiconductor wing, though, were just :eek:, to me.

http://www.engadget.com/2008/03/26/motorola-insider-tells-all-about-the-fall-of-a-technology-icon/

Never mind the P60 and P4 being missing, what about x86 in the first place. 64KB segmented memory, lack of registers, 640KB, DOS memory managers etc.

All of which can be squarely blamed on IBM choosing the useless Intel 8086 back in 1981. I want to vote 8086/8088.
The problem is that it worked, though. Oh, no, it's ugly and hard to work with! But it worked. It worked well. It still works well. That's what matters.

Please.

The 486SLC should be higher on the list of worst CPU ever than a Prescott.

How many people bought a pre-built system with a 486SLC thinking it was the same performance as an intel 486DX just at a lower cost by not buying the intel brand name.
Hopefully only the gullible. It was a souped-up 386SX clone. D was the important letter, back then. If they had named it 486DLC, when it was actually an SX type, then they'd deserve some ire. The 486DLC actually wasn't bad, considering the cost.

pentium 4, with rambus memory..... seriously intel?
More like, "seriously, Micron, Samsung, et al?"

RDRAM's technical problems were getting taken care of, and today we could really stand to use fewer pins. If it weren't for the memory companies' price fixing, RDRAM would have fallen in price, AMD would have supported it, and we wouldn't need 200+ traces to get decent bandwidth for several cores.
 

kool kitty89

Junior Member
Jun 25, 2012
Don't confuse heat output with temperature. Ivy Bridge generates less heat than Sandy Bridge, it just generates less heat in a significantly lesser area.
Isn't there also an issue with increased heat/power dissipation? I haven't read much further into this particular case, but it still seems to not necessarily be a thermal-leakage issue.

Intel apparently claims it to be a thermal density problem, but it seems to actually be caused by Intel using cheaper TIM on the heatspreader rather than solder.
http://www.techpowerup.com/165882/TIM-is-Behind-Ivy-Bridge-Temperatures-After-All.html
http://www.techpowerup.com/forums/showthread.php?t=165179

If Intel is making false excuses about Ivy Bridge over an apparently botched cost-cutting decision, isn't it possible that Intel's claims of thermal-leakage problems on the Prescott may also be skewed?


Obviously I can't be sure about it, but I doubt a 90nm Willamette/Northwood would've produced less heat than Prescott. It would've used fewer transistors, but thermal leakage would've still been an issue - that's what's happening with Ivy Bridge today.
If thermal leakage from the die shrink was the main cause of the increase in power consumption per clock, then why didn't any contemporary 130-to-90 nm die shrinks suffer from similar issues, and why did the 65 nm die-shrunk Prescott run significantly cooler as well?
We don't have a 130 nm Prescott or a 90 nm Northwood to compare, though, so it's somewhat tough to say, but it's possible a 130 nm Prescott would have run even hotter than the 90 nm one.

A lot of people don't acknowledge the overclocking overhead Prescott provided, either. I'd assume most Prescotts could break 4.5 GHz with relative ease if equipped with decent cooling. My grandparents have a Celeron D Cedar Mill (a direct die-shrink of Prescott) that runs at 4.33 GHz with only a slight increase in voltage. It would easily run faster, but it's only using a stock cooler that won't mount flat.
I didn't address this specifically, but I implied it somewhat with my comments on the die-shrink to 90 nm alone potentially bringing the Northwood into that range too.

All of this said, I don't think Intel intends to get Hammer'd by AMD again. Lack of foresight on thermal issues gave AMD an opening. Judging by this thread, that opening left Intel's reputation more damaged than anything else they've done. I would've placed the 1.13 GHz Coppermine higher on the fail list, personally.
Even if this does leave AMD playing second fiddle the vast majority of the time, it certainly doesn't imply they'll be going away any time soon. AMD was successful for a very long time in the x86 market without being a market leader, and it even managed to pull through a relative slump in the K5 years. (and just prior to that, the lag in the K5's release, where they hung on selling the X5-133 "Am5x86" parts)
And with a much stronger mainstream brand name at this point (for OEMs and consumers alike), and a somewhat better tech-educated average consumer, I also doubt it will even fall back to their position in the K6/pre-athlon days.

Also, it wasn't just thermal issues that gave AMD an opening. Prior to the Pentium 4 losing out to the Athlon 64, there was the Athlon vs. the PIII. (and in between those, the P4 and Athlon XP were virtually neck and neck in peak performance -though not in price/performance)
 

kool kitty89

Junior Member
Jun 25, 2012
The 286 helped solidify the PC. The 386 finally gave some modern features, but the 286 was good enough to keep SMBs from spending a mint on RISCs. Yeah, yeah, yeah, memory segments and all. But there were programs to manage that for you. I executed everything from a shell program that set up memory before executing the application. I don't regret our progress, but it wasn't bad for the time. Its superior competition was all really expensive.
Not to mention the 286's odd protected-mode implementation, which wasn't much of a fault for quite some time -not until use of extended memory via protected mode became common.

On top of that, there's the very real per-clock advantage the 286 had over the 386 for certain 8 and 16-bit operations. And with the vast majority of commonly used software in the 286/386's lifetime being written for 808x/286, 286 systems often outperformed 386SX systems (and sometimes DX) of similar clock speeds, assuming similar RAM speeds, no external cache, and similar peripheral cards. (since no 286 boards supported cache or better than 16-bit ISA)

AMD 10h showed that the K8 still had some life left in it. There were launch issues with the first gen, but that's life. The Athlon II and Phenom II were basically the same CPU(s), and were quite good.
And still makes up AMD's main product line today. ;)

The Motorola PPCs were pretty awesome, actually. They weren't performance-competitive with x86, generally, but they were nice CPUs. Their later revisions, which become synonymous with Apple's G3 and G4, were rather small and efficient, and quite good at a wide array of workloads, provided top of the line performance was not among your needs. Much like the PPC 970 "G5", they get a bad rep more for Apple trying to use them to be different, and lying about performance numbers to make them look worth the Apple tax.
In the early days, the faster PPC chips did actually outperform the best x86 parts out there. IIRC, even the faster grades of the "low-end" 603 met or beat contemporary early Pentium 1s with similar set-ups.

Cacheless and 128k P4 ones sucked. Faster 66MHz FSB P3 were not very good for the money.
The cacheless ones were still good for gaming, especially if overclocked (sort of the polar opposite of contemporary Cyrix chips ;)), and the late 66 MHz FSB chips were still somewhat worthwhile if you overclocked them . . . and they generally overclocked quite well. Albeit, with the Duron on the scene, the value was still poor by comparison unless you were upgrading from an older S370 part and your board supported Coppermine chips.

Hopefully only the gullible. It was a souped-up 386SX clone. D was the important letter, back then. If they had named it 486DLC, when it was actually an SX type, then they'd deserve some ire. The 486DLC actually wasn't bad, considering the cost.
Had the Cyrix SLC actually been used in any popular, decent-value embedded systems or low-end PCs/boards (particularly late-gen 386SX boards with external cache), it might have been a different story, but it ended up most notably being used in cut-rate pre-built boxes and the rare niche clip-on upgrade chip.

The DLC was certainly better though, useful as an upgrade for socketed 386 boards compatible with it as well as a good low cost CPU to go along with a motherboard upgrade (at a lower price than 486 boards) and available up to 40 MHz. Similar performance to 486SX parts at a price closer to 386DX-40 based systems.

RDRAM's technical problems were getting taken care of, and today, we could really stand to use less pins. If it weren't for the memory company price fixing, RDRAM would have fallen in price, AMD would have supported it, and we wouldn't be needing 200+ traces to get decent bandwidth for several cores.
I'm not so sure about the traces issue. One of the big problems with RDRAM was that, in spite of the narrower bus, the modules still used a similar number of pins/traces as SDRAM modules of 4x the width, and performance wasn't much better than top-end single-channel SDR or DDR unless dual channel RDRAM was used. (not to mention the severe yield issues early on -and moderate ones later on- and latency, power dissipation, degraded performance with greater numbers of modules, etc)

On top of that, you had RDRAM first introduced to mainstream PCs on a platform too bottlenecked to even make use of the full bandwidth of the single-channel 16-bit RDRAM of the time. The 133 MHz FSB PIII was a rather poor match for RDRAM at the time . . . though ironically the Athlon would have made better use of it. (but they were stuck with PC100 and PC133 SDRAM on a 200 MT/s bus prior to DDR . . . no one ever implemented a dual-channel SDR board either, AFAIK -that would have been interesting ;))
 

pantsaregood

Senior member
Feb 13, 2011
The Athlon was capable of clocking better than the PIII. AMD did catch Intel off guard with K7, and the botched 1.13 GHz Coppermine gave AMD a decent opening. AMD actually overtook Intel in retail sales for a while with K8. I can't see Intel letting that happen again any time soon.

I don't understand Bulldozer. A refined K10 design could've potentially been great. A Phenom III X8? Phenom II with SSE 4.1, 4.2, AVX, and all of the minor core tweaks from Llano could've potentially been quite competitive. If the 32nm node shrink could increase clocks nearly as much as the 45nm node shrink to Phenom II did, a 32nm K10 could realistically hit 4.5 GHz stock.

Then again, maybe there was some bizarre issue AMD saw coming when Thuban was developed. I thought Thuban was a step in the right direction. It could've likely benefitted from some extra L3 cache, but otherwise it was great.

As strange as it may sound, there may have been chipsets that supported dual channel RAM for the original Slot A Athlon. The ALi Aladdin 7 chipset on Super Socket 7 supported dual-channel RAM, after all.
 

Magic Carpet

Diamond Member
Oct 2, 2011
I don't understand Bulldozer. A refined K10 design could've potentially been great. A Phenom III X8? Phenom II with SSE 4.1, 4.2, AVX, and all of the minor core tweaks from Llano could've potentially been quite competitive. If the 32nm node shrink could increase clocks nearly as much as the 45nm node shrink to Phenom II did, a 32nm K10 could realistically hit 4.5 GHz stock.

Then again, maybe there was some bizarre issue AMD saw coming when Thuban was developed. I thought Thuban was a step in the right direction. It could've likely benefitted from some extra L3 cache, but otherwise it was great.
Agreed. They had a competitive platform, and Llano proved it, with better IPC. Not sure what exactly happened.
 

pantsaregood

Senior member
Feb 13, 2011
http://www.tomshardware.com/reviews/processor-architecture-benchmark,2974.html

Just for reference. Notice that in many cases, K10 isn't horribly behind Nehalem or Sandy Bridge. This also makes it incredibly apparent that K10 is nothing more than K8 with L3 cache and an updated memory controller.

That would've been fine, though. I wish they'd update this with Bulldozer. It would then be more readily apparent what a trainwreck it is, even compared to Pentium 4.

Pentium 4 was competitive until about 1/4 of the way through the Athlon 64's lifespan. Bulldozer came out of the gate a generation and a half behind. The architecture is just inefficient. The die size is smaller than Thuban's, but then you realize how unimpressive that is when you recall there are only four FPUs on a Bulldozer die.

The module design may be a brilliant alternative to Hyper-Threading. I don't feel that it can fairly be judged right now because Bulldozer cores are just weak. Shared resources aren't Bulldozer's problem, as some people would have you believe. If Sandy Bridge had the same integer unit/FPU ratio as Bulldozer, it would likely make much better gains from it.
 

Cerb

Elite Member
Aug 26, 2000
I'm not so sure about the traces issue. One of the big problems with RDRAM was that, in spite of the narrower bus, the modules still used a similar number of pins/traces as SDRAM modules of 4x the width, and performance wasn't much better than top-end single-channel SDR or DDR unless dual channel RDRAM was used. (not to mention the severe yield issues early on -and moderate ones later on- and latency, power dissipation, degraded performance with greater numbers of modules, etc)
IIRC, RDRAM used about half the (data) traces per channel, and bandwidth was equivalent per channel. Power, yields, and latency got better over time. 1066 MHz 32ns RDRAM was no slouch...but who would buy it?
 

crazymonkeyzero

Senior member
Feb 25, 2012
Worst CPU, relative to its time, would be Bulldozer...period. Pentium 4 was trumped by the original Athlon 64s back in the day, but I doubt the gap between Intel and AMD has ever been as great as it is today. But I hope AMD will get their @$$ in gear and get things right with Piledriver. :\
 

Kristijonas

Senior member
Jun 11, 2011
Could someone tell me why the Pentium 4 is so criticized? I think in the P4 days I still had my Pentium MMX for several years, and I didn't know anything about computers as I was around 10.
 

inf64

Diamond Member
Mar 11, 2011
Worst CPU, relative to its time, would be Bulldozer...period. Pentium 4 was trumped by the original Athlon 64s back in the day, but I doubt the gap between Intel and AMD has ever been as great as it is today. But I hope AMD will get their @$$ in gear and get things right with Piledriver. :\

OK, again, some facts instead of fiction. First of all, do you consider a 980x Westmere to be a fast, modern multicore chip that does well in both single-threaded and MT workloads? I bet you do. Westmere is still a fast desktop chip: it has 6 cores/12 threads and a high Turbo clock at stock.

Now let's see how FX8150 @ stock compares to Westmere 980x @ stock.
Numbers can be found here. This is what they used in their tests:
Single-Threaded:
Adobe Acrobat
WinZip
iTunes
Lame


Multi-Threaded:
3ds Max
Blender
HandBrake
MainConcept
After Effects
Photoshop
Premiere
Matlab
7-Zip

Single-Threaded Efficiency Run:
980x - 9:32 or 572s
FX8150 - 11:13 or 673s

980x is: 572/673 = 0.85, or 15% faster in pure single-thread workloads (all time values added up; results are total run time, and lower is better, naturally).
So 15% higher single thread performance of Westmere is a "huge performance gap" somehow? Didn't think so.

Move on to MT runtime.
Multi-Threaded Efficiency Run:
980x - 13:49 or 829s
FX8150 - 16:31 or 991s

980x is: 829/991 = 0.836, or 16.4% faster in multi-threaded workloads.
We have a "fat" 6 core chip that supports SMT(12T) versus a "slim" 8 core chip with only 4 FP units(8T) that should be weak in single thread workloads and somewhat strong in MT workloads. What we get is 15% higher single thread performance and 16.4% higher MT performance for "fat" core. FX is not that slow after all.At least if you compare it to Westmere.
 

pantsaregood

Senior member
Feb 13, 2011
I want to see a comparison like that of a 3960X and FX-8170, both at 4.0 GHz. Turbo off. Not because I don't believe those numbers, but because I'm curious.