If desktop CPUs are already fast enough, why don't AMD CPUs sell?


tipoo

Senior member
Oct 4, 2012
245
7
81
Just because something is "good enough", why pick the inferior one at the same price as a better one, especially when the inferior one draws twice the power getting there? Joe and Jane Consumer probably don't even think that far; Intel is pretty well known, while mostly only enthusiasts know what AMD does.
 

riva2model64

Member
Dec 13, 2012
47
1
71
Just putting things into perspective, say you have two identical machines, except one draws 30 W more than the other. At an arbitrary price of 9 cents per kWh and assuming 24/7 use, the difference in the power bill works out to about $23 per year, or roughly $70 after three years.
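Spelled out, the same back-of-the-envelope calculation (using the assumed figures of 30 W extra draw, $0.09/kWh, and 24/7 operation):

```python
# Power-cost difference between two machines, one drawing 30 W more,
# at an assumed $0.09/kWh and 24/7 operation.
extra_watts = 30
price_per_kwh = 0.09
hours_per_year = 24 * 365                            # 8,760 hours

kwh_per_year = extra_watts * hours_per_year / 1000   # 262.8 kWh
cost_per_year = kwh_per_year * price_per_kwh

print(f"${cost_per_year:.2f} per year, ${3 * cost_per_year:.2f} over three years")
# → $23.65 per year, $70.96 over three years
```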

On a side note, I could be wrong, but I'm willing to wager that the FX-6100/6300 will be a bit faster than the dual-core Core i3s being recommended over them in a year or two (because of the increasing number of applications being coded multi-threaded).
 
Last edited:

CHADBOGA

Platinum Member
Mar 31, 2009
2,135
833
136
The market has slowed down and that's why AMD didn't order wafers from GloFo.
Yes yes, the market goes down 1.2% and AMD cuts 75% of its purchase commitments. :whistle:

The misinformation at times is just astounding. D:

The current situation is a testament to the importance of IPC and performance/watt.
 

lakedude

Platinum Member
Mar 14, 2009
2,778
529
126
Yes, that looks very attractive. I am sure you saw Jarred Walton's preview of that, and that he will come out with a full test. I would wait for that to see if you are severely CPU-limited in more demanding games.
If you are GPU-limited you can turn down some settings, but if you are CPU-limited there is not much you can do.

EDIT: Ahh, I found the video...

http://www.anandtech.com/show/6517/amds-radeon-hd-7970m-ivy-bridge-vs-trinity-video

EDIT: Hmm, the A10 does seem to be a bottleneck, drat!

That laptop has only five star reviews on Newegg (8) and Amazon (2). Everyone that has one seems to like it...

Finally even though I'd rather have an i7, it would be nice to throw some money AMD's way...
 
Last edited:

lakedude

Platinum Member
Mar 14, 2009
2,778
529
126
Getting back to the original topic...

All recent desktop purchases have been Intel (IB i7, 2 generations of i5, 2x C2D, etc.), because performance mattered. Those systems all started out as primary gaming machines and then saw use as distributed-computing crunch boxes. They all run 24/7, so power consumption is an issue.

AMD powers one of our laptops. That laptop is only used to surf and play videos. The choice to buy AMD was made purely on cost. It is plenty "fast enough" for the type of use it sees.
 

krumme

Diamond Member
Oct 9, 2009
5,956
1,596
136
(Image: IC Insights chart of semiconductor R&D spending, Sept 2012)

Thx IDC. The numbers speak for themselves; there is good reason to repeat them. The semiconductor industry is a perfect example of a natural monopoly. That's why we see a trend towards consolidation.

The stories about being small and agile in this market are imagination; it will never happen, and it's probably easier to be efficient when you are the big boy. There is less stress, you can make long-term investments, there is less management hanging on the specialists' shoulders for fast results, and less of a statistical numbers game to please the shareholders next quarter. A huge organization has benefits besides the very obvious ones that are of vital importance too.

It's still a mystery to me how Intel landed in the P4 situation, but it was probably the wrong marketing influence, chasing more MHz, that led to the too-long pipeline.

The only way forward for AMD is not to compete with Intel head-on. It's simply not possible to compete given the sheer difference in resources.
 

GreenChile

Member
Sep 4, 2007
190
0
0
It's still a mystery to me how Intel landed in the P4 situation, but it was probably the wrong marketing influence, chasing more MHz, that led to the too-long pipeline.
The answer is quite simple. Craig Barrett.

He was really out of touch.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Not at all. Before the P4, everyone thought that frequency scaling wasn't an issue, and that you could scale to 10GHz and beyond.

And it wouldn't have become an issue had they kept the chips nice and small.

180nm Willamette Pentium 4 was a mere 42m xtors.

130nm Northwood Pentium 4 increased that to 55m xtors.

90nm Prescott Pentium 4 increased that to an astonishing 125m xtors D:

65nm Cedar Mill Pentium 4 increased that even further to 188m xtors :eek:

(65nm Conroe Core 2 Duo debuted at 291m xtors and substantially lower clockspeeds, but with two cores)

You can't double and triple the xtor count and expect to keep increasing clockspeeds without having a cause for concern at some point.

10GHz is doable, without question, but not if the core contains 1B xtors with a die size of 300mm^2.

Where things went off-track for Intel was that they pushed for a microarchitecture that required an F1 racecar frame, but then did the Microsoft thing and bloated out the microarchitecture every node in a way that ballooned the power-consumption footprint.

It was like putting an SUV and then a school bus on the race track, trying to push them through the air at 200mph, and wondering why the gas mileage was plummeting.

In hindsight you gotta wonder how on earth such glaringly obvious design trade-offs weren't foreseen in advance. The 90nm Prescott team should never have been given a xtor budget that was >2x that of Northwood's design. Whoever made that decision was the guy (or gal) that set up Prescott to fail from the start.
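A rough sketch of why that combination hurts: CMOS dynamic power scales roughly as transistor count × capacitance × V² × f. The voltages and the per-transistor capacitance factor below are loose illustrative assumptions; only the transistor counts come from the figures above.

```python
# Rough CMOS dynamic-power scaling: P ~ N_xtors * C_per_xtor * V^2 * f.
# Voltages and the capacitance scaling factor are illustrative guesses;
# only the transistor counts match the post above.

def relative_power(xtors_millions, volts, freq_ghz, cap_scale=1.0):
    """Relative dynamic power in arbitrary units."""
    return xtors_millions * cap_scale * volts**2 * freq_ghz

willamette = relative_power(42, 1.75, 2.0)               # 180nm, ~2GHz
prescott = relative_power(125, 1.4, 3.8, cap_scale=0.5)  # 90nm: assume half the C per xtor

print(f"Prescott draws ~{prescott / willamette:.1f}x Willamette's dynamic power")
```

Even granting the node shrink a 2x capacitance win and a lower voltage, tripling the transistor count while nearly doubling the clock still balloons the power budget.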
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
And it wouldn't have become an issue had they kept the chips nice and small.

180nm Willamette Pentium 4 was a mere 42m xtors.

130nm Northwood Pentium 4 increased that to 55m xtors.

90nm Prescott Pentium 4 increased that to an astonishing 125m xtors D:

65nm Cedar Mill Pentium 4 increased that even further to 188m xtors :eek:

I think it's actually because the whole philosophy of increasing pipeline stages to increase clocks works only up to a certain point.

Going from 10-12 stage to 20+ stage pipeline in Pentium 4 required...
...dramatic boost in branch prediction accuracy
...addition of things like the trace cache
...double clocked ALU
...some weird things like the replay pipeline

Which all adds transistors. So adding pipeline stages requires all that supporting hardware so the performance loss isn't so dramatic. More transistors mean more delay and more power use, which cuts back on potential clocks.
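The trade-off in that list can be sketched with a toy model: deeper pipelines shorten the cycle time, but every extra stage makes a branch-mispredict flush cost more. All constants below are illustrative assumptions, not NetBurst data.

```python
# Toy pipeline-depth model: clock rises with depth, but so does the
# mispredict penalty, so delivered performance eventually flattens and falls.

def performance(depth,
                logic_delay_ns=10.0,    # total combinational delay split across stages
                latch_overhead_ns=0.1,  # per-stage latch/clock-skew overhead
                branch_freq=0.2,        # fraction of instructions that branch
                mispredict_rate=0.1):   # branch predictor miss rate
    """Delivered instructions per nanosecond at a given pipeline depth."""
    cycle_ns = logic_delay_ns / depth + latch_overhead_ns
    freq_ghz = 1.0 / cycle_ns
    # A mispredict flushes the whole pipeline: penalty grows linearly with depth.
    cpi = 1.0 + branch_freq * mispredict_rate * depth
    return freq_ghz / cpi

for d in (10, 20, 50, 100, 140):
    print(f"{d:3d} stages: {1.0 / (10.0 / d + 0.1):4.2f} GHz, "
          f"{performance(d):.2f} inst/ns")
```

With these made-up numbers the clock keeps climbing with depth, but delivered performance peaks and then declines, which is the wall the Pentium 4 ran into.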
 

GreenChile

Member
Sep 4, 2007
190
0
0
In hindsight you gotta wonder how on earth such glaringly obvious design trade-offs weren't foreseen in advance. The 90nm Prescott team should never have been given a xtor budget that was >2x that of Northwood's design. Whoever made that decision was the guy (or gal) that set up Prescott to fail from the start.
Again I stand by my belief that Craig Barrett was responsible. I'm pretty sure the 10GHz thing was his baby. He wanted it to be his legacy. But at the same time he wanted to keep Moore's Law going, which explains the huge transistor increases. Unfortunately his legacy took Intel down the wrong path, and that was their darkest hour.

Paul Otellini completely turned the ship around as soon as he took over. Canceled Tejas and redirected resources to the correct path. Makes me a little worried about him leaving. Will Intel end up with another Craig? D:
 
Last edited:

mrmt

Diamond Member
Aug 18, 2012
3,974
0
76
You can't double and triple the xtor count and expect to keep increasing clockspeeds without having a cause for concern at some point.

How much transistor budget is AMD going to allocate to the bigger decoder in Steamroller and the other measures intended to keep the units fed?
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
Paul Otellini completely turned the ship around as soon as he took over. Canceled Tejas and redirected resources to the correct path. Makes me a little worried about him leaving. Will Intel end up with another Craig? D:

I think your concerns are very valid. But I think Craig Barrett was necessary for Intel's current success. He was basically a manufacturing guy, and Intel was probably made stronger there because he was CEO.

The hope is of course that the next CEO is someone who makes fewer strategic mistakes than Otellini while keeping all of Otellini's benefits. Otellini+, basically. The evolution of Intel goes from 1. strength in manufacturing, to 2. strength in core markets, and #3 has to be "flexibility to expand beyond core markets". Don't forget that the first two can't be neglected.

Of course, nothing says that we won't end with a CEO driving it the other way as well.
 
Last edited:

UaVaj

Golden Member
Nov 16, 2012
1,546
0
76
For the majority of the buying public, it has very little to do with actual performance. It's all marketing. The majority of buyers will listen to the marketing song before making a purchase decision. Get the marketing airtight and even trash will sell.

For the minority (most of us on AnandTech), we buy based on end-result benchmarks and end-result FPS. We couldn't care less if there is an "i" or an "a".
 

Puppies04

Diamond Member
Apr 25, 2011
5,909
17
76
Paul Otellini completely turned the ship around as soon as he took over. Canceled Tejas and redirected resources to the correct path. Makes me a little worried about him leaving. Will Intel end up with another Craig?

I'm sure anything is possible, but with Intel's tick/tock strategy they seem to be going from strength to strength. With the current focus on performance/watt they really can't do much wrong, tbh, although sooner or later I'm sure they are going to have to make some tough decisions.
 

Pilum

Member
Aug 27, 2012
182
3
81
Again I stand by my belief that Craig Barrett was responsible. I'm pretty sure the 10GHz thing was his baby. He wanted it to be his legacy. But at the same time he wanted to keep Moore's Law going, which explains the huge transistor increases. Unfortunately his legacy took Intel down the wrong path, and that was their darkest hour.

Paul Otellini completely turned the ship around as soon as he took over. Canceled Tejas and redirected resources to the correct path.
That assumption is incorrect. Modern CPU projects take 4-5 years from design start to customer delivery. So Core2 must have been started in 2001/2002, pretty soon after delivery of the Pentium 4, when Intel could see this path wouldn't work out. This was well before Otellini took the CEO position, so it wasn't his decision, but Barrett's.
 

krumme

Diamond Member
Oct 9, 2009
5,956
1,596
136
As I remember, the Pentium Pro team was the same team that designed the P4.

As it's the best and brightest that designed the P4 and BD, and they have state-of-the-art tools for predictions, I wonder how they deviated so much from those predictions. Neither Piledriver nor Northwood was there. So both from the get-go and in the first revision it was simply not working. (Prescott must have just been a stopgap while waiting for the Core arch.)

It must be somewhere between design and production, or what? Some fundamental assumptions about production capabilities must have been wrong? There must be a lot of knowledge in hindsight here...
 

lamedude

Golden Member
Jan 14, 2011
1,230
68
91
A theory I've read elsewhere was that general performance had become fast enough that it could be sacrificed to improve media performance (the P4 in its last days could still score victories there). Intel wanted you to buy an Itanium if you wanted a faster workstation/server chip, so making it slower for those programs was probably a feature.
 

mrmt

Diamond Member
Aug 18, 2012
3,974
0
76
"Intel ended a year with a 2.7 percent decline"
http://www.fudzilla.com/home/item/29841-semiconductor-revenue-declined-3-percent-in-2012
and remember... Intel took share from AMD

I just said that bad market conditions aren't an excuse for AMD's poor performance. Even if you factor in a 1.2% or 3% market decline, it is still orders of magnitude smaller than the 75% decline AMD posted. You cannot reconcile those numbers without factoring in a *huge* decline in market share and ASP, which is what I said is effectively killing AMD.

I can understand xbitlabs giving voice to the argument that it is all due to market weakness, as they seem to be quite happy with their new role as AMD PR fodder, even twisting executives' statements to paint a rosier picture than AMD executives themselves, but not someone who can read and do simple math.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
That assumption is incorrect. Modern CPU projects take 4-5 years from design start to customer delivery. So Core2 must have been started in 2001/2002, pretty soon after delivery of the Pentium IV, when Intel could see this path wouldn't work out. This was well before Otellini took the CEO position, so it wasn't his decision, but Barretts.

The Core2 line was always on the roadmap as their mobile line - from Tualatin to Dothan to Conroe. It was developed in parallel with a different application in mind.

So starting Core 2 in 2001 was a given, regardless of the plans for 65nm NetBurst, because Core 2 would have been on the roadmap for the mobile lineup at the time.

How much Core 2 itself was modified as time went on, as the plans for it became grander than merely a mobile processor, is anyone's guess (outside of a handful of Intel'ers). But Intel made a lot of bluster over their 10GHz goals at IDFs and so forth, so I have to believe that up until Prescott came out, Intel was not considering changing horses mid-race at 65nm.
 

Gunbuster

Diamond Member
Oct 9, 1999
6,852
23
81
The thing that really hit it home for me was a webinar where Intel stated that they spend more on validation than AMD does on R&D.

So yeah, I'm gonna go with better than "good enough" and, as a bonus, have a huge helping of "my shit is not going to do anything unexpected."