Was AMD's Small Die strategy a huge mistake?

GodisanAtheist

Diamond Member
Nov 16, 2006
8,495
9,925
136
And by huge mistake I mean for AMD. For consumers, AMD's focus on small dies was a godsend. It kept NV walking the straight and narrow in terms of pricing (lol at the GTX 280/260 launch prices) and set an almost unrealistic expectation for price/performance in the consumer's mind.

However, it's no stretch to see how AMD could have really stolen the show had they been just a little more aggressive with their die sizes. They were at times competing against NV dies twice their size, and a few more SIMD cores would have had AMD pulling ahead of NV's top-end processors, all while having a much smaller footprint, lower power consumption, and a lower price tag.

AMD could have charged a higher price for their GPUs (and hence earned higher margins), maintained ATI's high-end image instead of looking like the bargain-bin value brand, and wouldn't now be facing the inevitable "spoiled consumer" backlash that comes with finally fielding a truly competitive high-end GPU.

Hindsight is always 20/20 though, and after the whupping AMD got at the hands of the 8800GTX, the wallowing turd that was the 2900XT, and the sale of ATI, their engineers/leadership might have just been flat-out demoralized. I can't help but look back and think what a different position AMD might be in today if they had charged ahead with monolithic dies and let Nvidia take on water with their GPGPU strategy. Instead, they gave Nvidia the room it needed to create, and then entrench itself in, the GPGPU market while staying competitive in gaming performance.

Your thoughts?
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,991
627
126
AMD's strategy was correct; Nvidia copied it with Kepler. In fact, Kepler is the no-excuses, gamers-only card, much like AMD was touting with the 5000 series. And if you put the over-the-top fanboyism aside, AMD and Nvidia are very, very close this generation in terms of performance; AMD draws more power simply because they chose to go more compute-centric this round.
 

toyota

Lifer
Apr 15, 2001
12,957
1
0
AMD's strategy was correct; Nvidia copied it with Kepler. In fact, Kepler is the no-excuses, gamers-only card, much like AMD was touting with the 5000 series. And if you put the over-the-top fanboyism aside, AMD and Nvidia are very, very close this generation in terms of performance; AMD draws more power simply because they chose to go more compute-centric this round.
Nvidia did not copy it with Kepler. GK104 was not going to be their top chip, so it was pretty stripped down. Once they saw that GK104 with higher clocks could roll with the 7970, it became their official top chip. If the 7970 had been a faster card, then we would have no GTX 680 at this time.
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,991
627
126
Nvidia did not copy it with Kepler. GK104 was not going to be their top chip, so it was pretty stripped down. Once they saw that GK104 with higher clocks could roll with the 7970, it became their official top chip. If the 7970 had been a faster card, then we would have no GTX 680 at this time.
This is pure fiction.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
The small die strategy was actually good and has helped AMD.
Their problems are numerous, but the small die is one of their successes, not one of their failures.
 

boxleitnerb

Platinum Member
Nov 1, 2011
2,605
6
81
If the 7970 had been faster, Nvidia would simply have brought out a higher-clocked GK104. GK110 wasn't and isn't ready. What else would have been their top chip if not GK104?
 

Ventanni

Golden Member
Jul 25, 2011
1,432
142
106
I think it's a little naive to believe that Nvidia's GK104 GPU was accidentally discovered to be as fast as it is. Neither AMD nor Nvidia designs these things with just pencils. They run these chip designs through simulations on supercomputers to determine their performance characteristics, so they have a pretty good idea how a chip is going to perform long before it ever hits first silicon.

I don't think it's hard to pinpoint why they stripped Kepler down the way they did. Not only was the Fermi gaming line probably too close in functionality to the Tesla line (duh, same chip), but they also probably realized pretty early on that the majority of Fermi's compute-centric functions went unused by gamers. Plus, Nvidia is actually trying to market Tesla in a profitable manner. Why spend $5000 on a Tesla GPU when an off-the-shelf $400 GPU does the same thing?

So Nvidia played it smart this round. They stripped the top-of-the-line chip of most of its double-precision capability (it's not completely neutered in the compute department, though), which in turn probably saved a good amount of die space. That smaller die size translates into [theoretically] lower production costs and an overall significant reduction in power consumption.
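
To put rough, back-of-the-envelope numbers on the cost side (a sketch only: the wafer cost is a made-up round figure, the die areas are approximate, and the formula is just the standard dies-per-wafer approximation):

```python
import math

WAFER_COST = 5000     # hypothetical cost of one 300 mm wafer, in dollars
WAFER_DIAMETER = 300  # mm

def dies_per_wafer(die_area_mm2):
    # Classic approximation: gross wafer area over die area, minus an
    # edge-loss term for partial dies at the wafer's circumference.
    r = WAFER_DIAMETER / 2
    return int(math.pi * r ** 2 / die_area_mm2
               - math.pi * WAFER_DIAMETER / math.sqrt(2 * die_area_mm2))

for name, area in [("~294 mm^2 die (GK104-sized)", 294),
                   ("~520 mm^2 die (GF110-sized)", 520)]:
    n = dies_per_wafer(area)
    print(f"{name}: {n} dies per wafer, ~${WAFER_COST / n:.0f} raw silicon per die")
```

Even before yield enters the picture, the smaller die nets you nearly twice as many candidate dies per wafer, so roughly half the raw silicon cost per chip.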

In the end, it's a big win for us gamers. It's a big win for Nvidia if they can get a grip on 28nm manufacturing. And it's a big win for Nvidia to be able to market Tesla as added value over Kepler, rather than just relabeling the same chip. I think what we're seeing now from both companies is what we'll see for a while, given that AMD has to sell their APU line with some significant added value over Intel, and that'll come in the form of the added compute capability of their integrated GPU.
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,991
627
126
^^ Well said. And I do think if/when AMD has the resources, they will follow a similar path and make the compute-centric parts the high end, while the consumer stuff will be more stripped down. Right now AMD is stuck having to straddle the line, which for me is great since I like having the compute capabilities, but I can understand that most people have no use for them.
 

toyota

Lifer
Apr 15, 2001
12,957
1
0
Ventanni, yeah, it is a big win for gamers. We get a 30% increase at the same price point from an all-new architecture. :rolleyes:

The reality is that we have what would have been an upper-midrange product and are paying high-end prices for it. And to make it worse, Nvidia even cheaps out on the reference PCB and coolers, especially on the 670. Even the 460's cooler is much more robust than the reference 670 cooler.
 

boxleitnerb

Platinum Member
Nov 1, 2011
2,605
6
81

There is no proof GK100 ever existed. We can only assume, but to me it is plausible that Nvidia learned from Fermi and pushed back the big chip from the very beginning, because it is, at best, ambitious to make a 500+ mm^2 die on a new and immature process.
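
To illustrate why (a rough sketch using the simple Poisson yield model; both defect densities are invented for illustration, not real TSMC figures):

```python
import math

# Simple Poisson yield model: yield = exp(-defect_density * die_area).
processes = {"mature": 0.001, "new/immature": 0.004}  # defects per mm^2, assumed

for area in (300, 550):  # roughly GK104-sized vs a hypothetical 500+ mm^2 chip
    for label, d0 in processes.items():
        print(f"{area} mm^2 on {label} process: {math.exp(-d0 * area):.0%} yield")
```

The big die gets hit disproportionately hard on the immature process, which is exactly why you would lead with the smaller chip.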

According to a well-informed guy I know, the name GK110 is not because the chip comes later or is a refresh of a GK100, but is instead down to capabilities GK110 has that the GK10x chips do not. Look at the presentations at GTC and the whitepaper; these capabilities were presented there.

Also take into consideration that designing and making a chip costs a lot of money. If you can bring it to market, no matter how bad it is, you do it. AMD did it with R600, for example. It is highly unlikely that Nvidia had the resources to waste on simply throwing a hypothetical GK100 in the trash.
 

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
GK104 is an attempt to turn the best possible profit margin during a time when PC sales are down and PC gaming is stagnating; see AMD for a strategy on how to lose money.
 

Lonyo

Lifer
Aug 10, 2002
21,938
6
81
If GK100 had existed, they would probably have used it in Tesla models.
There is pretty much zero chance NV would have a design ready to produce and then cancel it.
Considering GK104 came out only about 3-4 months after AMD released their product, GK100 would have had to be close to ready, and cancelling it would have cost a lot of money, as well as lost sales in the HPC/workstation market through delaying the introduction of a new GPU.

Almost no matter what happens in the desktop arena, NV can always sell a chip for profit in HPC/workstation markets. If they had one, it would be on sale unless it simply couldn't be released at all.

The vast majority of NV's profits come from HPC/workstation cards (profits, not revenue). You don't skip out on that market because your competition is weak in desktops.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106

GK100 wasn't doable. They can barely do GK104.

More likely, GK104 is the direct result of supply constraints at TSMC.

I'm not sure what any "TSMC supply constraints" have to do with there being no GK100. I think it's more of an excuse. Just like how the reason we supposedly don't have any GTX 690s is a shortage of the PLX bridge chips. Strange that Asus, Asrock, and Gigabyte can find them for their motherboards though. :cool:

^^ Well said. And I do think if/when AMD has the resources, they will follow a similar path and do the compute centric parts as high end, the consumer stuff will be more stripped down. Right now AMD is stuck having to be straddle the line, which for me is great I like having the compute capabilities, but I can understand for most people they have no use for it.

The only reason not to offer GK110s to consumers is that they won't have enough of them. Selling the top-end chips to consumers helps offset the development and production costs.

I think in the end AMD can't afford the setbacks that nVidia has suffered through with Fermi, and now Kepler. As is often pointed out, nVidia has cash reserves. Those carry them through when these big chips can't be brought to market in a timely manner. I think it would bankrupt AMD.

I personally don't care if AMD doesn't compete with 500 mm^2 chips. Those aren't the chips I buy, and most people don't buy those products. AMD's main problem is marketing. While they've had a couple of brilliant moves, for the most part they get schooled by their competition. The problem is, they never seem to learn from it.
 

GodisanAtheist

Diamond Member
Nov 16, 2006
8,495
9,925
136
I am a little confused by the route this topic took, but OK. On one hand you have people saying the GK104 is a validation of AMD's small die strategy... but it's really not. It's a validation of my original argument: AMD should have released parts competitive with the top-end NV chips in performance and reaped the benefits of an architecture unburdened by all that GPGPU/compute nonsense. NV's GK104 die is nowhere near as small, proportionally, next to the HD7970 as the HD4800/5800/6900 series were next to their NV counterparts. AMD had SO GOD DAMN MUCH room to play with, but shied away from fighting for the high end.
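
To put rough numbers on the proportions (the die areas below are approximate figures from public reviews, so treat them as ballpark):

```python
# Approximate die areas in mm^2, taken from public reviews; ballpark only.
matchups = [
    ("HD 4870 (RV770) vs GTX 280 (GT200)",   256, 576),
    ("HD 5870 (Cypress) vs GTX 480 (GF100)", 334, 529),
    ("HD 6970 (Cayman) vs GTX 580 (GF110)",  389, 520),
    ("HD 7970 (Tahiti) vs GTX 680 (GK104)",  352, 294),
]
for name, amd_mm2, nv_mm2 in matchups:
    print(f"{name}: AMD die is {amd_mm2 / nv_mm2:.0%} the size of NV's")
```

By those numbers, Tahiti is actually the bigger die this round, a far cry from the RV770 days.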

Besides, the GK104 is the fruition of the GF104, i.e. the GTX 460/560/560 Ti line. Nvidia was forward-thinking right when they went into this GPGPU thing and made their second-rung die a pure gaming card (like the HD 7870 and the now-legendary 7850). The GTX 460 is a fantastic clocker and, for its die size, can compete with cards otherwise well out of its league. This is basically what the GK104 does. It's more a success of NV's own strategy of a halo compute card plus a mainstream gaming card than it has anything to do with AMD's small die strategy.

Anyhow, I have no idea why this thread suddenly turned into an Nvidia thread, but if you guys could at least lay out cogent arguments as to why my original thesis was wrong, I'd appreciate it.
 

nenforcer

Golden Member
Aug 26, 2008
1,782
24
81
Strange that Asus, Asrock, and Gigabyte can find them for their motherboards though. :cool:

Allegedly those bulk purchases were made far in advance, and naturally the motherboard makers have quite a bit higher volume for $50-$300 motherboards than nVidia does for $1000 GPUs.

Whenever PLX's next production/manufacturing cycle comes around, there should be plenty of PCI-E 3.0 bridge chips for everyone.

It is a bit odd that the entire industry is dependent upon a single vendor, but not surprising considering AMD's chipset division (the laziest in the industry) hasn't come out with a new desktop chipset in over 3 years now, just rebranding the AM3 800 series into the AM3+ 900 series.
 

flopper

Senior member
Dec 16, 2005
739
19
76
You need to understand the big picture here.
The market is changing, and HSA and the like are giving programs and users a new way of speeding things up. The small-die strategy was and is the way to go.
Just look at the complexity of the 680 and its yields.
It's a mobile market out there: APUs, etc. Smaller parts that save energy and power. The better chip might not make it, but the one made for the market will.

Once you look at DICE, who were a PC gaming company and are now a console gaming company with BC2, BF3, and MoH2, there is no way of telling who will come out on top or survive. Today's hardware is good enough for games, so there's no incentive to buy cards as upgrades for games. It'll be for other stuff, like Photoshop, etc.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Allegedly those bulk purchases were made far in advance, and naturally the motherboard makers have quite a bit higher volume for $50-$300 motherboards than nVidia does for $1000 GPUs.

Whenever PLX's next production/manufacturing cycle comes around, there should be plenty of PCI-E 3.0 bridge chips for everyone.

It is a bit odd that the entire industry is dependent upon a single vendor, but not surprising considering AMD's chipset division (the laziest in the industry) hasn't come out with a new desktop chipset in over 3 years now, just rebranding the AM3 800 series into the AM3+ 900 series.

The boards with these chips cost far more than $50; the cheapest is $280, and they run to over $400. If the board makers needed even more of these chips, they should be suffering an even greater shortage, yet they apparently aren't, as all of the boards are available. Again, this chip shortage is the typical story where it's someone else's fault that nVidia doesn't have product.

What does AMD not updating their mobo chipset have to do with this? :\
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
Imho,

A small die/sweet spot strategy was certainly nice; a small die/premium pricing strategy is not.