AMD Q3 results: even worse than revised expectations

I just don't get why it's impossible to consider a few possibilities as not mutually exclusive. For example, didn't early nVidia drivers accidentally expose the 680 as a 670, which was only renamed once AMD's 79xx series performance levels were revealed? I just don't think the GK110 (or its successors) being moved to 2013 for potential use in the consumer line (and perhaps later still) is proof of anything beyond the fact that nVidia learned their lesson from Fermi.

It's very clear that GK104 is an evolution of the same design mentality that produced the 460 and 560. It's equally clear that GK110 is an evolution of the design mentality that produced the 470/480 and 570/580. I don't think nVidia had any intention of releasing the GK110 product to consumers any time this year once they started assessing actual fabrication of said products; I believe they said as much repeatedly. That said, I don't think that was their original intention back in the day, and what I think most people would blame AMD for in this regard is not that the nVidia high end wound up as "just" the GK104 product, but that AMD set the bar so low with the Tahiti line that nVidia was able to swing the GK104 as "high end."

I don't think it was designed to be high end. I think it was designed as a successor to the 460/560 line, to be released as this year's mid-range, with the hope that a dual-GPU card would fill the high end. Instead, AMD made them happy (and they said as much at the time of the R7970 launch) with performance levels so unimpressive that nVidia could justify bumping up the price and branding on a card they were about to release as a 670 or lower.

That'd explain why nVidia is making more money than AMD currently, btw. nVidia is regularly getting $500 for a GPU that costs considerably less, since it was initially designed for cards selling at $200-$300. The sheer amount of RAM in the default config should help you see that. Meanwhile, AMD is busy producing a larger GPU and charging less for it, probably hitting a point far closer to their cost to manufacture than they'd like.

My point is basically that I think AMD set the bar very low and nVidia gleefully capitalized on that by marking their product up to higher levels than they'd dreamed possible. From what I remember, last year a lot of the rumor sites were saying GK110 was looking late, possibly slipping into 2013, and unlikely to have any effect at all on the consumer market. More to the point, I remember having the distinct impression that nVidia was going to try and weather the storm with mid-range products and dual-GPU as the high end. That's exactly what they've done, except instead of "weathering the storm" they've pwned the storm.

This is all a direct result of AMD's failure to produce a card that pushed the high end. nVidia had what they had, and their card was designed to satisfy the mid-range, but suddenly they looked like prophets given its performance level and the way it matched AMD's performance at the high end. Neither company produced cards that really pushed the boundaries in the way we're used to, so each company focused on its strengths (ironically having swapped strengths since last generation: suddenly AMD cares only about compute and nVidia cares only about efficiency). But GK110 was too big and complicated to produce reliably on a relatively new 28nm process.

It was meant to be the high end, but nVidia learned from their past mistakes (Fermi in particular), and instead of delaying and waiting as they did with Fermi, they planned ahead. They knew they could use the mid-range to keep the boat afloat while waiting for the process technology to catch up to GK110's complexity. In the meantime, they could use on-card SLI to keep up with AMD whenever AMD decided to bring it.

Except AMD didn't bring it. They were a wet noodle. Suddenly, nVidia was so smug they couldn't resist saying so in response to AMD performance benchmarks, admitting they thought AMD's performance levels would be higher. That was early this year.

So now, why WOULD nVidia rush to bring out a complex GK110 part to face down a minor performance advantage in a few benchmarks by the 7970GE, this close to the next generation of cards? Why, when they can still charge $400 for the 670 and $500 for the 680 while Radeon pricing is in freefall? Clearly, that would be the most stupid thing in the world to do, especially when GK110 is so complex they'd take a beating trying to keep its pricing reasonable.

Instead of pushing out cards with bad fab yields and/or leakage, having to hack off parts of the chip (Fermi) to get them out the door, they can just shrug and say, "It'd cost us an arm and a leg. After Fermi, we need to milk the market for a while." So they'll just use their smaller, cheaper, more focused part until fabrication catches up to the complexity of the GK110. They realized this was what they'd have to do before they saw the Tahiti performance benchmarks. What those benchmarks let them realize, though, is that they could still have high-end pricing. Before that, they seem to have intended the GK104 series to sit at mid-range pricing, with what became the 690 being the high end if they needed it to match whatever AMD produced.

And if you think nVidia would release a GK110--a whole new chip--just to win a couple of benchmarks (the ones they're not already matching or winning) against the poorly named and poorly launched Radeon 7970GE, then I think you really need to reassess. AMD hasn't given them a good reason to push the GK110 harder into fabrication. Everything this year has just proven to them that they can sit on the GK104 and its successors for the near term, because AMD doesn't have anything anywhere close to the GK110 level, so they don't need the Fermi-like tradeoff of performance for more heat.

I suspect nVidia is realizing, wholly by accident, what AMD professed several years ago: it is better to have a smaller GPU, less generically awesome but razor-focused on the core usage model of discrete GPUs, than it is to have a Fermi- or Tahiti-like chip. nVidia happened upon it by accident when GK110 turned out to be a whale of a chip to fab. That's why they pushed ahead hard with GK104, and that's why, when Tahiti showed up and they were like, "That's it?", you could practically read the palpable joy in the way they said it.

It's like gearing up for the fight of your life, knowing you twisted your ankle on the way, only to arrive and find your opponent is a one-armed man. Suddenly, the fight's in your favor. That's why all the rumors from last year saying, "GK110 isn't coming next year, but nVidia's mid-range is" suddenly turned into, "nVidia's mid-range is now being renamed 680 because they can match performance."

Hell, I remember reading forums full of people saying, "There's no way nVidia's producing a mid-range card that can match the 7970!" The rumors were that pervasive. And whaddya know? That's exactly what nVidia did, except they figured that if they had a card matching the high-end bar set by AMD, then they should get the pricing of it, too.

So, I think AMD probably would have been better served depleting the excess inventory they had in the channel (and there were a lot of 68xx and 69xx cards for sale for a long time after the 78xx and 79xx series arrived) with solid pricing, then showing up in June with truly refined drivers (12.7-like), higher clocks, and pricing at or slightly above what they have now.

RS likes to talk about "First Mover" being a great strategy, but I suspect somewhat lower prices, combined with awesome drivers providing GeForce-beating performance and reviews all saying categorically, "Radeons are faster, with great power efficiency and more RAM, for lower prices!", would have made the whole line do much better against nVidia than what's happened.

AMD released their cards before they had proper drivers for them. That earned them a reputation for lower performance than they have now and worse power-efficiency numbers than they really have, plus a reputation for Crossfire problems that continues to this day, and it let nVidia collect a few thousand reviews all saying, "Kepler is incredibly more efficient than Tahiti yet performs the same and costs an equivalent amount! You'd be stupid to buy the hotter card." That legacy continues to haunt the 7970 series. This is part of the reason for the 7970GE launch, but even that was tainted by AMD's baffling decision to pre-launch the card months in advance and to use the worst possible cards, with the highest power usage they could find, for that pre-launch. By the time the real cards showed up, those reviews were not indicative of the final product, but good luck finding any reviewer going back to edit their reviews to reflect that fact.

Newer reviews might say otherwise, but few care about those reviews. Most people read the initial review, where the whole card, its technology, its legacy and history, and all its promise are described in minute detail. Remember, the "first reviewer advantage" always belongs to the first few reviews of a given product and what they say about it. That defines the product, and only incredible shifts can change that early perception once it has set in.

What's the first thing people remember about the Radeon 7970? That it launched at a high price for a low performance gain. The first thing about the 680? Great efficiency, much better than the R7970, and it pushed Radeon pricing into freefall. The 670? 95% of the 680 for 20% lower price. The R7950? Overpriced until very recently. Etc., etc.
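To put rough numbers on that 670 impression, here's a back-of-the-envelope sketch using the $400/$500 price points mentioned earlier in this post and the "95% of a 680" figure; none of this is a new measurement, just the figures already cited in this thread:

```python
# Rough perf-per-dollar comparison for the GTX 670 vs GTX 680,
# using the $400/$500 prices and the "95% of a 680" performance
# figure quoted in this thread (illustrative numbers only).
gtx680_price, gtx680_perf = 500.0, 1.00   # 680 as the baseline
gtx670_price, gtx670_perf = 400.0, 0.95   # ~95% of 680 performance

ppd_680 = gtx680_perf / gtx680_price
ppd_670 = gtx670_perf / gtx670_price

advantage = ppd_670 / ppd_680 - 1
print(f"670 perf/$ advantage over 680: {advantage:.1%}")  # ~18.8%
```

That's nearly a fifth more performance per dollar, which is exactly the kind of first impression that sticks.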

Everything about the Radeon line is tainted by those early reviews using drivers that put the cards in a bad light. If you google these cards, more often than not you get reviews from early this year. Not everyone knows the little things, like the 12.7 Catalyst drivers being truly awesome and helping the cards push past the Kepler products in many ways, or the 7970GE cards from the best OEMs actually being awesome in efficiency.

They don't know these things because AMD did such a crappy job of a first impression, and nVidia used that crappy impression to make their own first impression look sterling by comparison. AMD seems to have learned their lesson, if their delaying of the 8xxx series is any indication.

But just because nVidia doesn't deploy the GK110 or its successors as the high end of the consumer division next go-round, that does not prove to me that they NEVER intended it to take on that role. They don't NEED it to take the role. Having the high-end crown matters not at all if you can charge more for your high-end card and sell out of it even while your opponent wins a few more benchmarks yet has to sell their card for $100 less to lure anyone into buying it. I think they intended to push GK110 harder, but once it became clear it was unnecessary, they de-prioritized it. It just wasn't required. Nothing in the consumer market really requires it, and nothing AMD is making comes close to making that competition necessary.

So now, nVidia has no motivation to do anything differently. Fermi ate up a lot of profits, and why would they want that repeated this go-round? They can take their time. This generation has been exceptionally kind to them in every way: they could release their cards as slowly as they liked, launch mid-range cards at high-end pricing and still be called the winner, and keep the channel well stocked yet sell out dramatically across the board while charging more for less performance than their competition.

And all thanks to some really drab reviews and horrible word of mouth (all those former Radeon users who sold their Radeons to get 680s and 670s to escape the pre-12.7 crap). Much of this was fueled, at least in part, by poor launch planning by AMD and a legacy of bad drivers still pungent from late last year (especially at Rage's launch), in the build-up to the Radeon 7970 launch, when the driver teams had to focus on the very new and different GCN architecture to the detriment of the older Radeon products. It didn't help that Crossfire was completely broken in the first few drivers for the 7970, either.
 

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76

Welcome to the forums. I welcome the inevitably epic dialogue you and RussianSensation will have in VC&G. :thumbsup:
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
I don't think anyone doubts that nVidia had a larger (GK100), more powerful GPU planned for their top model. The BS is the idea that Tahiti or AMD is somehow in any way responsible for GK100 not making it to market.

nVidia, all by themselves, realized they couldn't have the GK100 manufactured. Whether it's because the process couldn't handle it, or because it would have been too power hungry, or some other internal reason, no one is saying.

GK100 was scrapped, or renamed to GK110. It's still not commercially viable, even in the pro market (by which I mean workstations), where they could get several thousand dollars per card. If AMD could somehow have managed something nVidia couldn't (a 500mm^2+ chip under 300W in commercially viable quantities) and released it, nVidia still would not have been able to release their super chip any sooner than they are.

In the end, both companies built the most powerful chips they could on the 28nm process. Remember, GK104 was later than Tahiti and in short supply for quite a while after release. Why would anyone think nVidia would have released GK100 to compete with Tahiti had they needed to, and that they only scrapped/delayed it because they didn't need it?
 

Jaydip

Diamond Member
Mar 29, 2010
3,691
21
81
I believe after Fermi, NV focused mainly on power usage. They got rid of die space that has almost nothing to do with 99.9% of gamers, and so far they are quite successful business-wise. I don't know why they would go back to an earlier concept they have already scrapped this gen. They will probably make a new design like GK114 for the consumer parts. On topic: it seems the engineering department will be most affected in the upcoming job cuts, a shame really. AMD still doesn't understand where it needs to trim the fat.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
I believe after Fermi, NV focused mainly on power usage. They got rid of die space that has almost nothing to do with 99.9% of gamers, and so far they are quite successful business-wise. I don't know why they would go back to an earlier concept they have already scrapped this gen. They will probably make a new design like GK114 for the consumer parts. On topic: it seems the engineering department will be most affected in the upcoming job cuts, a shame really. AMD still doesn't understand where it needs to trim the fat.

They could be cleaning house in the CPU part of the business. While that's not a good thing, it might not adversely affect the GPU side.
 

Jaydip

Diamond Member
Mar 29, 2010
3,691
21
81
From what I have read, they are not very happy with the ATI guys, it seems. So those guys always get the short end of the stick.
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
As usual, always defending NV and missing the context.

Since people are pinning AMD's financial problems on the HD 7000 series, I thought you'd want to read the data again on the discrete GPUs AMD moved last quarter - you know, cards that sold, not ones that sat on store shelves.

Q2 2012 Discrete GPU market share
AMD = 37.8%
NV = 61.9%

Q3 2012 Discrete GPU market share
AMD = 40.3%
NV = 59.3%

Since hardly anyone in this sub-forum cares about mobile GPUs, I'm not sure what your point is. Sorry, AMD gained market share at NV's expense; no need to make excuses for why NV lost it. Check this chart.

My point? My point is reality, something you have been ignoring for the last few weeks...

You're showing only the desktop market, which is hilarious. At most 50% of all PCs sold in a quarter are desktops, and at least 50% of nVidia's GeForce revenue comes from the mobile market. So it's easy to see in which market nVidia sold nearly all of its 28nm supply.

And in Q2 they had only two 28nm products for the desktop: GTX680 and GTX670.

In the end, nVidia only cares about the overall market. And there they won market share Q-Q and Y-Y:
Discrete:
AMD: 14,000 units, 42.9% share, -2.1% from Q1, -2.5% from Q2 2011
nVidia: 18,600 units, 57.1% share, +2.1% from Q1, +2.5% from Q2 2011
http://investorvillage.com/smbd.asp?mb=476&mn=244502&pt=msg&mid=12013727
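For what it's worth, the share percentages do follow from those unit counts. A quick sanity check (it assumes a two-vendor total, and the absolute unit figures are presumably in thousands):

```python
# Sanity check: do the quoted shares follow from the quoted units?
# Assumes a two-vendor total; unit counts presumably in thousands.
amd_units, nv_units = 14_000, 18_600
total = amd_units + nv_units

print(f"AMD share: {amd_units / total:.1%}")  # 42.9%
print(f"NV share:  {nv_units / total:.1%}")   # 57.1%
```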
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
What most people are saying is that if AMD is running into power-consumption problems with Sea Islands, why would nV not have the same issues on 28nm?

The difference is that nVidia has proven time and time again that, when producing a big GPU, they don't care nearly as much about the heat issue as AMD does.

The discussion on GTX780 specs on our forum in earlier months hypothesized whether or not NV would launch a fully unlocked GK110-based consumer GeForce card.

Tesla isn't GeForce.

How did it work out for NV and its 250W average, 270W peak GTX480 card?

270 watt peak?

http://www.anandtech.com/show/2977/...x-470-6-months-late-was-it-worth-the-wait-/19

372 watt peak. Another 100 watts is a rather large amount of room to play.

Why is it that BenSkywalker says NV should blow past a 250W average just because that's what he wants, and that NV is suddenly going to throw the performance/watt advantage Kepler has out the window?

I never said what I wanted; I said it was ignorant to think nVidia wouldn't do it. I also don't see how you think performance/watt changes just because wattage increases. As long as the performance increase is in proportion, it's the same performance/watt.
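The arithmetic behind that is one line (a generic sketch with P for performance and W for watts, not a figure from either company):

```latex
% Scaling performance and power by the same factor k leaves
% performance-per-watt unchanged:
\frac{kP}{kW} = \frac{P}{W}, \qquad k > 0
```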

The other problem with your projection is it looks at the most extreme case.

I have stated the 8970 could be faster than the 780; if GK110 isn't ready for the consumer market, and if the 8970 comes out stronger than expected, then that is in fact the reasonable conclusion. That said, we know that nV is building chips, right now, that have *significantly* more power than anything we have seen this generation.

This entire generation was built on a full node drop; we have never seen that before. Given as much, the performance increases we have seen versus the prior generation are *terrible* from *both companies*. I have repeatedly stated that both AMD *and* nVidia screwed up this generation; you just defend your beloved AMD with righteous fury :)

In your case, the 50-100%-faster-than-the-8970 claims are so far out at the extreme range that your scenario allows for no compromises.

GK110 hitting reasonable yields isn't exactly extreme. Keep in mind, nVidia isn't using hot clocks now, and they have dropped a full node; power usage should be *much* lower than Fermi, all else equal. The GTX 480 ran its shaders at 1.4GHz; with roughly 30% lower clocks on top of a full node drop, it's fairly reasonable to expect lower power.
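The intuition is the usual first-order CMOS dynamic-power relation (a textbook approximation, nothing from nVidia's spec sheets):

```latex
% Dynamic power: activity factor a, switched capacitance C,
% supply voltage V, clock frequency f.
P_{\text{dyn}} \approx a\,C\,V^{2}\,f
```

Cut f by roughly 30% and that term drops roughly 30% on its own; a full node shrink lowers C and V on top of it, which is why lower-than-Fermi power is the reasonable expectation, all else equal.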

With rumors that GK114 will be 15-30% faster than the GTX680, your projection loses more and more credibility with every day of new information, and the points I brought up earlier about die-size limitations, from both profitability and power-consumption perspectives, are being used in those same rumors to explain why NV won't launch GK110 as a consumer GPU.

Charlie is parroting your thoughts, it wouldn't surprise me in the least if he stole them from you in the first place. Charlie may end up being right, and I may end up being struck by lightning today.
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
This is part of the reason for the 7970GE launch, but even that was tainted by AMD's baffling decision to pre-launch the card months in advance and to use the worst possible cards, with the highest power usage they could find, for that pre-launch.

The baffling decision may have been this: AMD may have felt that nVidia's bigger die was needed to compete with their HD 7970, and that the GK-104 would compete more with the HD 7870.

Did anyone really believe that the smaller die, with its smaller bus, from nVidia would carry such a performance punch? It surprised virtually everyone, possibly AMD as well.

So, for a predator and aggressor trying to maximize return, thinking the enthusiast market was AMD's until the big nVidia die arrived may have been the baffling decision.

The GK-104 simply surprised AMD, and they have been very reactive to this competition ever since, imho. When nVidia saw the performance of the HD 7970 and its price point, they were probably dancing in the halls at Santa Clara.

BTW, nice post!
 

Olikan

Platinum Member
Sep 23, 2011
2,023
275
126
The GK-104 simply surprised AMD, and they have been very reactive to this competition ever since, imho. When nVidia saw the performance of the HD 7970 and its price point, they were probably dancing in the halls at Santa Clara.

BTW, nice post!

If they could humiliate AMD, they certainly would...
If they could but don't want to, then they are shooting themselves in the foot...

I mean, they are choosing to "tie" the war, to "win" some rounds... really? Who does that?
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
Considering one's bigger die is slow to market and may never enter the GeForce fray, a tie was surprising. However, with process and driver maturity, AMD is fighting hard and offers the fastest single GPU and the leading 28nm price/performance top-to-bottom, with compelling strengths like more default RAM, over-clocking flexibility and scaling, etc. I think it is great to see this from AMD. Hopefully, it takes sales away from nVidia so more gamers may enjoy improved price/performance from not only AMD but nVidia as well.
 

MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
If they could humiliate AMD, they certainly would...
If they could but don't want to, then they are shooting themselves in the foot...

I mean, they are choosing to "tie" the war, to "win" some rounds... really? Who does that?
Pretty much. It's interesting to see NVIDIA's marketing machine play off their failures as AMD's fault, and even more so that some people actually buy into it (literally I suppose).

The simple fact is that my 7970 has been the best card I've ever owned. Since January of last year I've had insane performance in games, and mining on my 7970 alone has made me hundreds of dollars. Simply put, anyone who waited more than a day after January 9th, 2012 to buy a 7970 lost out. Maybe we're seeing a lot of that bitterness and resentment surface now.
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
Simply put, anyone who waited more than a day after January 9th, 2012 to buy a 7970 lost out.

Someone that waited a bit received cheaper HD 7970s and more performance with mature drivers. How did those buyers lose out?
 

badb0y

Diamond Member
Feb 22, 2010
4,015
30
91
I did, still have trouble following you, sorry!
I somewhat agree with MrK6. I was one of the early adopters of the 7970 and put mine on water right away, pushing it up to 1200-1250. People who were waiting on the GTX 680 expected a beat-down like no other, considering the 7970 was like 30% faster than a GTX 580, but once the GTX 680 actually released, it was pretty much the same thing with a different sticker on it.

As someone who runs at 2560x1600 resolution, every drop of performance counts, and the 7970 let me run higher settings in a lot of the games I played, so it was well worth the cost to get it 3 months before the GTX 680 launched; it enhanced my gaming experience for those 3 months.

I also bought mine for $500 on launch week, so I really couldn't let that deal get away lol.
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
54
91
Pretty much. It's interesting to see NVIDIA's marketing machine play off their failures as AMD's fault, and even more so that some people actually buy into it (literally I suppose).

The simple fact is that my 7970 has been the best card I've ever owned. Since January of last year I've had insane performance in games, and mining on my 7970 alone has made me hundreds of dollars. Simply put, anyone who waited more than a day after January 9th, 2012 to buy a 7970 lost out. Maybe we're seeing a lot of that bitterness and resentment surface now.

Interesting, is it? We all know that AMD didn't do anything to be in the position it is in today. Nobody is casting aspersions their way that I can tell, especially not Nvidia fans.

There is another simple fact you might find interesting: people who bought Nvidia 680s are probably just as happy, if not happier, with their purchases. If you do not like this simple fact, you may want to ease up on throwing yours around needlessly. The 7970 is a fine GPU. But it's not the only one.
 

MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
Someone that waited a bit received cheaper HD 7970s and more performance with mature drivers. How did those buyers lose out?
The ~5% performance boost through drivers is unnoticeable in gameplay. By the time 7970s got cheaper, I had mined more than enough bitcoins to make up the difference.
I somewhat agree with MrK6. I was one of the early adopters of the 7970 and put mine on water right away, pushing it up to 1200-1250. People who were waiting on the GTX 680 expected a beat-down like no other, considering the 7970 was like 30% faster than a GTX 580, but once the GTX 680 actually released, it was pretty much the same thing with a different sticker on it.

As someone who runs at 2560x1600 resolution, every drop of performance counts, and the 7970 let me run higher settings in a lot of the games I played, so it was well worth the cost to get it 3 months before the GTX 680 launched; it enhanced my gaming experience for those 3 months.

I also bought mine for $500 on launch week, so I really couldn't let that deal get away lol.
Pretty much the same here. Moving from an overclocked 6950 to an overclocked 7970 nearly doubled my performance in most games: https://docs.google.com/spreadsheet...FlmUGVQMUZReHI0bFg4czR1Z3AwdXc&hl=en_US#gid=1 . Doubling my gaming performance, with a 3 1/2 month head start, coupled with the fact that this card has paid for itself and netted me a couple hundred dollars in profit, is nothing short of awesome.
Interesting, is it? We all know that AMD didn't do anything to be in the position it is in today. Nobody is casting aspersions their way that I can tell, especially not Nvidia fans.
How do AMD's financials have anything to do with how awesome the 7970 is? If you'd care to explain, I'm all ears; otherwise I assume you're just purposefully deflecting.
There is another simple fact you might find interesting: people who bought Nvidia 680s are probably just as happy, if not happier, with their purchases. If you do not like this simple fact, you may want to ease up on throwing yours around needlessly. The 7970 is a fine GPU. But it's not the only one.
I'm sorry it upsets you that the 7970 is so awesome. If you'd like to point out a way the GTX 680 makes so much money it pays for itself and then some, I'm all ears for that as well.
 

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
Around here, it's not unusual for threads about specific video cards to degenerate into a battle of the balance sheets and financial statements, but it's more unusual to see threads about balance sheets and financial statements degenerate into a battle of specific video cards.