NVIDIA preparing four Maxwell GM204 SKUs (VideocardZ via S/A)


Saylick

Diamond Member
Sep 10, 2012
4,106
9,598
136
What's the conspiracy here? Unless someone is insinuating that AMD and NVIDIA are colluding and price fixing, to me it just looks like it's getting harder and harder to increase performance year over year.

There is no conspiracy or price fixing. The issue is that next generation graphics cards aren't occupying the "high-end" price point of the previous generation but are instead simply added to the existing line-up.

The GTX 285 was $500 when it launched and was replaced by the GTX 480, which was $499 at launch, so that's fair. The GTX 580 was only 20% faster than the 480, but nVidia launched it at $500, and that meant the 480 dropped in price to $420. The GTX 680 was 30% faster than the GTX 580 and was also sold for $500, which is fair. Then, out of the blue, TITAN launched at $1000 and sold surprisingly well, much to nVidia's liking, so they launched the GTX 780 at $650 instead of $500, having realized people were willing to pay out the arse for higher performance. Notice the break in the trend here? If history had repeated itself, the GTX 780 would have been released at $500-$550 and would have completely superseded the 680, which would have been relegated to GTX 770 duty at $350-$400.

Now, if you look at AMD's side, you have the 4870 launching at $300, which was a steal to begin with, followed by the 5870 at $379. These prices were sub-$500, and they should have been, since nVidia had something faster in the $500 range at the time. The 6970 launched at $369, which is more or less in line with what the 580 did relative to the 480: a small increase in performance without upping the price. Of course, we then had the 7970, which launched at $550, a price AMD justified in relation to the performance of the existing 580. The difference here is that AMD had the performance crown, and they knew they would hold it for a while, so they did what nVidia would have done and raised the price. As soon as the 680 launched, the 7970 fell in price. The 290X launched at $550, which is again in line with "traditional high-end" pricing, while the 290 offered tremendous perf/$ at $400.

Given the above, who is to blame for the price increases?
 

BD2003

Lifer
Oct 9, 1999
16,815
1
81

There is no conspiracy or price fixing. The issue is that next generation graphics cards aren't occupying the "high-end" price point of the previous generation but are instead simply added to the existing line-up.

The GTX 285 was $500 when it launched and was replaced by the GTX 480, which was $499 at launch, so that's fair. The GTX 580 was only 20% faster than the 480, but nVidia launched it at $500, and that meant the 480 dropped in price to $420. The GTX 680 was 30% faster than the GTX 580 and was also sold for $500, which is fair. Then, out of the blue, TITAN launched at $1000 and sold surprisingly well, much to nVidia's liking, so they launched the GTX 780 at $650 instead of $500, having realized people were willing to pay out the arse for higher performance. Notice the break in the trend here? If history had repeated itself, the GTX 780 would have been released at $500-$550 and would have completely superseded the 680, which would have been relegated to GTX 770 duty at $350-$400.

Now, if you look at AMD's side, you have the 4870 launching at $300, which was a steal to begin with, followed by the 5870 at $379. These prices were sub-$500, and they should have been, since nVidia had something faster in the $500 range at the time. The 6970 launched at $369, which is more or less in line with what the 580 did relative to the 480: a small increase in performance without upping the price. Of course, we then had the 7970, which launched at $550, a price AMD justified in relation to the performance of the existing 580. The difference here is that AMD had the performance crown, and they knew they would hold it for a while, so they did what nVidia would have done and raised the price. As soon as the 680 launched, the 7970 fell in price. The 290X launched at $550, which is again in line with "traditional high-end" pricing, while the 290 offered tremendous perf/$ at $400.

Given the above, who is to blame for the price increases?

The people willing to buy the super high end cards?
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
Let's see: the 580 came out in Nov. 2010 for $500. Titan came out in Feb. 2013 for $1000. Inflation isn't the reason. Do you have any idea what the inflation rate is in the US, for example? 2%? 3%?
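
(For reference, a quick compound-inflation check on those numbers; the 2-3% annual rates are just the ballpark mentioned above:)

```python
# Back-of-the-envelope: what would the GTX 580's $500 (Nov 2010)
# look like in Feb 2013 dollars? ~2.25 years elapsed.
launch_price = 500.0
years = 2.25
for rate in (0.02, 0.03):  # rough bounds for US inflation at the time
    adjusted = launch_price * (1 + rate) ** years
    print(f"at {rate:.0%}/yr: ${adjusted:.0f}")
# -> ~$523 at 2%, ~$534 at 3%; nowhere near Titan's $1000.
```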

Try listening to what's being said instead of trying to insult someone because you don't agree.

Want to guess how much the 7800 GTX 512MB, the 6800 Ultra Extreme, and the 8800 Ultra cost back in the day?

But I think this is getting way off topic. One thing I'm curious to know is: will this part be DX12 capable? Or will that be one of the reasons to buy the 2nd-generation Maxwell on 20/16nm tech?
 

BD2003

Lifer
Oct 9, 1999
16,815
1
81
Want to guess how much the 7800 GTX 512MB, the 6800 Ultra Extreme, and the 8800 Ultra cost back in the day?

But I think this is getting way off topic. One thing I'm curious to know is: will this part be DX12 capable? Or will that be one of the reasons to buy the 2nd-generation Maxwell on 20/16nm tech?

Aren't all the cards going back to Fermi DX12 capable?
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Want to guess how much the 7800 GTX 512MB, the 6800 Ultra Extreme, and the 8800 Ultra cost back in the day?

But I think this is getting way off topic. One thing I'm curious to know is: will this part be DX12 capable? Or will that be one of the reasons to buy the 2nd-generation Maxwell on 20/16nm tech?

DX12, at feature level 11_0.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
What's the conspiracy here? Unless someone is insinuating that AMD and NVIDIA are colluding and price fixing, to me it just looks like it's getting harder and harder to increase performance year over year.

And people forget the dGPU market is shrinking while R&D costs and IC design costs go up. Something's gotta pay for that. It's just a matter of time before there is only one dGPU maker, unless something radical happens.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
Better, yes - bigger? Not necessarily. 6970 to 7970...

Pretty sure the 4870 was 55nm, not 65nm, and there was no HD 5890. If you meant the 4890, that was 55nm as well and the same chip as the 4870. I don't necessarily disagree with your overall point, just that the market has/had different expectations for what the top chip on a given node would be, size-wise, for the two vendors. Hawaii was a surprise; GK110 was not.

As you say, I will indeed be sitting on the sidelines till the big dies come out (nvidia - a certainty; AMD - maybe).

The 4870 and 4890 were both 55nm. And it doesn't matter that there wasn't a 5890; it's just a name. You're only making up excuses to try to justify your rationale.

The HD 4870 was paraded as AMD's flagship chip but was replaced in the same generation by a bigger, faster chip. Cypress was paraded by AMD as its flagship chip but was replaced in the same generation by a bigger, faster chip. Tahiti was paraded as AMD's flagship but was replaced in the same generation by a bigger, faster chip.

Nvidia does it one time and the amount of butthurt running through the internets is L.O.L. Now Nvidia is supposedly prepping to do it again, but no one here knows how fast GM204 will end up, no one here knows when we might see a bigger, faster Maxwell chip, and no one here knows whether Nvidia will even release a bigger, faster Maxwell chip as a GeForce product.

Basically, everyone here pretends AMD has never used this strategy, when in fact they've been doing it every generation since 2008, AND everyone here acts like they know exactly what Nvidia's performance, release strategy, R&D, and future products will be like.

AWESOME.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
When you have a duopoly, they don't even need to collude. I'd love to see a 3rd player.

A 3rd player, unless you count Intel, is not gonna happen. The dGPU segment will shrink to one player in the relatively near future, before ending up with none in the long run.

Neither of the two dGPU makers seems able to afford anything below 28nm, and we will sit on 28nm for a couple more years. It will be interesting to see what AMD serves up to counter Maxwell, though. The best uarch will win the chair dance, it seems.
 

n0x1ous

Platinum Member
Sep 9, 2010
2,574
252
126
The HD 4870 was paraded as AMD's flagship chip but was replaced in the same generation by a bigger, faster chip.

False - the 4890 was the exact same chip, just clocked faster. It was not bigger.

Cypress to Cayman was an evolution from VLIW5 to VLIW4 (not the exact same generation), whereas Nvidia's bigger chips are the exact same architecture. And again, they hadn't done a HUGE 400mm²+ chip since R600.

Again - I don't even necessarily disagree with your point overall, but your "facts" are a bit off
 
Last edited:

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
It is only slightly more than GK104 too, with its 195W TDP :)

My core calculation:
GTX 780 Ti:
2880 cores

GTX 880 Ti:
2560 cores (exactly 20 SMM) × 1.35 (assuming 35% better per-core efficiency) = 3456 equivalent cores.
Meaning a GTX 880 Ti with 2560 Maxwell cores should perform like 3456 Kepler cores.

If GTX 780 Ti and GTX 880 Ti are clocked the same:
3456/2880 = 1.2 = 20% better performance

If GTX 880 Ti is clocked at 1080MHz like GTX 750 Ti (Maxwell):
3456 cores × 1080 MHz = 3,732,480
divided by
2880 cores × 1000 MHz = 2,880,000

≈ 1.3 = 30% better performance


I didn't use TDP because frankly I don't know what the TDP will be.
You say it is 70% more efficient per watt compared to Kepler, which is true looking at the TechPowerUp results.
But that makes me wonder if the TDP will be lower than 225W because of that. I wonder if Nvidia will aim for +50% at 225W or +30% at 195W. Something tells me it's the latter, because this chip is meant to replace the 195W GK104 and is designed to go into notebooks as well. They can't go too high or it will be difficult to scale down to a 100W mobile GPU.

Guess we will have to wait to find out :)
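
(Condensing the quoted arithmetic into a quick Python sanity check; the 1.35× per-core efficiency multiplier is the poster's assumption extrapolated from GM107, not an official figure:)

```python
# Core-count estimate from the post above: scale rumored GM204 cores by an
# assumed Maxwell-vs-Kepler per-core efficiency gain, then compare clocks.
kepler_cores, kepler_mhz = 2880, 1000   # GTX 780 Ti
maxwell_cores = 2560                    # rumored full GM204: 20 SMM x 128
efficiency = 1.35                       # assumed per-core gain (poster's guess)

effective = maxwell_cores * efficiency  # 3456 "Kepler-equivalent" cores
print(effective / kepler_cores)         # 1.2 -> +20% at equal clocks

maxwell_mhz = 1080                      # GTX 750 Ti-class clock
ratio = (effective * maxwell_mhz) / (kepler_cores * kepler_mhz)
print(round(ratio, 3))                  # 1.296 -> ~+30%
```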

Prior to this GM204-only theory, I was under my own assumption that GM106 would be the new high end mobile part. But if this new rumor turns out true, you have an interesting point on the mobile front. I will say, though, that Nvidia managed to get GF100 into a laptop sooooo anything is possible. (I still think there is a likely possibility of a GM206 on 28nm. I remember people thinking there wasn't going to be a GK106 because it lagged a little bit behind the rest of Kepler and that Nvidia was going to use GK104 to fill all the gaps between GTX680 and GTX650).

But I think the average power consumption of GM204 will be higher than GK104's GTX 770. TPU shows the GTX 770 averaging 165W under load; I think the new chip will be about 185W under a typical load. If perf/watt scaling remains consistent with GM107, that puts it at slightly over 100% faster than the GTX 770.

But that all depends on how balanced the chip is with respect to ROP performance, bandwidth, L2 cache, etc.
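
(A minimal sketch of that scaling logic, treating performance as perf/watt gain times power-draw ratio; with these exact inputs it lands nearer +90% than +100%, so the result is sensitive to the assumed perf/watt multiplier:)

```python
# Rough perf extrapolation from the paragraph above. The 1.7x figure is
# GM107's measured perf/watt advantage over Kepler (per TechPowerUp); the
# 185W draw is the poster's guess for GM204. Both are estimates, not specs.
perf_per_watt_gain = 1.7
gm204_watts, gtx770_watts = 185, 165
perf_vs_gtx770 = perf_per_watt_gain * (gm204_watts / gtx770_watts)
print(f"~{perf_vs_gtx770 - 1:.0%} faster than GTX 770")  # ~91%
```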
 
Last edited:

Techhog

Platinum Member
Sep 11, 2013
2,834
2
26
EDIT: nvm, I shouldn't have replied. I put that guy on my ignore list for a reason.
 
Last edited:

thilanliyan

Lifer
Jun 21, 2005
12,066
2,279
126
Given the above, who is to blame for the price increases?

AMD, obviously. When nVidia raises prices, it is AMD's fault because they are incompetent. And when AMD raises prices, it is AMD's fault because they are a greedy capitalist company. :p

Seriously though, as long as $1000 video cards sell in decent quantities, they will exist, and people will think they are getting a bargain for something a bit slower that costs, say, $650.

Having said that, it is getting more expensive to produce these cards from what I understand, so I expect prices to slowly go up.
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
Prior to this GM204-only theory, I was under my own assumption that GM106 would be the new high end mobile part. But if this new rumor turns out true, you have an interesting point on the mobile front. I will say, though, that Nvidia managed to get GF100 into a laptop sooooo anything is possible. (I still think there is a likely possibility of a GM206 on 28nm. I remember people thinking there wasn't going to be a GK106 because it lagged a little bit behind the rest of Kepler and that Nvidia was going to use GK104 to fill all the gaps between GTX680 and GTX650).

But I think the average power consumption of GM204 will be higher than GK104's GTX 770. TPU shows the GTX 770 averaging 165W under load; I think the new chip will be about 185W under a typical load. If perf/watt scaling remains consistent with GM107, that puts it at slightly over 100% faster than the GTX 770.

But that all depends on how balanced the chip is with respect to ROP performance, bandwidth, L2 cache, etc.
I can give 3 reasons why GM204 will end up in notebooks as well:

1. Zauba.com shipping information seems to reveal that GM204 is going inside notebooks.
[image: Zauba.com shipping entry for GM204]


2. GK104 was the highest mobile performer from Nvidia in the 600 and 700 series. Despite it being based on the 195W GTX 680, they managed to release a full GK104 within the 100W envelope.

3. The approximate die size of GM204 is about 430mm². Only one mobile GPU has been bigger than that before: the GTX 480M, based on GF100.

Comparisons with previous mobile GPUs:
GTX 480M: 526mm²
GTX 580M: 332mm²
GTX 680M: 295mm²
GTX 880M: 430mm²

------------------------------------------------------------

The reason I think the TDP of the GTX 880 Ti (full GM204) will be a little higher than the GTX 680's is that the engineering samples of both seem to reveal just that.

Here is the engineering sample of GK104 with the same stacked power pins we saw on GM204. GK104 had 6+6 stacked, and in the engineering sample picture below the 6-pin has been removed, but the slot is still there as evidence that it was perhaps used previously for testing.
[image: GK104 engineering sample PCB]



Here is GM204, also with stacked power pins, but it has 2 additional pins compared to GK104: 8+6. The power plug you see on the side is clearly for testing purposes, because you notice a button next to it; that's to switch off that power plug during testing. It wouldn't surprise me if it's there to test out a feature for future GPUs with ARM processors. And the pixelated, blurred-out fields on the left and right of the GM204 PCB are extra features which are not included on GM204 but will be on upcoming cards (ARM).

[image: GM204 engineering sample PCB]


TL;DR:
Full GM204 has a slightly higher TDP than the GTX 680. 225W? Or maybe they have learned from previous cards and decided to allow better overclocking with more power available?
Mobile GM204 will have 1-2 SMM disabled to fit into the 100W limit.
All of the above is mostly my speculation :D
 
Last edited:

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
TL;DR:
Full GM204 has a slightly higher TDP than the GTX 680. 225W? Or maybe they have learned from previous cards and decided to allow better overclocking with more power available?
Mobile GM204 will have 1-2 SMM disabled to fit into the 100W limit.
All of the above is my speculation :D

Yeah, I already mentioned GF100 being in a notebook. ;) Everything you say makes perfect sense. The only thing about the whole situation that blows my mind is that the performance delta between GM204 and GM107 is going to be huge, even between the lowest-binned GM204 and the highest-binned GM107. Also, Steam shows that the GTX 660 is the best-selling Kepler model; GK106 raked in the sales on the desktop. If there truly isn't going to be a GM206, I wonder if Nvidia will bump up the TDP, clocks, and memory speed of GM107 when it gets rebadged for the 800 series.
 

HurleyBird

Platinum Member
Apr 22, 2003
2,818
1,553
136
False - 4890 was the exact same chip, just faster clocked. It was not bigger.

It was functionally the same chip, but it was slightly bigger thanks to the addition of the decap ring. That said, you are correct in essence if not in technicality.
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
Yeah, I already mentioned GF100 being in a notebook. ;) Everything you say makes perfect sense. The only thing about the whole situation that blows my mind is that the performance delta between GM204 and GM107 is going to be huge, even between the lowest-binned GM204 and the highest-binned GM107. Also, Steam shows that the GTX 660 is the best-selling Kepler model; GK106 raked in the sales on the desktop. If there truly isn't going to be a GM206, I wonder if Nvidia will bump up the TDP, clocks, and memory speed of GM107 when it gets rebadged for the 800 series.

Since SemiAccurate says 4 different GM204 SKUs are coming, I can't help but wonder if Nvidia was kind of pissed at TSMC's inability to get 20nm production up in time for Nvidia to produce their GPUs on that node. There were rumors earlier that Nvidia had to abandon their 20nm designs and port GM204 to 28nm instead because of the 20nm situation.
Which is why we have seen so many recent GM204 entries on Zauba.com, shipping back and forth between the USA HQ and the India GPU department, all in a scramble not to stray too far from the initial Maxwell roadmap.
So SemiAccurate may be accurate here, and that is why the roadmap change (due to the 20nm > 28nm transition) was worthy of a new article.

So to save money on testing and new chip designs while prepping for 16nm FinFET a year down the line (Q4 2015, I think), they would rather build many 800 series GPUs out of GM204 than do GM206, GM204, and perhaps GM200.
So in my opinion the 800 series will follow normal Nvidia practice for mobile graphics cards:
Release full GM204 for one card, disable some SMM for the GPU below it, go further with more SMM disabled for the one below that, and neuter that one by cutting the memory bus from 256-bit down to 192-bit as well, plus cutting away some ROPs (a sketch of such a line-up follows below).
That will ensure many GM204 SKUs but with wildly different performance,
and patch the huge gap between GM107 and full GM204, like you say :)
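
(To make that cut-down scheme concrete, here is a purely illustrative sketch. Every number below is invented for illustration — the 128 cores per SMM matches GM107, but the SMM counts, bus widths, and ROP cuts are hypothetical, not leaked specs:)

```python
# Hypothetical GM204 SKU ladder, per the binning scheme described above.
# All figures are speculation for illustration only.
CORES_PER_SMM = 128  # Maxwell SMM width, as on GM107

skus = [
    # (tier,            SMMs, bus bits, ROPs)
    ("full GM204",        20,      256,   32),
    ("1 SMM disabled",    19,      256,   32),
    ("2 SMM disabled",    18,      256,   32),
    ("cut-down",          16,      192,   24),  # narrower bus + fewer ROPs
]

for tier, smm, bus, rops in skus:
    print(f"{tier}: {smm * CORES_PER_SMM} cores, {bus}-bit, {rops} ROPs")
```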
 
Last edited:

f1sherman

Platinum Member
Apr 5, 2011
2,243
1
0
@tviceman
Re: chip sales - GK104 destroyed GK106 on the desktop.

I don't even have to look (again :D) at that Steam chart to know that the GTX 670, 760 & Co. are much more prevalent than the GTX 660 / 650 Ti / Boost.

Compared to GF104/114, GK106 was a letdown imho.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
False - the 4890 was the exact same chip, just clocked faster. It was not bigger.

Cypress to Cayman was an evolution from VLIW5 to VLIW4 (not the exact same generation), whereas Nvidia's bigger chips are the exact same architecture. And again, they hadn't done a HUGE 400mm²+ chip since R600.

Again - I don't even necessarily disagree with your point overall, but your "facts" are a bit off

Nope, you are the one who is wrong. A quick 10-second Google search shows that the 4890 was larger and had more transistors. And you can label AMD's actions as evolution or incremental or whatever you want; the fact remains unchanged: AMD released bigger, faster flagship replacements on each of the last 3 nodes. And NO ONE pitched a fit like people did over GK104 and now with the upcoming GM204. Hypocrisy at its finest.