Who thinks Maxwell is getting a rebrand/rebadge?


Grooveriding

Diamond Member
Dec 25, 2008
9,147
1,330
126
Seeing as most things point to mid-2016 for 16nm FF (even TSMC's usual optimistic claims do, and they are always wrong :)), they will probably do some sort of rebrand.

I would assume they are tapped out on 28nm now, given the size of GM200, and making an even bigger GM204 would basically mean making a GM200.

My guess is that 6 months from now we see a GM200 that is not cut down, with 6GB, and some sort of rebrand of the 980 with 8GB of VRAM, or something of that sort, to fill the year between now and when 16nm products are on the shelves.
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
Cost, cost and cost. You will be greatly disappointed in 14/16nm.

My prediction above was actually quite conservative. I am assuming that an immature 16nm FinFET node might have a per-transistor cost over 30% higher than 28nm. (That would translate into the process being 2.6 times as expensive per square millimeter, assuming double the transistor density.) Even with that, as I noted, a hypothetical 150 mm^2 Pascal GPU would be less expensive for Nvidia to produce than the current GM204. And the AIB partners would have lower costs to make the full card than they do now with GTX 970, because the card could use a smaller PCB, lower-rated components in the power delivery, no PCIe power connectors, a smaller cooler, and so forth. Given all that, retail pricing at $249 seems like it would be the sweet spot to maximize overall profits by balancing sales figures against per-unit income - especially since there is really nothing appealing on the current market at that price point except for old stock of R9 290 that will be sold out soon.

Given your track record of predictions ("no AMD APUs in PS4/XB1"), I'll stick with my own estimates. I suppose we'll find out in a couple of quarters which of us is right this time.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
My prediction above was actually quite conservative. I am assuming that an immature 16nm FinFET node might have a per-transistor cost over 30% higher than 28nm. (That would translate into the process being 2.6 times as expensive per square millimeter, assuming double the transistor density.) Even with that, as I noted, a hypothetical 150 mm^2 Pascal GPU would be less expensive for Nvidia to produce than the current GM204. And the AIB partners would have lower costs to make the full card than they do now with GTX 970, because the card could use a smaller PCB, lower-rated components in the power delivery, no PCIe power connectors, a smaller cooler, and so forth. Given all that, retail pricing at $249 seems like it would be the sweet spot to maximize overall profits by balancing sales figures against per-unit income - especially since there is really nothing appealing on the current market at that price point except for old stock of R9 290 that will be sold out soon.

nVidia can't charge GM204 or GM206 money for a GP107 part. Also, what would you expect the higher SKUs to be?

GM107, as a second-generation 28nm product, is 148mm2. GK107 was 118mm2.

Also remember that gate utilization is lower on 14/16nm.

Given your track record of predictions ("no AMD APUs in PS4/XB1"), I'll stick with my own estimates. I suppose we'll find out in a couple of quarters which of us is right this time.

Why don't you list all the times I was right? We can compare notes on the Fury X launch if you wish. But that's not in your interest, is it? Try to stay on topic next time instead of going after the person; you may avoid putting yourself in a position you can't get out of.

But it does say a lot that this is the only thing you can think of.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
Seeing as most things point to mid-2016 for 16nm FF (even TSMC's usual optimistic claims do, and they are always wrong :)), they will probably do some sort of rebrand.

I would assume they are tapped out on 28nm now, given the size of GM200, and making an even bigger GM204 would basically mean making a GM200.

My guess is that 6 months from now we see a GM200 that is not cut down, with 6GB, and some sort of rebrand of the 980 with 8GB of VRAM, or something of that sort, to fill the year between now and when 16nm products are on the shelves.

The question is what OEMs will demand. Another 28nm tour around the circle depends solely on them, not on AMD or nVidia.

And Q4 2016/Q1 2017 is an awfully long way off for OEMs.
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
nVidia can't charge GM204 or GM206 money for a GP107 part.

If it performs better than GM206 with lower power usage, why not? The vast majority of buyers don't even know what process node their card is on, much less the die size. This forum is one of the few places people talk about that stuff. Most buyers care about price, performance, features (including driver support), and whether the card can fit into the thermal and/or power requirements of their system.

Also, what would you expect the higher SKUs to be?

This discussion was in the context of a wider rebrand. The theory is that just as GM107 was dropped into an otherwise Kepler-controlled lineup, the GP107 could go in the middle of an otherwise Maxwell-dominated lineup.

From bottom to top, the GPU lineup would consist of:
GM108 (OEM/mobile only) -> GM107 -> GP107 -> GM204 -> GM200

Of course, GM206 would need to be discontinued if Nvidia followed this strategy since GP107 would render it irrelevant.

GM107, as a second-generation 28nm product, is 148mm2. GK107 was 118mm2.

Also remember that gate utilization is lower on 14/16nm.

The size of GK107 had little or nothing to do with technical limitations; Nvidia had already released the much larger GK104 by the time it hit the market.

As for the gate utilization, could you please provide a source for this assertion?

Why don't you list all the times I was right?

I'm not denying you've ever been correct. But you tend to word your predictions as certainties rather than speculations, which in my opinion sometimes makes you come off as a bit arrogant. This, I think, is why people like to point out the situations where you were wrong.

I've learned the hard way that it is usually best to cover predictions with conditional statements: "I think", "it seems likely", "business sense would dictate", etc. In general, unless you have inside info, it's best to steer away from flat-out saying that this or that WILL happen in the industry.

Please try to take this as it is intended, as constructive criticism, and not a personal attack.

But it does say a lot that this is the only thing you can think of.

Well, another one that comes to mind was the incorrect prediction that Zen would be a cat core successor rather than a large-die product.
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
The question is what OEMs will demand. Another 28nm tour around the circle depends solely on them, not on AMD or nVidia.

And Q4 2016/Q1 2017 is an awfully long way off for OEMs.

While anything is possible, I don't think it will be that late before FinFET dGPUs are released. Remember, the Samsung Galaxy S6 is already shipping with a FinFET SoC. I think it's likely we will see the first FinFET GPUs in Q1-Q2 2016, though it may not be a complete bottom-to-top lineup for either company. I think AMD is probably going to want to be more aggressive than Nvidia, since AMD's 28nm products are far less competitive, and AMD has traditionally been more willing to dive into new process nodes (and memory technologies).

I do think it might be Q4 2016 - Q1 2017 until we see the largest die sizes (>500 mm^2) from either company.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
14nm Samsung yields are so bad that Apple will source 14/16nm from 3 foundries for the A9/A9X.

Now, how far down the line do you think GPUs are placed in terms of companies willing to pay?

Anyone with a 300% margin on a product is rather more willing to pay than someone without one.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
If it performs better than GM206 with lower power usage, why not? The vast majority of buyers don't even know what process node their card is on, much less the die size. This forum is one of the few places people talk about that stuff. Most buyers care about price, performance, features (including driver support), and whether the card can fit into the thermal and/or power requirements of their system.

Last time I checked, nVidia wasn't in the charity business. And it seems you value wishes over economics.

This discussion was in the context of a wider rebrand. The theory is that just as GM107 was dropped into an otherwise Kepler-controlled lineup, the GP107 could go in the middle of an otherwise Maxwell-dominated lineup.

From bottom to top, the GPU lineup would consist of:
GM108 (OEM/mobile only) -> GM107 -> GP107 -> GM204 -> GM200

Of course, GM206 would need to be discontinued if Nvidia followed this strategy since GP107 would render it irrelevant.

nVidia isn't going to discontinue GM206 just because you want a "big" low-end die. Also, the lineup suddenly gets an unwanted gap. And you forget how cost-prohibitive 14/16nm is.

The size of GK107 had little or nothing to do with technical limitations; Nvidia had already released the much larger GK104 by the time it hit the market.

Again, you completely forget cost.

As for the gate utilization, could you please provide a source for this assertion?

[Attached image: 4.png]


This is also key to why Intel is so far the only company getting lower transistor cost on 14nm.


I'm not denying you've ever been correct. But you tend to word your predictions as certainties rather than speculations, which in my opinion sometimes makes you come off as a bit arrogant. This, I think, is why people like to point out the situations where you were wrong.

I've learned the hard way that it is usually best to cover predictions with conditional statements: "I think", "it seems likely", "business sense would dictate", etc. In general, unless you have inside info, it's best to steer away from flat-out saying that this or that WILL happen in the industry.

Please try to take this as it is intended, as constructive criticism, and not a personal attack.

Well, another one that comes to mind was the incorrect prediction that Zen would be a cat core successor rather than a large-die product.

You should read your own post and apply that to yourself. Then we wouldn't have this part of the conversation.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
How's about them Titan X, Geforce grid racks, and Tesla margins??? Super computers ain't gonna compute themselves!

That may not be enough. Remember: more chip designs, more cost. So unless GP100 goes into all 4 million professional cards sold, you have an issue.

A single 14/16nm design will need to generate $1-1.5 billion in revenue just to pay for itself, going by current projections.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
You guys can bicker all you want; if FinFETs are ready for production before this year is over, I see a new ~120mm2 chip coming from Nvidia in late Q1 or early Q2 of next year. Margins on notebook GPUs are fantastic, as are margins on Nvidia's GRID servers, and Nvidia needs an update in this segment to stay comfortably ahead of Intel Iris Pro. A substantially more advanced node plus a new architecture should yield at least a 2x perf/W increase over Maxwell and, as JDG1980 pointed out, would make any such <= 65 watt part faster than GM206. As it stands now, GM206 is a great chip in and of itself but is getting very limited use. It's not used in any notebook computers, it's not used in any Quadro or Tesla products (that I am aware of), and it doesn't have any binned parts. It's currently a one-trick pony.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
I don't think you are going to get GM206 performance below 65W without HBM.
 

Ancalagon44

Diamond Member
Feb 17, 2010
3,274
202
106
We might see a Maxwell 2 version of the 750 and 750 Ti; other than that, I don't expect new cards from NV until Pascal launches.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
Because looking at nVidia's projection for GFLOPS/watt with Pascal doesn't give that impression.

You mean this?
[Attached image: PascalRoadmap.jpg]


First of all, that graph shows Pascal being a similar leap over Maxwell as Maxwell was over Kepler, but you can't use that graph to compare graphics performance among lower-tier chips. That graph only shows Maxwell gaining by 67.5% over Kepler, which may be true on the high end, but with the smaller chips there was a bigger disruption. GM204 is exactly 2x the perf/W of the first-gen GK104, and GM107 is 1.9x the perf/W of GK107. This was all on the same node, too.

Factoring in a new node AND a new architecture? Perf/W should double. In fact, it'd be disappointing if it doesn't AT LEAST double.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
We might see a Maxwell 2 version of the 750 and 750 Ti; other than that, I don't expect new cards from NV until Pascal launches.

That's the other part. I don't think it makes sense for Nvidia to implement color compression and DX12 hardware feature specs into GM107 at this point, but it goes to show how much room for improvement there is. GM107 was able to outgun GK107 by 75-85% at the same bandwidth, without Maxwell 2's color compression. I'd love to see HBM on low-end GPU chips for notebook purposes, but I still think there is plenty of performance left to wring out of traditional VRAM and memory controllers before we get there.
 

jpiniero

Lifer
Oct 1, 2010
16,818
7,258
136
Because looking at nVidia's projection for GFLOPS/watt with Pascal doesn't give that impression.

I haven't seen any projections, but that wouldn't surprise me. I do expect much higher clock speeds to make up the majority of the performance difference with Pascal, and that's going to eat a big chunk of the perf/W gain. Obviously that depends on Pascal actually clocking that high, and that remains to be seen.

So you would have:

Pascal Titan (>= $1099)
Pascal cut slightly with gimped DP (>= $899)
Full GM200 with 6 GB
"980 Ti"
"980"
"970" (with the memory issue removed but with a higher price)
1408 core GM204
1280 core GM206
 

shady28

Platinum Member
Apr 11, 2004
2,520
397
126
That may not be enough. Remember: more chip designs, more cost. So unless GP100 goes into all 4 million professional cards sold, you have an issue.

A single 14/16nm design will need to generate $1-1.5 billion in revenue just to pay for itself, going by current projections.

I understand what you're saying, but I think you're wrong that they won't have 16nm.

Because they've already said Pascal will be 16nm.


http://www.tweaktown.com/news/45541/nvidia-pascal-first-16nm-gpu-revealed-2016/index.html

"When the press was asked if they had any questions, we asked if Pascal would be the first GPU architecture to be baked onto the 16nm process, or if we would see Maxwell made on 16nm. Jen-Hsun took a few seconds to answer, but he did say that Pascal will be the first architecture on 16nm."

But maybe more to your point, it's extremely unlikely that the entire line will go to 16nm. Probably just some experimental sections.

Going "mostly" 16nm will probably take years due to cost.

I think we will see GM206 features pushed into the rest of the 28nm line. It might even be called Pascal or something but will really be Maxwell V2 architectural concepts (sort of like the idea that Fiji is Hawaii + HBM + compression from Tonga etc).
 

Hi-Fi Man

Senior member
Oct 19, 2013
601
120
106
That's the other part. I don't think it makes sense for Nvidia to implement color compression and DX12 hardware feature specs into GM107 at this point...

nVIDIA GPUs since GeForce FX have had colour compression. Fermi introduced delta colour compression, and every uarch after that has just added additional patterns to the existing delta colour compression.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
I understand what you're saying, but I think you're wrong that they won't have 16nm.

Because they've already said Pascal will be 16nm.


http://www.tweaktown.com/news/45541/nvidia-pascal-first-16nm-gpu-revealed-2016/index.html

"When the press was asked if they had any questions, we asked if Pascal would be the first GPU architecture to be baked onto the 16nm process, or if we would see Maxwell made on 16nm. Jen-Hsun took a few seconds to answer, but he did say that Pascal will be the first architecture on 16nm."

But maybe more to your point, it's extremely unlikely that the entire line will go to 16nm. Probably just some experimental sections.

Going "mostly" 16nm will probably take years due to cost.

I think we will see GM206 features pushed into the rest of the 28nm line. It might even be called Pascal or something but will really be Maxwell V2 architectural concepts (sort of like the idea that Fiji is Hawaii + HBM + compression from Tonga etc).

I think it went out of context. It was more that the professional line can't carry a complete 14/16nm lineup on its own without the consumer segment. And GPUs will still be behind high-end smartphones and so on in the node payment race.
 

ocre

Golden Member
Dec 26, 2008
1,594
7
81
They don't need a refresh. Why do you think Maxwell needs a refresh?
We didn't hear a single bit of speculation about that, so it's not going to happen.
NV can easily price the GTX 980 at 300 USD if they want. It's the same GPU as the GTX 970, only not cut down, and the GTX 970 sells for 300 USD already. They will still make tons of money from a 300 USD GM204.
Hawaii is bigger than GM204, and how much does the 290 cost?

Nvidia will do something. They know having something "new" is much better than just having the same old something. They usually have something compelling for the holiday season.

They still haven't filled in all of the 900 series either. What happened to the 950?

Anyway, I am not sure what they will do, but I am pretty certain they will have something to hype up for Q4 2015.
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
Last time I checked, nVidia wasn't in the charity business. And it seems you value wishes over economics.

No one said anything about charity. I laid out a clear and specific process by which Nvidia would increase volume, and by doing so, increase profitability and pave the way for larger-die, and even more profitable, FinFET products later on.

You don't think this would work. It's at least possible that you are right. But unless you have some position inside the industry that you're not telling anyone about, you have no better insight into this matter than I do. Only time will tell what happens.

nVidia isn't going to discontinue GM206 just because you want a "big" low-end die. Also, the lineup suddenly gets an unwanted gap.

Any gap could easily be filled by using cut-down GM204 parts. Note that Nvidia has done this before: the GTX 700 series brought in the first appearance of Maxwell (GM107) and kicked out GK106; instead, the lower midrange was filled by the GTX 760, which was a GK104 salvage part.

GK106 had a short life but still earned its keep because it sold well. There's no reason the same cannot be true of GM206.

And you forget how cost-prohibitive 14/16nm is.

[Attached image: 4.png]

You always manage to pull out these weird slides whenever discussing FinFET. Who the hell is IBS, and why should we care what they think - especially since this slide dates to 2013, and I'm very skeptical that some investment pundit in 2013 could make accurate predictions at this level about what TSMC, Samsung, and GloFo will be doing in 2016.

Have you considered that one reason why there have been so many delays around FinFET might be specifically because the foundries are trying to get the production to an appealing price point for mass usage?

This is also key to why Intel is so far the only company getting lower transistor cost on 14nm.

That isn't because Intel has some kind of magical unicorn dust that no one else does. It's because Intel has a lead of several years on the other foundries, so they've already worked out the kinks in the process and gotten yields up.

There is no reason to think this process node will be different than any other. Early adopters always face low yields, higher prices, and die size restrictions. Eventually the process matures, yields go up, larger dice become feasible, and price per transistor goes down. If Intel did it, so can others.

And there is plenty of room for 16nm/14nm FinFET to be viable in certain dGPU products even if per-transistor costs start slightly above that of 28nm and initial die sizes are restricted.
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
That may not be enough. Remember: more chip designs, more cost. So unless GP100 goes into all 4 million professional cards sold, you have an issue.

A single 14/16nm design will need to generate $1-1.5 billion in revenue just to pay for itself, going by current projections.

Remember that GPUs are quite modular. It's a mistake to think that creating four different Pascal GPUs will cost four times as much as creating one Pascal GPU. Obviously it's not as easy as copying and pasting the blocks, but you don't have to do all the R&D over again.

And where does that $1.0-$1.5 billion "projection" come from? Another random investor trying to get consulting fees?
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
I think it went out of context. It was more that the professional line can't carry a complete 14/16nm lineup on its own without the consumer segment. And GPUs will still be behind high-end smartphones and so on in the node payment race.

It's not clear whether this is the case for TSMC, but GloFo looks like they'll be adopting two processes from Samsung: 14LPE (efficiency-focused) and 14LPP (performance-focused). Smartphone/tablet SoCs will want to use 14LPE, so that shouldn't block the production of dGPUs on 14LPP.