NVIDIA Pascal Thread


DooKey

Golden Member
Nov 9, 2005
1,811
458
136
Ignoring that it's irrelevant to this thread, you're still terrible at this. Do you even read what you link?

"Project F is yet another codename for the chip designed at AMD HQ. What we’ve learned from previous LinkedIn leaks is that just because something is designed and reaches engineering state, it does not mean it will end up as a product for end-users. Project F could be one of such designs."

Even without that disclaimer, there is no logical link between that article and what you wrote; nothing implies that's the only chip/card they will or will not release this year. Great job as usual.

He's just as bad as the AMD pimps around here.
 
Mar 10, 2006
11,715
2,012
126
Ignoring that it's irrelevant to this thread, you're still terrible at this. Do you even read what you link?

"Project F is yet another codename for the chip designed at AMD HQ. What we’ve learned from previous LinkedIn leaks is that just because something is designed and reaches engineering state, it does not mean it will end up as a product for end-users. Project F could be one of such designs."

Even without that disclaimer, there is no logical link between that article and what you wrote; nothing implies that's the only chip/card they will or will not release this year. Great job as usual.

AMD has said that two Polaris chips are coming this year; this is probably Polaris 11.

That said, there's no reason to assume it won't be a good chip just because of that die size. That's ~450mm² worth of 28nm transistors, and FinFET transistors perform quite a lot better than 28nm ones.
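
As a rough sketch of that equivalence (the die size and density ratio here are illustrative assumptions, not confirmed specs):

```python
# Illustrative die-area equivalence across nodes; both numbers are
# assumptions, not confirmed specs.
finfet_die_mm2 = 225   # hypothetical 14/16nm die size
density_ratio = 2.0    # assumed density gain of 14/16nm FinFET over 28nm planar

equivalent_28nm_mm2 = finfet_die_mm2 * density_ratio
print(f"{finfet_die_mm2} mm^2 of FinFET silicon holds roughly "
      f"{equivalent_28nm_mm2:.0f} mm^2 worth of 28nm transistors")
```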
 

jpiniero

Lifer
Oct 1, 2010
14,590
5,214
136
It would be in no man's land performance-wise, because salvaged GP100 chips can easily slot into that if need be. GP102 doesn't exist and never will, unless GP100 is never going to get a consumer GTX release.

You're missing the part where it costs $1200+. The GP102 would slot into the price bracket of the GM200 ($700-$1000).

And yes, there are enough people who would pay $1200+ for a GPU.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
You're missing the part where it costs $1200+. The GP102 would slot into the price bracket of the GM200 ($700-$1000).

And yes, there are enough people who would pay $1200+ for a GPU.

No, I'm not missing the point. You are missing the point. If FinFET is as tough to manufacture on as everyone says, a 500mm² chip is going to have yields all over the chart, meaning that on top of a $1200 fully enabled SKU, there will be plenty of defective dies that can slot in exactly where a GP102 would otherwise exist, at the $750 bracket. Hence, there would be no point to a GP102.
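
A minimal sketch of that yield argument, using a simple Poisson zero-defect model; the defect densities are assumptions for illustration, not known figures:

```python
import math

def zero_defect_yield(die_area_mm2, defects_per_cm2):
    """Poisson model: fraction of dies that come out fully functional."""
    return math.exp(-(die_area_mm2 / 100.0) * defects_per_cm2)

# Assumed defect densities: ~0.3/cm^2 for an immature process,
# ~0.1/cm^2 for a mature one; real numbers are closely guarded.
for d0 in (0.3, 0.1):
    y = zero_defect_yield(500, d0)
    print(f"D0={d0}/cm^2: {y:.0%} of 500mm^2 dies fully enabled, "
          f"{1 - y:.0%} are candidates for a cut-down SKU")
```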
 

jpiniero

Lifer
Oct 1, 2010
14,590
5,214
136
If FinFET is as tough to manufacture on as everyone says, a 500mm² chip is going to have yields all over the chart, meaning that on top of a $1200 fully enabled SKU, there will be plenty of defective dies that can slot in exactly where a GP102 would otherwise exist, at the $750 bracket. Hence, there would be no point to a GP102.

The more I think about it, I am actually leaning towards the Titan being a cut die. After all, the original Titan was a cut die. I think there would be a big enough performance gap between a cut GP100 and a full GP102 to make it work.

So you would have something like this:

cut GP100 Titan - $1500
full GP102 - $999
cut GP102 - $749
full GP104 - $549
cut GP104 - $399

If GP100 yields are really that bad, nVidia could sell mobile GP100 models. With full DP enabled, there could be a market for it if the TDP makes it feasible.
 
Feb 19, 2009
10,457
10
76
There hasn't been a precedent for a chip between mid-range and high-end; why would they do that for Pascal?

Big chip, low yields, lots of harvesting required: the gap between mid-range and high-end gets covered and profit per wafer is maximized.

Building an in-between GP102 chip negates the opportunity for the above approach and would hurt profits.
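
A back-of-the-envelope sketch of that per-wafer revenue argument, reusing the Poisson yield from the earlier example; every count, price, and salvage rate here is hypothetical:

```python
import math

dies_per_wafer = 100        # assumed gross dies for a ~500mm^2 chip on 300mm
full_yield = math.exp(-1.5) # ~22% fully functional (Poisson, D0 = 0.3/cm^2)
salvage_rate = 0.5          # assumed share of defective dies sellable cut-down

full_dies = dies_per_wafer * full_yield
cut_dies = dies_per_wafer * (1 - full_yield) * salvage_rate

print(f"Full dies only:  ${full_dies * 1200:,.0f} per wafer")
print(f"With harvesting: ${full_dies * 1200 + cut_dies * 750:,.0f} per wafer")
```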
 
Mar 10, 2006
11,715
2,012
126
There hasn't been a precedent for a chip between mid-range and high-end; why would they do that for Pascal?

Knights Landing forcing NV to build something crazy at the high end, making them want to build something more appropriate/smaller for high end gaming?
 
Feb 19, 2009
10,457
10
76
Knights Landing forcing NV to build something crazy at the high end, making them want to build something more appropriate/smaller for high end gaming?

They do have the $$ for it. A compute beast GP100 for Teslas, and a compute-stripped gaming GP102 is possible in theory.

The argument against separation of uarch is just cost.
 
Mar 10, 2006
11,715
2,012
126
They do have the $$ for it. A compute beast GP100 for Teslas, and a compute-stripped gaming GP102 is possible in theory.

The argument against separation of uarch is just cost.

Agreed. If the company can afford the R&D, it's better to have targeted products rather than one-size-fits-all.
 

MrTeal

Diamond Member
Dec 7, 2003
3,569
1,699
136
This is all completely baseless speculation, especially since no one even knows if a GP102 will exist, but a theoretical GP102 might not even have worse gaming performance than a GP100. Stripped of DP compute and NVLink, it could be a smaller die with the same or only slightly fewer shaders, but the ability to clock higher and with lower power consumption.

Something along the lines of GP100 being a 550mm² DP monster, GP104 being a 300mm² gaming GPU, and GP102 being a 450mm² gaming GPU. If that were the case, though, I wouldn't see GP102 coming out until well into 2017.
 

jpiniero

Lifer
Oct 1, 2010
14,590
5,214
136
Knights Landing forcing NV to build something crazy at the high end, making them want to build something more appropriate/smaller for high end gaming?

Yeah, Knights Landing is playing a part in it. I would think, given the cost problems of 16FF, they would rather start in the 350-400 mm² range for GP100 and then sell a bigger die later, even if they have to charge more for it. That's why I didn't think they would do a 550 mm² monster die for GP100, but it seems like they are.

I would not be surprised if nVidia later sold a 16FF 550 mm² die with gimped DP, and a 550 mm² DP monster with no render outputs, despite the obvious cost problems of doing it.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
The more I think about it, I am actually leaning towards the Titan being a cut die. After all, the original Titan was a cut die. I think there would be a big enough performance gap between a cut GP100 and a full GP102 to make it work.

So you would have something like this:

cut GP100 Titan - $1500
full GP102 - $999
cut GP102 - $749
full GP104 - $549
cut GP104 - $399

If GP100 yields are really that bad, nVidia could sell mobile GP100 models. With full DP enabled, there could be a market for it if the TDP makes it feasible.

I am going to have to agree to disagree with you, as we are nowhere close on expectations for Nvidia's product strategy.

There hasn't been a precedent for a chip between mid-range and high-end; why would they do that for Pascal?

Big chip, low yields, lots of harvesting required: the gap between mid-range and high-end gets covered and profit per wafer is maximized.

Building an in-between GP102 chip negates the opportunity for the above approach and would hurt profits.

Exactly. With costly R&D and low yields, it makes no sense to develop a GP102 when harvested GP100 dies can fit the role (if necessary) quite well.
 

jpiniero

Lifer
Oct 1, 2010
14,590
5,214
136
I am going to have to agree to disagree with you, as we are nowhere close on expectations for Nvidia's product strategy.

I suppose it's possible that GP104 is what I am calling GP102 in terms of performance and price range. But that would just be semantics in terms of model names. Everything said publicly has made it pretty clear that perf/$ for this node will suck, and you should set your expectations accordingly.
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
Everything said publicly has made it pretty clear that perf/$ for this node will suck, and you should set your expectations accordingly.

I have seen no solid reason to think this. I expect perf/$ to improve over 28nm. Will it improve less than on previous node shrinks? Maybe. But the bottom line is still that buyers will be getting a boost in performance at the same price, and at massively greater power efficiency.

If Nvidia releases a Pascal chip (probably ~350mm²) with 4096 shaders, 128 ROPs, and a 256-bit GDDR5X bus, it will probably surpass the GTX 980 Ti by over 50%, and do it at 180-190W. If they then price the card at $699, that would be less than an 8% price increase over the GTX 980 Ti. Even $799 would only be 23% more expensive. >50% more performance for 8%-23% more dollars would be a very good showing. And Nvidia would still be raking in the dough, probably showing bigger per-unit profits than in previous generations. And as you noted, whether they call this chip GP104, as I expect, or GP102, as you think, ultimately doesn't matter much.
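
The arithmetic checks out against the GTX 980 Ti's $649 launch MSRP:

```python
# The post's percentages check out against the 980 Ti's $649 launch MSRP.
baseline = 649
for new_price in (699, 799):
    increase = (new_price - baseline) / baseline
    print(f"${new_price}: {increase:+.1%} over the 980 Ti")
# $699: +7.7% (under 8%); $799: +23.1% (about 23%)
```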
 
Feb 19, 2009
10,457
10
76
Of course you get more perf/$; that's the entire point of node shrinks, because a die half the size should deliver similar performance.

Rather, it's unlikely there will be better prices within each segment.

It's been creeping up; mid-range used to be $200, now it's $330-$499.

Who's expecting GP104 to be $330 like the GM204 970 was? ;)
 

MrTeal

Diamond Member
Dec 7, 2003
3,569
1,699
136
I have seen no solid reason to think this. I expect perf/$ to improve over 28nm. Will it improve less than on previous node shrinks? Maybe. But the bottom line is still that buyers will be getting a boost in performance at the same price, and at massively greater power efficiency.

If Nvidia releases a Pascal chip (probably ~350mm²) with 4096 shaders, 128 ROPs, and a 256-bit GDDR5X bus, it will probably surpass the GTX 980 Ti by over 50%, and do it at 180-190W. If they then price the card at $699, that would be less than an 8% price increase over the GTX 980 Ti. Even $799 would only be 23% more expensive. >50% more performance for 8%-23% more dollars would be a very good showing. And Nvidia would still be raking in the dough, probably showing bigger per-unit profits than in previous generations. And as you noted, whether they call this chip GP104, as I expect, or GP102, as you think, ultimately doesn't matter much.

Wait, a performance/$ increase at the last node shrink? Maybe eventually, but at launch you had a 7970 costing close to 60% more than a 6970 and giving 30% better performance. By the time the 7870 launched, it had at least comparable performance and perf/$ to the 6970, while the 7850 was about the same as the (non-shader-unlocked) 6950.
Even when the GTX 680 launched, its perf/$ wasn't any better than the GTX 580's. Perf/W increased by a huge amount, but perf/$ didn't change much at all.

Let's say AMD does a lot better than they showed with Polaris 10: its performance in today's games exceeds not only the 950 it was demoed against but also the 960, and it ends up as fast as a 380. Does anyone really think you'll be able to go out close to launch and buy one of those cards for the $165 that you can get a 4GB R9 380 for right now? Let alone under the $150 mark it would need to hit in order to give an appreciable increase in perf/$?
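
To put a number on the 7970 example, perf/$ moves with the ratio of the performance gain to the price increase; using the figures above:

```python
# Perf/$ change = performance ratio / price ratio - 1,
# using the post's launch figures for the 7970 vs the 6970.
perf_ratio = 1.30   # ~30% better performance
price_ratio = 1.60  # ~60% higher price
change = perf_ratio / price_ratio - 1
print(f"7970 vs 6970 at launch: {change:+.0%} perf/$")
# -> about -19%: faster card, but worse perf/$ on day one
```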
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
ShintaiDK, can we get clarification? Is $2500 the cost to foundry customers or the manufacturing cost that TSMC incurs?

[Image: TSMC 28nm wafer capacity analysis]


$5000+ is what they pay for 16nm, I assume.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
Thanks for that. FF has got to be more than $5000 if all the transistors-per-$ doom and gloom is correct; otherwise it's mirroring 28nm in cost right now and will be significantly cheaper per transistor than 28nm by this time next year.
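
The underlying arithmetic, assuming the ~2x density gain of 16FF over 28nm and the wafer prices mentioned in this thread:

```python
# Relative cost per transistor, assuming ~2x density for 16FF over 28nm.
wafer_cost_28nm, wafer_cost_16ff = 2500, 5000  # $ per wafer, from the thread
density_28nm, density_16ff = 1.0, 2.0          # relative transistors per mm^2

print(f"28nm: {wafer_cost_28nm / density_28nm:.0f} (relative $/transistor)")
print(f"16FF: {wafer_cost_16ff / density_16ff:.0f} (relative $/transistor)")
# Equal at $5000/wafer; anything above makes 16FF pricier per transistor.
```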
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Thanks for that. FF has got to be more than $5000 if all the transistors-per-$ doom and gloom is correct; otherwise it's mirroring 28nm in cost right now and will be significantly cheaper per transistor than 28nm by this time next year.

The initial price was above $15,000; $5000-6000 today sounds right, and I doubt it will ever go below $4000. Also remember that wafer cost isn't the only issue: design cost, gate utilization, etc.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
It might be more truthful to say "I don't know"

You could use that as an argument until someone posts the daily quote for a specific foundry customer.

14/16nm cost is a sore topic, I get that.

The 28nm prices linked aren't correct either on a per-customer basis, especially not with the differing complexity of IC designs.
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
Would be interesting to see if Nvidia completely segments their HPC and consumer lines. I, personally, don't see the benefit, but I'm also not a multi-million dollar corporation.

You need to do something with those harvested dies. And GPU consumers have already shown they'll line up to pay up to $1,000 for them.
 

AtenRa

Lifer
Feb 2, 2009
14,001
3,357
136
With DX-12 games in 2016, I believe NVIDIA will use a single architecture for both HPC and consumer this time.