Who thinks Maxwell is getting a rebrand/rebadge?


nvgpu

Senior member
Sep 12, 2014
http://videocardz.com/56981/nvidia-readying-quadro-m5000-and-quadro-m4000

Nvidia probably will not release any more Maxwell2-based products, except for Quadro Maxwell cards to replace the Quadro Kepler lineup.

Nvidia should concentrate all efforts on Pascal and bring the features people want from GM206 to the entire Pascal product family, top to bottom: things like fixed-function HEVC hardware decoding and HDCP 2.2 support. GM200 and GM204 don't support fixed-function HEVC hardware decoding.
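For anyone who wants to check what their setup actually exposes, here's a minimal sketch; it assumes ffmpeg is on your PATH and that your build is recent enough to list Nvidia's "hevc_cuvid" decoder (older builds without NVDEC support simply won't show it):

```python
import subprocess

# List the decoders this ffmpeg build knows about and look for the
# Nvidia hardware HEVC decoder ("hevc_cuvid" in recent builds).
out = subprocess.run(["ffmpeg", "-hide_banner", "-decoders"],
                     capture_output=True, text=True).stdout
print("hevc_cuvid available" if "hevc_cuvid" in out
      else "no NVDEC HEVC decoder exposed")
```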

http://www.anandtech.com/show/8533/...rt-13-standard-50-more-bandwidth-new-features

New features that are not in Maxwell2, like DisplayPort 1.3, will hopefully be in Pascal as well, making life easier: you would only need 1 DP cable to drive a 5K monitor instead of 2 DP cables with DP 1.2.
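The bandwidth arithmetic is straightforward; this back-of-envelope ignores blanking overhead, which only makes the DP 1.2 shortfall worse:

```python
# Can one DP cable drive 5K at 60 Hz? Raw pixel payload vs. link capacity.
def payload_gbps(w, h, hz, bpp=24):
    return w * h * hz * bpp / 1e9

need  = payload_gbps(5120, 2880, 60)   # ~21.2 Gbit/s of raw pixel data
dp1_2 = 4 * 5.4 * 8 / 10               # HBR2 x4 lanes, 8b/10b -> 17.28 Gbit/s
dp1_3 = 4 * 8.1 * 8 / 10               # HBR3 x4 lanes, 8b/10b -> 25.92 Gbit/s

print(f"5K60 needs ~{need:.1f} Gbit/s; "
      f"DP 1.2 carries {dp1_2:.2f}, DP 1.3 carries {dp1_3:.2f}")
# -> DP 1.2 falls short (hence two cables today); DP 1.3 has headroom.
```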
 

ShintaiDK

Lifer
Apr 22, 2012
You always manage to pull out these weird slides whenever discussing FinFET. Who the hell is IBS, and why should we care what they think - especially since this slide dates to 2013, and I'm very skeptical that some investment pundit in 2013 could make accurate predictions at this level about what TSMC, Samsung, and GloFo will be doing in 2016.

Have you considered that one reason why there have been so many delays around FinFET might be specifically because the foundries are trying to get the production to an appealing price point for mass usage?

It's well known and respected. Even Samsung uses it for their own cost projections.

http://www.ibs-inc.net/#!about-us

That isn't because Intel has some kind of magical unicorn dust that no one else does. It's because Intel has a lead of several years on the other foundries, so they've already worked out the kinks in the process and gotten yields up.

There is no reason to think this process node will be different than any other. Early adopters always face low yields, higher prices, and die size restrictions. Eventually the process matures, yields go up, larger dice become feasible, and price per transistor goes down. If Intel did it, so can others.

No, it's due to design cost, time, and the volume needed to pay it back revenue-wise. Something very few companies can afford, and AMD and nVidia aren't among them.

And there is plenty of room for 16nm/14nm FinFET to be viable in certain dGPU products even if per-transistor costs start slightly above that of 28nm and initial die sizes are restricted.

Unless you increase cost, you just get the same GPUs shrunk, at best. Remember, design cost is also 4x higher than on 28nm, even if you just do a shrink.
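To put that in numbers, a toy amortization with made-up figures (the NRE and die costs below are purely hypothetical, chosen only to show the 4x design-cost effect):

```python
# Per-chip cost = design cost (NRE) spread over volume + per-die cost.
def per_chip_cost(nre_usd, volume, die_cost):
    return nre_usd / volume + die_cost

# Same GPU on two nodes; assume FinFET NRE is 4x the 28nm NRE as above.
print(per_chip_cost(75e6,  10e6, 30))   # 28nm:   $37.5 per chip
print(per_chip_cost(300e6, 10e6, 25))   # FinFET: $55.0 at the same volume
print(per_chip_cost(300e6, 40e6, 25))   # FinFET: $32.5 - only wins at 4x volume
```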

 

ShintaiDK

Lifer
Apr 22, 2012
Remember that GPUs are quite modular. It's a mistake to think that creating four different Pascal GPUs will cost four times as much as creating one Pascal GPU. Obviously it's not as easy as copying and pasting the blocks, but you don't have to do all the R&D over again.

And where does that $1.0-$1.5 billion "projection" come from? Another random investor trying to get consulting fees?

You forget they all need their own 14/16nm mask sets.

It's not clear whether this is the case for TSMC, but GloFo looks like they'll be adopting two processes from Samsung: 14LPE (efficiency-focused) and 14LPP (performance-focused). Smartphone/tablet SoCs will want to use 14LPE, so that shouldn't block the production of dGPUs on 14LPP.

It's made on the exact same equipment, so it doesn't matter. What matters is who will pay the most for the limited supply.
 
Feb 19, 2009
Most definitely. Next-gen is a massive leap, new uarch, new node, HBM2! Even mid-range Pascal should demolish GM200.

The only problem is HBM2 isn't scheduled for volume production until Q3 2016. You guys can do the math on when we'll get consumer GPUs with it. :)
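Doing that math, assuming roughly two quarters from volume production to cards on shelves for stack qualification, interposer packaging, and board ramp (my guess, not anyone's roadmap):

```python
# Q3 2016 volume production + assumed ~6 month packaging/ramp lag.
start_y, start_m = 2016, 7      # Q3 2016
lag = 6                         # assumed months of lag

total = start_y * 12 + (start_m - 1) + lag
y, m = divmod(total, 12)
print(f"~{y}-{m + 1:02d} (Q{m // 3 + 1} {y}) at the earliest")
# -> ~2017-01, i.e. not before the turn of the year.
```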
 

Head1985

Golden Member
Jul 8, 2014
I think Pascal GP204 will arrive at the same point in the cycle as the GM204-based GTX 970/980 did: in a year and 3 months. GP204 will destroy the Titan X with +40-50% performance, and big Pascal, another +50% over GP204, will come later.
 

happy medium

Lifer
Jun 8, 2003
Happy, I hope you are right, but I do not believe it. :(

I think I will be. I can imagine both AMD and Nvidia can't wait to bring out GPUs at a smaller node; they will both rush them to market ASAP.
I also think it will be the largest leap in performance since the 8800 GTX.
Easily 65%.

By September/Oct next year, I'm guessing.
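Rough sketch of how that 65% could pencil out, with both factors being pure assumptions on my part, nothing official:

```python
# Compounding an assumed node gain with an assumed architecture gain.
node_gain = 1.35   # assumed speedup from 28nm -> FinFET at the same power
arch_gain = 1.22   # assumed Pascal architectural gain, Maxwell-style
print(f"combined: +{node_gain * arch_gain - 1:.0%}")   # -> +65%
```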
 

railven

Diamond Member
Mar 25, 2010
I'm with tviceman on this one; I see exactly what he does. Full GM200 with 6GB launched as the new flagship (1080 Ti). The current 980 Ti takes the 1080 slot, a further cut-down version takes the 1070, and GM204 with tweaks fills out the bottom.

Prices don't move for the new cards (i.e. 1080 Ti at $650, 1080 at $500, 1070 at $320?).

If Pascal is going to wait for HBM2, it's a long way off. But I wouldn't be surprised to see another series of cards even before that, probably using a low-end Pascal with HBM1, just to get the kinks out, like the Maxwell1-to-Maxwell2 transition.
 

tviceman

Diamond Member
Mar 25, 2008
Yeah, and the GTX 970/980 are not even 1 year old, so why rebrand them? When Pascal launches they will be 2 years old.

That time frame for Pascal is a best case scenario. GTX 680 was only 13 months old when GTX 780 launched. GTX 480 was only 8 months old when GTX 580 launched. It would be really, really unusual for Nvidia to stick with the same exact product lineup and names for 2 years.
 

shady28

Platinum Member
Apr 11, 2004
That time frame for Pascal is a best case scenario. GTX 680 was only 13 months old when GTX 780 launched. GTX 480 was only 8 months old when GTX 580 launched. It would be really, really unusual for Nvidia to stick with the same exact product lineup and names for 2 years.

But 780 was 18 months old when 980 launched. The market lifespan of GPUs has been going up.

I really wouldn't expect Pascal / HBM to show up in Nvidia's entire lineup. Lots of reasons, but mostly I think it comes down to cost of both HBM and 16nm as Shin keeps mentioning.

Nvidia's Maxwell lineup is now a bit fractured.

The v1 models are missing 4K decode in hardware - the 960 can get ~150 fps on 4K decode vs ~30 fps from a fast CPU with partial hardware assist on the v1 Maxwells (750 / 970 / 980). That doesn't make sense: a $200 card having that kind of advantage over $300-$650 cards.

Then you've got texture compression on v2, which can reduce the needed bus width and manufacturing cost. Note the 960 outperforms the 760 with half the bus width.
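Quick illustration with the stock memory specs (the ~25% compression benefit used below is Nvidia's own marketing claim for Maxwell v2 delta color compression, not something I've measured):

```python
# Raw memory bandwidth from bus width and data rate.
def raw_gbs(bus_bits, gtps):
    return bus_bits / 8 * gtps          # GB/s

gtx760 = raw_gbs(256, 6.0)              # ~192 GB/s on a 256-bit bus
gtx960 = raw_gbs(128, 7.0)              # ~112 GB/s on a 128-bit bus
print(gtx760, gtx960, round(gtx960 * 1.25))
# -> 192.0 112.0 140: with compression the 960's "effective" bandwidth
#    closes much of the gap despite half the bus width.
```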

I really don't think we will see Maxwell disappear in the next 18 months. V1 might disappear, with the 970/980 being updated to the v2 feature set and repositioned in the lineup. That wouldn't be a rebadge, though: add texture compression and 4K decode and they'll be quite different.

The 750/750 Ti is 18 months old, so I would look there for a change soon. Maybe cut-down 960s coming into that segment.

Also of note, 750/750 Ti prices have cratered in the last 3 months, 20-40% below what they were in January. Could be market demand dying; could also be the supply chain clearing out.
 

werepossum

Elite Member
Jul 10, 2006
There are cheaper high-capacity GDDR5 chips becoming available- the same memory chips that are on the R9 3XX series- and I suspect that NVidia will use these to boost the memory capacity of their lineup in a refresh. 8GB 980 refresh, maybe even a 24GB Titan X Black (or whatever they call it).
Good point. With AMD's relatively lackluster release NVidia has a lot of options, but with memory prices coming down, one attractive option would be higher-VRAM versions at higher prices (with higher margins), probably coexisting with the existing GTX 970/980 family. NVidia has some experience now with their new compression scheme, so it's possible that with minor tweaks, higher clocks, and more VRAM the 970 & 980 can erase their relative weakness at high resolutions. I do suspect we'll see something new (or rather, newish) in the GTX 960 & 750 market by December, though.
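The capacity math checks out if you assume the new 8 Gbit dies and the usual one 32-bit channel per GDDR5 chip:

```python
# GDDR5 capacity from bus width: one 32-bit channel per chip;
# clamshell mode doubles the chips per channel.
def capacity_gb(bus_bits, gbit_per_die=8, clamshell=False):
    chips = bus_bits // 32 * (2 if clamshell else 1)
    return chips * gbit_per_die / 8

print(capacity_gb(256))                   # 8.0  -> an 8GB 980 refresh
print(capacity_gb(384, clamshell=True))   # 24.0 -> a 24GB Titan-class card
```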

No one said anything about charity. I laid out a clear and specific process by which Nvidia would increase volume, and by doing so, increase profitability and pave the way for higher die size, and even more profitable, FinFET products later on.

You don't think this would work. It's at least possible that you are right. But unless you have some position inside the industry that you're not telling anyone about, you have no better insight into this matter than I do. Only time will tell what happens.

Any gap could easily be filled by using cut-down GM204 parts. Note that Nvidia has done this before - the GTX 700 series brought in the first appearance of Maxwell (GM107) and kicked out GK106; instead, the low midrange was filled by GTX 760, which was a GK104 salvage part.

GK106 had a short life but still earned its keep because it sold well. There's no reason the same cannot be true of GM206.

You always manage to pull out these weird slides whenever discussing FinFET. Who the hell is IBS, and why should we care what they think - especially since this slide dates to 2013, and I'm very skeptical that some investment pundit in 2013 could make accurate predictions at this level about what TSMC, Samsung, and GloFo will be doing in 2016.

Have you considered that one reason why there have been so many delays around FinFET might be specifically because the foundries are trying to get the production to an appealing price point for mass usage?

That isn't because Intel has some kind of magical unicorn dust that no one else does. It's because Intel has a lead of several years on the other foundries, so they've already worked out the kinks in the process and gotten yields up.

There is no reason to think this process node will be different than any other. Early adopters always face low yields, higher prices, and die size restrictions. Eventually the process matures, yields go up, larger dice become feasible, and price per transistor goes down. If Intel did it, so can others.

And there is plenty of room for 16nm/14nm FinFET to be viable in certain dGPU products even if per-transistor costs start slightly above that of 28nm and initial die sizes are restricted.
Good points. International Business Strategies is a well-respected player in projecting electronics costs, including foundry yields and wafer costs. But I doubt their 2013 slides are in their current presentations, so YMMV.
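For what it's worth, the yield-maturation argument quoted above is easy to illustrate with the classic Poisson model; the defect densities below are invented for illustration, not foundry data:

```python
import math

# Poisson yield model: yield = exp(-D0 * A), D0 in defects/cm^2, A in cm^2.
def die_yield(d0_per_cm2, area_cm2):
    return math.exp(-d0_per_cm2 * area_cm2)

for d0 in (0.5, 0.2, 0.1):   # early ramp -> mature process
    print(f"D0={d0}/cm^2: 100mm^2 die {die_yield(d0, 1.0):.0%}, "
          f"600mm^2 die {die_yield(d0, 6.0):.0%}")
# Early on, big dice are economically hopeless (~5% yield); as D0 falls,
# they become feasible - exactly the die-size-restriction pattern above.
```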

But 780 was 18 months old when 980 launched. The market lifespan of GPUs has been going up.

I really wouldn't expect Pascal / HBM to show up in Nvidia's entire lineup. Lots of reasons, but mostly I think it comes down to cost of both HBM and 16nm as Shin keeps mentioning.

Nvidia's Maxwell lineup is now a bit fractured.

The v1 models are missing 4K decode in hardware - the 960 can get ~150 fps on 4K decode vs ~30 fps from a fast CPU with partial hardware assist on the v1 Maxwells (750 / 970 / 980). That doesn't make sense: a $200 card having that kind of advantage over $300-$650 cards.

Then you've got texture compression on v2, which can reduce the needed bus width and manufacturing cost. Note the 960 outperforms the 760 with half the bus width.

I really don't think we will see Maxwell disappear in the next 18 months. V1 might disappear, with the 970/980 being updated to the v2 feature set and repositioned in the lineup. That wouldn't be a rebadge, though: add texture compression and 4K decode and they'll be quite different.

The 750/750 Ti is 18 months old, so I would look there for a change soon. Maybe cut-down 960s coming into that segment.

Also of note, 750/750 Ti prices have cratered in the last 3 months, 20-40% below what they were in January. Could be market demand dying; could also be the supply chain clearing out.
Also good points. Minor tweaks for NVidia could result in significant margin increases.