[Cadence] Volume production in Q3 2015 for high-performance FinFET 16 nm

Mondozei

Golden Member
Jul 7, 2013
1,043
41
86
It's all here.

Do you think Nvidia will do a staggered release, first moving Maxwell to 16 nm in late Q4 2015 to get the "free" 40% boost in performance, and then release Pascal in late 2016?

Otherwise, Pascal will be an absolute beast by any historical comparison: a massive jump from 28 nm to 16 nm, and on top of that a new architecture, massive new bandwidth, HBM2, etc.

We could then potentially be looking at a 100% improvement between two generations. Has that ever been done before?
 
Feb 19, 2009
10,457
10
76
If you believe TSMC, which struggled on 20nm even with small mobile SoCs, then you really have faith.

It's the same story for GloFo & Samsung 14nm FF: claimed production-ready in 2014. Where are all our 14nm GPUs?

To date, the only next-gen-node product is the Samsung Exynos SoC. Pretty small, low power. It takes time to scale up in power and die size, and even NV's own roadmaps suggest late 2016 at the earliest for 16nm FF GPUs. It would be a disaster if the first thing they tried on a brand-new node, with a brand-new uarch and brand-new memory & stacking/interposer techniques, were a big GPU die. Nope.

I'm not expecting major next-node GPUs until well into 2017.
 

Qwertilot

Golden Member
Nov 28, 2013
1,604
257
126
If they stagger it, it'll likely be in bits: say, a 950/Ti quite fast that 'just' uses 16nm (maybe with HBM1), and so on up to full-scale Pascal at some later stage.
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
We will see GP107 first for notebooks and low end before anything else. Just like with Kepler and Maxwell.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Even if we imagine they can make the GPUs, they will still cost significantly more.

Transistor cost goes up, after all. Until EUV arrives, it will just be a cost death spiral.
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,476
136
If you believe TSMC, which struggled on 20nm even with small mobile SoCs, then you really have faith.

It's the same story for GloFo & Samsung 14nm FF: claimed production-ready in 2014. Where are all our 14nm GPUs?

To date, the only next-gen-node product is the Samsung Exynos SoC. Pretty small, low power. It takes time to scale up in power and die size, and even NV's own roadmaps suggest late 2016 at the earliest for 16nm FF GPUs. It would be a disaster if the first thing they tried on a brand-new node, with a brand-new uarch and brand-new memory & stacking/interposer techniques, were a big GPU die. Nope.

I'm not expecting major next-node GPUs until well into 2017.

Finally, someone who is not buying the foundry lies and understands the challenges of ramping a 250-300 mm² high-performance GPU at good yields and in high volume, along with HBM2 and 2.5D stacking, each of which brings its fair share of challenges. :thumbsup:

My predictions

GP104 - late Q3 2016 or Q4 2016 (2 years after GM204), 30-35% faster than GTX Titan X
GP100 - at least Q3 2017, 2x the perf of Titan X

I don't think any GPU vendor can even dream of a 500 mm² FinFET GPU until they have had at least 9-12 months of FinFET GPU volume production experience with reasonably good yields (>75%), plus the invaluable yield learning you get when you ramp on an immature, bleeding-edge FinFET process.

GP107 could be an easier chip, as it would most likely be <=100 mm². GP107 could also use GDDR5 instead of HBM2 and come earlier than GP104. It would be great for low-power discrete GPUs without a power connector. Imagine GTX 960 perf at 50W. That would be amazing. :cool:
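
Rough math on what that would take (assuming the GTX 960's official 120W TDP):

    # Implied perf/W gain for GTX 960 performance at 50W.
    print(120 / 50)  # ~2.4x perf/W, from the node shrink plus uarch tweaks

Aggressive, but plausible for a full node jump plus a new uarch.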
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
nVidia is switching to Samsung.
Nvidia will manufacture Tegra at Samsung on 14nm FinFET. It's a low-power SoC process, and Tegra is a low-power SoC.
The rest, i.e. GPUs, will be produced on TSMC's 16nm FinFET.

If Samsung and GloFo had a 14nm high-power process, I think Nvidia would switch.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
I don't think any GPU vendor can even dream of a 500 mm² FinFET GPU until they have had at least 9-12 months of FinFET GPU volume production experience with reasonably good yields (>75%), plus the invaluable yield learning you get when you ramp on an immature, bleeding-edge FinFET process.

I think the biggest issue is finding enough people willing to pay the cost. A 500mm² 14/16nm die will cost over twice as much as a 28nm die in production cost alone, while the design cost is 4 times higher. A better uarch matters more than ever in the cost structure. Using old uarchs on newer nodes (as performance compensation) is just a one-way road to economic suicide.
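
Napkin math on that claim (a sketch; the wafer prices and yields below are assumed for illustration, not foundry figures):

    # Back-of-envelope die cost, 28nm vs 16/14nm FinFET.
    # Wafer prices and yields are assumptions for illustration.
    import math

    die_area_mm2 = 500
    wafer_area_mm2 = math.pi * (300 / 2) ** 2            # 300mm wafer
    dies_per_wafer = int(wafer_area_mm2 / die_area_mm2)  # ignores edge loss

    def cost_per_good_die(wafer_cost_usd, yield_rate):
        return wafer_cost_usd / (dies_per_wafer * yield_rate)

    print(cost_per_good_die(5000, 0.8))  # assumed mature 28nm wafer:   ~$44
    print(cost_per_good_die(8000, 0.5))  # assumed early 16/14nm wafer: ~$113

Even with generous guesses, the ratio comes out around 2.5x before you add the 4x design cost.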
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
It's all here.

Do you think Nvidia will do a staggered release, first moving Maxwell to 16 nm in late Q4 2015 to get the "free" 40% boost in performance, and then release Pascal in late 2016?

Straight up shrinking existing chips to a more advanced node usually does not reap the full benefits of the new node's abilities. Look at GT200 -> GT200B, which got only a mild 10% clock boost, only a ~20% decrease in chip size, and no improvement in power usage. Look at G92 getting shrunk to G92b with only mild clock boosts and no substantial drop in power usage. On top of that, Maxwell is in the precarious position of having basically tapped out its memory bandwidth capabilities. Higher clock speeds on Maxwell chips need faster VRAM to realize performance scaling, and 7GHz is simply as fast as GDDR5 gets. So Maxwell can't simply get a shrink and 30% higher clocks and be magically 30% faster.
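
To put a number on that ceiling (simple napkin math; the bus widths are the actual GM204/GM200 configurations):

    # GDDR5 bandwidth = data rate per pin x bus width.
    # 7 Gbps/pin is the practical GDDR5 ceiling.
    def bandwidth_gb_per_s(gbps_per_pin, bus_width_bits):
        return gbps_per_pin * bus_width_bits / 8

    print(bandwidth_gb_per_s(7, 256))  # GM204 / GTX 980: 224 GB/s
    print(bandwidth_gb_per_s(7, 384))  # GM200 / Titan X: 336 GB/s

A shrunk Maxwell clocked 30% higher would still be stuck behind those same numbers.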

That said, I think Nvidia will shrink GM107 (or bring out GP107 alongside Maxwell shrinks), GM204, and maybe even GM200 down to 16nm FF as a stopgap. I think this will happen at the end of this year, and we'll get a ~10-15% clock speed improvement from each respective chip while getting maybe 25% smaller die sizes.
 

nvgpu

Senior member
Sep 12, 2014
629
202
81
http://blogs.nvidia.com/blog/2015/03/17/pascal/

NVIDIA's Pascal GPU architecture, set to debut next year
Nvidia is done with the Maxwell2 family; they've released it top to bottom, and they're not gonna do any shrink of Maxwell2.

The first Pascal product will probably be GP107 (speculation; the real codename is unknown at this time), bringing much-desired features from the GM20x family like fixed-function HEVC hardware decoding, HDMI 2.0 support & Feature Level 12_1 support, and possibly DisplayPort 1.3 support (which makes it easier to hook up a 5K monitor using only a single DP cable instead of the two DP cables DP1.2 currently requires), at the same 60W TDP level as GM107.

http://www.anandtech.com/show/7764/the-nvidia-geforce-gtx-750-ti-and-gtx-750-review-maxwell

With Maxwell NVIDIA has made the complete transition from top to bottom, and are now designing GPUs bottom-up instead of top-down.
 

jpiniero

Lifer
Oct 1, 2010
16,771
7,219
136
Nvidia will manufacture Tegra at Samsung on 14nm FinFET. It's a low-power SoC process, and Tegra is a low-power SoC.
The rest, i.e. GPUs, will be produced on TSMC's 16nm FinFET.

If Samsung and GloFo had a 14nm high-power process, I think Nvidia would switch.

How high-powered do you need anyway? It's a GPU. If the only downside is clock speed, that's not such a problem. And considering the $/transistor problems, it wouldn't be such a bad thing if you got a density boost.

The main benefit of going to SS is that it's available now. Obviously, it'd be too expensive for a big die; but the product I imagine being released is something slightly faster than Titan X in SP but with 1/2-rate DP. If they want to stay in HPC, I can't imagine they can wait for TSMC to get their act together.
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
Despite what people think, 14nm is actually cheaper than 28nm.
A 20nm 290X would cost 10% more to manufacture than the 28nm 290X; the chip would be smaller, but the cost of the process eats away the margin you gain anyway. With 14nm it's a bit cheaper.



[AMD slide: cost per transistor by process node]
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
The slide is based on AMD's internal source, whatever that may be. It could simply be hopeful estimates, because nothing points to why 14nm should be cheaper. No foundry has yet claimed 14nm is cheaper, including Samsung, which is supposed to fab these.

But even then it shows the issue: 40nm 0.9, 28nm 0.5, 14nm ~0.4. Plus the design cost for 14nm is 4 times higher than 28nm, and that's not a small amount anymore. It's fine power-consumption-wise to shrink a Hawaii chip etc., but it's not gonna bring much more than that. To get the performance increase we expect (2x), someone is gonna have to pay up for it. It's hard to imagine GPUs (and CPUs, for that matter) not getting significant price increases if they are to stay the same size.
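
Quantifying the diminishing returns from those numbers (quick sketch; the 14nm figure is approximate):

    # Relative cost per transistor, as read off the slide above.
    cost = {"40nm": 0.9, "28nm": 0.5, "14nm": 0.4}
    print(1 - cost["28nm"] / cost["40nm"])  # 40nm -> 28nm: ~44% cheaper per transistor
    print(1 - cost["14nm"] / cost["28nm"])  # 28nm -> 14nm: only ~20% cheaper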
 

digitaldurandal

Golden Member
Dec 3, 2009
1,828
0
76
How high-powered do you need anyway? It's a GPU. If the only downside is clock speed, that's not such a problem. And considering the $/transistor problems, it wouldn't be such a bad thing if you got a density boost.

The main benefit of going to SS is that it's available now. Obviously, it'd be too expensive for a big die; but the product I imagine being released is something slightly faster than Titan X in SP but with 1/2-rate DP. If they want to stay in HPC, I can't imagine they can wait for TSMC to get their act together.

Well, GPUs are like 100x more power-hungry than many SoCs.
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,476
136
Despite what people think, 14nm is actually cheaper than 28nm.
A 20nm 290X would cost 10% more to manufacture than the 28nm 290X; the chip would be smaller, but the cost of the process eats away the margin you gain anyway. With 14nm it's a bit cheaper.



[AMD slide: cost per transistor by process node]

Ironically, the slide is captioned:

Economics of process scaling: Diminishing returns

BTW, that slide means nothing, since they don't mention yields at 16/14nm and 28nm or how the cost per transistor was calculated. Yield is the primary determining factor for a bleeding-edge semiconductor process: it determines the cost of manufacturing and the economics of high-volume production. If that slide had also provided a yield vs. chip-area curve at 28nm and 16/14nm, it would be worth discussing.
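
To see why die area dominates, here is the standard first-order Poisson yield model (a sketch; the defect density below is an assumed illustrative value, not published foundry data):

    # Poisson yield model: yield = exp(-A * D0), with die area A in cm^2
    # and defect density D0 in defects per cm^2.
    from math import exp

    def poisson_yield(die_area_mm2, d0_per_cm2):
        return exp(-(die_area_mm2 / 100) * d0_per_cm2)

    print(poisson_yield(100, 0.3))  # GP107-class die: ~74%
    print(poisson_yield(300, 0.3))  # GP104-class die: ~41%
    print(poisson_yield(500, 0.3))  # big-die GPU:     ~22%

At the same defect density, a big die yields a fraction of what a small one does, which is exactly why small chips lead on a new node.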

Anyway, we can see the actual cost-per-transistor reduction rate has fallen badly when you compare 45nm->32nm and 40nm->28nm with 28nm->14nm. Also, that slide does not mention the 4x design costs at 16/14nm compared to 28nm. The problem is that the economics of high-volume manufacturing are severely affected by this slowdown in cost reduction. For high-margin products it still makes sense to move to 16/14nm.

AMD CTO Mark Papermaster said they expect to transition to FinFET in 2016 (more likely H2 2016) because that's when they expect the technology to be robust. That's an indirect hint that yields are expected to be a challenge early on.

In 2016, 16/14nm will be used where the fabless company can make high margins: Apple A9/A9X, Snapdragon 820, AMD Zen, and AMD and Nvidia high-end GPUs. That's why you will not see 14nm being used in low-cost, low-margin chips till 2017. The foundries expect 28nm to remain the highest-volume node in terms of wafers shipped in 2016; 16/14nm will become the highest-volume node in 2017.
 

Madpacket

Platinum Member
Nov 15, 2005
2,068
326
126
I know I should be blown away by this technology, but it seems like progress is just slowing down and is less impressive than it used to be. Hopefully quantum computing will come along and disrupt this boring pace.
 

videogames101

Diamond Member
Aug 24, 2005
6,783
27
91
I know I should be blown away by this technology, but it seems like progress is just slowing down and is less impressive than it used to be. Hopefully quantum computing will come along and disrupt this boring pace.

Maybe you should come up with something disruptive then ;)

Because the current pace sure isn't from lack of trying.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
I know I should be blown away by this technology, but it seems like progress is just slowing down and is less impressive than it used to be. Hopefully quantum computing will come along and disrupt this boring pace.

It's not really slowing down. It's just a cost issue for the lower-end products.
 

Qwertilot

Golden Member
Nov 28, 2013
1,604
257
126
One thing to remember with the economics is that dGPUs are still (just about ;)) a competitive market.

With 20nm? Yes, they were put off by the cost, but the gains were also low enough that even if the other side had jumped to it, they could still have competed with 28nm stuff.

With 16/14nm? There's a huge performance gulf vs 28nm. So plenty of people are liable to upgrade when it first comes out, and giving the other side any real length of time uncontested could do real damage. (Anything like the ~10 months the 970/980 got this time would be entirely devastating.)

Not that I'm expecting AMD/NV to do anything crazy timescale-wise, but there is definite pressure on both of them not to dally too long this time.

You could then definitely see them sticking with 16/14nm and derivatives for quite some time, moving down only when the performance difference is so large that they basically 'have' to soak up the costs and do it, or abandon the market.
 

NTMBK

Lifer
Nov 14, 2011
10,438
5,787
136
Nvidia is done with the Maxwell2 family; they've released it top to bottom, and they're not gonna do any shrink of Maxwell2.

Pascal is the shrink of Maxwell. How do you think it appeared so suddenly on the roadmaps between Maxwell and Volta?