[Kitguru] Nvidia's big Pascal GP100 has taped out - Q1 2016 release

Page 2 - Seeking answers? Join the AnandTech community: where nearly half-a-million members share solutions and discuss the latest tech.

maddie

Diamond Member
Jul 18, 2010
4,877
4,948
136
Yeah, and it seems like it is rendered. The HBM is a piss-poor Paint copy/paste.

Lifelike rendering then.

pascal.jpg

I guess we have gone past this.

‘Fermi’ end plate closeup
Fermi_end_plate_cropped.jpg
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
GP104 obviously comes first, and when the yields get better, GP100 comes. The current process has simply had too many years to mature; it's about time 16FF+ gets done.

AMD and Nvidia didn't shrink to a newer node this time because there was no newer high-performance node to shrink to. They were simply using the best node they could.

I really don't know if GF 14LPP can be used for high-performance devices, but TSMC 16FF+ surely can. Link: http://community.cadence.com/cadenc...instream-designers-and-internet-of-things-iot

Who told you GP104 comes first? Nvidia made the big die first with Fermi.

Yields with 16nm may be better than 28nm's were at this same stage. Who knows.
 

Abwx

Lifer
Apr 2, 2011
11,514
4,301
136
Yeah, and it seems like it is rendered. The HBM is a piss-poor Paint copy/paste.

In the pic of the CEO, the two pairs of alleged HBM chips are not even aligned accurately; they just stuck some random dies next to a GPU, and the green substrate is not even an interposer, it's a classic glass-fiber PCB.

There must really be panic at Nvidia headquarters...
 

maddie

Diamond Member
Jul 18, 2010
4,877
4,948
136


Something is strange with the dates.


TSMC’s comprehensive 16FF+ design ecosystem supports a wide variety of EDA tools and hundreds of process design kits with more than 100 IPs, all of which have been silicon validated. Backed by the resources of the biggest design ecosystem in the industry, TSMC and its customers are starting intensive design engagements, paving the way for future product tape-outs, pilot activities and early sampling.

The 16FF+ process is on track to pass full reliability qualification later in November, and nearly 60 customer designs are currently scheduled to tape out by the end of 2015. Due to rapid progress in yield and performance, TSMC anticipates 16FF+ volume ramp will begin around July in 2015.


1) Article date: 2014/11/12

2) Now starting design engagements

" TSMC and its customers are starting intensive design engagements, paving the way for future product tape-outs, pilot activities and early sampling."

3) The following year you have 60 designs ready to be taped out?

"The 16FF+ process is on track to pass full reliability qualification later in November, and nearly 60 customer designs are currently scheduled to tape out by the end of 2015"

4) You tape out at the end of 2015 but begin volume production in July 2015?

" TSMC anticipates 16FF+ volume ramp will begin around July in 2015."

I suggest a mistake was made and the volume production really begins in July 2016. This assumes TSMC has no problems with their schedule.
 

.vodka

Golden Member
Dec 5, 2014
1,203
1,537
136
I seriously doubt they're going to pull off a node shrink (28 -> 16), a new architecture, and a new memory technology all at once and without delays. The last time nV tried to advance on all three fronts, with 40nm + Fermi + GDDR5, well... Q1 2016 is overly optimistic IMO. Can TSMC be trusted to be on time with their nodes, given their track record? (Not that the rest of the foundry business is having it any easier developing newer nodes.)


But who knows, maybe there isn't a need for >500mm2 dies with such a fabrication advance; GP100 could be sized around GM204 or less and still destroy everything available right now. The mockup (what else could it be?) that JHH is holding doesn't seem to carry a monster chip the size of GM200.

Still, knowing nV, they wouldn't suddenly scale down die size for the high end after making huge chips for five generations in a row (G80, GT200, GF100/110, GK110, GM200). Interesting times ahead.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I haven't read much public info regarding TSMC's 16nm, but since they sucked on the initial 40nm and 28nm ramps and blew donkey on 20nm... we're somehow supposed to believe they can manage big Pascal, with a new uarch, new RAM, and stacking, all at once on an unproven node... no. Tell 'em they're dreaming.

Almost everyone who has paid attention knows NV needs to refresh their Tesla stack due to Maxwell's neutered DP compute, so the first big Pascal going to Tesla is a no-brainer.

You can't rule out the possibility. I keep saying this will be a stop-gap generation, but many don't want to believe it, I guess because no one wants to admit that the 980 Ti and Fiji XT will be superseded in possibly just slightly more than a year. Because of the 28nm node, we can't look at this gen as indicative of normal timelines, since everything got shifted late. Normally we would have had a 14nm/16nm chip 2-2.5 years from the HD 7970's release. That means the 980 Ti is about 1.5 years too late because of the 28nm node.

NV already publicly stated that they are moving the launch of Volta forward:

"At a press conference in Tokyo, Japan, Nvidia revealed that it would release its graphics processing units powered by the “Volta” architecture in 2017, one year after the first GPUs featuring the “Pascal” architecture will see the light of the day, 4Gamer.net reports. Nvidia did not reveal the difference between “Pascal” and “Volta”, but one of the features the latter is expected to support is second-generation NVLink with 80GB/s – 200GB/s bandwidth."
http://www.kitguru.net/components/graphic-cards/anton-shilov/nvidia-to-speed-up-development-of-graphics-processing-architectures/

If early Volta is possibly late 2017, Pascal needs to come out sooner to have a two-year architectural life. All of this is speculation, but since NV publicly committed to a 2017 Volta launch, and even stated it's one year after Pascal, it seems NV is trying to adopt a faster roll-out of new architectures. Will they pull it off? We don't know, but some of what you guys are saying contradicts NV's own timelines (unless NV is just spinning hype).

Until new info comes out, I am still going to stick to my prediction that this is a stop-gap 28nm gen and that both NV and AMD will have a faster single-chip card in 2H 2016 on a new node.
 

Despoiler

Golden Member
Nov 10, 2007
1,967
772
136
They aren't getting a Q1 release unless SK Hynix is a quarter ahead of schedule on their HBM2 progress.
 

flopper

Senior member
Dec 16, 2005
739
19
76
Until new info comes out, I am still going to stick to my prediction that this is a stop-gap 28nm gen and that both NV and AMD will have a faster single-chip card in 2H 2016 on a new node.

Obviously.
HBM2 on an AMD Fury at 14/16nm will be so godlike.
Can't wait for it.
 

Grooveriding

Diamond Member
Dec 25, 2008
9,110
1,260
126
Yeah, that board being held up is obviously a mock-up for marketing. That, and the other information.

Looks like another junk-information clickbait article.
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
They had better not have attempted such a scam, as it's an acknowledgment that they are in full panic mode...

http://semiaccurate.com/forums/showpost.php?p=237272&postcount=1120

100% accurate and reliable.
And what stops Nvidia from using HBM1 if HBM2 isn't available to Nvidia until mid/late 2016?

That stops his assumption that Pascal will use GDDR5 right there. He even says "GP104". Nobody said GP104 launches first.
The source of the kitguru article says GP100 is coming in Q1, not GP104. He doesn't even specify what HBM it will use.
 

geoxile

Senior member
Sep 23, 2014
327
25
91
And what stops Nvidia from using HBM1 if HBM2 isn't available to Nvidia until mid/late 2016?

That stops his assumption that Pascal will use GDDR5 right there. He even says "GP104". Nobody said GP104 launches first.
The source of the kitguru article says GP100 is coming in Q1, not GP104. He doesn't even specify what HBM it will use.

Big Pascal with 4GB of HBM1 in 2016? People should be losing their minds over how terrible that decision would be, considering how much flak Fiji is getting for 4GB of HBM1 right now.
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
Big Pascal with 4GB of HBM1 in 2016? People should be losing their minds over how terrible that decision would be, considering how much flak Fiji is getting for 4GB of HBM1 right now.
Not if AMD's memory optimizations for HBM over GDDR5 turn out to be true. They say they have a solution for it. That is something I'm very interested to learn about.

But I can't help but agree with you that 6/8GB would be nice, though. Maybe they'll do more stacks with GP100?
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Not if AMD's memory optimizations for HBM over GDDR5 turn out to be true. They say they have a solution for it. That is something I'm very interested to learn about.

But I can't help but agree with you that 6/8GB would be nice, though. Maybe they'll do more stacks with GP100?

The "memory optimizations" are an attempt to upsell 4GB because they have no other solution. I can guarantee you it wasn't the plan to ship with 4GB, but since Hynix couldn't deliver, that's how it ended up.
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
With HBM1, 4 stacks, 4GB.
Are there any specifications available anywhere that say 4 stacks is the limit with HBM1? I understand 4x1GB and 4096-bit, but what stops them from adding 8x1GB and going 8192-bit, if you disregard the obvious price increase etc.? From a technical point of view, I mean.
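For scale, here's a back-of-the-envelope sketch of how the numbers in that question come about, assuming HBM1's commonly cited per-stack figures (1 GB of capacity, a 1024-bit interface, roughly 1 Gb/s per pin); the `hbm1_config` helper is just illustrative, not anything from a spec sheet:

```python
# Rough HBM1 scaling with stack count, assuming per-stack figures of
# 1 GB capacity, a 1024-bit interface, and ~1 Gb/s per data pin.

def hbm1_config(stacks: int) -> dict:
    """Capacity, total bus width, and peak bandwidth for a given stack count."""
    bus_bits = 1024 * stacks             # each stack adds a 1024-bit interface
    capacity_gb = 1 * stacks             # 1 GB per HBM1 stack
    bandwidth_gbps = bus_bits * 1.0 / 8  # 1 Gb/s per pin, converted to GB/s
    return {"capacity_GB": capacity_gb,
            "bus_bits": bus_bits,
            "bandwidth_GBps": bandwidth_gbps}

print(hbm1_config(4))  # Fiji-style: 4 GB, 4096-bit, 512 GB/s
print(hbm1_config(8))  # hypothetical 8-stack: 8 GB, 8192-bit, 1024 GB/s
```

So an 8-stack HBM1 part would double both capacity and peak bandwidth; the limits in practice would be interposer size, routing, and cost rather than the arithmetic.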
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,979
589
126
The "memory optimizations" are an attempt to upsell 4GB because they have no other solution. I can guarantee you it wasn't the plan to ship with 4GB, but since Hynix couldn't deliver, that's how it ended up.
We'll see when actual benches and tests are done. And no, you can't guarantee anything; you are not one of the engineers working on the tech.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Are there any specifications available anywhere that say 4 stacks is the limit with HBM1? I understand 4x1GB and 4096-bit, but what stops them from adding 8x1GB and going 8192-bit, if you disregard the obvious price increase etc.? From a technical point of view, I mean.

From a technical point of view, it's not a problem.
 

maddie

Diamond Member
Jul 18, 2010
4,877
4,948
136
And what stops Nvidia from using HBM1 if HBM2 isn't available to Nvidia until mid/late 2016?

That stops his assumption that Pascal will use GDDR5 right there. He even says "GP104". Nobody said GP104 launches first.
The source of the kitguru article says GP100 is coming in Q1, not GP104. He doesn't even specify what HBM it will use.

I see you ignored my post and are still maintaining this is true. Use your analytical thinking.



POSTED
1) Article date: 2014/11/12

2) Now starting design engagements

" TSMC and its customers are starting intensive design engagements, paving the way for future product tape-outs, pilot activities and early sampling."

3) The following year you have 60 designs ready to be taped out?

"The 16FF+ process is on track to pass full reliability qualification later in November, and nearly 60 customer designs are currently scheduled to tape out by the end of 2015"


4) You tape out at the end of 2015 but begin volume production in July 2015?

" TSMC anticipates 16FF+ volume ramp will begin around July in 2015."


I suggest a mistake was made and the volume production really begins in July 2016. This assumes TSMC has no problems with their schedule.

These are incompatible dates.

Nvidia is spreading FUD.
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
I see you ignored my post and are still maintaining this is true. Use your analytical thinking.

snip

Kitguru (April 2015 article)

Although Taiwan Semiconductor Manufacturing Co. has delayed mass production of chips using its 16nm fabrication processes, this did not happen only because of low yields. According to the company, 16nm yields at TSMC are approaching mature levels. This year TSMC will offer two 16nm process technologies to clients: 16nm FinFET (CLN16FF) and 16nm FinFET+ (CLN16FF+).
CLN16FF+ yield is already approaching CLN20SoC yield (which is mature enough to use for commercial products), according to a Cadence blog post. The VP reportedly said that 16FF+ provided better maturity at risk production than any previous TSMC process. TSMC has received over 12 CLN16FF+ tape-outs so far and expects more than 50 product tape-outs this year. High-volume production will begin in the third quarter, with meaningful revenue contribution starting in Q4 2015.
Fudzilla (April 2015 article):

TSMC’s 16nm FinFET node (16FF) is already online, but the improved 16nm FinFET Plus (16FF+) node should be available soon as well. The company confirmed 16FF+ will enter volume production in mid-2015, roughly three months from now. (July 2015, Cloudfire)
TL;DR
Companies have already taped out products on TSMC's 16nm, 12 so far (Nvidia is probably one of them, since the OP of this thread says GP100 has taped out).
Volume production begins in July, next month.

Nvidia Pascal GP100 on 16nm FinFET in early 2016 is most likely very legit ;)
 

maddie

Diamond Member
Jul 18, 2010
4,877
4,948
136
I didn't know nVidia owned Kitguru.

Kitguru is merely the intermediary.

Kitguru writes:

"An anonymous person presumably with access to confidential information in the semiconductor industry revealed in a post over at Beyond3D forums that Nvidia had already taped out its next-generation code-named GP100 graphics processing unit. Nowadays, a tape-out means that the design of an integrated circuit has been finalized, but the first actual chips materialize only months after their tape-out."

Did you even read the message?

An anonymous person
presumably with access to confidential information


Really guys, this is remarkable. How much leeway are they giving this source?

I think I should start creating some leaks myself.
 

Vesku

Diamond Member
Aug 25, 2005
3,743
28
86
"High-volume production will begin in the third quarter, with meaningful revenue contribution starting in the Q4 2015."

That would definitely mean Q1 2016 is out. I suppose they could have a small batch ready sometime in late Q1 to Q2 for some supercomputer upgrades, depending on how HBM2 is progressing.