Yeah, and it seems like it is rendered. The HBM is a piss-poor Paint copy/paste job.
Lifelike rendering then.
I guess we have gone past this.
Fermi end plate closeup
GP104 obviously comes first, and when the yields get better, GP100 comes. The process has simply had too many years to mature; it is about time 16FF+ gets done.
AMD and Nvidia didn't shrink to any newer node this time because there wasn't any newer high-performance node to shrink to. They were simply using the best node they could.
I really don't know if GF 14LPP can be used for high-performance devices, but TSMC 16FF+ surely can. Link: http://community.cadence.com/cadenc...instream-designers-and-internet-of-things-iot
Yeah, and it seems like it is rendered. The HBM is a piss-poor Paint copy/paste job.
http://www.tsmc.com/tsmcdotcom/PRListingNewsAction.do?action=detail&newsid=9001&language=E
https://www.youtube.com/watch?v=wFSxj_Msc70
At 4:20, Jen-Hsun holds up the Pascal test module.
I haven't read much public info regarding TSMC's 16nm, but since they sucked on the initial 40nm and 28nm, and blew donkey on 20nm, we're somehow supposed to believe they can manage big Pascal, with a new uarch, new RAM, and die stacking all in one on an unproven node? No. Tell 'em they dreaming.
Almost everyone who has paid attention will know NV needs to refresh their Tesla stack due to Maxwell's neutered DP compute, so the first big Pascal going to Tesla is a no-brainer.
Until new info comes out, I am still going to stick to my prediction that this is a stop-gap 28nm gen and both NV and AMD will have a faster single chip card in 2H of 2016 on a new node.
They aren't getting a Q1 release unless SK Hynix is a quarter ahead on their HBM2 progress.
And what stops Nvidia from using HBM1 if HBM2 isn't available for Nvidia until mid/late 2016? They had better not attempt such a scam, as it is an acknowledgment that they are in full panic mode.
http://semiaccurate.com/forums/showpost.php?p=237272&postcount=1120
100% accurate and reliable.
And what stops Nvidia from using HBM1 if HBM2 isn't available for Nvidia until mid/late 2016?
That stops his assumption that Pascal will use GDDR5 right there. He even says "GP104". Nobody said GP104 launches first.
The source of the kitguru article says GP100 is coming in Q1, not GP104. He doesn't even say what HBM it will use.
Not if AMD's memory optimizations for HBM over GDDR5 turn out true. They say they have the solution for it. That is something I'm very interested to learn about.
Big Pascal with 4GB of HBM1 in 2016? People should be losing their minds over how terrible that decision is, considering how much flak Fiji is getting for 4GB HBM1 right now.
And what stops Nvidia from using HBM1 if HBM2 isn't available for Nvidia until mid/late 2016?
Not if AMD's memory optimizations for HBM over GDDR5 turn out true. They say they have the solution for it. That is something I'm very interested to learn about.
But I can't help but agree with you that 6/8GB would be nice, though. Maybe they'll do more stacks with GP100?
Are there any specifications available anywhere that say 4 stacks is the limit with HBM1? I understand 4x1GB and 4096-bit, but what stops them from adding 8x1GB and doing 8192-bit, if you disregard the obvious price increase etc.? From a technical point of view, I mean.
With HBM1: 4 stacks, 4GB.
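For what it's worth, the scaling arithmetic is easy to sketch from first-gen HBM's per-stack figures (1 GB capacity, a 1024-bit interface, 500 MHz at double data rate). The 8-stack case below is purely hypothetical, a sketch of what the numbers would look like, not a shipping configuration:

```python
# Back-of-the-envelope HBM1 totals. Assumed per-stack figures:
# 1 GB capacity, 1024-bit interface, 500 MHz clock, double data rate.
def hbm1_totals(stacks, gb_per_stack=1, bus_bits_per_stack=1024,
                clock_mhz=500, data_rate=2):
    capacity_gb = stacks * gb_per_stack
    bus_bits = stacks * bus_bits_per_stack
    # bits -> bytes, then MT/s -> GB/s
    bandwidth_gbps = bus_bits / 8 * clock_mhz * data_rate / 1000
    return capacity_gb, bus_bits, bandwidth_gbps

print(hbm1_totals(4))  # Fiji-style config: (4, 4096, 512.0)
print(hbm1_totals(8))  # hypothetical 8-stack: (8, 8192, 1024.0)
```

So electrically the math scales linearly; the practical limits are interposer area, routing, and cost rather than anything in the arithmetic.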
The "memory optimizations" is an attempt to upsell 4GB because they have no other solution. I can guarantee you it wasn't the plan to ship it with 4GB. But since Hynix couldn't deliver, that's how it ended.
We'll see when actual benches and tests are done. And no, you can't guarantee anything; you are not the engineers working on the tech.
Are there any specifications available anywhere that say 4 stacks is the limit with HBM1? I understand 4x1GB and 4096-bit, but what stops them from adding 8x1GB and doing 8192-bit, if you disregard the obvious price increase etc.? From a technical point of view, I mean.
And what stops Nvidia from using HBM1 if HBM2 isn't available for Nvidia until mid/late 2016?
That stops his assumption that Pascal will use GDDR5 right there. He even says "GP104". Nobody said GP104 launches first.
The source of the kitguru article says GP100 is coming in Q1, not GP104. He doesn't even specify what HBM it will use.
This assumes TSMC has no problems with their schedule.
History is an excellent teacher, so it is safe to say TSMC will not meet their targets in a timely fashion. Again.
I see you ignored my post and are still maintaining this as true. Use your analytical thinking.
snip
Although Taiwan Semiconductor Manufacturing Co. has delayed mass production of chips using its 16nm fabrication processes, this did not happen only because of low yields. According to the company, 16nm yields at TSMC are approaching mature levels. This year TSMC will offer two 16nm process technologies for clients: 16nm FinFET (CLN16FF) and 16nm FinFET+ (CLN16FF+).
Fudzilla (April 2015 article): CLN16FF+ yield is already approaching CLN20SoC yield (which is mature enough to use for commercial products), according to a Cadence blog post. The VP reportedly said that 16FF+ provided better maturity at risk production than any previous TSMC process. TSMC has received over 12 CLN16FF+ tape-outs so far and expects more than 50 product tape-outs this year. High-volume production will begin in the third quarter, with meaningful revenue contribution starting in Q4 2015.
TL;DR: TSMC's 16nm FinFET node (16FF) is already online, but the improved 16nm FinFET Plus (16FF+) node should be available soon as well. The company confirmed 16FF+ will enter volume production in mid-2015, roughly three months from now. (July 2015, Cloudfire)
I didn't know Nvidia owned kitguru.
I think I should start creating some leaks myself.
