TSMC begins 16nm FinFET volume production


railven

Diamond Member
Mar 25, 2010
6,604
561
126
Hahahaha exactly. Next gen node, new architecture, bandwidth, MULTI-GPU FURY X MASTER RACE 4k PROJECT CARS IS TEH D3V1L

Haha, nice! :thumbsup:

All of this x 1000. This is so true. In CPUs AMD is so far behind that only a few leftover die-hards are preaching from the mount. At least in GPUs AMD is hanging in the game, even though the time it takes to catch up is increasing. I notice how, whenever a new AMD high-end GPU is imminent, there is massive Nvidia-killing hype, and expectations of 20% better performance than Nvidia's comparable flagships are regurgitated over and over again.

I admit it's probably my Radeon bias, but every time a new Radeon GPU is near release I totally get sucked into the hype. Probably why I feel so... betrayed? ...when they launch. The VLIW4 vs VLIW5 claims, and how the HD 6970 was going to be amazing (barely), but at least the HD 7970 was amazing (at least I thought so).

I was willing to accept Fury X being 90% of GTX 980 Ti at a lower price point, heck even equal with the water cooler, but then the "it's gonna be 20% faster" nonsense got my expectations so high that when I finally saw the results I guess I snapped, haha.

Oh well, Win10 is definitely shaping up to be a solid OS, but I just don't see what people see in their crystal balls. Somehow I doubt it will change the tides. I guess it's time for the classic "Wait and See."
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
From what I keep reading, this isn't the first time GloFo has floundered, either, is it? Yet that deal AMD made keeps them dropping cash on GloFo with basically no results to show for it.

But I guess, with DX12, and Win10, and GloFo - AMD is somehow going to dominate. Woof. These expectations are so high, I just keep being reminded of the "290X on Mantle beating 780 SLI" claim.

GloFo is the joke of the industry. TSMC usually misses deadlines by two years (everyone is used to that), but GloFo is just completely gone. The simple fact is they had to license Samsung's 14nm and, even with Apple's backing, couldn't get it to work. That Apple has now effectively dropped GloFo and rewarded TSMC with 30% of the A9 production (Samsung gets the other 70%) is utterly amazing.

And let's not even start on that WSA they signed with a third-class foundry like GloFo.

I still remember some people claiming the 300 series+Fury would give AMD 30-40% GPU marketshare.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
Haha, nice! :thumbsup:



I admit it's probably my Radeon bias, but every time a new Radeon GPU is near release I totally get sucked into the hype. Probably why I feel so... betrayed? ...when they launch. The VLIW4 vs VLIW5 claims, and how the HD 6970 was going to be amazing (barely), but at least the HD 7970 was amazing (at least I thought so).

I thought it was just okay at launch, and that was backed up by Nvidia's "mid-range" Kepler beating it until the 7970 GHz Edition launched. But it's developed some great legs over the years and turned out to be a fantastic card for anyone that held onto it.

Oh well, Win10 is definitely shaping up to be a solid OS, but I just don't see what people see in their crystal balls. Somehow I doubt it will change the tides. I guess it's time for the classic "Wait and See."

I'm not sure what other people think Win10 will do for AMD that it won't do for Nvidia either. Both will benefit equally from closer-to-the-metal development and better CPU thread utilization.
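
A minimal D3D12 sketch of that thread-utilization point (illustrative only; the RecordInParallel helper is hypothetical, and error checks are omitted): each worker thread records its own command list, and one thread submits them all in a single call.

#include <d3d12.h>
#include <thread>
#include <vector>

// Hypothetical helper: each worker thread records into its own
// allocator/command-list pair. D3D12 imposes no global driver lock on
// recording, so it scales across cores in a way D3D11 never did.
void RecordInParallel(ID3D12Device* device, ID3D12CommandQueue* queue,
                      ID3D12PipelineState* pso, unsigned workerCount)
{
    std::vector<ID3D12CommandAllocator*>    allocators(workerCount);
    std::vector<ID3D12GraphicsCommandList*> lists(workerCount);
    std::vector<std::thread>                workers;

    for (unsigned i = 0; i < workerCount; ++i) {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i], pso, IID_PPV_ARGS(&lists[i]));
        workers.emplace_back([list = lists[i]] {
            // ... record this thread's slice of draws/dispatches here ...
            list->Close();
        });
    }
    for (auto& w : workers) w.join();

    // Single submission of everything the workers recorded.
    std::vector<ID3D12CommandList*> submit(lists.begin(), lists.end());
    queue->ExecuteCommandLists((UINT)submit.size(), submit.data());
}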
 
Last edited:

shady28

Platinum Member
Apr 11, 2004
2,520
397
126
I thought it was just okay at launch, and that was backed up by Nvidia's "mid-range" Kepler beating it until the 7970 GHz Edition launched. But it's developed some great legs over the years and turned out to be a fantastic card for anyone that held onto it.



I'm not sure what other people think Win10 will do for AMD that it won't do for Nvidia either. Both will benefit equally from closer-to-the-metal development and better CPU thread utilization.

They will both benefit, but only in certain types of operations. The PS4 has had async shader capability since launch, yet only 3 titles use it, and only one Mantle title uses it. It's also been in OpenGL since 2008.


The advantage for AMD comes with GCN (all versions) vs Kepler and Maxwell v1 (750/750Ti).

Having said that, Maxwell V2 (970/980/960) is a different ballgame. It looks to me like GCN 1.1/1.2 can't really scale up well beyond 2 CPU cores, whereas Maxwell V2 is able to use 4+.

This is from AT's article here; it reflects how the arch can use the graphics command processor and compute units for async/parallelism.

You can see that GCN 1.0 - 1.2 is theoretically superior in this use case to Nvidia for everything prior to Maxwell V2, but Maxwell V2 is superior to all GCN archs.



AMD GCN 1.2 (285): 1 Graphics + 8 Compute
AMD GCN 1.1 (290 Series): 1 Graphics + 8 Compute
AMD GCN 1.1 (260 Series): 1 Graphics + 2 Compute
NVIDIA Maxwell 2 (900 Series): 1 Graphics + 31 Compute
NVIDIA Maxwell 1 (750 Series): 1 Graphics
NVIDIA Kepler GK110 (780/Titan): 1 Graphics
NVIDIA Kepler GK10x (600/700 Series): 1 Graphics


The result is that GCN and anything prior to Maxwell V2 scales well only on 2 cores; Maxwell V2 scales well on 4 cores. What's also interesting here is that despite being theoretically crippled here, Maxwell V1 is still whipping the 260 series GCN 1.1. That's probably drivers, since the 260 series theoretically should be faster given the above advantage.

[Image: 71455.png]
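
As an illustration of what "async shaders" means at the API level (a hypothetical D3D12 sketch, not code from the AT article): work submitted to a separate compute queue may overlap the graphics queue on hardware whose schedulers support it, and simply serializes where they don't.

#include <d3d12.h>

// Create one graphics (direct) queue and one compute queue. On GCN the
// ACEs can feed the compute queue's work to the shaders alongside the
// graphics command processor's work; hardware without that ability just
// runs the two queues back-to-back.
void CreateQueues(ID3D12Device* device,
                  ID3D12CommandQueue** graphicsQueue,
                  ID3D12CommandQueue** computeQueue)
{
    D3D12_COMMAND_QUEUE_DESC gfx = {};
    gfx.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;     // graphics + compute + copy
    device->CreateCommandQueue(&gfx, IID_PPV_ARGS(graphicsQueue));

    D3D12_COMMAND_QUEUE_DESC comp = {};
    comp.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;   // compute + copy only
    device->CreateCommandQueue(&comp, IID_PPV_ARGS(computeQueue));

    // Ordering across queues is the app's job: Signal an ID3D12Fence on
    // one queue and Wait on it from the other where results are consumed.
}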
 

Trumpstyle

Member
Jul 18, 2015
76
27
91

ThatBuzzkiller

Golden Member
Nov 14, 2014
1,120
260
136
Next to no one will use TSMC's 16nm ...

It's certainly not attractive to small-time chip designers because of design costs, and it's also not attractive for big-time chip designers when it's simply not the best the industry has to offer spec-wise in comparison to Samsung's 14nm process ...

Soon enough even GF will be doing some damage to TSMC ...
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Your link says TSMC will start producing the A9 in Q4, when the new iPhone launches in September, riiiight :p

It's no surprise considering the time it takes to change from GloFo to TSMC.

GloFo simply can't deliver.

And Samsung still accounts for 70% of the A9, while TSMC accounts for 30% plus all the A9X.
 

ThatBuzzkiller

Golden Member
Nov 14, 2014
1,120
260
136
And Samsung still accounts for 70% of the A9, while TSMC accounts for 30% plus all the A9X.

You can't just have the same chip being produced on different transistors since that's going to be a logistics and technical marketing nightmare ...

How is Apple going to explain the new iPhone having different power consumption and battery life?

TSMC's 16nm process node is NOT at all comparable to Samsung's 14nm process node when it comes to power, performance, and area scaling characteristics ...

If Apple does produce an A9X chip, it would be on Samsung's 14nm process node again rather than TSMC's 16nm process node ...

Performance is key for the A9X and TSMC simply can't deliver on that front ...
 
Feb 19, 2009
10,457
10
76
PPS: Titan X is still GM200 and is faster than Fury X at 4K, and has considerably more OC headroom. AMD has better multi-GPU scaling, but that has nothing to do with this.

Thought you would be beyond that kind of blind bias and be man enough to give credit where it's due.

All the 4K benches at playable settings (not single-card 30fps, but multi-GPU) have shown Fury X to beat even Titan X, even massively OC'd ones, since SLI scales worse. Been over this in many other threads.

When you make such a broad statement as "GM200 is faster than Fury X," it needs to include that it only applies to 1440p and below. Discarding the high-end 4K results because they don't suit your 1440p setup is ignoring a growing market & audience for Fury X.
 
Feb 19, 2009
10,457
10
76
They will both benefit, but only in certain types of operations. The PS4 has had async shader capability since launch, yet only 3 titles use it, and only one Mantle title uses it. It's also been in OpenGL since 2008.


The advantage for AMD comes with GCN (all versions) vs Kepler and Maxwell v1 (750/750Ti).

Having said that, Maxwell V2 (970/980/960) is a different ballgame. It looks to me like GCN 1.1/1.2 can't really scale up well beyond 2 CPU cores, whereas Maxwell V2 is able to use 4+.

This is from AT's article here; it reflects how the arch can use the graphics command processor and compute units for async/parallelism.

You can see that GCN 1.0 - 1.2 is theoretically superior in this use case to Nvidia for everything prior to Maxwell V2, but Maxwell V2 is superior to all GCN archs.



AMD GCN 1.2 (285): 1 Graphics + 8 Compute
AMD GCN 1.1 (290 Series): 1 Graphics + 8 Compute
AMD GCN 1.1 (260 Series): 1 Graphics + 2 Compute
NVIDIA Maxwell 2 (900 Series): 1 Graphics + 31 Compute
NVIDIA Maxwell 1 (750 Series): 1 Graphics
NVIDIA Kepler GK110 (780/Titan): 1 Graphics
NVIDIA Kepler GK10x (600/700 Series): 1 Graphics

Maxwell 2 does not have 31/32 compute engines. Ryan Smith is wrong and refuses to correct his table.

Kepler & Maxwells have 1 queue engine that can handle up to 32 queues.
GCN from R290/X onwards have 1 CP and 8 queue engines (ACE) that each can handle 8 queues.

[Image: 78767566nug1.jpg]


[Image: gfxcomputedx12pzu7w.jpg]


@Ryan Smith
Thanks to your refusal to fix your chart, people are spreading misinformation.

As for why AMD is listed in mixed mode as 1 rendering + 8 compute: it's because of their 8 independent queue engines, each able to add 1 compute queue asynchronously to the main rendering from the CP. Note that they do not add more than 1 each, because:

https://forum.beyond3d.com/threads/direct3d-feature-levels-discussion.56575/page-18#post-1851420
AMD's asynchronous compute implementation is also very good, as the fully bindless nature of their GPU means that the CUs can do very fine-grained simultaneous execution of multiple shaders. Don't get fooled by the maximum amount of compute queues (shown by some review sites). Big numbers don't tell anything about the performance. Usually running two tasks simultaneously gives the best performance. Running significantly more just trashes the data and instruction caches.
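
Putting numbers on the two lines above: GCN from the R290/X exposes up to 8 x 8 = 64 compute queues beside the CP, against the single 32-queue engine on Kepler/Maxwell. But as the quote says, queue count is a cheap, largely software-visible number. A hypothetical D3D12 sketch of why (CreateComputeQueues is illustrative, not anyone's shipping code):

#include <d3d12.h>
#include <vector>

// D3D12 will happily create any number of compute queues on any adapter;
// how many streams actually run concurrently is decided by the hardware
// scheduler and driver, not by this count. Per the quote above, ~2
// simultaneous tasks is usually the sweet spot before caches thrash.
std::vector<ID3D12CommandQueue*> CreateComputeQueues(ID3D12Device* device,
                                                     unsigned count)
{
    std::vector<ID3D12CommandQueue*> queues(count);
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    for (unsigned i = 0; i < count; ++i)
        device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queues[i]));
    return queues;  // 64 queues here won't outrun 2 well-fed ones
}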
 
Last edited:

nenforcer

Golden Member
Aug 26, 2008
1,775
14
81
But I guess, with DX12, and Win10, and GloFo - AMD is somehow going to dominate. Woof. These expectations are so high, I just keep being reminded of the "290X on Mantle beating 780 SLI" claim.

That's what I'm counting on - and why I went with 2x Radeon R9 295 and an AMD FX-8350 for my Windows 10 build.
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
That's what I'm counting on - and why I went with 2x Radeon R9 295 and an AMD FX-8350 for my Windows 10 build.

I'm just hoping I can hold out until Zen comes out :(

My Intel CPU is gonna bottleneck me so bad when Win10 is out. I just know it.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
You can't just have the same chip being produced on different transistors since that's going to be a logistics and technical marketing nightmare ...

How is Apple going to explain the new iPhone having different power consumption and battery life?

TSMC's 16nm process node is NOT at all comparable to Samsung's 14nm process node when it comes to power, performance, and area scaling characteristics ...

If Apple does produce an A9X chip, it would be on Samsung's 14nm process node again rather than TSMC's 16nm process node ...

Performance is key for the A9X and TSMC simply can't deliver on that front ...

TSMC got all the A9X allocation. And do you know the exact electrical properties of the 2 processes? Also, even if they all came from Samsung, you would still end up with different chips. The only goal is that they all hit the desired targets; whether they are just at target or over doesn't matter.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
Thought you would be beyond that kind of blind bias and be man enough to give credit where it's due.

All the 4K benches at playable settings (not single-card 30fps, but multi-GPU) have shown Fury X to beat even Titan X, even massively OC'd ones, since SLI scales worse. Been over this in many other threads.

When you make such a broad statement as "GM200 is faster than Fury X," it needs to include that it only applies to 1440p and below. Discarding the high-end 4K results because they don't suit your 1440p setup is ignoring a growing market & audience for Fury X.

That's great and I'm happy for you, but I wasn't talking about more than 1 GPU. You keep bringing up multi-GPU solutions. You're the only one bringing it up. I made a quick comment to appease you about how CFX has superior scaling, then I got back to the subject at hand - that Nvidia's best GPU (notice how singular is implied) is faster than AMD's best GPU (again, no mention of multiple GPUs), and does it with 50% less bandwidth. Therefore, if Nvidia is ready to launch big Pascal on 16nm FF before HBM2 is ready, then they can probably get away with a 512-bit bus running 7 GHz VRAM just fine in the interim.
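
For reference, the arithmetic behind that interim option (bandwidth = bus width / 8 x data rate; the 512-bit part is hypothetical):

Titan X / 980 Ti (GDDR5):  384 bit / 8 x 7 Gbps = 336 GB/s
Fury X (HBM1):             4096 bit / 8 x 1 Gbps = 512 GB/s
Hypothetical 512-bit part: 512 bit / 8 x 7 Gbps = 448 GB/s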

If you want to talk about which 16nm FF multi-GPU setup is going to be faster then go ahead, but I won't humor you because I don't run or care about multi-GPU setups. Never have, never will.
 
Last edited:

PPB

Golden Member
Jul 5, 2013
1,118
168
106
Another post made by the same suspect, another failed prediction that will be sig-worthy in just a few months. He won't quit, will he?


But to add some common sense: you just don't do a port to another fab to get your missing 30% capacity when, on the other hand, you have an option with the very same process. If you have to choose your backup provider between fail fab A, with a different process and the need to have your design ported, and fail fab B, which has a copycat process from your main provider, the answer is so simple. TSMC could deliver so much that we were stuck 5 years on 28nm. /sarc
 
Last edited:

ThatBuzzkiller

Golden Member
Nov 14, 2014
1,120
260
136
TSMC got all the A9X allocation. And do you know the exact electrical properties of the 2 processes? Also, even if they all came from Samsung, you would still end up with different chips. The only goal is that they all hit the desired targets; whether they are just at target or over doesn't matter.

All you need to know is that Samsung's 14nm process is better than TSMC's 16nm process ...

You can get different chips, but that doesn't mean it's not significantly harder to hit the same target on a different transistor ...

That article you posted just raises even more doubts, going by the comments ...

Samsung selling 5 or 10 million units of their top end phone a quarter will hardly dent their fab capacity ...

Even if GF was only hitting 30% yield at that time, that doesn't mean they can't do some optimization over the next 4 or 5 months, given their size, to get the yield to a more usable 40% and cut a good deal for Apple by reducing the price by 8%, leaving only a 17% premium compared to Samsung ...

The S-2 fab line isn't the only plant Samsung has that can produce 14nm chips; there's also S-1. If Samsung were to start producing Apple A9s now, they could probably expect to hit a realistic target of 65 or 70 million chips before Christmas ...

How can you be so sure that TSMC will be better for Apple's case when they might not be ready for the new iPhone launch, or some other issue might crop up in their plants?
 
Last edited:

ThatBuzzkiller

Golden Member
Nov 14, 2014
1,120
260
136
Another post made by the same suspect, another failed prediction that will be sig-worthy in just a few months. He won't quit, will he?


But to add some common sense: you just don't do a port to another fab to get your missing 30% capacity when, on the other hand, you have an option with the very same process. If you have to choose your backup provider between fail fab A, with a different process and the need to have your design ported, and fail fab B, which has a copycat process from your main provider, the answer is so simple. TSMC could deliver so much that we were stuck 5 years on 28nm. /sarc

This ...

Neither fab is looking impressive for Apple, and I highly doubt you can port a chip design to a different process node as a last-minute decision in less than 6 months ...

Fail fab A and fail fab B indeed ... :D
 
Last edited:

Qwertilot

Golden Member
Nov 28, 2013
1,604
257
126
Fairly confident they'll have been running every single remotely possible option for a long time now.

This isn't a cost-limited thing :) The amount of money (and prestige) riding on getting next-gen iPhones out promptly and in big numbers is insane.