[WccfTech] 16 nm delayed even more, now "deep into 2016"

Mondozei

Golden Member
Jul 7, 2013
1,043
41
86
WccfTech, citing two other sources, is now claiming that 16 nm FinFET+ has been delayed even further. We're now looking at "deep into 2016" for mass production.

14 nm for dGPUs will be ready this autumn from Samsung/GloFo. How fast can/will Nvidia switch from TSMC? Qualcomm switched, so why couldn't Nvidia? TSMC are starting to look almost as bad as Intel with their constant delays.

Oh, and AMD will be laughing all the way to the bank if this continues, which isn't good for anyone but them, because it'd be winning by default, not by technical leadership.
 

exar333

Diamond Member
Feb 7, 2004
8,518
8
91
This is a HUGE deal, as 20nm was a mess and has essentially been skipped over except by a few customers...
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
This is a HUGE deal, as 20nm was a mess and has essentially been skipped over except by a few customers...

It wasn't skipped over. There are already plenty of mobile SoCs that use it, or are preparing to. It was only skipped over for HP (high-performance) applications, which include GPUs.

As for the delay, meh. Was to be expected.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I wasn't really expecting a next generation of cards (Pascal/GCN 3.0) until October/November 2016, given that GM204 launched in late September 2014. It took NV more than 2.5 years to replace the GK104 680 with the GM204 980, and 3 years to replace the 580 with a 780 Ti (its true successor), while it's taking AMD more than 3 years to replace the 7970 with a 390X (its true successor). Late 2016 for true next-gen 14nm/16nm GPUs would make me happy, since that's about 3 years after the 290X/780 Ti! Considering Intel delayed BW-E and Skylake-K by 6+ months, a 1-2 quarter delay for 14nm/16nm isn't a big deal imo.

It seems GPU performance now takes roughly 3 years to go up 2-2.25x, up from the historical 18 months, then 24 months. If the Big Daddy Pascal/GCN 3.0 big dies launch by Oct-Dec 2016, I'd be thrilled!

I also expect that as we get down to lower nodes, it's going to be harder to physically shrink transistors; with the increased fab costs and lower yields associated with newer nodes, I imagine performance scaling will keep dropping off for GPUs, or it will take even longer for performance to double.
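A quick back-of-the-envelope sketch of what those cadences imply, using only the 18/24/36-month doubling times mentioned above (nothing measured):

```python
# Implied year-over-year GPU performance gain for each doubling cadence.
# The cadences are the ones discussed above; nothing here is benchmark data.

def annual_gain(doubling_months: float) -> float:
    """Implied annual performance gain if performance doubles every `doubling_months`."""
    return 2 ** (12 / doubling_months) - 1

for months in (18, 24, 36):
    print(f"2x every {months} months -> ~{annual_gain(months):.0%} per year")

# 2x every 18 months -> ~59% per year
# 2x every 24 months -> ~41% per year
# 2x every 36 months -> ~26% per year
```

Going from an 18-month to a 36-month doubling time cuts the implied annual gain by more than half, which lines up with how much slower recent generations have felt.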
 
Last edited:

Pneumothorax

Golden Member
Nov 4, 2002
1,182
23
81
Then bring on the $600, 400-watt, 2x8-pin, 28nm, 2 GHz cards with built-in H2O AIO cooling. I don't care about power consumption, just gimme performance!
 

ocre

Golden Member
Dec 26, 2008
1,594
7
81
Well, all we have to do is look at Intel and we see a scary trend.

They are the leaders, with world-class foundries.

But something crazy has happened after 32nm...

Just look at the TDP of the 5820K/5930K and compare back to Ivy Bridge, Sandy Bridge, and Westmere. Also look at Ivy Bridge or Haswell overclocked... not only does the max MHz struggle to keep pace with Sandy Bridge, the power consumption savings vanish.

There is a real problem looming over us. Advancing the node, shrinking and packing in more transistors, is not advancing top-end high performance anywhere close to the pace of the past. This is very concerning, and most likely not something we can solve easily.

Intel chips have only moved forward because of architecture advancements. The node offered lower consumption in less demanding scenarios, but that has been easy to brush under the rug. Doesn't anyone see that had Intel been neck and neck with AMD all this time, this would be a complete tragedy? Devastating, and in everyone's face.

And that is the situation we have in the GPU space. AMD and Nvidia are in a close battle. They are both pushing clock speeds and struggling for ultimate performance out of these nodes, pushing them all the way to the cutting edge. Shrinking nodes has been very fruitful over the years; it has offered so much, over and over.

But I fear those days are gone. With nodes shrunk so small and transistors packed so tight, there is no way to dissipate heat as efficiently in high-MHz applications. These nodes offer power reduction and performance in much more modest applications, not so much max performance/max MHz.

This is the future: slow going for the top end. We had been moving so fast for years, and now we have come to a screeching halt. Unless there is some major breakthrough, we will only inch forward over the next few years. GPU advancements will mostly come from architecture and not so much from the node. We already see this happening: for the last few years, GPU advancement has been undeniably slower than in years past. I expect this pace is the new norm. Advancements will be hard fought and progress will be slower.
This is the future, and we are already experiencing it.
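What ocre describes is essentially the end of Dennard scaling. A minimal sketch, assuming textbook scaling factors (a ~0.7x linear shrink per node; not foundry data), of why density gains no longer keep heat in check once supply voltage stops dropping:

```python
# Dynamic power P ~ C * V^2 * f. Under classic Dennard scaling, C, V, and
# area all shrank with the node while f rose, so power density stayed flat.
# Once V stops scaling, the same shrink makes power density climb.

S = 0.7  # assumed linear shrink per node (textbook figure)

def power_density_ratio(cap_scale, volt_scale, freq_scale, area_scale):
    """Per-transistor dynamic power (C * V^2 * f) divided by area scaling."""
    per_transistor = cap_scale * volt_scale**2 * freq_scale
    return per_transistor / area_scale

dennard = power_density_ratio(S, S, 1 / S, S**2)    # voltage scales with the node
stuck   = power_density_ratio(S, 1.0, 1 / S, S**2)  # voltage stuck, f still rising

print(f"Classic Dennard: power density x{dennard:.2f} per node (flat)")
print(f"Voltage stuck:   power density x{stuck:.2f} per node (roughly doubling)")
```

With voltage stuck, chasing the same frequency gains roughly doubles power density every node, which is exactly the "can't dissipate the heat at high MHz" wall described above.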
 

videogames101

Diamond Member
Aug 24, 2005
6,783
27
91
Well, all we have to do is look at Intel and we see a scary trend. [...] This is the future, and we are already experiencing it.

Maybe. It's not like the industry isn't aware that new ideas are needed; there's a lot of cool stuff in the semiconductor research pipeline :thumbsup:
 

bystander36

Diamond Member
Apr 1, 2013
5,154
132
106
Well, all we have to do is look at Intel and we see a scary trend. [...] This is the future, and we are already experiencing it.

When it comes to GPUs, shrinking offers bigger improvements, as it allows for more cores, which is much more useful than on CPUs. Parallel processing makes use of extra cores far better than the serial processing that is predominant on the CPU.
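A rough illustration of that point using Amdahl's law, with hypothetical parallel fractions standing in for "CPU-ish" and "GPU-ish" workloads:

```python
# Amdahl's law: best-case speedup when only part of a workload scales
# across cores. The parallel fractions below are illustrative, not measured.

def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Upper-bound speedup with `cores` cores and a fixed serial portion."""
    return 1 / ((1 - parallel_fraction) + parallel_fraction / cores)

for label, p in (("CPU-ish task, 75% parallel", 0.75),
                 ("GPU-ish task, 99% parallel", 0.99)):
    print(label, [round(amdahl_speedup(p, n), 1) for n in (2, 8, 32, 2048)])

# CPU-ish task, 75% parallel [1.6, 2.9, 3.7, 4.0]
# GPU-ish task, 99% parallel [2.0, 7.5, 24.4, 95.4]
```

The mostly serial workload caps out around 4x no matter how many cores you add, while the shader-style workload keeps scaling, which is why extra transistors from a shrink buy GPUs more than CPUs.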
 

ocre

Golden Member
Dec 26, 2008
1,594
7
81
Oh, I know, but the dilemma is how many cores you can cram in vs. how high you can clock them.

There is also the hard limit of the physical size of your chip.
Node shrinks allow more transistors per mm², and packing them in tight is how we start having issues transferring heat. Smaller nodes should use less voltage, but we aren't really shrinking nodes these days, and terms such as 14nm are largely invented for marketing.

See, this fundamental problem exists for GPUs as well... even if it might be more manageable in some ways.
Look at Maxwell on 28nm. Some people... many people are running their chips at 1500MHz or more. That's what we need to think about. If the 20nm node lets someone add more cores in the same space but the top speed can't reach 1500MHz, then it's a matter of trade-offs: is it worth it or not?
The big point I want to make is that max clock speed has been going up and up on graphics cards, but now we are at a point where I'm just not sure it continues. GPUs should still be able to advance, and it's not as bad as the situation CPUs fell into. But it is true that we expanded fast up to a point, and that point has come. Now things will grow more slowly, in a more strategic way. Architectural improvements are becoming the way forward, while node shifts become less and less dependable.
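A toy version of that trade-off. The configurations are made up (the 1500MHz figure comes from the post above; the core counts and the 20nm-style clock are invented for illustration), and throughput is naively cores x clock:

```python
# Hypothetical cores-vs-clock trade-off: a mature 28nm part at an
# aggressive overclock vs. a denser node with more cores but a lower
# clock ceiling. All figures are illustrative, not real products.

def throughput_gflops(cores: int, mhz: float, flops_per_clock: int = 2) -> float:
    """Naive peak throughput: cores * clock * FLOPs issued per clock."""
    return cores * mhz * 1e6 * flops_per_clock / 1e9

old = throughput_gflops(2048, 1500)  # 28nm-style: fewer cores, high clock
new = throughput_gflops(2816, 1100)  # denser node: ~37% more cores, capped clock

print(f"28nm-style: {old:.0f} GFLOPS, denser node: {new:.0f} GFLOPS "
      f"({new / old - 1:+.1%} change)")
```

With these assumed numbers the denser part comes out roughly even on paper, which is the dilemma: a shrink that costs you clock speed can end up a wash on peak throughput.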
 

RocketPuppy

Junior Member
Jan 14, 2015
3
0
0
This is to be expected from now on unless we get a big breakthrough. Things are getting small enough that cross-talk is a major problem, and on the CPU end they have had to pull some engineering trickery to prevent issues. Solar flares are a legitimate engineering concern when designing chips at these tiny processes.
 

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
This is to be expected from now on unless we get a big breakthrough. Things are getting small enough that cross-talk is a major problem, and on the CPU end they have had to pull some engineering trickery to prevent issues. Solar flares are a legitimate engineering concern when designing chips at these tiny processes.

A solar flare is an issue when you have a pound of aluminum on top of the chip, inside an aluminum box?
 

NTMBK

Lifer
Nov 14, 2011
10,522
6,041
136
This is a HUGE deal, as 20nm was a mess and has essentially been skipped over except by a few customers...

Yes, only the most successful mobile chipmakers on the planet are using it... oh wait.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
Yes, only the most successful mobile chipmakers on the planet are using it... oh wait.

And one of them got two overheating chips on 20nm that barely reached 2/3 of the frequency target :p

Anyway, it's true: 20nm is a massive mess, and I doubt 14/16nm is any different. The problem is that design costs have boomed, and designers are sitting with new, unknown issues while lacking the tools they need. So, with the exception of those that can just throw money at the problem like a waterfall, for everyone else it's a cluster of poop.
 

sm625

Diamond Member
May 6, 2011
8,172
137
106
Just look at the TDP of the 5820K/5930K and compare back to Ivy Bridge, Sandy Bridge, and Westmere. Also look at Ivy Bridge or Haswell overclocked... not only does the max MHz struggle to keep pace with Sandy Bridge, the power consumption savings vanish.

It only seems like something is amiss with current process nodes compared to 32nm, but that is only because of how Intel has chosen to focus their µarch changes. To be fair, a 5930K is an absolute beast if you have an application that can use such power, which pretty much requires AVX. If you don't, then you are not going to consume anywhere near the TDP limit. That's why they can get away with putting a 5-watt die-shrunk Haswell into a fanless tablet. Windows really doesn't use AVX, nor does just about any typical consumer app that a tablet user would run. If you try to run 2 threads of AVX-heavy code, it is going to bog down and throttle hard. But JavaScript runs pretty quick.

There is no doubt that 14nm is significantly more power efficient than 32nm; at least 5 times as efficient. A 4.5W Core M-5Y70 scores 1263/2842 on PassMark (single-thread/multi-thread), while a 32nm Core i5-2467M @ 17W scores just 956/2341. So it is roughly 30% faster at about 1/4 the power.
There is no doubt that 14nm is significantly more power efficient than 32nm. At least 5 times as efficient. A 4.5w Core M-5y70 scores 1263/2842 on passmark (Single thread/ multithread) while a 32nm core i5-2467M @ 17W scores just 956 / 2341. So it is almost 30% faster at 1/4 the power.