Samsung and GLOBALFOUNDRIES Forge Strategic Collaboration to Deliver 14nm FinFET


NTMBK

Lifer
Nov 14, 2011
10,239
5,026
136
Intel isn't AMD...

In that case let me rephrase. "Just look at the Pentium 4 debacle!"

We don't know what caused those yield problems, but it seems to be down to the lithography (lack of EUV). I wouldn't be so quick to blame the GPU. Intel should be competent enough not to run into those issues, and as far as I know, 14nm really is an exception. BTW, Intel is still on a Tick-Tock cadence.

No, we don't know. GPU was just my guess. But the point still stands - Intel have abandoned Tick-Tock for the GPU, and that could be very risky.
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
No, we don't know. GPU was just my guess. But the point still stands - Intel have abandoned Tick-Tock for the GPU, and that could be very risky.

They don't leave the CPU completely unchanged either; see Ivy Bridge. Tick-Tock still exists, but you shouldn't take it too strictly. (Maybe they will follow it more strictly from now on, since Skylake will introduce Gen9, so maybe there won't even be a Gen10 with Cannonlake.)
 

NTMBK

Lifer
Nov 14, 2011
10,239
5,026
136
They don't leave the CPU completely unchanged either; see Ivy Bridge. Tick-Tock still exists, but you shouldn't take it too strictly. (Maybe they will follow it more strictly from now on, since Skylake will introduce Gen9, so maybe there won't even be a Gen10 with Cannonlake.)

I'm not being strict- they've totally abandoned it for GPUs since Ivy Bridge, and you can't deny it. Ivy Bridge was the GPU overhaul, and Haswell was the minor tweak, while Broadwell is again a massive overhaul. You yourself just described it as a "massively updated architecture". ;) I just worry that in their rush to improve their GPU architecture, they're abandoning the principle that has let them deliver so consistently since Conroe. The GPU is such an important part of the SoC, especially for the tablet market, and Intel are taking big risks with it.
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
I'm not denying it; Intel call this Tick+. Intel did it well with Ivy Bridge; we'll see in two years how they do with Cannonlake.
 

Erenhardt

Diamond Member
Dec 1, 2012
3,251
105
101
I wonder if AMD and NV will use different foundries: AMD+GF vs. NV+TSMC. Anyway... some of you are missing a lot of laugh-worthy posts. Disable your ignore lists for a comedy.
 

teejee

Senior member
Jul 4, 2013
361
199
116
They made mistakes like any other company, sure, but what does that have to do with how good Gen8 will be?

A massively updated architecture does not necessarily equal a massively improved architecture.
A massive update means a lot of opportunities to make mistakes.

Another thing: comparing the number of EUs between different architectures is useless until you have details about both. And I haven't seen such details about Cherry Trail.
 

NostaSeronx

Diamond Member
Sep 18, 2011
3,686
1,221
136
Well, I can already bet whatever you want that it won't beat TK1. Do you?
Image quality is based on the drivers, while performance is based on the hardware. I am pretty confident that Intel's Cherry Trail will beat Tegra K1 and Beema/Mullins.
 

jdubs03

Senior member
Oct 1, 2013
377
0
76
A massively updated architecture does not necessarily equal a massively improved architecture.
A massive update means a lot of opportunities to make mistakes.

Another thing: comparing the number of EUs between different architectures is useless until you have details about both. And I haven't seen such details about Cherry Trail.

There are also more opportunities for improvement. If Intel wants to challenge AMD on the integrated GPU side, they have to execute on significant increases in performance, and the drop to 14nm allows extra area for GPU allocation (hence the extra EUs for both Cherry Trail and Broadwell, which certainly helps), as well as CPU allocation. But even if there were no drop to 14nm, I still think there would be a decent architectural improvement in perf/W. Broadwell-U, I think, will be a very solid chip performance-wise; the focus is less likely to be on power consumption because the TDP is the same. Longer-term, the U-series is going to pack some serious punch for high-end convertibles such as the Surface Pro 4 and others.

Another thing that interests me is the next revision of ARMv8-A cores, Maya/Artemis, though I don't expect that revision to be in products until ~2017, based on the roughly three-year gap between the v8/Cortex-A57 announcement and mainstream implementation (not counting Apple, which may choose late 2016).

Well, I can already bet whatever you want that it won't beat TK1. Do you?

I do think Cherry Trail will outperform TK1 - like I said before, within 85% of the HD 4400 in some benchmarks. I could see it easily matching/beating the HD 4200, which is still considerably faster than TK1. There will be an absolute advantage for Intel.

But then there is still the issue of Intel's 14nm Tri-Gate vs. 28nm planar. If we take process into account - say a TK1 on 20nm planar - and cherry-pick 3DMark Graphics, the performance gap from where it currently sits would close to roughly the HD 4200 in the i3 version of the Surface Pro 3. And on the CPU side, using 3DMark Physics (it's not a bona fide CPU benchmark, but it is more intensive), the TK1 is already higher; it also posts a higher multi-threaded Geekbench score than even the i5-4210Y. I can see Denver really challenging on the CPU side. If 16nm FF+ were used for the GPU, obviously I wouldn't be arguing against you.

That's why, on a medium to longer-term basis, I think Nvidia has a real chance to make a splash with Logan (64-bit, Kepler), Erista (Maxwell), and then Parker (Pascal - a 3x increase over Kepler; the old roadmap had Volta at 4x, so that's a downgrade unfortunately). Once Nvidia can close the node gap to first-gen FinFET at 16nm, they'll have a much greater advantage CPU- and GPU-wise compared to where they are now, even if it's against Intel's 10nm (depending on fin material choice). But then there will still be all of the naysayers trying to dismiss their capability.
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
Then why post them in bold to press your point? You're contradicting yourself.

Skepticism does not mean outright disbelief. BTW, there are things where TSMC's statements can be taken without much doubt, such as leveraging 20SoC yield learning to improve 16FF/16FF+ yield learning, since they share the same BEOL. For others, like when the process will ramp to volume production and process metrics such as performance, power and density, it's better to wait and see how actual products perform and when they come to market.
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
A massively updated architecture does not necessarily equal a massively improved architecture.
A massive update means a lot of opportunities to make mistakes.
I remember before Maxwell launched, it was rumored to be a small architectural update, and now people are praising its 2x efficiency improvement.

How I read your comment is that you seem to think that Intel is blindly making a lot of changes to Gen7, putting it in Broadwell and hoping it will be good. That's obviously not how it works. Lots of PhDs were working on this with the goal to improve the architecture, not to simply change it. Because that's why you invest many millions to change an architecture, to improve it. So when Intel says “Broadwell graphics bring some of the biggest changes we’ve seen on the execution and memory management side of the GPU… [the changes] dwarf any other silicon iteration during my tenure, and certainly can compete with the likes of the gen3->gen4 changes.”, then to me that means we should expect a big improvement. That's what Occam's Razor tells us to expect and it's far more sensible than this conspiracy theory.

Another thing: comparing the number of EUs between different architectures is useless until you have details about both. And I haven't seen such details about Cherry Trail.
I don't think it's useless. I don't think it's too strange to expect Gen8's EUs to perform better than Gen7's EUs, so with Cherry Trail packing four times as many of them as Bay Trail, peak performance should at least quadruple at the same clock speed.
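Just as a rough sketch of that arithmetic (assuming the published Gen7 figure of 16 FP32 FLOPs per EU per clock carries over, and using a purely illustrative clock speed; the real Gen8 numbers aren't public):

```python
# Back-of-the-envelope peak FP32 throughput: EUs * FLOPs-per-EU-per-clock * clock (GHz)
FLOPS_PER_EU_PER_CLOCK = 16   # Gen7 figure; assumed here to carry over to Gen8
CLOCK_GHZ = 0.6               # illustrative clock, held constant for both parts

bay_trail_gflops    = 4  * FLOPS_PER_EU_PER_CLOCK * CLOCK_GHZ   # 4 EUs  -> ~38 GFLOPS
cherry_trail_gflops = 16 * FLOPS_PER_EU_PER_CLOCK * CLOCK_GHZ   # 16 EUs -> ~154 GFLOPS

print(cherry_trail_gflops / bay_trail_gflops)  # 4.0x on paper at the same clock
```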
 

jdubs03

Senior member
Oct 1, 2013
377
0
76
I remember before Maxwell launched, it was rumored to be a small architectural update, and now people are praising its 2x efficiency improvement.

How I read your comment is that you seem to think that Intel is blindly making a lot of changes to Gen7, putting it in Broadwell and hoping it will be good. That's obviously not how it works. Lots of PhDs were working on this with the goal to improve the architecture, not to simply change it. Because that's why you invest many millions to change an architecture, to improve it. So when Intel says “Broadwell graphics bring some of the biggest changes we’ve seen on the execution and memory management side of the GPU… [the changes] dwarf any other silicon iteration during my tenure, and certainly can compete with the likes of the gen3->gen4 changes.”, then to me that means we should expect a big improvement. That's what Occam's Razor tells us to expect and it's far more sensible than this conspiracy theory.


I don't think it's useless. I don't think it's too strange to expect Gen8's EUs to perform better than Gen7's EUs, so with Cherry Trail packing four times as many of them as Bay Trail, peak performance should at least quadruple at the same clock speed.

Haters want to hate.

The jump from Ivy Bridge to Haswell was pretty large; if we see a larger performance delta in this transition compared to the most recent one, then I believe that bodes very well for Gen8. It's just a shame we'll have to wait longer than usual to see the performance gains from both Broadwell and Cherry Trail.
 

NTMBK

Lifer
Nov 14, 2011
10,239
5,026
136
How I read your comment is that you seem to think that Intel is blindly making a lot of changes to Gen7, putting it in Broadwell and hoping it will be good. That's obviously not how it works. Lots of PhDs were working on this with the goal to improve the architecture, not to simply change it. Because that's why you invest many millions to change an architecture, to improve it. So when Intel says “Broadwell graphics bring some of the biggest changes we’ve seen on the execution and memory management side of the GPU… [the changes] dwarf any other silicon iteration during my tenure, and certainly can compete with the likes of the gen3->gen4 changes.”, then to me that means we should expect a big improvement. That's what Occam's Razor tells us to expect and it's far more sensible than this conspiracy theory.

Lots of PhDs went into designing Netburst, too.
 

AtenRa

Lifer
Feb 2, 2009
14,001
3,357
136
I don't think it's useless. I don't think it's too strange to expect Gen8's EUs to perform better than Gen7's EUs, so with Cherry Trail packing four times as many of them as Bay Trail, peak performance should at least quadruple at the same clock speed.

Just to show you why you shouldn't compare EU counts across different architectures:
Kepler GK104 has 3x the stream processors (1536) of Fermi GF110 (512), but performance only increased 20-30%. I'm not saying that Gen8 Intel graphics will only bring a minor increase in performance, but expecting a quadrupling is not going to happen.
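To put rough numbers on that (a sketch using the reference shader clocks; note Fermi ran its shaders at a 2x "hot clock", which Kepler dropped):

```python
# Peak FP32 throughput = shader count * 2 (FMA) * shader clock (GHz)
gtx580_gflops = 512  * 2 * 1.544   # GF110: 512 SPs at ~1544 MHz hot clock -> ~1.58 TFLOPS
gtx680_gflops = 1536 * 2 * 1.006   # GK104: 1536 SPs at ~1006 MHz base    -> ~3.09 TFLOPS

print(gtx680_gflops / gtx580_gflops)  # ~1.95x on paper, yet games only gained ~20-30%
```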
 

NTMBK

Lifer
Nov 14, 2011
10,239
5,026
136

I'm not trying to equate the two- I'm not saying Broadwell is doomed or anything similar, and in fact I hope it is a fantastic part. But I just want to make people aware that throwing resources at a project is no guarantee of success. Massive projects can still go off the rails.

I just get annoyed by the automatic assumption that a product, any product, is going to be amazing based on almost no data. On this forum it so happens that Intel gets hyped the most, so I sometimes come across as an Intel-basher, I guess. *shrug* I'm just a pessimist :)
 

Erenhardt

Diamond Member
Dec 1, 2012
3,251
105
101
Let's talk about the sports news, shall we?
Some people are derailing threads like crazy, and reporting them does the opposite of what one would expect. It's either Intel PR talk, or the thread gets locked...
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Just to show you why you shouldn't compare EU counts across different architectures:
Kepler GK104 has 3x the stream processors (1536) of Fermi GF110 (512), but performance only increased 20-30%. I'm not saying that Gen8 Intel graphics will only bring a minor increase in performance, but expecting a quadrupling is not going to happen.

Baytrail uses Gen7, Cherrytrail uses Gen8. And we got Gen8 to look at in Haswell today.

Baytrail got 4EUs, Cherrytrail got 16EUs.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Haswell is Gen 7.5, not 8 ;)

Sorry, yes. But since we can see the GT units on Broadwell, we know one Gen8 EU will be better than one Gen7.x EU.

The node difference will really start to kick in now. And it will really hurt any company that isn't Apple, Intel, Samsung or Qualcomm. The rest of the companies can't afford it.

We are back to the R&D issue.
 

AtenRa

Lifer
Feb 2, 2009
14,001
3,357
136
Sorry, yes. But since we can see the GT units on Broadwell, we know one Gen8 EU will be better than one Gen7.x EU.

As I have explained earlier with the Kepler and Fermi architectures, we shouldn't compare EU counts between different architectures. It may well be that Intel's Gen8 EUs each have lower performance (smaller size) than Gen7/7.5's, in order to raise throughput (more units in the same space).