[SemiAccurate] Coffee Lake points to issues with Intel’s 10nm process


itsmydamnation

Diamond Member
Feb 6, 2011
3,072
3,897
136
I didn't miss them; for gaming the 8C version is useful for games that scale well beyond 4C, that's it. Bandwidth is a bigger factor, btw. It can bring a nice boost in some games; faster RAM does it for dual channel.
You are wrong here; games aren't bandwidth limited at all. But lower memory access times will always help performance. Higher-speed memory at the same number of cycles to access equals lower latency.

I find it odd people think having 10GB of bandwidth per core (quad-core, dual-channel DDR4-2400) is bandwidth limited. AFAIK each core (>Haswell) supports 16 outstanding memory requests. With ~50ns of memory latency, that means the worst case is 128 bits (SSE) * 16 per 50ns of time per core.

128 * 16 * 1,000,000,000 / 50 / 8 / 1024 / 1024 = 4883 MB/s per core
128 = SSE width in bits
* 16 = outstanding accesses
* 1,000,000,000 = ns per second, to scale the figure up to one second
/ 50 = divide by the memory access time in ns
/ 8 = convert bits to bytes
/ 1024 = convert bytes to kilobytes
/ 1024 = convert kilobytes to megabytes

This doesn't factor in any hazards in accessing the cache system or the DRAM itself, which will happen and make that number even lower. But by lowering memory latency, more outstanding requests can be filled over a given period of time, and thus performance improves.
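For anyone who wants to check that arithmetic, here it is as a quick Python sketch. The 16 outstanding requests and ~50 ns latency are the assumptions above (and as noted, real traffic moves 64-byte cache lines); the last two lines just split nominal dual-channel DDR4-2400 peak across four cores.

Code:
# Latency-bound ceiling per core: 16 outstanding 128-bit (SSE-width)
# requests, each taking ~50 ns round trip.
REQUEST_BITS = 128        # SSE width; real traffic moves 64-byte cache lines
OUTSTANDING = 16          # assumed misses in flight per core
LATENCY_NS = 50           # assumed round-trip memory latency

bits_per_second = REQUEST_BITS * OUTSTANDING * 1_000_000_000 / LATENCY_NS
print(bits_per_second / 8 / 1024 / 1024)   # ~4883 MB/s per core, as above

# Nominal peak of dual-channel DDR4-2400 split across four cores.
peak_bytes_per_second = 2 * 8 * 2400e6     # two 64-bit channels at 2400 MT/s
print(peak_bytes_per_second / 4 / 1e9)     # ~9.6 GB/s per core, the "10GB" above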

This is why those memory bandwidth tests are stupid: they exercise the streaming prefetchers (moving large amounts of contiguous data into L2/L3), which don't do much for the kinds of workloads people then think they matter for (low-ILP code).

People in particular would bash Steamroller/Excavator over their weak stream prefetching without understanding what they were actually bashing.


*Now I know we move cache lines, but the point still stands
 

Sven_eng

Member
Nov 1, 2016
110
57
61
I didn't miss them; for gaming the 8C version is useful for games that scale well beyond 4C, that's it. Bandwidth is a bigger factor, btw. It can bring a nice boost in some games; faster RAM does it for dual channel.

Bandwidth is only a factor at the high end, but where Intel plays it's less than nothing. The reason Intel failed so badly in iGPUs is that they never understood this and built badly balanced chips. Even AMD didn't understand this until recently; Nvidia figured it out years ago.

It will be funny to see your response when Intel finally kills their iGPU segment. :)
 

DaveSimmons

Elite Member
Aug 12, 2001
40,730
670
126
Bandwidth is only a factor at the high end, but where Intel plays it's less than nothing. The reason Intel failed so badly in iGPUs is that they never understood this and built badly balanced chips. Even AMD didn't understand this until recently; Nvidia figured it out years ago.

It will be funny to see your Mikk/Paran/Rofas/Other response when Intel finally kills their iGPU segment. :)

By "Failed so badly" you mean "almost completely destroyed the market for low-end desktop video cards" AND "almost completely destroyed the market for dGPU in laptops."

Intel's iGPU has mostly failed at the midrange and above since almost no one wants Iris / Iris Pro. At the low end it has been a complete success, and Intel has no sane reason to drop iGPU any time soon.
 

Sven_eng

Member
Nov 1, 2016
110
57
61
By "Failed so badly" you mean "almost completely destroyed the market for low-end desktop video cards" AND "almost completely destroyed the market for dGPU in laptops."

Intel's iGPU has mostly failed at the midrange and above since almost no one wants Iris / Iris Pro. At the low end it has been a complete success, and Intel has no sane reason to drop iGPU any time soon.

There is a big difference between people accepting what they got and what they could have got.

Intel is abandoning their iGPU division and going with AMD graphics for a good reason.
 

DaveSimmons

Elite Member
Aug 12, 2001
40,730
670
126
There is a big difference between people accepting what they got and what they could have got.

Intel is abandoning their iGPU division and going with AMD graphics for a good reason.

Isn't that still just speculation? All the links I see are what-ifs.
 

DrMrLordX

Lifer
Apr 27, 2000
22,900
12,965
136
I didn't miss them; for gaming the 8C version is useful for games that scale well beyond 4C, that's it. Bandwidth is a bigger factor, btw. It can bring a nice boost in some games; faster RAM does it for dual channel.

. . . guess that quad channel RAM is good for something after all? Might be quite something on Skylake-X.

For the mainstream platform it is a big deal. If you are really denying this then I can only say you have no clue.

It's overdue, which is why it's no longer a big deal. It's expected. 6C on Skylake-K would have been a big deal. 6C on Kaby Lake-K would have been pretty cool. 6C in Q1 2018 though (barring further delays)? Cmonnnnn. And the timing?

One gets the impression it never would have happened were it not for pressure from the red team.

Intel's iGPU has mostly failed at the midrange and above since almost no one wants Iris / Iris Pro.

GT4e (w/ eDRAM) on the desktop might have been enough to make me try an Intel CPU had it come out maybe 4-6 months ago. As it was, it got put at Q4 2016 on the old roadmaps, and then poof! It was gone. So no love for the iGPU fanatics, eh.

Intel is abandoning their iGPU division and going with AMD graphics for a good reason.

I'm . . . not so sure that's what's really happening. Though it does look like they're going back to the drawing board. Just don't expect GCN iGPUs in i7s okay? I certainly don't.
 
Mar 11, 2004
23,444
5,852
146
By "Failed so badly" you mean "almost completely destroyed the market for low-end desktop video cards" AND "almost completely destroyed the market for dGPU in laptops."

Intel's iGPU has mostly failed at the midrange and above since almost no one wants Iris / Iris Pro. At the low end it has been a complete success, and Intel has no sane reason to drop iGPU any time soon.

I think that was likely for two simple reasons: the push for ever more compactness, and manufacturers wanting to spend as little as possible (both of which helped a lot in laptops). Intel was already dominating (by installed base) with their iGPU before they integrated it into the CPU.

I don't think they destroyed the dGPU market in laptops (if that were even remotely true, I don't think Microsoft would have put the anemic dGPU option in the Surface Book; in fact I think that shows exactly the opposite is happening, as do Nvidia's focus on Pascal in mobile form factors and the glut of affordable gaming-capable laptops).

Now, if you said that Intel's iGPU is good enough for most people's actual general portable laptop use, I'd agree. But I also think people want to be able to get more graphics performance (workstation/pro laptops, and for gaming). And hopefully they'll keep improving external GPU support so that we can get that.

I agree, but I think that's mostly because of price. And yeah Intel is not dropping iGPU anytime soon.

Isn't that still just speculation? All the links I see are what-ifs.

It is.

Even if they switch cross-licensing to AMD instead of Nvidia, it would still be an iGPU built by Intel.

Actually, I have a hunch that isn't the case. I don't think we'll see a major shakeup of the iGPU from Intel. It will improve some (and I think in ways that are about boosting overall performance, essentially trying to make good on heterogeneous potential), but they aren't slapping RTG stuff onto their GPU.

I think the RTG aspect, aside from the patent licensing deals (to prevent lawsuits), is about premium designs, Apple being the major concern. Intel is trying to keep Apple happy and wanting to use Intel's chips (Apple switching from Intel to their own ARM designs would be a huge blow, and a lot of people would take that as a signal of x86 really dying). Apple also wants strong GPUs, and they have a good relationship with AMD/RTG. So Intel is going to put a decent-sized RTG GPU and their CPU on an interposer, possibly with some shared fast memory (wonder if the GPUs can do Optane?).

I really think it is a move Intel sees as necessary (I'm sure AMD would be more than happy to work with Apple on integrating their GPUs into Apple's SoC designs, so it's not like AMD couldn't still offer the strong GPU that Apple would want if Apple felt they had a good enough CPU design).

It's a win for AMD, as they're not likely to make real progress in laptops even with Zen (which seems to be good, even with regard to power use, but I just think Intel still has a big leg up in the mobile/efficiency realm and will for probably a few years; even with a competitive Zen I think Apple might stick with Intel for consistency, and they have the clout to get favorable pricing from Intel, so they'd likely just leverage that for better pricing rather than shake things up too much). But it gets AMD into some premium laptops, an area they've struggled with in recent years (and unless they make massive inroads with Vega, they could actually fall further behind Nvidia, who made mobile a focus of their entire consumer chip design and where Pascal definitely excels).
 

mikk

Diamond Member
May 15, 2012
4,296
2,382
136
You are wrong here; games aren't bandwidth limited at all. But lower memory access times will always help performance. Higher-speed memory at the same number of cycles to access equals lower latency.

I'm not wrong. Bandwidth and latency both help.

http://www.hardware.fr/getgraphimg.php?id=222&n=3

Absolute latency of DDR4-2133 CL15 is worse than DDR3-1600, and still it is just as fast. That's only possible because bandwidth helps. From my own test, going from DDR4-2133 CL14 to DDR4-3200 CL14 I gained 20% in pCARS, a change that shifts latency and bandwidth equally.
 

itsmydamnation

Diamond Member
Feb 6, 2011
3,072
3,897
136
I'm not wrong. Bandwidth and latency both help.

http://www.hardware.fr/getgraphimg.php?id=222&n=3

Absolute latency of DDR4-2133 CL15 is worse than DDR3-1600, and still it is just as fast. That's only possible because bandwidth helps. From my own test, going from DDR4-2133 CL14 to DDR4-3200 CL14 I gained 20% in pCARS, a change that shifts latency and bandwidth equally.
Why do people never give actual links, so you have to go and find the data they used yourself...
http://www.hardware.fr/articles/940-5/cpu-ddr4-vs-ddr3-pratique.html


So explain why your ARMA 3 results almost exactly match the latency results and have no correlation to the memory read or write results?

latency
http://www.hardware.fr/getgraphimg.php?id=222&n=6
write
http://www.hardware.fr/getgraphimg.php?id=222&n=5
read
http://www.hardware.fr/getgraphimg.php?id=222&n=4

And your pCARS gain will be all because of latency...
 

Excessi0n

Member
Jul 25, 2014
140
36
101
From my own test, going from DDR4-2133 CL14 to DDR4-3200 CL14 I gained 20% in pCARS, a change that shifts latency and bandwidth equally.

If you wanted the latency to be the same you should have had the memory at 3200/CL21 for the latter test, since a higher clockspeed reduces the amount of time taken by each cycle. As you've described it, your second example has a latency ~2/3 that of the first.
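To put rough numbers on that (a quick sketch using nominal CAS latency only; the other sub-timings are ignored):

Code:
# Absolute CAS latency in ns = CL cycles / memory clock.
# DDR does two transfers per clock, so the clock is half the transfer rate.
def cas_ns(transfer_rate_mts, cl):
    clock_mhz = transfer_rate_mts / 2
    return cl / clock_mhz * 1000

print(cas_ns(2133, 14))   # ~13.1 ns
print(cas_ns(3200, 14))   # ~8.75 ns, roughly 2/3 of 2133 CL14
print(cas_ns(3200, 21))   # ~13.1 ns, i.e. the same absolute latency as 2133 CL14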

That being said, I'd be surprised if additional bandwidth didn't help at all. Sure, games will probably never use the full throughput available with modern RAM. But that doesn't necessarily matter. Increasing the bandwidth will still decrease the time it takes to finish transferring data from memory, even if the amount of data moved over a given time (seconds, say) is far less than the memory is actually capable of.
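As a rough illustration of that last point (the 256 KiB burst is a made-up example; the two peaks are just nominal dual-channel DDR4-2133 and DDR4-3200 figures):

Code:
# A fixed burst of data finishes sooner at higher bandwidth, even if the
# average demand over a whole second is nowhere near either peak figure.
BURST_BYTES = 256 * 1024                 # hypothetical 256 KiB worth of fetches
for peak_gb_s in (34.1, 51.2):           # dual-channel DDR4-2133 vs DDR4-3200 nominal peak
    microseconds = BURST_BYTES / (peak_gb_s * 1e9) * 1e6
    print(f"{peak_gb_s} GB/s -> {microseconds:.1f} us to finish the burst")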

Damn, now I want to do some testing with my (very) heavily modded Skyrim install. I've observed a hilariously huge improvement when going from 2133/15 to 3200/16, but I haven't done any testing to isolate the effects of bandwidth and latency.
 

Borealis7

Platinum Member
Oct 19, 2006
2,901
205
106
CES update: Intel showed a working 10nm chip, Cannon Lake, which is due at the end of 2017. Perhaps 10nm is not as problematic as SA thinks.
 

NTMBK

Lifer
Nov 14, 2011
10,442
5,795
136
CES update: Intel showed a working 10nm chip, Cannon Lake, which is due at the end of 2017. Perhaps 10nm is not as problematic as SA thinks.

A single 10nm chip is no indication of how bad yields are. For all we know, that may be the single working chip from the entire wafer.

As for end of 2017- that is for ultra low power only, similar to Broadwell's launch. Expect a similar trickle of supply.
 

Nothingness

Diamond Member
Jul 3, 2013
3,298
2,372
136
A single 10nm chip is no indication of how bad yields are. For all we know, that may be the single working chip from the entire wafer.

As for end of 2017- that is for ultra low power only, similar to Broadwell's launch. Expect a similar trickle of supply.
Yeah, people seem to forget what happened at 14nm launch. I remember Krzanich lying at IDF, claiming everything was fine at 14nm, only to say there were issues a few weeks later; lying in front of investors is much more dangerous :D
 

KompuKare

Golden Member
Jul 28, 2009
1,228
1,597
136
A single 10nm chip is no indication of how bad yields are. For all we know, that may be the single working chip from the entire wafer.

As for end of 2017- that is for ultra low power only, similar to Broadwell's launch. Expect a similar trickle of supply.
If similar were to include a desktop part with Iris Pro and Crystal Well like the i5-5675C / i7-5775C, it might be interesting. I know Intel doesn't like spending extra on the eDRAM for desktop, but maybe, like with Broadwell, they will release a low-volume part.
 

NTMBK

Lifer
Nov 14, 2011
10,442
5,795
136
If similar were to include a desktop part with Iris Pro and Crystal Well like the i5-5675C / i7-5775C, it might be interesting. I know Intel doesn't like spending extra on the eDRAM for desktop, but maybe, like with Broadwell, they will release a low-volume part.

Remember that the Iris Pro part was only available a long time after the launch of the Core M chips.
 

DrMrLordX

Lifer
Apr 27, 2000
22,900
12,965
136
GT3e/GT4e + eDRAM on desktop is dead on all current Intel roadmaps. Don't hold your breath waiting for that.
 
Mar 10, 2006
11,715
2,012
126
A single 10nm chip is no indication of how bad yields are. For all we know, that may be the single working chip from the entire wafer.

As for end of 2017- that is for ultra low power only, similar to Broadwell's launch. Expect a similar trickle of supply.

Yeap. 10nm for anything other than ultra low power won't come until 2019, IMHO.
 
Mar 10, 2006
11,715
2,012
126
Yeah, people seem to forget what happened at 14nm launch. I remember Krzanich lying at IDF, claiming everything was fine at 14nm, only to say there were issues a few weeks later; lying in front of investors is much more dangerous :D

Yeah, it was absolutely hilarious.

BK on Sept. 10, 2013:

The next thing I want to talk about, the silicon. I’ve got to talk about silicon. Can't have an Intel presentation without talking about Moore's Law a little bit here. I'm here to introduce the first 14-nanometer PC. This is a Broadwell-based system, it's fully operational. People have asked where we are on development of 14 nanometers. I'm here to show you a working system. I’ll go into “Cut The Rope” and play the game. But this is it, folks -- 14 nanometers is here, it's working, and we'll be [in production] by the end of this year.

BK on October investor call:
We continue to make progress with the industry's first 14-nanometer manufacturing process and our second-generation 3D transistors. Broadwell, the first product on 14 nanometers, is up and running as we demonstrated at the Intel Developer Forum last month.

While we are comfortable with where we are at with yields, from a timing standpoint, we are about a quarter behind our projections. As a result, we are now planning to begin production in the first quarter of next year.

BK claiming things are all fixed now, with a push-out of one quarter:

Sure. It's absolutely not the latter. It was simply a defect density issue. This was on the issue -- as we develop these technologies, what you are doing? You are continually improving the defect densities and those resulted in the yield, the number of die per wafer that you get out of the product and what happened as you insert a set of fixes in groups, you will put four or five, maybe sometimes six or seven fixes into a process and group it together, run it through and you will expect an improvement rate occasionally as you go through that. The fixes don't deliver all of the improvements [stock], we had one of those.


Why do I have confidence? Because, we have got back now and added additional fixes, gotten back onto that curve, so we have confidence that the problem is fixed, because we have actually data and defects and so that gives us the confidence that we are to keep moving forward now and that happens sometimes in these development phases like this, so that's why we are going to over it a quarter.

The previous statement proved to have been completely incorrect: 14nm PRQ came one quarter after the originally delayed plan, two quarters after the claimed date. That product PRQ was Core M only, at frequencies well below target, and it was only fixed in products that made it to market in late 2014/early 2015:

[attached image: qWgchSH.png]
 

Ajay

Lifer
Jan 8, 2001
16,094
8,114
136
Yeap. 10nm for anything other than ultra low power won't come until 2019, IMHO.

Man, with the delay in EUV, 14nm and below are sucking pretty hard. Wouldn't want to be a process engineer at Intel right now - the OT must suck as well.
 

KompuKare

Golden Member
Jul 28, 2009
1,228
1,597
136
GT3e/GT4e + eDRAM on desktop is dead on all current Intel roadmaps. Don't hold your breath waiting for that.
But if Intel were to feel some competition (i.e. if Zen performs well), an unlocked desktop i5/i7 with eDRAM is something they could (relatively) quickly do.
Although, having just looked at the i7 wiki pages, it seems so far there are no Kaby Lake quad-core mobile i7s with Iris Pro, whereas the Skylake i7-**70HQ parts have the Iris Pro 580, which is the 72 EU part.
 

SAAA

Senior member
May 14, 2014
541
126
116
GT3e/GT4e + eDRAM on desktop is dead on all current Intel roadmaps. Don't hold your breath waiting for that.
Man, what a shame: 6-core + eDRAM could have been a nice option against Zen/Zen+ on the mainstream desktop. IGPs never amazed me, especially from a die-size perspective, but the speedups that Broadwell got on some workloads with that large cache were impressive.