
Skylake Core Configs and TDPs

I'm talking about specific limitations, like the specific language that didn't allow Nvidia to do an x86-translated Denver.

I was wondering if there was anything in there that forbade Intel from making dedicated GPUs.
Oh, sorry for the misunderstanding. I'm fairly certain there isn't such a limitation, but perhaps a more qualified member can comment.
 
They're gonna need to start fabbing a lot more eDRAM modules because those EUs scale like garbage past 20 without more memory bandwidth.
Look at HD 5000 offering maybe 10% better real-world performance than HD 4400 despite having double the number of EUs:
http://www.notebookcheck.net/Intel-HD-Graphics-4400.91979.0.html
http://www.anandtech.com/show/7072/intel-hd-5000-vs-hd-4000-vs-hd-4400

Intel has a long way to go in GPU before they can even match PowerVR or Qualcomm.

15W is completely TDP-limited. HD 5000 allows for small efficiency gains, as it can run twice the number of EUs at a lower frequency.

That said, yes the scaling is generally quite poor.
 
Gen8 is a big redesign from what we know, including big memory management changes, and Skylake is Gen9. Gen7.5 benchmarks are invalid for Skylake.
 
Hmm, we will see how that pans out. Bandwidth is a huge issue in iGPUs, and I would like to see notebook APUs with quad-channel memory to alleviate this.


Intel EUs are at this point held back by a lack of eDRAM, and to say the Gen7.5 benchmarks are invalid is inaccurate, because they are the most pertinent performance measurements available for Intel HD Graphics.


I expect the GT4e parts to be very impressive, while the GT4 parts without eDRAM will probably offer the normal 10-20% boost from doubling the EUs.
 
You should use the same source for each product, if possible.

http://www.notebookcheck.net/Intel-HD-Graphics-4400.91979.0.html
http://www.notebookcheck.net/Intel-HD-Graphics-5000.91978.0.html

HD 5000 pulls much farther ahead here. Regardless, if you look at HD4400's page, you can see just how wide the performance gap can be between different models, even with the same GPU. Notebooks are notoriously difficult to pull suitable information from.
 

Unfortunately, NotebookCheck only does synthetics, and AnandTech's benches are HD 5000 vs. HD 4000, which should actually give HD 5000 a larger performance advantage than a comparison against HD 4400 would. Even so, it fails to offer more than a 15% performance increase in even half the games tested. In Tomb Raider it offers 40%, which shows that the EUs do work when they have the bandwidth.


If eDRAM is so costly from a transistor and yield standpoint, why not just design Broadwell-U as quad channel? That would solve this issue of constantly having to double the size of a die to stuff eDRAM into it. Eventually they have to do it anyway.
 
Unfortunately, NotebookCheck only does synthetics
No they don't. Scroll down the page.
If eDRAM is so costly from a transistor and yield standpoint, why not just design Broadwell-U as quad channel? That would solve this issue of constantly having to double the size of a die to stuff eDRAM into it. Eventually they have to do it anyway.
Quad channel has a decent system-board cost and area impact, and you're putting more of the workload on the OEM if they want to get the additional performance from quad channel. It's basically what OEMs don't want.

eDRAM keeps all that memory traffic on package. You don't have to create more leads or PCB layers. It's more power efficient (I think?). It's also more expensive, though -- 77mm² of 22nm die. That's half an Ivy Bridge (although it's probably less costly per mm² -- still not cheap by any means).
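For reference, the dual- vs. quad-channel gap comes straight from peak-bandwidth arithmetic. A rough sketch (`ddr3_bandwidth_gbs` is a hypothetical helper, and sustained bandwidth is always lower than these theoretical peaks):

```python
# Peak-bandwidth arithmetic for the dual- vs. quad-channel question.
# DDR3 moves 8 bytes per transfer over each 64-bit channel; these are
# theoretical peaks, not sustained throughput.

def ddr3_bandwidth_gbs(mt_per_s: int, channels: int) -> float:
    """Peak bandwidth in GB/s for DDR3 at a given transfer rate."""
    bytes_per_transfer = 8  # one 64-bit channel
    return mt_per_s * bytes_per_transfer * channels / 1000

dual = ddr3_bandwidth_gbs(1600, 2)   # 25.6 GB/s, dual-channel DDR3-1600
quad = ddr3_bandwidth_gbs(1600, 4)   # 51.2 GB/s, the hypothetical quad-channel part
edram = 50.0                         # GB/s, Crystalwell eDRAM, per the post above

print(dual, quad, edram)
```

So quad channel alone roughly matches the eDRAM's bandwidth, but it spends that budget on board traces and PCB layers instead of on-package die area.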
 
No they don't. Scroll down the page.

Their page layout is horrendous.



I'm looking, and the benchmarks seem to be the same as AnandTech's. The GT2 part (HD 4600) is still offering anywhere from 4-40% better performance than HD 5000, and the HD 4400 ranges from maybe 4% faster to about 40% slower than HD 5000.


Could you show me where it's performing "better" than on the AnandTech bench?


Here are a few benchmarks, HD 5000 vs. HD 4400:

Thief 2: HD 5000 is -4%
BF4: HD 5000 is +21%
Rome 2: +21%
MLL is a good one at +39%
Bioshock: +14%
Hitman: Absolution: +11%


This doesn't look any better than the AnandTech bench. I'm not even cherry-picking, just randomly clicking compare. It is faster, but with 2x the EUs it should offer at least 40% more performance.
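The percentage deltas being thrown around here are just relative FPS differences. A quick sketch of the arithmetic (the FPS numbers below are made up for illustration, not review data):

```python
# How a "+21%" style delta is computed: the relative FPS difference
# between two parts in the same game/settings run.

def pct_delta(fps_a: float, fps_b: float) -> float:
    """Percent advantage of part A over part B."""
    return (fps_a / fps_b - 1) * 100

# Hypothetical example: 29 fps vs. 24 fps in the same benchmark.
print(round(pct_delta(29, 24), 1))  # ~ +20.8%
```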
 
I wasn't really trying to dispute your numbers, despite what I said, for what it's worth. I'm at work and half asleep... not going to do any math today.

At any rate, it's better than the 10% you stated earlier.

Regardless... notebooks will always be all over the place. You can see that in the HD4400 benchmarks... you get 100% performance deltas even with the same hardware.
 
Since Intel has a cross-licensing agreement with Nvidia, is there any inkling that Intel will try again at a dedicated desktop graphics/compute hybrid card like they were going for with Larrabee?

They definitely have the process advantage, and once they fully ramp 14nm they will have extra fab space to spare for such an adventure into the upper-mainstream to high-end consumer desktop dedicated graphics market segment.

Does anyone know if that cross-licensing agreement with Nvidia explicitly excluded that possibility anywhere, like it does Nvidia doing an x86 Project Denver?
Intel is officially not interested in dGPUs.
 

Quad channel has a decent system board cost and area impact. You're putting more workload on the OEM if they want to get the additional performance from the quad channel. It's basically what OEMs don't want.

eDRAM keeps all that memory traffic on package. You don't have to create more leads or PCB layers. It's more power efficient (I think?). It's also more expensive, though -- 77mm² of 22nm die. That's half an Ivy Bridge (although it's probably less costly per mm² -- still not cheap by any means).


That's for only 128MB, too. Try putting even a 1GB framebuffer (which, by the way, would offer outstanding bandwidth) on eDRAM and you can barely fab the chip. They will have to go to 256MB soon anyway.


I get the whole PCB and power issues, but eventually eDRAM will be untenable if they want to really compete with dGPUs.
 

Skylake could increase the L3 cache shared with the IGP; that should help a lot, looking at what just 2MB did for the GT 750. Granted, Intel has no Nvidia-level expertise, but it still seems the most logical move before including expensive quad channel or a large (pricey) eDRAM pool.
 

L3 cache is very expensive, isn't it? Also, putting extra L3 cache would put the GT4e SKUs in the weird position of having more cache per core than the best Xeons, and more cache than their Socket 2011 counterparts.



I think quad channel is really the answer, as it also increases CPU performance greatly. This eDRAM will never achieve performance parity with the 512-bit GDDR5 and better interfaces that dGPUs have today. It offers more bandwidth, but at the cost of not having nearly enough framebuffer.
 
Intel is officially not interested in dGPUs.
For now, at least. Might be a natural consequence of scaling up their designs. When you're pumping more money into your drivers, your uarch, your dev relations... it starts making more and more sense. They've got a nice process lead, but they're obviously behind on their architecture.

Larrabee failed, in part (or mostly?), because Intel had a terrible base to scale up from. They historically had poor drivers, poor image quality, poor performance... and suddenly they wanted to scale the garbage up and expected to be competitive. But they acknowledged that there's huge potential to capture market share in the growing, lucrative HPC market... so we'll see how Knights Landing looks later this year.
That's for only 128MB, too. Try putting even a 1GB framebuffer (which, by the way, would offer outstanding bandwidth) on eDRAM and you can barely fab the chip. They will have to go to 256MB soon anyway.


I get the whole PCB and power issues, but eventually eDRAM will be untenable if they want to really compete with dGPUs.
Yep. But that's what stacked memory is for. This is an interim solution. AMD was supposed to use GDDR5m for Kaveri, but that didn't happen.
L3 cache is very expensive, isn't it? Also, putting extra L3 cache would put the GT4e SKUs in the weird position of having more cache per core than the best Xeons, and more cache than their Socket 2011 counterparts.
It's 3x more expensive per bit. Undoubtedly that number is lower for equivalent performance, but yeah, it's not cheap.
I think quad channel is really the answer, as it also increases CPU performance greatly.
I don't think so. They'll be okay for now with eDRAM. With Skylake, they'll get DDR4, which will hold them over until stacked memory comes down in price to be able to use it.
This eDRAM will never achieve performance parity with the 512-bit GDDR5 and better interfaces that dGPUs have today. It offers more bandwidth, but at the cost of not having nearly enough framebuffer.
eDRAM gets really close, actually.

Iris Pro:
25.6 GB/s (DDR3) + 50 GB/s (eDRAM) = 75.6 GB/s

GT 650M:
80.3 GB/s

Of course with Haswell, the CPU uses that bandwidth too, but the gap still isn't that large.

Intel actually claims the following:
It would take a 100 - 130GB/s GDDR memory interface to deliver similar effective performance to Crystalwell since the latter is a cache.
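The aggregate comparison above, worked out with the numbers quoted in this post (a rough sketch; real effective bandwidth depends on access patterns, which is why Intel's 100-130GB/s claim for eDRAM-as-cache exceeds the raw sum):

```python
# Iris Pro's DDR3 + eDRAM pool vs. a GT 650M's dedicated GDDR5,
# using the figures quoted in the post above.

ddr3 = 25.6        # GB/s, dual-channel DDR3-1600
edram = 50.0       # GB/s, Crystalwell
iris_pro = ddr3 + edram
gt650m = 80.3      # GB/s, GT 650M GDDR5

gap_pct = (gt650m / iris_pro - 1) * 100
print(iris_pro, round(gap_pct, 1))  # 75.6 GB/s, ~6.2% gap
```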
 

Intel is already investing gobs into GPUs -- it's no longer a "side project", it's required to grow content share in PCs & do well in mobile.

The dGPU market isn't that big, but if Intel wanted to try to put NV and AMD out of business then it could probably brute force it over the next decade.

Or it could save itself the hassle and just buy NV 😛
 
Just make a GT8 or a GT16 or something.

Intel would really own if they somehow magically produced a GT16 IGP. A Broadwell GT16 would have on the order of 10PFLOPS, which is 3 orders of magnitude more than a GTX Titan Black.

I wonder if they'll be able to fit that in an 85W TDP.

Anyway, at the current rate, you'll see a GT8 SKU on the 3nm node (50TFLOPS).
 

3nm?? I thought 5nm was the limit unless they come up with some magical new material.
 

You don't understand what Apple's doing. Haswell-U IGPs are thermally constrained. More EUs = lower clock speeds = lower voltage = higher performance/watt = higher performance at the same TDP.
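The wide-and-slow argument can be sketched with the usual dynamic-power relation P ∝ n·V²·f (the voltage and frequency scaling factors below are illustrative assumptions, not Haswell's actual curves):

```python
# Why more EUs at lower clocks wins under a fixed TDP: dynamic power
# scales roughly with count * V^2 * f, while peak throughput scales
# with count * f. Scaling factors here are illustrative only.

def rel_power(eus: float, v: float, f: float) -> float:
    """Dynamic power relative to a 1x/1.0V/1.0f baseline (P ~ n*V^2*f)."""
    return eus * v**2 * f

def rel_throughput(eus: float, f: float) -> float:
    """Peak throughput relative to the same baseline (T ~ n*f)."""
    return eus * f

# Hypothetical wide-and-slow config: 2x EUs at 0.85x voltage, 0.7x clock.
power = rel_power(2, 0.85, 0.7)    # ~1.01x -- roughly the same power budget
speed = rel_throughput(2, 0.7)     # 1.4x peak throughput
print(round(power, 2), round(speed, 2))
```

Under these assumed scaling factors, doubling the EUs buys ~40% more peak throughput at essentially the same power, which is the HD 5000 trade in a nutshell.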
 
if Intel wanted to try to put NV and AMD out of business then it could probably brute force it over the next decade.

This is an interesting statement, considering Intel's history of trying unsuccessfully to put both NV and AMD out of business, while at the same time almost being put out of business by AMD64.


NV and AMD have nothing to fear from Intel until it can fabricate an iGPU that provides performance in games comparable to a midrange dGPU. That's a long way away.


Besides, NV and AMD are both licensed ARM manufacturers, and unless Sony and MS go down simultaneously (which, btw, would hurt Intel immensely with MS), AMD isn't going anywhere.


You're basically suggesting Intel is capable of taking over the entire semiconductor industry on a whim. Yes?
 

Intel is working AMD over pretty good...

vanquishedAMD.jpg
 

Are you suggesting they are going out of business, or even close to it? That would be a pretty comical statement, but not surprising in light of your comments about Intel.


And what about NV? Where is the evidence of Intel's stranglehold over a dGPU manufacturer?
 

At the current rate, in a decade there may not even be dGPUs.
 
3nm?? I thought 5nm was the limit unless they come up with some magical new material.

TL;DR: No one knows the limit.

Example of research done on the subject, Wikipedia: "In 2008, transistors one atom thick and ten atoms wide were made by UK researchers. They were carved from graphene, predicted by some to one day oust silicon as the basis of future computing. Graphene is a material made from flat sheets of carbon in a honeycomb arrangement, and is a leading contender. A team at the University of Manchester, UK, used it to make some of the smallest transistors ever: devices only 1 nm across that contain just a few carbon rings."
 
Hmm, wow... 10 _P_FLOPS? Nice upgrade, now just tell me where I can buy this! But... the scaling may make it improve only 13.632% over 20 EUs... we need moar stacked memory too!
 
NV and AMD have nothing to fear from Intel until it can fabricate an iGPU that provides performance in games comparable to a midrange dGPU. That's a long way away.

The numbers tell a different story.

JPR_Graphics_chip_market_Q1_2014.jpg


Nvidia is busy moving to Tegra to give the company revenue post-dGPUs, and AMD is desperately trying to go embedded to avoid being dragged down as well.
 