
Why does Intel keep "CRIPPLING" their NUCs' maximum memory speeds?

TheDarkKnight

Senior member
I'm kicking around the idea of buying a NUC. The Hades Canyon is out of the question because of its price. So, I'm looking at a 7th-Gen Kaby Lake based NUC. It looks like the maximum memory speed of these NUCs is 2133 MHz. Why does Intel keep memory speeds for these NUCs so low when they are constantly releasing improved HD Graphics at the same time? The maximum memory bandwidth for these NUCs works out to about 2133 MT/s x 8 bytes x 2 (dual-channel) ~= 34.1 GB/s.

I'm thinking the average memory speed for DDR4 should be 3200 MHz at this point in DDR4's lifetime. It seems to be the most popular and ubiquitous speed in terms of offerings on retailer websites. If these NUCs supported that speed, the memory bandwidth would go up to about 51.2 GB/s.
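For anyone who wants to check the arithmetic, here's a quick Python sketch (the 8-byte bus width per channel is standard DDR4; nothing else is assumed):

```python
# Back-of-the-envelope dual-channel DDR4 bandwidth, matching the numbers above.
# Each DDR4 channel is 64 bits (8 bytes) wide; "2133" etc. are transfer rates
# in MT/s, so bytes/s = MT/s * 8 bytes * channels.

def peak_bandwidth_gbs(transfer_rate_mts: int, channels: int = 2, bus_bytes: int = 8) -> float:
    """Theoretical peak bandwidth in GB/s (decimal gigabytes)."""
    return transfer_rate_mts * bus_bytes * channels / 1000

for rate in (2133, 3200):
    print(f"DDR4-{rate}, dual channel: {peak_bandwidth_gbs(rate):.1f} GB/s")
# DDR4-2133, dual channel: 34.1 GB/s
# DDR4-3200, dual channel: 51.2 GB/s
```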

From a gamer's perspective (mostly), wouldn't this increase FPS in most games, resulting in a much more enjoyable gaming experience? I realize Intel wants everybody to buy their Hades Canyon NUCs, but something just seems silly here. If that's the goal, why put the more powerful iGPUs, such as the Iris Plus Graphics 640, in these NUCs to begin with? Unless the Iris Plus Graphics 640 isn't powerful enough to need more memory bandwidth. But I honestly don't think that's the case here. So, it's a self-limiting configuration unless I'm very wrong.

This is a serious question I want to understand, so if anyone has good insight into this I would like to hear it. Maybe the Iris Plus Graphics 640 isn't powerful enough to make use of the extra bandwidth of faster memory?

EDIT: If the graphics chipset can't saturate the available memory bandwidth due to technological limitations, then that's a justifiable reason for limiting the memory speeds. But I see no technological limitation on allowing faster memory speeds, other than the possibility of increasing the power requirements of the NUC itself.

EDIT #2: It seems the Core i5-7260U (which uses Iris Plus Graphics 640) is itself limited to 2133 MHz memory. So maybe this is a limitation of the CPU, so that it doesn't burn up. But even if that's true, it leads back to the question of why Intel would pair such a powerful iGPU with a CPU that holds its performance back by so much.
 
Good enough. So why put the very powerful Iris Plus Graphics 640 on the CPU instead of the HD Graphics 630? I'm sure there's a good reason here (hoping, rather), due to CPU yields or design. I believe what you're saying, but the full reason is still not crystal clear.
 

I specialize in mini computers. There are a lot of factors that go into the design of these compact NUC-style machines. Thermal issues (heat) are a big one. You could fry an egg on top of some of the early models, and the way some of them dealt with the heat (like Gigabyte) was to simply crank the fan up to crazy-whiny levels, which is no fun in a quiet office. My favorite design at the moment is the HP Z2 Mini G3. They overcome the power issue by using a giant laptop-style power brick (I believe it has a 200-watt rating) and then have a dual-fan cooling system that is very quiet, even under load. The downside is that, like the Hades Canyon, even the base model ain't cheap.

However, the Hades Canyon represents a HUGE leap forward in integrated graphics... it's roughly equivalent to a GTX 970, which is amazing because a GTX 970 card is roughly the size of the entire Hades Canyon computer. So right now, it's just a waiting game... the onboard GPUs are only going to get more powerful and will continue to drop in price, and the latest dedicated GPUs like the 1080 Ti are so far ahead it's not even funny. So I think we'll see the market split into Intel + iGPU on one side, and high-end dedicated GPUs for really intense gaming setups on the other, like high-end VR rigs or 144 Hz setups.

[Image: HP Z2 Mini, internal view]
 
Simply put, the faster something runs, the more power it requires and the more heat it needs to dissipate. Small form factor computers by nature have a smaller thermal budget. Both Intel and AMD sell chips that have a configurable TDP: you can put a chip in a tiny box and get reduced performance, or you can take the same exact chip, put it in a bigger box with a bigger cooler, and get higher performance. The chip isn't the issue; the thermal solution is. The same goes for all the parts you try to cram into a tiny box: they all need to do their part to reduce heat, or a bigger box with a better heatsink and fans will be required.
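A toy Python sketch of that tradeoff (the cubic power-vs-clock relation is a common rule of thumb, not any actual Intel curve, and the base numbers are purely illustrative):

```python
# Toy model of why the same silicon runs faster in a bigger box: dynamic
# power scales roughly with f * V^2, and voltage has to rise with frequency,
# so power grows much faster than clock speed. All numbers here are
# illustrative assumptions, not measured data.

def sustained_clock_ghz(tdp_watts: float, base_clock: float = 2.0,
                        base_power: float = 15.0, exponent: float = 3.0) -> float:
    """Estimate the clock a fixed thermal budget can sustain, assuming
    power ~ clock**exponent (a rough rule of thumb for CPU scaling)."""
    return base_clock * (tdp_watts / base_power) ** (1.0 / exponent)

for tdp in (15, 28, 45):
    print(f"{tdp:>2} W budget -> ~{sustained_clock_ghz(tdp):.2f} GHz sustained")
# 15 W budget -> ~2.00 GHz sustained
# 28 W budget -> ~2.46 GHz sustained
# 45 W budget -> ~2.88 GHz sustained
```

Tripling the thermal budget buys well under 50% more sustained clock in this model, which is why the same chip behaves so differently in a NUC versus a desktop tower.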
 
Good enough. So why put the very powerful Iris Plus Graphics 640 on the CPU instead of the HD Graphics 630?

There's minimal impact from system memory speed on the eDRAM-equipped parts: https://www.anandtech.com/show/10602/memory-frequency-scaling-on-skull-canyon/5

In that test, the gains in gaming on the Iris Pro 580 are 1-5%. The eDRAM is supplying most of the memory bandwidth needs, and that's a much more powerful CPU/GPU with a 45W TDP.

You are talking about a dual core with a 15W TDP limit and a smaller iGPU configuration using essentially the same eDRAM. It's unlikely you'll see any difference from faster memory. I think you'd see more gains on the eDRAM-less parts, but even then, with Gen 9 the memory bandwidth requirements aren't too high for the GT2, 24-EU part.
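To see why the gains stay that small, here's a rough Python sketch. The ~50 GB/s eDRAM figure is the ballpark reported for Intel's Crystal Well-era eDRAM, and the 90% hit rate is purely an assumption for illustration:

```python
# Rough illustration of why faster DIMMs barely help an eDRAM part: if most
# GPU traffic hits the ~50 GB/s eDRAM cache, the system DRAM behind it is
# rarely the bottleneck. Hit rate and eDRAM bandwidth below are assumptions.

def effective_bw(hit_rate: float, edram_gbs: float = 50.0, dram_gbs: float = 34.1) -> float:
    """Effective bandwidth when hit_rate of traffic is served from eDRAM
    and the rest from system DRAM (time-weighted harmonic mix)."""
    return 1.0 / (hit_rate / edram_gbs + (1.0 - hit_rate) / dram_gbs)

for dram in (34.1, 51.2):  # DDR4-2133 vs DDR4-3200, dual channel
    bw = effective_bw(hit_rate=0.9, dram_gbs=dram)
    print(f"DRAM {dram} GB/s -> effective ~{bw:.1f} GB/s")
# DRAM 34.1 GB/s -> effective ~47.8 GB/s
# DRAM 51.2 GB/s -> effective ~50.1 GB/s  (~5% gain from a 50% faster kit)
```

With a high cache hit rate, a 50% faster memory kit moves effective bandwidth by only a few percent, which lines up with the 1-5% gaming gains in the Skull Canyon scaling test linked above.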

15W laptop parts also prioritize power consumption for system memory. You'll see they still support LPDDR3, because LPDDR3 lowers standby power consumption vastly, and active power by a decent amount, compared to regular DDR4. Once LPDDR4 support is baked into silicon, you should see faster speeds supported.
 