Question about the last Haswell article

tipoo

Senior member
Oct 4, 2012
245
7
81
http://www.anandtech.com/show/6355/intels-haswell-architecture/12


On this page, it seems to indicate that Haswell's GT3 GPU performance WITH the optional eDRAM cache will be double that of the HD 4000. Does that sound wrong to anyone else? GT3 has well over double the EUs of the HD 4000, which, combined with the architecture improvements, should have brought it to over double the HD 4000's performance on its own, with the cache pushing it beyond that. But the article seems to indicate otherwise: without the cache, the 40 EU GPU would be even less remarkable, at under double the performance. Do we know anything about this? It doesn't seem to make sense unless it's clocked way lower to save power.
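Rough back-of-the-envelope of what I mean, treating throughput as roughly EUs x clock. The 16 and 40 EU counts are the known/rumored figures; both clocks below are placeholders I made up purely to show how a lower GT3 clock could pull "2.5x the EUs" down toward 2x:

/* Naive throughput estimate: EUs x clock. EU counts are from the
 * article/rumors; both clocks are placeholders, not confirmed specs. */
#include <stdio.h>

int main(void) {
    double hd4000_eus = 16, hd4000_mhz = 1150; /* HD 4000, typical max turbo */
    double gt3_eus    = 40, gt3_mhz    = 900;  /* hypothetical lower GT3 clock */

    printf("same clock:       %.2fx\n", gt3_eus / hd4000_eus);             /* 2.50x */
    printf("with lower clock: %.2fx\n", (gt3_eus * gt3_mhz) /
                                        (hd4000_eus * hd4000_mhz));        /* ~1.96x */
    return 0;
}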
 

Yuriman

Diamond Member
Jun 25, 2004
5,530
141
106
The only thing I can think of is that AMD's current APUs scale almost linearly with memory speed. Doubling core performance would do little on one of those, so I'd infer Intel's would be similar.
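A toy way to picture that: treat effective performance as min(compute, bandwidth ceiling). All the numbers below are invented; the point is just that doubling compute alone buys nothing once you're bandwidth-bound:

/* Toy roofline-style model: whichever of compute or memory bandwidth
 * runs out first caps the result. Numbers are made up for illustration. */
#include <stdio.h>

static double effective(double compute, double bw_ceiling) {
    return compute < bw_ceiling ? compute : bw_ceiling;
}

int main(void) {
    double bw_ceiling = 100.0;  /* arbitrary "fps" limit set by DDR3 bandwidth */
    double compute    = 90.0;   /* already close to that ceiling */

    printf("baseline:           %.0f\n", effective(compute, bw_ceiling));
    printf("2x compute only:    %.0f\n", effective(2 * compute, bw_ceiling));
    printf("2x compute + 2x BW: %.0f\n", effective(2 * compute, 2 * bw_ceiling));
    return 0;
}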
 

pelov

Diamond Member
Dec 6, 2011
3,510
6
0
It could also be a case of lowering clock speeds while bumping up the EU count?

The GT3 eDRAM SKU isn't being offered for the Ultrabook models. So it sounds like it really needs the TDP headroom to stretch its legs.
 

tipoo

Senior member
Oct 4, 2012
245
7
81
The only thing I can think of is that AMD's current APUs scale almost linearly with memory speed. Doubling core performance would do little on one of those, so I'd infer Intel's would be similar.

I'm just going from memory and could be wrong, but doesn't the HD4000 have more bandwidth but lower performance than the AMD APUs anyway? Wouldn't it still have quite a bit more scaling headroom at the same bandwidth?
 

pelov

Diamond Member
Dec 6, 2011
3,510
6
0
I'm just going from memory and could be wrong, but doesn't the HD4000 have more bandwidth but lower performance than the AMD APUs anyway? Wouldn't it still have quite a bit more scaling headroom at the same bandwidth?

I believe the L3 tied to the GPU is only ~1.5MB.

Anand claimed about a 30% bump over Ivy Bridge's HD 4000 at the lower-end TDPs.

Now to the things that Intel didn't let loose at IDF: originally an option for Ivy Bridge (but higher-ups at Intel killed plans for it) was a GT3 part with some form of embedded DRAM. Rumor has it that Apple was the only customer who really demanded it at the time, and Intel wasn't willing to build a SKU just for Apple.

Haswell will do what Ivy Bridge didn't. You'll see a version of Haswell with up to 128MB of embedded DRAM, with a lot of bandwidth available between it and the core. Both the CPU and GPU will be able to access this embedded DRAM, although there are obvious implications for graphics.

Overall performance gains should be about 2x for GT3 (presumably with eDRAM) over HD 4000 in a high TDP part. In Ultrabooks those gains will be limited to around 30% max given the strict power limits.

It sounds like it's a matter of TDP headroom.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
The GT3 SKU clocks significantly lower than GT2, that's why. So you see about a 25% improvement with GT2 over Ivy Bridge, and with GT3 you get 60% or so on top of that.
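Assuming those two steps stack multiplicatively, the numbers do land right around the article's 2x figure:

/* Compounding the two rumored steps: Haswell GT2 ~25% over HD 4000,
 * then GT3 ~60% on top of GT2. Assumes the gains simply multiply. */
#include <stdio.h>

int main(void) {
    double gt2_over_hd4000 = 1.25;
    double gt3_over_gt2    = 1.60;
    printf("GT3 over HD 4000: %.2fx\n", gt2_over_hd4000 * gt3_over_gt2); /* 2.00x */
    return 0;
}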
 

tipoo

Senior member
Oct 4, 2012
245
7
81
Well, that's more than a bit of a letdown then... I was thinking double the HD 4000's performance in ultrabooks would be pretty damn good, but the only systems getting that kind of boost will be desktops and very high-wattage laptops with the higher-end SKUs... the ones that would likely be paired with a discrete video card anyway.

I wonder if they could put a smaller amount of eDRAM on one of the lower-wattage mobile parts, say 24-32MB, to serve a similar function to the eDRAM daughter die in the Xbox 360: speeding along the most bandwidth-constrained GPU operations.
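A quick sanity check on the capacity side (assuming 32-bit color plus 32-bit depth/stencil and no MSAA) suggests even a pool that small would hold the render targets at typical laptop resolutions:

/* Rough render-target sizes (the Xbox 360-style use case): 4 bytes of
 * color + 4 bytes of depth/stencil per pixel, no MSAA. */
#include <stdio.h>

int main(void) {
    struct { const char *name; int w, h; } modes[] = {
        { "1366x768",  1366,  768 },
        { "1600x900",  1600,  900 },
        { "1920x1080", 1920, 1080 },
    };
    for (int i = 0; i < 3; i++) {
        double mb = (double)modes[i].w * modes[i].h * (4 + 4) / (1024 * 1024);
        printf("%-10s ~%.1f MB of render targets\n", modes[i].name, mb);
    }
    return 0;
}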
 

tweakboy

Diamond Member
Jan 3, 2010
9,517
2
81
www.hammiestudios.com
After reading this thread, I'm starting to wonder if Haswell is just an onboard GPU upgrade.

We want speed and power. Talk about that more, because the gurus here don't use their CPU's GPU.
 

Yuriman

Diamond Member
Jun 25, 2004
5,530
141
106
After reading this thread, I'm starting to wonder if Haswell is just an onboard GPU upgrade.

We want speed and power. Talk about that more, because the gurus here don't use their CPU's GPU.

I recommend reading the article linked in the first post; it details exactly what Haswell is. It seems to be more of a thrust into the ultra-portable space than a GPU upgrade, though the GPU is important to staying ahead of the competition even in that space.

AVX2 will also provide very significant performance improvements in some workloads.
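For anyone wondering what AVX2 actually buys on the CPU side, here's a minimal sketch (mine, not from the article) of the kind of loop it helps: 256-bit integer SIMD, eight 32-bit adds per instruction, using the standard intrinsics from immintrin.h (compile with -mavx2):

/* Minimal AVX2 sketch: add two int arrays eight elements at a time.
 * Haswell widens integer SIMD to 256 bits; loops like this are where
 * it roughly doubles over 128-bit SSE/AVX1 integer code. */
#include <immintrin.h>
#include <stdio.h>

void add_arrays(const int *a, const int *b, int *out, int n) {
    int i = 0;
    for (; i + 8 <= n; i += 8) {
        __m256i va = _mm256_loadu_si256((const __m256i *)(a + i));
        __m256i vb = _mm256_loadu_si256((const __m256i *)(b + i));
        _mm256_storeu_si256((__m256i *)(out + i), _mm256_add_epi32(va, vb));
    }
    for (; i < n; i++)          /* scalar tail */
        out[i] = a[i] + b[i];
}

int main(void) {
    int a[16], b[16], out[16];
    for (int i = 0; i < 16; i++) { a[i] = i; b[i] = 100 * i; }
    add_arrays(a, b, out, 16);
    printf("out[15] = %d\n", out[15]); /* 1515 */
    return 0;
}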
 

podspi

Golden Member
Jan 11, 2011
1,982
102
106
GT3 SKU clocks significantly less than GT2, that's why. So you see about 25% improvement with GT2 over Ivy Bridge, and with GT3 you get 60% or so on top of that.

http://semiaccurate.com/2012/09/07/haswell-gt3-uses-shaders-to-save-power/


I haven't heard that anywhere else, but it passes the smell test. It would be neat if they offered some sort of 'high-power' mode to boost the clocks, but of course you'll always be limited by the cooling system in these ultraportables...
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
Intel actually said something to a similar effect at IDF when they introduced Haswell. They're also moving the 22nm process in that direction.

Remember how the PowerVR iGPUs in Intel Atom CPUs were clocked really high? And how current Intel Gen iGPUs clock high too? That's because high frequency fits well with their high-performance process technology. But they'll be moving from a high-performance focus to a perf/watt focus in the future.

So in the future they'll be really good at perf/watt, but it'll inevitably be slower than if they focused entirely on performance as before.

The GT3 situation is sort of similar to how Nvidia went from 1GHz+ shaders with 1x the units to 700-800MHz frequencies but with 2-3x the shaders on Kepler.
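The wide-and-slow trade works because of how dynamic power scales. A crude normalized model (assuming voltage has to rise roughly with frequency, so power per unit goes roughly with f^3; these are not real silicon numbers) shows why:

/* Crude "more units, lower clock" model. Dynamic power ~ units * V^2 * f,
 * and V roughly tracks f in the useful range, so power ~ units * f^3.
 * Throughput ~ units * f. Everything is normalized, not measured data. */
#include <stdio.h>

int main(void) {
    double narrow_units = 1.0, narrow_f = 1.0;  /* few units, high clock */
    double wide_units   = 2.0, wide_f   = 0.5;  /* 2x units at half the clock */

    printf("throughput: %.2f vs %.2f\n",
           narrow_units * narrow_f, wide_units * wide_f);          /* same */
    printf("rel. power: %.2f vs %.2f\n",
           narrow_units * narrow_f * narrow_f * narrow_f,
           wide_units * wide_f * wide_f * wide_f);                 /* ~4x lower */
    return 0;
}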
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
The GT3 situation is sort of similar to how Nvidia went from 1GHz+ shaders with 1x the units to 700-800MHz frequencies but with 2-3x the shaders on Kepler.

Kepler's shaders are simpler, at least on GK104 and down.

Some did market the shaders as double-clocked, though.

 
Last edited:

tipoo

Senior member
Oct 4, 2012
245
7
81
I wonder if the GPU portion could get over 2x the performance of the HD4000 in situations where the CPU cores weren't in play, so it had more TDP to work with.
 

Denithor

Diamond Member
Apr 11, 2004
6,298
23
81
I wonder if the GPU portion could get over 2x the performance of the HD4000 in situations where the CPU cores weren't in play, so it had more TDP to work with.

Almost certainly.

Which is exactly why I think Intel isn't putting GT3 into desktop chips - they would probably decimate Trinity and leave AMD with virtually no chips to sell.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
Almost certainly.

Which is exactly why I think Intel isn't putting GT3 into desktop chips - they would probably decimate Trinity and leave AMD with virtually no chips to sell.

On a desktop you can always add a discrete card too, and you will if you want to do any serious graphics. Laptops, ultrabooks, etc. are a bit harder :p
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
I wonder if the GPU portion could get over 2x the performance of the HD4000 in situations where the CPU cores weren't in play, so it had more TDP to work with.

I highly doubt it. I wish it could be better too, but that's not likely. They've been quoted as saying something along the lines of "We could get six times faster integrated graphics if we wanted to, but the focus on power efficiency means that won't happen".

The fact is, they're claiming 2x gains for the quad-core variants, if Anandtech's numbers really did come from Intel. Quad-core 45W parts are barely TDP-constrained at all.

http://www.anandtech.com/show/5878/...ation-realtime-igpu-clocks-on-ulv-vs-quadcore

For the N56VM/i7-3720QM, the results for Diablo III are: 30.27W package, 12.28W CPU, 13.43W iGPU (and maximum package power of 34.27W). DiRT 3 gives results of 32.06W package, 13.2W CPU, and 14.8W iGPU (max power of 38.4W).
You can see that it's not even hitting TDP, and the CPU cores are using less than 15W. So it's not like there's much to cut off from the CPU. If you do, you may see a situation similar to Trinity, where it's getting bottlenecked by the CPU.

Often, manufacturer claims come from special-case scenarios and the common-case gains turn out to be less. I'm not saying the 2x claim is false, but expecting much more than that is buying into hype.
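Quick arithmetic on the quoted DiRT 3 numbers against the 45W TDP, just to make the point concrete: the package already leaves headroom unused, and the CPU cores only account for ~13W, so there isn't much to reclaim:

/* TDP headroom math using the DiRT 3 figures quoted above for the
 * 45W i7-3720QM: 32.06W package, 13.2W CPU cores, 14.8W iGPU. */
#include <stdio.h>

int main(void) {
    double tdp = 45.0, package = 32.06, cpu_cores = 13.2, igpu = 14.8;

    printf("unused package headroom already:     %.1f W\n", tdp - package); /* ~12.9W */
    printf("max reclaimable by idling the cores: %.1f W\n", cpu_cores);
    printf("what the iGPU is drawing today:      %.1f W\n", igpu);
    return 0;
}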
 
Last edited: