
[Softpedia] 2014-Bound Intel Broadwell-K CPUs Get 80% Graphics Boost from Iris Pro

I'm assuming the reduction of L3 cache to 6MB is offset by the added L4 cache?

I don't want to wait until late 2014 for a new build, and by then Skylake would be just around the corner...


Yes, it is. They cut 2MB out to make room for eDRAM connectivity inside the die.
 
Hmm, I thought Intel would increase the size of the eDRAM LLC on Broadwell; maybe the simulations didn't show a sufficient performance gain (or maybe a regression in some apps). I think they are going to have to do something by the time Skylake comes along. Interesting times as Intel heads into 14nm.
 
+80% only makes sense over Iris Pro on Haswell. If Broadwell GT3 features 48 EUs, then there must be some serious improvements in Gen8. I wonder if the GT4 is still planned.
The article is a bit unclear; however, I don't think Intel will get an 80% improvement out of a rumored 20% more shaders without some major trickery and/or a redesign.

Imho the most logical comparison for a Broadwell K-SKU is the Haswell K-SKU. +80% from there seems like a straightforward GT3e implementation; no trickery is needed for it to be believable.
 
Imho the most logical comparison for a Broadwell K-SKU is the Haswell K-SKU. +80% from there seems like a straightforward GT3e implementation; no trickery is needed for it to be believable.

That's also how I read it.

GT3e on desktop, GT4e on laptop, I presume.
 
The article is a bit unclear; however, I don't think Intel will get an 80% improvement out of a rumored 20% more shaders without some major trickery and/or a redesign.

Gen8 is a major redesign.


Imho the most logical comparison for a Broadwell K-SKU is the Haswell K-SKU. +80% from there seems like a straightforward GT3e implementation; no trickery is needed for it to be believable.


It is not, because Haswell GT3e is already 80% faster than Haswell-K.
 
GPGPU. This will be a huge push for gaming because the XO and the PS4 are also designed to accelerate several data-parallel workloads. It can be anything: pathfinding, physics, sorting, decompression, culling, or even some directly offloaded graphics workloads before the actual rendering (we may call it pre-render). There are many possibilities with architectural integration, which solves the latency and copy-overhead problems. This is why developers are so excited about the new consoles, but we need this functionality on the PC as well (see the sketch below).
I believe the software needs to be designed with this in mind for it to work, but if this gets implemented on the PC as well... OK, I guess it could be good. I'll still have my doubts until I see it at work, however.
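
A minimal sketch of the zero-copy idea from the quoted post, assuming PyOpenCL and an OpenCL-capable integrated GPU; the kernel is just a stand-in for any data-parallel workload (culling, physics, sorting, etc.), not anyone's actual implementation:

# Rough sketch of the "no copy overhead" point: on an integrated GPU,
# CL_MEM_USE_HOST_PTR lets the driver alias host memory rather than
# copying it across a bus. All names here are illustrative.
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

data = np.random.rand(1 << 20).astype(np.float32)
out = np.empty_like(data)

mf = cl.mem_flags
# USE_HOST_PTR is the zero-copy hint; on shared-memory hardware the GPU
# can read `data` in place instead of receiving a copy of it.
buf_in = cl.Buffer(ctx, mf.READ_ONLY | mf.USE_HOST_PTR, hostbuf=data)
buf_out = cl.Buffer(ctx, mf.WRITE_ONLY, out.nbytes)

prg = cl.Program(ctx, """
__kernel void square(__global const float *a, __global float *b) {
    int gid = get_global_id(0);
    b[gid] = a[gid] * a[gid];
}
""").build()

prg.square(queue, data.shape, None, buf_in, buf_out)
cl.enqueue_copy(queue, out, buf_out)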
 
Iris Pro uses a completely different die. Physically, there is only 6MB of L3 available on this die.

6MB printed on the die is different from saying you only have 6MB accessible, with the other 2MB used for something you don't see but still physically present. Do you have die shots showing the removal of the 2MB of L3? Are you talking about Haswell or Broadwell Iris Pro?
 
Gen8 is a major redesign.
What I meant by redesign would really be closer to throwing out your current GPU design and adopting a completely different one, which would in turn make the '48 execution units' figure worthless until we have something to compare it to.

And, really, I don't see the point for them to go for 80% more performance over Iris Pro on dual-channel DDR3/early dual-channel DDR4 for now. That would require a lot of hardware for diminishing returns, even with Crystalwell.

My second theory is downright trivial to implement compared to that.
 
6MB printed on the die is different from saying you only have 6MB accessible, with the other 2MB used for something you don't see but still physically present. Do you have die shots showing the removal of the 2MB of L3? Are you talking about Haswell or Broadwell Iris Pro?

I never said there is 6MB accessible and 2MB of memory used for something. The GT3e die has 1.5MB allocated to each core, resulting in 6MB overall.


What I meant by redesign would really be closer to throwing out your current GPU design and adopting a completely different one, which would in turn make the '48 execution units' figure worthless until we have something to compare it to.

And, really, I don't see the point for them to go for 80% more performance over Iris Pro on dual-channel DDR3/early dual-channel DDR4 for now. That would require a lot of hardware for diminishing returns, even with Crystalwell.

My second theory is downright trivial to implement compared to that.

So you are saying that GT3e on Gen8 doesn't bring any improvements over the current Gen7.5 GT3e? That is a highly unrealistic claim, to be honest.
 
The article is poorly edited/elaborated, but I wish this were true. Just imagine gaming on an ultra-thin/ultra-light laptop while you're on a business trip.
 
So you are saying that GT3e on Gen8 doesn't bring any improvements over the current Gen7.5 GT3e? That is a highly unrealistic claim, to be honest.
No. What I'm saying is that an architectural improvement of more than 50% 'out of thin air' (with another 30% improvement thanks to more shaders and potentially higher clocks) is quite a lot harder to justify than a straightforward implementation of GT3e for what seems to be a very limited SKU. Keep in mind, we're still talking about Broadwell K only.


Edit: I feel like I worded this badly.
My original argument was (and still stands) that giving Broadwell K 80% more performance in games over HD4600/Haswell K is believable, both from a technical and an economic standpoint. Giving it 80% more performance over HD5200/4770R may be possible with enough resources dedicated, but it doesn't seem economical.
 
No. What I'm saying is that an architectural improvement of more than 50% 'out of thin air' (with another 30% improvement thanks to more shaders and potentially higher clocks) is quite a lot harder to justify than a straightforward implementation of GT3e for what seems to be a very limited SKU. Keep in mind, we're still talking about Broadwell K only.


You are claiming that Broadwell-K is using a Gen7.5 GT3e?
 
Edit: I feel like I worded this badly.
My original argument was (and still stands) that giving Broadwell K 80% more performance in games over HD4600/Haswell K is believable, both from a technical and an economic standpoint. Giving it 80% more performance over HD5200/4770R may be possible with enough resources dedicated, but it doesn't seem economical.

Exactly my thoughts. The Ivy Bridge GPU was heralded as a big jump but only brought a 40% gain (with a 2x FLOPS increase).

GT3 is only about 50-60% faster than GT2, so the extra ~20% gain for the Broadwell architecture from going 40 to 48 EUs might result in an 80% gain over Haswell GT2.
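
A quick sanity check of that multiplication; all figures are the rumored ones from this thread, not official numbers:

# Combining the two rumored factors from the post above; neither number
# is official, they are just the thread's own estimates.
gt3_over_gt2 = 1.55      # GT3 is "about 50-60% faster" than GT2 (midpoint)
eu_scaling = 48 / 40     # rumored 40 -> 48 EUs, a 1.2x shader increase
print(gt3_over_gt2 * eu_scaling)  # ~1.86, i.e. roughly +80% over Haswell GT2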

Remember how previous Intel comparisons went. They said Ivy Bridge GT2 was 3x the speed of Sandy Bridge GT1. Haswell GT2 was 3x the speed of Ivy Bridge GT1.
 
Just out of curiosity, how much die space would adding 32MB of eSRAM (or whatever the L4 cache is) to Kaveri require?
 
Eh, what exactly the 80% claim is being compared against also depends heavily on the games used for that claim. If it's talking about a game where previous architectures hit a bottleneck that's fixed in Broadwell, then it's quite easy to believe it's being compared against Haswell GT3e. If it's talking about an 80% gain on 3DMark, though... eh, who knows?

But there's no question that Haswell still has at least one potential bottleneck, with the 3DMark synthetics (http://www.anandtech.com/show/6993/intel-iris-pro-5200-graphics-review-core-i74950hq-tested/15) making it quite obvious what it is. Assuming Intel addresses it in Broadwell, it's quite possible that some games will see a marked performance increase.

Though there's also the possibility that there are bottlenecks elsewhere in the design that keep Haswell GT2 -> GT3 from demonstrating as much of a performance improvement as a doubling of resources should provide. And again, there's a pretty decent chance that such would be addressed in Broadwell. Have to remember that we've not really seen a new graphics architecture out of Intel since Sandy Bridge. Sure, Ivy Bridge was labeled Gen7 versus Sandy Bridge's Gen6, but all the materials I've seen have Gen7 as little more than an evolution of Gen6. Same goes for Haswell's Gen7.5: sure, there are a number of improvements, but it's basically the same thing. Whereas all the hints thus far are that Gen8 is a major overhaul, and that gives it a lot of potential.
 
Just out of curiosity, how much die space would adding 32MB of eSRAM (or whatever the L4 cache is) to Kaveri require?

Lots. Adding 32MB of eSRAM to the Xbox One APU made it bigger than the PS4 APU, despite the fact that the PS4 has 6 extra CUs in its GPU.

Of course, AMD needs to buy as many wafers as possible from GloFo... so maybe blowing up the die size of its APUs would be a good idea.
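
For a rough feel of "lots", here's a back-of-the-envelope estimate; the ~0.13 µm² 6T bitcell is an assumed ballpark for a 28nm-class process, not an AMD figure:

# Back-of-the-envelope only: the bitcell area is an assumed 28nm-class
# ballpark, and real arrays add redundancy, sense amps, and interface
# logic on top of the raw cell area.
bits = 32 * 2**20 * 8            # 32MB of eSRAM expressed in bits
cell_um2 = 0.13                  # assumed 6T SRAM bitcell area at 28nm
raw_mm2 = bits * cell_um2 / 1e6  # convert um^2 to mm^2
print(raw_mm2)                   # ~35 mm^2 of raw cells before any overhead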
 
Lots. Adding 32MB of eSRAM to the Xbox One APU made it bigger than the PS4 APU, despite the fact that the PS4 has 6 extra CUs in its GPU.

Of course, AMD needs to buy as many wafers as possible from GloFo... so maybe blowing up the die size of its APUs would be a good idea.

I actually thought they would go balls to the wall with a fat-ass die on GloFo to make up for the WSA. They have to pay them regardless, so it wouldn't be a bad idea.
 
Sure, Ivy Bridge was labeled Gen7 versus Sandy Bridge's Gen6, but all the materials I've seen have Gen7 as little more than an evolution of Gen6.

Umm, the 16-EU Ivy Bridge has 2x the FLOPS of the 12-EU Sandy Bridge.

Also the "big jump" with Broadwell they are claiming they contrast with Gen 3 with Gen 4 jump. Performance-wise, that was horrendous. GMA X3000 was in no way a "big jump" over GMA 950.
 