
Broadwell GT3 with 48 EUs? TDP range 4.5W-47W

Page 5 - Seeking answers? Join the AnandTech community: where nearly half-a-million members share solutions and discuss the latest tech.
The target isn't trying to match dGPUs outright. Perhaps a 75W total power budget for the iGPU long term, so you'll need a lot of process advantage to get to midrange, and you won't get to the high end.

The first performance target to be really significant is matching the next-gen consoles for gaming at 1920x1080. Do that and there's very little reason for someone who doesn't have a bigger monitor to have a dGPU. There simply won't be games using notably more graphics power than the consoles can provide.
(Unless of course you're unusually obsessed with image quality options, which isn't a mainstream worry.)

That'll punch a huge hole in the market. Not sure when it'll happen of course.
 
Broadwell GT3 with eDRAM should already be ready to offer console performance, and GT4 way above it. GT2 might be close to desktop Kaveri in raw GFLOPS. And unless AMD pulls a miracle, they are completely out of the running in mobile GPU performance, since they have had large performance penalties there in the previous generations.
 
The trend towards iGPU development instead of selecting a dGPU has been slower than I thought. I think one reason is AMD and NV just removed the lowest end, e.g. the 80-shader parts, as a normal low-end dGPU option. The production cost difference from an 80-shader dGPU to, e.g., a 320-shader one is minimal. In that way they stalled the development for 2-3 years. But I think they are at the end of the line here now. iGPUs will move forward now and lessen the difference.

I can say from experience that Intel graphics performance is still inconsistent and a far cry from the synthetic benchmark results. Do note that when you compare, use actual games, including less well-known games. I would take an NV 640 over some 40-50 EU cached solution if video or gaming performance is what it's about. It's about consistent driver quality. Add that Intel has a history of abandoning their older graphics, leaving customers with non-functioning or buggy drivers.

Every time, we are promised it's fixed next time. I don't care about that talk anymore. They have to deliver.
 
I think you're all correct, honestly. Homeles is correct in his statements, but so are the rest of you guys. The final decision will be market driven.

I remember reading PC Gamer 16 years ago, you know, back when it wasn't just a few pages of content but practically a novel, and they stated that eventually GPUs would be incorporated into CPUs. Now, being very young at the time, I was floored. I couldn't fathom having that "fun" taken from me of being able to swap one component out for another. But, over time, that prediction eventually came true, and to be honest, I think we all like it whether we care to admit it or not.

The reason why I state the market will ultimately decide the fate of dGPUs is because it will boil down to whether producing them will be profitable or not. This wouldn't be an issue for AMD given how scalable GPU tech is, but they will need to increase their CPU sales in order for the economics to work. Nvidia will be hurt, and they've already seen it coming. That's why there's such a big push for Tegra. Intel won't produce a dGPU most likely.

So here's what is happening and will happen:

1. iGPUs reduce the available market for dGPUs. I think we can all agree on that.
2. Smaller market means less profitability for dGPUs.
3. AMD and Nvidia are in a position where they need to find new markets.

Note that in the past, dGPUs for gaming purposes existed because of the revenue generated from what is now the iGPU market. The sales of the bottom end fueled the R&D for the top, which turned around and fueled the sales of the bottom.

It all boils down to the growth of new markets.
 
On the desktop today, the price premium for GT3e over a GT2 is around $70-80.

are you sure?

i5-4670R: base clock 3.0GHz, turbo up to 3.7GHz
i5-4570: base clock 3.2GHz, turbo up to 3.6GHz

there is also the cache difference, but overall it doesn't seem relevant for CPU performance, and the 4570 has the larger L3 cache.

the main difference is the IGP, GT2 vs GT3, and the price is $192 vs $310...
you can buy over 2x the Iris Pro performance for this difference, I think...

and the biggest thing is the fact that GT3 is BGA-only, which makes it even more expensive for the consumer...

GT3 for desktops is simply irrelevant.
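Taking the two list prices quoted in this post at face value, the GT3e premium comes out even larger than the $70-80 figure mentioned earlier in the thread. A quick sanity check (prices as quoted here, not current street prices):

```python
# List prices quoted in the thread for the two desktop i5s
i5_4670r_usd = 310  # GT3e (Iris Pro) part
i5_4570_usd = 192   # GT2 part

premium = i5_4670r_usd - i5_4570_usd
print(f"GT3e list-price premium: ${premium}")  # prints: GT3e list-price premium: $118
```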
 
are you sure?

i5-4670R: base clock 3.0GHz, turbo up to 3.7GHz
i5-4570: base clock 3.2GHz, turbo up to 3.6GHz

there is also the cache difference, but overall it doesn't seem relevant for CPU performance, and the 4570 has the larger L3 cache.

the main difference is the IGP, GT2 vs GT3, and the price is $192 vs $310...
you can buy over 2x the Iris Pro performance for this difference, I think...

and the biggest thing is the fact that GT3 is BGA-only, which makes it even more expensive for the consumer...

GT3 for desktops is simply irrelevant.

OEMs don't pay list prices. Neither do consumers if they don't wish to (Microcenter being the obvious example).

Also, the GT3e parts are 65W, so at best they can be somewhat compared to the S models.

In terms of cache, remember they also get a 128MB L4 cache.

The desktop GT3e is used in iMacs and the Gigabyte Brix, for example.
 
OEMs don't pay list prices. Neither do consumers if they don't wish to (Microcenter being the obvious example).

The desktop GT3e is used in iMacs and the Gigabyte Brix, for example.

Apple is a different world, and how relevant is this specific "Brix" anyway?

GT3e is basically nowhere when it comes to desktops,
and it's easy to understand why.
 
Broadwell GT3 with eDRAM should already be ready to offer console performance, and GT4 way above it. GT2 might be close to desktop Kaveri in raw GFLOPS. And unless AMD pulls a miracle, they are completely out of the running in mobile GPU performance, since they have had large performance penalties there in the previous generations.

Wasn't it the GT4 that should be around 2 TFLOPS, thereby matching the 1.84 TFLOPS PS4? If so, I assume the GT3 will be less than that.
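For what it's worth, the 2 TFLOPS figure is consistent with a simple peak-FLOPS estimate. A sketch, assuming 16 single-precision FLOPs per EU per clock (two SIMD-4 FMA units per EU, as on recent Intel graphics generations) and a hypothetical ~1.3GHz clock:

```python
def peak_gflops(eus, clock_ghz, flops_per_eu_clock=16):
    """Theoretical single-precision peak: EUs x FLOPs/EU/clock x clock (GHz)."""
    return eus * flops_per_eu_clock * clock_ghz

# Hypothetical 96-EU GT4 at an assumed 1.3 GHz:
print(peak_gflops(96, 1.3))  # ~1996.8 GFLOPS, i.e. roughly 2 TFLOPS
# PS4 for comparison: 1152 shaders x 2 FLOPs (FMA) x 0.8 GHz
print(1152 * 2 * 0.8)        # ~1843.2 GFLOPS
```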
 
Yes, you should be careful to mention which next-gen console you're comparing to. They're really not the same performance-wise.

Console performance doesn't exclude the Xbox One (or the Wii U, for that matter). It's simply you over-reading the statement.

And unlike the consoles, the GT3 is not dragged down by a slow CPU.
 
Apple is a different world, and how relevant is this specific "Brix" anyway?

GT3e is basically nowhere when it comes to desktops,
and it's easy to understand why.

Yeah, it's easy to understand that as a BGA part it isn't going to be ubiquitous like LGA parts are. It is designed for high-end mobile SKUs (such as the MacBook Pro), super small form factors, and all-in-ones. Therefore it is a BGA part, and it won't be widespread in desktops. If Intel wanted it to be LGA, they would make it LGA. But desktop is a diminishing market; Intel is focused on the market that is growing, not the one that is shrinking. Apparently, being that AMD's mobile APUs are castrated in comparison to their LGA counterparts (and perform poorly as a result), AMD is focused on the shrinking market instead of the market that matters (mobile).

I guess the MacBook Pro, the highest-selling premium portable computer, is nowhere to be found.
 
Console performance doesn't exclude the Xbox One (or the Wii U, for that matter). It's simply you over-reading the statement.

And unlike the consoles, the GT3 is not dragged down by a slow CPU.

If any console is a valid comparison, then you might as well include the Atari 2600 too. Hopefully even the GT2 might beat that one. 😉
 
Yeah, it's easy to understand that as a BGA part it isn't going to be ubiquitous like LGA parts are. It is designed for high-end mobile SKUs (such as the MacBook Pro), super small form factors, and all-in-ones. Therefore it is a BGA part, and it won't be widespread in desktops. If Intel wanted it to be LGA, they would make it LGA. But desktop is a diminishing market; Intel is focused on the market that is growing, not the one that is shrinking. Apparently, being that AMD's mobile APUs are castrated in comparison to their LGA counterparts (and perform poorly as a result), AMD is focused on the shrinking market instead of the market that matters (mobile).

I guess the MacBook Pro, the highest-selling premium portable computer, is nowhere to be found.

It's actually not possible to make it LGA1150 on 22nm due to the die size of the CPU on the package. Broadwell/Skylake, however, should make it possible.

[Image: Haswell-Iris-Pro.jpg]
 
I hope that someone will sell Broadwell GT4e on a mini-ITX motherboard- either soldered on, or in an LGA socket. It sounds like the perfect part for a "console replacement" HTPC. The complete lack of Haswell GT3e for custom builds has been seriously disappointing.
 
If TSMC 28nm is equal to Intel 22nm in density, then TSMC 20nm, which will have almost 2x the density of 28nm, will be on par with Intel's 14nm.
Except it's not. Intel's 22nm process is actually considerably denser.

Intel's 22nm process has a minimum cell size of 0.092μm². TSMC's has a minimum cell size of 0.130μm². If I'm doing the math correctly, Intel's 22nm process is 41% denser.

Source

High performance logic is actually even more in Intel's favor: it's 48% denser (.108μm² vs .16μm²). [1][2]
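Those percentages check out. A quick way to verify the arithmetic, using the cell sizes quoted in this post:

```python
def density_advantage_pct(cell_a_um2, cell_b_um2):
    """Percent by which process A is denser than process B (smaller cell = denser)."""
    return (cell_b_um2 / cell_a_um2 - 1) * 100

print(round(density_advantage_pct(0.092, 0.130)))  # Intel 22nm vs TSMC 28nm SRAM: prints 41
print(round(density_advantage_pct(0.108, 0.160)))  # high-performance logic: prints 48
```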

You are right though -- Intel's GPU isn't very good for its density. But that will likely change with Broadwell. 😉
I think you're all correct, honestly. Homeles is correct in his statements, but so are the rest of you guys. The final decision will be market driven.

...

So here's what is happening and will happen:

1. iGPUs reduce the available market for dGPUs. I think we can all agree on that.
2. Smaller market means less profitability for dGPUs.
3. AMD and Nvidia are in a position where they need to find new markets.
Exactly. I do think the scales are tipped towards the iGPU side of things. If the dGPU guys find a "new" reason to sell their dGPUs, then they'll prolong the dGPU's usefulness.
 
What about the clock frequency of iGPUs vs dGPUs? Is there an advantage, and if so, is it a design choice or a result of better transistors and a process advantage?

For reference I remember reading "For the lulz... i5-4670k + HD4600 gaming performance" where BallaTheFeared is able to hit a 1.75GHz overclock with some impressive results. This is significantly higher than standard dGPU frequencies and overclocks, even considering the fact that it's probably in the top 1% of chips.
 
The fact that there is a serious conversation about iGPUs becoming a threat to dGPUs is in itself kind of surprising to me. It definitely does seem like we will be seeing, or are already seeing, a shift in the market.

Iris Pro with 40 EUs seems to perform at almost the level of a GT640 with 384 CUDA cores, or perhaps somewhat below an AMD 7750 with 512 stream processors. So we can roughly approximate EUs to CUDA cores and AMD stream processors by multiplying EUs by 10. 40 EUs for Iris Pro is "about" 400 CUDA cores/AMD stream processors.

If Broadwell GT4 has 96 EUs, that's about 960 stream processors, or around AMD 7850 performance. Certainly not a threat to the extreme high end with 2500+ stream processors, but it is competitive at the low end and something for AMD and nVidia to be concerned about. Lots of games are very playable with a 7850.
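The ~10x rule of thumb above is easy to apply. A toy sketch; the factor is an empirical guess from the benchmarks mentioned here, not an architectural equivalence:

```python
def approx_shader_equivalents(eus, factor=10):
    """Crude thread heuristic: ~10 CUDA cores / stream processors per Intel EU."""
    return eus * factor

print(approx_shader_equivalents(40))  # Iris Pro 5200: prints 400
print(approx_shader_equivalents(96))  # rumored Broadwell GT4: prints 960
```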

I think one important question is how much processing power games will require going forward. If things seem to be leveling out, then AMD and nVidia may be in trouble sooner rather than later. But if game demands continue to rise, they may have a little breathing room while the iGPUs continue to play catch-up.

Also, up to this point Intel has shown little to no interest in selling GT3 with desktop parts. This may be because they see little demand in the desktop space, and even less in the future.
 
What about the clock frequency of iGPUs vs dGPUs? Is there an advantage, and if so, is it a design choice or a result of better transistors and a process advantage?

For reference I remember reading "For the lulz... i5-4670k + HD4600 gaming performance" where BallaTheFeared is able to hit a 1.75GHz overclock with some impressive results. This is significantly higher than standard dGPU frequencies and overclocks, even considering the fact that it's probably in the top 1% of chips.

It was also a much simpler architecture... but given the performance and image quality, it was not a proper GPU 😉
 
What about the clock frequency of iGPUs vs dGPUs? Is there an advantage, and if so, is it a design choice or a result of better transistors and a process advantage?
Both, but mostly design choice. It's all governed by TDP. In this case, dGPUs have the advantage because they don't have to focus on low power.

This advantage that dGPU makers have will decrease over time, since Intel's IGPs will have a massive process superiority. They'll be able to hit higher frequencies at a same level of power draw. Heck, Intel could scale up their IGP and chase after the discrete market if they wanted to, and they'd probably have a pretty competitive product at 14nm; it wouldn't fit in with their business model, though.

Intel's 14nm process has a pretty sizable performance advantage over their 22nm process. 10nm should be an even larger improvement relative to 14nm, given that it's when new channel materials (SiGe, Ge, possibly III-V) are scheduled to drop in.
For reference I remember reading "For the lulz... i5-4670k + HD4600 gaming performance" where BallaTheFeared is able to hit a 1.75GHz overclock with some impressive results. This is significantly higher than standard dGPU frequencies and overclocks, even considering the fact that it's probably in the top 1% of chips.
What you're seeing there is Intel's process superiority. Intel can't run IGPs at those frequencies because of power consumption/heat.
It was also a much simpler architecture... but given the performance and image quality, it was not a proper GPU 😉
It's not that it's simpler; it's that it's not focused on gaming. AMD and Nvidia have much higher polygon rates, texel rates, and pixel fill rates. Intel on the other hand has much higher FLOPS.

As far as the image quality part goes, I have yet to see a legitimate source say as such.
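The TDP constraint described above follows from how dynamic power scales with frequency and voltage (roughly P ∝ f·V²): higher clocks usually demand higher voltage, so power rises much faster than frequency. A toy model, with all baseline numbers invented purely for illustration:

```python
def dynamic_power_w(freq_ghz, voltage_v, base_freq_ghz=1.0, base_v=1.0, base_w=15.0):
    """Scale dynamic power as P ~ f * V^2 relative to an arbitrary baseline point."""
    return base_w * (freq_ghz / base_freq_ghz) * (voltage_v / base_v) ** 2

# Pushing a hypothetical 15W-at-1.0GHz/1.0V IGP to 1.75GHz typically needs extra
# voltage, say 1.2V (illustrative), so dynamic power lands around 2.5x the baseline:
print(round(dynamic_power_w(1.75, 1.2), 1))  # prints 37.8
```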
 
The fact that there is a serious conversation about iGPUs becoming a threat to dGPUs is in itself kind of surprising to me. It definitely does seem like we will be seeing, or are already seeing, a shift in the market.

Iris Pro with 40 EUs seems to perform at almost the level of a GT640 with 384 CUDA cores, or perhaps somewhat below an AMD 7750 with 512 stream processors. So we can roughly approximate EUs to CUDA cores and AMD stream processors by multiplying EUs by 10. 40 EUs for Iris Pro is "about" 400 CUDA cores/AMD stream processors.

If Broadwell GT4 has 96 EUs, that's about 960 stream processors, or around AMD 7850 performance. Certainly not a threat to the extreme high end with 2500+ stream processors, but it is competitive at the low end and something for AMD and nVidia to be concerned about. Lots of games are very playable with a 7850.

I think one important question is how much processing power games will require going forward. If things seem to be leveling out, then AMD and nVidia may be in trouble sooner rather than later. But if game demands continue to rise, they may have a little breathing room while the iGPUs continue to play catch-up.

Also, up to this point Intel has shown little to no interest in selling GT3 with desktop parts. This may be because they see little demand in the desktop space, and even less in the future.

You have to remember, performance as such is secondary. Economics is the main priority. AMD and nVidia are currently having their mobile dGPU lines destroyed, and mobile accounts for something like 60% of the market. On the desktop, it's already getting hard to justify a lot of the lower dGPUs.

All this volume is helping pay for dGPU development. nVidia and AMD can't live by only selling GTX770/R280X and up, and IGP advancements will only put further pressure on this. Sooner or later, you end up with AMD and nVidia not developing any new dGPUs, even though the IGP is not as fast as the top dGPUs.
 