Broadwell GT3 48EUs? TDP range 4.5W-47W


mrmt

Diamond Member
Aug 18, 2012
3,974
0
76
All this volume is helping pay for dGPU development. nVidia and AMD can't live by only selling GTX 770/R9 280X and up. And iGPU advancements will only put further pressure on this. Sooner or later, you end up with AMD and nVidia not developing any new dGPUs, even though the iGPU is not as fast as the top dGPUs.

AMD might be able to fund their GPU R&D with their APUs. They can reuse a lot of the same tech here, but that business is already losing money and their balance sheet is in shambles. They are in trouble, and will need *a lot* of embedded business to keep on playing.

Nvidia is in a different situation. Their bread-and-butter business will be threatened by the iGPU, but their PSB business is strong enough to fund graphics R&D, and they are sitting on a big pile of cash. They need to find an escape door as soon as possible, but they have time to search for it.
 

9enesis

Member
Oct 14, 2012
77
0
0
Sooner or later, you end up with AMD and nVidia not developing any new dGPUs, even though the iGPU is not as fast as the top dGPUs.
whereas nVidia would be dead in the water (if they continue down the Tegra path) by that time... in some 5 to 7 years, I guess... :biggrin:
 

Zodiark1593

Platinum Member
Oct 21, 2012
2,230
4
81
You have to remember, performance as such is secondary; economics is the main priority. AMD and nVidia are currently having their mobile dGPU lines destroyed, and mobile accounts for something like 60% of the market. On the desktop, it's already getting hard to justify a lot of the lower-end dGPUs.

All this volume is helping pay for dGPU development. nVidia and AMD can't live by only selling GTX 770/R9 280X and up. And iGPU advancements will only put further pressure on this. Sooner or later, you end up with AMD and nVidia not developing any new dGPUs, even though the iGPU is not as fast as the top dGPUs.
You're forgetting one very important market: the HPC crowd, the professionals who use Quadro/FirePro and Tesla GPUs. Top-end consumer video cards are derived from these parts. It costs relatively little to sell the lower-binned chips to gamers and hobbyists.

Unless iGPUs can satisfy the compute crowd, dGPUs are much more likely to survive.
 

NTMBK

Lifer
Nov 14, 2011
10,483
5,902
136
You're forgetting one very important market: the HPC crowd, the professionals who use Quadro/FirePro and Tesla GPUs. Top-end consumer video cards are derived from these parts. It costs relatively little to sell the lower-binned chips to gamers and hobbyists.

Yes, but the volume of the low-end consumer market is what funds the shared R&D that the HPC market uses. The economics become very different without a low-end consumer market to help subsidise the R&D.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
You're forgetting one very important market: the HPC crowd, the professionals who use Quadro/FirePro and Tesla GPUs. Top-end consumer video cards are derived from these parts. It costs relatively little to sell the lower-binned chips to gamers and hobbyists.

They will both lose the HPC crowd to Xeon Phi.

And for the professionals, AMD already showed where their fate lies by selling certified iGPUs.

http://www.amd.com/us/products/workstation/graphics/ati-firepro-3d/APU/Pages/APU.aspx
 

mikk

Diamond Member
May 15, 2012
4,311
2,395
136
I can say from experience that Intel gfx performance is still inconsistent and a far cry from the synthetic benchmark results. Do note that when you compare: use actual games, and lesser-known games. I would take an nVidia GT 640 over some 40-50 EU cached solution if it's video or gaming performance we're talking about. It's about consistent driver quality. Add to that, Intel have a history of abandoning their older gfx, leaving customers with non-functioning or buggy drivers.


I don't think it is driver-related. That alone surely is not substantial anymore. Gen7 has some flaws, in particular the weak ROP performance, which is one reason why Intel's MSAA performance is so bad. As mentioned, Gen8 is vastly different, with almost twice the sampler throughput as well as, finally, 2xMSAA support in hardware. It is also important to note that the majority of game devs usually don't test and optimize their games for an Intel iGPU. In the last 1-2 years, issues with Intel iGPUs were often app-related, not driver-related.
 

9enesis

Member
Oct 14, 2012
77
0
0
Quadro/FirePro and Tesla GPUs
Those are likely to be obsolete compute technology by that time...

ETA: what AMD or Intel needs to do is make a decent APU, meaning something like a Sandy Bridge i5 + HD 7850 (performance-wise) on a single die within a reasonable TDP - end of story... then "CrossFire" several of those (or whatever you wish to call it) on a single mobo...

P.S. Oh... forgot about stacked RAM... sorry, mix that in too :D
 

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
I think one important question is how much processing power games will require going forward. If things seem to be leveling out, then AMD and nVidia may be in trouble sooner rather than later. But if game demands continue to rise, then they may have a little breathing room while the iGPUs continue to play catch-up.

I would say that the non-gaming portion of the mobile market far exceeds that of actual gamers. Keep in mind that, generally speaking, people do not buy MacBook Airs for gaming. They do not buy MacBook Pros for gaming. Can they play lightweight games now and then? Sure. But that isn't the primary purpose.

Gaming is a nice side effect, but it isn't the be-all and end-all for Intel. Intel doesn't need to match the GTX 780 to have a successful iGPU product. Remember, Apple is the very reason Intel created their iGPU years ago - and it wasn't because of gaming. Most students are buying ultrabooks and MacBooks for productivity and media consumption, with perhaps a little "lightweight" gaming on the side. Older folks are buying MacBooks and ultrabooks for media consumption and basic application usage - most of them aren't hardcore gamers.

Now I do think iGPUs will catch up to dGPUs at some point (years from now), but it's not super critical for that to happen anytime soon. Gaming on a mobile device is generally an afterthought. If you want a full-time gaming laptop, that is a different sector of the market, necessitating a mobile dGPU such as the GTX 780M or something along those lines. Intel doesn't have to match that to have a great product, as mentioned. Most MacBook purchasers tend to focus on battery life and system responsiveness in terms of initial impressions; Intel's mobile CPUs work fine in this area - the new MacBooks are getting 13 hours of battery life with excellent 2D UI responsiveness.

I'm sure gaming will improve on iGPUs and APUs at some point, but it isn't the primary reason for iGPUs to exist; it's mostly a secondary benefit. Someone who *really* wants to game will just buy an Alienware laptop or something.
 

Zodiark1593

Platinum Member
Oct 21, 2012
2,230
4
81
They will both lose the HPC crowd to Xeon Phi.
The Xeon Phi is a dedicated card of sorts, just with a different architecture from standard GPUs. Give it a geometry engine and a couple of other things like ROPs, and now you've got a video card with some very easy-to-program shaders. :p
 

Ventanni

Golden Member
Jul 25, 2011
1,432
142
106
AMD might be able to fund their GPU R&D with their APUs. They can reuse a lot of the same tech here, but that business is already losing money and their balance sheet is in shambles. They are in trouble, and will need *a lot* of embedded business to keep on playing.

Nvidia is in a different situation. Their bread-and-butter business will be threatened by the iGPU, but their PSB business is strong enough to fund graphics R&D, and they are sitting on a big pile of cash. They need to find an escape door as soon as possible, but they have time to search for it.

I agree here. If AMD can get their APU sales up, then we get to keep our Radeon dGPU options for future gaming, since APU sales will simply overlap the low-end market. Other markets are HPC, server compute, mobile GPU tech licensing, and custom designs.

Nvidia is in a different boat, since they don't make [x86] CPUs themselves. They could alter their licensing deal with Intel to generate more revenue. They also have a strong foundation in HPC and mobile tech, which will aid dGPU development. I firmly believe the key platform for funding dGPU R&D lies with Nvidia's Tegra line, though.

Or, what might also happen is AMD and Nvidia simply raising the prices of their dGPU lines to help fund the R&D efforts?
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
The Xeon Phi is a dedicated card of sorts, just with a different architecture from standard GPUs. Give it a geometry engine and a couple of other things like ROPs, and now you've got a video card with some very easy-to-program shaders. :p

Not really; the SP performance of Xeon Phi is low, while the DP performance is high.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
Or, what might also happen is AMD and Nvidia simply raising the prices of their dGPU lines to help fund the R&D efforts?

That would lower the volume, so a price increase could have a negative impact depending on where the sweet spot is. It could also have a positive one. For CPUs, for example, Intel is pretty much showing where the sweet spot between price and volume is: a lower or higher price would, most likely, lower profit.
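
A minimal sketch of that sweet-spot logic (entirely made-up numbers and a hypothetical linear demand curve, not real sales data):

```python
# With a downward-sloping demand curve, profit = volume x margin peaks
# at a single price; moving in either direction lowers profit.
def volume(price, base=1_000_000, slope=2_000):
    """Hypothetical units sold at a given price (linear demand)."""
    return max(base - slope * price, 0)

def profit(price, unit_cost=150):
    return volume(price) * (price - unit_cost)

for price in (250, 300, 325, 350, 400):
    print(f"${price}: profit {profit(price) / 1e6:.2f}M")
# Peaks at $325 in this toy model; both cheaper and pricier earn less.
```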
 

krumme

Diamond Member
Oct 9, 2009
5,956
1,596
136
I don't think it is driver-related. That alone surely is not substantial anymore. Gen7 has some flaws, in particular the weak ROP performance, which is one reason why Intel's MSAA performance is so bad. As mentioned, Gen8 is vastly different, with almost twice the sampler throughput as well as, finally, 2xMSAA support in hardware. It is also important to note that the majority of game devs usually don't test and optimize their games for an Intel iGPU. In the last 1-2 years, issues with Intel iGPUs were often app-related, not driver-related.

You are probably right. But why don't the devs make it work for Intel? Why don't the devs pay for it? Is it too difficult, or not a good cost/benefit?
And can you trust that they will in the future?

Whatever the reason, as a consumer it's a risk now, and it certainly is until Intel has proved otherwise.

But notice you say Gen8 will fix it. For the last two gens I thought the same, but it's the same story each time: next time... Now I will wait for proof.
Hell, IB was still buggy for extremely simple video decoding. My 6-year-old 8600M laptop could do that better (before it exploded, lol).
 

SammichPG

Member
Aug 16, 2012
171
13
81
Why won't you answer my question?
Why should I settle for less?
Why should the lower standards of consoles infect PCs?

If you like IGPs... buy a console... stop acting like IGPs will kill GPUs tomorrow.

When my rig needs to be replaced, I will bet that I can get a CPU + GPU combo no IGP can touch... I use PCs for the performance, not the lack of performance.

In 5 years IGPs will still be chasing GPUs in performance... want to take a bet on it?
And since gaming requirements are not static... IGPs will still lag behind GPUs in the future too.

But like I said... give me a call when a CPU+IGP beats my current rig... unless you are proposing that a lot of us downgrade our performance just to go IGP?

Your point is moot.
IGPs are progressing faster than screen resolution and the hardware demands of games.
Right now only casual players and low-resolution users are fine with IGP performance; soon (in less than 10 years) the vast majority of people will be completely satisfied by an IGP, and only the most dedicated consumers will have the money and motivation to buy a discrete card.

It's most likely going to look like the modern PC audio market, with most people using the integrated sound chip on the motherboard.

We will get cheap computers that are good enough, and you'll be running your games with 16x AA on 8K screens.

In b4 we start everything over with hardware-accelerated ray tracing. :biggrin:
 

Zodiark1593

Platinum Member
Oct 21, 2012
2,230
4
81
Your point is moot.
IGPs are progressing faster than screen resolution and the hardware demands of games.
Right now only casual players and low-resolution users are fine with IGP performance; soon (in less than 10 years) the vast majority of people will be completely satisfied by an IGP, and only the most dedicated consumers will have the money and motivation to buy a discrete card.

It's most likely going to look like the modern PC audio market, with most people using the integrated sound chip on the motherboard.

We will get cheap computers that are good enough, and you'll be running your games with 16x AA on 8K screens.

In b4 we start everything over with hardware-accelerated ray tracing. :biggrin:
The bold is what I'm waiting for. I want it so very badly. ^_^

The only thing remotely coming close for my use is abusing CUDA-based rendering on Nvidia cards. Now, what would be cool for future CG movies is having the render data fed to our movie-playback devices, and said devices render the movie on our TV in real time, at whatever resolution is needed. With the use of procedurals, the data needing to be loaded would be relatively minimal, giving equal quality between physical and digital media without killing bandwidth and data caps. And the kids would actually have something to mess around with (CG art) instead of consuming content all day.
 

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
Intel drivers.

Using a MacBook and various ultrabooks, I have never, not once, ever had an issue with Intel GPU drivers. Ever. So I'm quite confused about this entire Intel drivers myth.

Wish I could say the same about AMD mobile GPU drivers, which tend to mysteriously fix one thing and break another. On that note, AMD's mobile GPU drivers, if anything, are far worse than Intel's or Nvidia's. Their dynamic GPU switching basically... doesn't work. Ever. Whereas Optimus works fine in every instance I've seen it in action.

This is why Nvidia's mobile Kepler GPUs have won the mobile dGPU market, and won big: every design of note with a mobile dGPU has Nvidia in it, with very few exceptions. There's a reason for this. AMD's mobile GPU drivers are horribad. Their desktop GPU drivers? Passable, maybe even decent; even if they aren't as polished as Nvidia's, most people will do fine. Wish I could say the same for their mobile drivers: their answer to Nvidia's Optimus technology basically never works properly. And it goes without saying that Nvidia has better efficiency across the board than AMD, so battery life favors both Intel and Nvidia in the mobile space, even if one ignores the driver situation altogether. Intel is certainly leagues better than AMD in terms of mobile graphics drivers. I have never had an issue with any HD 4xxx-based portable.
 

mikk

Diamond Member
May 15, 2012
4,311
2,395
136
You are probably right. But why don't the devs make it work for Intel? Why don't the devs pay for it? Is it too difficult, or not a good cost/benefit?
And can you trust that they will in the future?

Whatever the reason, as a consumer it's a risk now, and it certainly is until Intel has proved otherwise.

But notice you say Gen8 will fix it. For the last two gens I thought the same, but it's the same story each time: next time... Now I will wait for proof.
Hell, IB was still buggy for extremely simple video decoding. My 6-year-old 8600M laptop could do that better (before it exploded, lol).


I guess it's usually too slow for a render coder to work with in a productive way. I know from one dev that they frequently get CPU offers with integrated iGPUs from Intel; they more or less refuse the iGPU. The render team works with dedicated GPUs from Nvidia and AMD. Maybe this will change sometime, but it is a long process; maybe in 2-3 years it will be different. I haven't said Gen8 fixes all GenX flaws, I just pointed out that Gen8 is a big redesign which hopefully can solve the biggest ones. I would like to see much better MSAA handling; that is my main complaint. 2xMSAA in hardware is already confirmed by Intel.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
I'm skeptical that Broadwell will be the panacea for iGPUs.

Remember the GPU performance claim for GT3e Broadwell-K? 80% faster than the previous generation, which likely refers to Haswell-K with GT2. That means the architectural improvement in the Broadwell GPU is maybe only 20-40%. I've argued this point with various others here before; 80% for GT3 vs GT3 is probably wild fantasy. Comparing Iris 5100 (28W) versus HD 4400 (15W) doesn't even show 50% gains, and that's roughly a 2x TDP increase with 2x the EUs. Are you telling me a 20% increase in EUs on Broadwell will magically provide 80%?!
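
To make that back-of-the-envelope arithmetic explicit, a quick sketch (EU counts taken from public spec sheets; it assumes performance scales linearly with EU count, which real workloads never quite achieve):

```python
# Haswell GT2 (HD 4400) has 20 EUs, Haswell GT3 (Iris 5100) 40 EUs,
# Broadwell GT3 48 EUs.
hd4400_eus, iris5100_eus = 20, 40
haswell_gt3_eus, broadwell_gt3_eus = 40, 48

print(f"Haswell GT3 vs GT2 EU ratio: {iris5100_eus / hd4400_eus:.1f}x")

eu_growth = broadwell_gt3_eus / haswell_gt3_eus - 1   # 0.20
claimed_gain = 0.80                                   # the "80% faster" claim

# If the 80% really were GT3 vs GT3, everything besides the extra EUs
# (architecture, clocks, bandwidth) would have to deliver the rest:
implied_other_gain = (1 + claimed_gain) / (1 + eu_growth) - 1
print(f"EU growth: {eu_growth:.0%}")                      # 20%
print(f"Implied non-EU gain: {implied_other_gain:.0%}")   # 50%
```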

Also, mind you, the Haswell iGPU was overhyped as well. Everyone thought we'd be getting 2x performance over Ivy Bridge at all power levels. It's a mere 20% faster in most segments, and to get the big one, Iris Pro, you need to shell out for an expensive laptop.

Tell me, where is the elusive $300 R-series chip? Aside from maybe 3 designs in OEM laptops, the MacBook Pro, and the BRIX, where do you see GT3e used?

Nobody uses them because performance/$/watt sucks on the Iris parts. So really, Haswell didn't advance things for the most important segment of the market.

Also, look how the iGPU gains are becoming smaller and more niche (compounded in the sketch after this list):
- Sandy Bridge: 2-2.5x
- Ivy Bridge: 40-60%
- Haswell: 20-30%, 2.5x if you buy the overpriced, unavailable GT3e parts
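
Taking rough midpoints of the ranges above at face value (these are the thread's ballpark figures, not official numbers), a small sketch of how the per-generation multiplier shrinks even as the cumulative gain compounds:

```python
# Midpoints of the quoted ranges (ballpark figures, not official data).
gen_gains = {"Sandy Bridge": 2.25, "Ivy Bridge": 1.50, "Haswell": 1.25}

cumulative = 1.0
for gen, factor in gen_gains.items():
    cumulative *= factor
    print(f"{gen}: x{factor:.2f} this gen -> x{cumulative:.2f} overall")
# Each step multiplies in less: 2.25 -> 1.50 -> 1.25.
```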

GT4 availability? The latest beta driver leaks show GT4 only in "test" configurations; the hardware only goes up to GT3. And package pics of Broadwell ULT/ULX show that, at least there, we're not getting the eDRAM.
 

Homeles

Platinum Member
Dec 9, 2011
2,580
0
0
I'm skeptical that Broadwell will be the panacea for iGPUs.

Remember the GPU performance claim for GT3e Broadwell-K? 80% faster than the previous generation, which likely refers to Haswell-K with GT2. That means the architectural improvement in the Broadwell GPU is maybe only 20-40%. I've argued this point with various others here before; 80% for GT3 vs GT3 is probably wild fantasy. Comparing Iris 5100 (28W) versus HD 4400 (15W) doesn't even show 50% gains, and that's roughly a 2x TDP increase with 2x the EUs. Are you telling me a 20% increase in EUs on Broadwell will magically provide 80%?!
Who said 80%?

Also, realize that even a 20% performance increase from "architecture" alone is quite large.

If we assume that it's 80% for GT3 vs Haswell GT2... does it really matter that it's "only" a comparison against GT2? An upgrade is an upgrade -- why complain about it, if it's presumably not going to cost more? Also, the EUs are just the shaders; other parts of the GPU may be expanding as well.
 

jpiniero

Lifer
Oct 1, 2010
16,963
7,379
136
Well, if you go back and look at the review, the Iris Pro does really well in compute, but not in fill rate and the rest of the graphics system. If they focused on making it more well-rounded, it could hit those numbers.

And yes, pricing is a problem. But it's Intel, so that shouldn't be a shock. It'll still be faster than Kaveri, especially since Kaveri doesn't have anything to help with bandwidth like the Iris Pro does.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
It'll still be faster than Kaveri, especially since Kaveri doesn't have anything to help with bandwidth like the Iris Pro does.

It will not be faster than Kaveri without eDRAM. Using DDR3 only, it will still be slower than Kaveri in the majority of workloads and games.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
It will not be faster than Kaveri without eDRAM. Using DDR3 only, it will still be slower than Kaveri in the majority of workloads and games.

Because of 1600MHz vs 2133/2400MHz memory. If Kaveri is paired with 1600MHz, it won't be faster.
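
For context, the theoretical peak-bandwidth gap behind this argument, as a quick sketch (standard DDR3 arithmetic: transfer rate in MT/s times 8 bytes per 64-bit channel; dual-channel configurations assumed):

```python
# Theoretical peak DDR3 bandwidth: MT/s x 8 bytes per 64-bit channel
# x number of channels. Dual channel assumed here.
def ddr3_bandwidth_gbs(mts, channels=2, bytes_per_transfer=8):
    return mts * bytes_per_transfer * channels / 1000

for speed in (1600, 2133, 2400):
    print(f"DDR3-{speed}: {ddr3_bandwidth_gbs(speed):.1f} GB/s")
# DDR3-1600: 25.6 GB/s; DDR3-2133: 34.1 GB/s; DDR3-2400: 38.4 GB/s.
# The faster memory gives an iGPU 33-50% more bandwidth to work with.
```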