CPU Architect from Intel does an AMA on Reddit. A few interesting excerpts follow.


ShintaiDK

Lifer
Apr 22, 2012
According to AnandTech and SemiAccurate (although the latter source is not that credible), there will be a version of Haswell with on-package DDR3. Do you have any sources to the contrary? It'd be much appreciated!

All the ES samples so far have nothing. Plus, there is nothing that points to Haswell having any interface of that kind to connect it to.
 

Fjodor2001

Diamond Member
Feb 6, 2010
What if they didn't?

Well, you made a statement that you were sure we won't see any major graphics performance increase until Skylake.

So are you sure they will not add fast on-die RAM dedicated to the iGPU in Broadwell?

Also, note that the AMA Intel guy said regarding graphics performance increases: "Haswell will improve over Ivy Bridge. Broadwell will be a bigger jump."

But I suppose you have better inside information than he does... :sneaky:
 
Mar 10, 2006
ShintaiDK said:
All the ES samples so far have nothing. Plus, there is nothing that points to Haswell having any interface of that kind to connect it to.

Ah. Maybe the on-package DDR3 is only for select mobile models? Do you have engineering samples of Haswell?


...WHAT DO YOU KNOW? :D
 

ShintaiDK

Lifer
Apr 22, 2012
Fjodor2001 said:
Well, you made a statement that you were sure we won't see any major graphics performance increase until Skylake.

So are you sure they will not add fast on-die RAM dedicated to the iGPU in Broadwell?

Also, note that the AMA Intel guy said regarding graphics performance increases: "Haswell will improve over Ivy Bridge. Broadwell will be a bigger jump."

But I suppose you have better inside information than he does... :sneaky:

Haswell uses the generation 7+ IGP; Broadwell uses generation 8. So it would be weird if it wasn't so...
 

Fjodor2001

Diamond Member
Feb 6, 2010
ShintaiDK said:
Haswell uses the generation 7+ IGP; Broadwell uses generation 8. So it would be weird if it wasn't so...

I know. But do you still think that Broadwell->Skylake will be a bigger jump than Haswell->Broadwell? If so, why?

Also, how do you know that Broadwell will not have on-board dedicated iGPU RAM?

Does it make sense to just keep increasing GPU performance if the RAM is a bottleneck anyway?
 

ShintaiDK

Lifer
Apr 22, 2012
Fjodor2001 said:
I know. But do you still think that Broadwell->Skylake will be a bigger jump than Haswell->Broadwell? If so, why?

Also, how do you know that Broadwell will not have on-board dedicated iGPU RAM?

Does it make sense to just keep increasing GPU performance if the RAM is a bottleneck anyway?

Bandwidth.

And yes, it does. Especially for newer, more "complex" and demanding games and apps, rather than pure throughput. The bottleneck still grows, though.

It's no different than with discrete cards.
 

Fjodor2001

Diamond Member
Feb 6, 2010
ShintaiDK said:
Bandwidth.

And yes, it does. Especially for newer, more "complex" and demanding games and apps, rather than pure throughput. The bottleneck still grows, though.

It's no different than with discrete cards.

You're contradicting yourself. You say that the reason Skylake will be a bigger jump is that it will get higher bandwidth with DDR4, yet you also say that bandwidth is not so important after all for more "complex" demanding games and apps.

Also, since you did not answer my question regarding the on-die RAM, I take it you don't know and are just guessing. In that case your whole prediction could be incorrect, if Intel indeed adds on-die RAM in Broadwell.
 

ShintaiDK

Lifer
Apr 22, 2012
Fjodor2001 said:
You're contradicting yourself. You say that the reason Skylake will be a bigger jump is that it will get higher bandwidth with DDR4, yet you also say that bandwidth is not so important after all for more "complex" demanding games and apps.

Also, since you did not answer my question regarding the on-die RAM, I take it you don't know and are just guessing. In that case your whole prediction could be incorrect, if Intel indeed adds on-die RAM in Broadwell.

No, I am not. You simply translate it into what you wish to read.

Llano and Trinity are a good example here. Both are heavily memory starved, yet Trinity is still faster due to a higher-performing GPU.

Example: [image: mem-5.png]
 

Exophase

Diamond Member
Apr 19, 2012
Releasing Haswell GT3 without some fast embedded memory sounds like a misstep to me... really not expecting that...

It's not like main memory bandwidth will play zero role in GPU performance, unless their embedded RAM ends up being 512+ MB. But I also see no reason to assume they'll often be bandwidth limited with current DDR3. They're teetering on the edge right now, and GT3 probably won't be twice as fast as the HD 4000 in top-bin i7s due to the much lower thermal envelope of its SKUs, so the embedded RAM won't need to cover even half the current bandwidth load. Seems reasonable to me. Meanwhile, for the desktop parts the boost going to HD 4600 or whatever isn't going to be that big and can probably be serviced by going up a grade in DDR3 clocks.
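[Editor's note: for context on the bandwidth figures being argued over, here is a quick back-of-envelope sketch. It is my own illustration, not from the thread, and assumes a standard dual-channel DDR3 setup with a 64-bit bus per channel; figures are peak, not sustained, rates.]

```python
# Peak bandwidth of a dual-channel DDR3 interface, shared by the CPU and iGPU.
# Illustrative numbers only; sustained bandwidth is lower than peak.

def ddr3_peak_bandwidth_gbs(transfer_rate_mts, channels=2, bus_width_bits=64):
    """Peak GB/s = transfers per second * bytes per transfer * channels."""
    bytes_per_transfer = bus_width_bits / 8
    return transfer_rate_mts * 1e6 * bytes_per_transfer * channels / 1e9

for rate in (1333, 1600, 1866, 2133):
    print(f"DDR3-{rate}, dual channel: {ddr3_peak_bandwidth_gbs(rate):.1f} GB/s")

# DDR3-1600 gives 25.6 GB/s peak. Each DDR3 speed grade adds roughly 4 GB/s,
# which is the headroom Exophase is pointing at for the desktop parts.
```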

Intel and others have said Broadwell will feature a jump in GPU performance. I don't see why it wouldn't: it's a die shrink, and they have room to grow the GPU execution units again like they did with IB, so why wouldn't they? Will they need more bandwidth to accommodate this? Maybe. Will DDR4 bring the performance advantage? I doubt it.
 

Ferzerp

Diamond Member
Oct 12, 1999
I get excited from time to time too, but typically not for "figuring out" the obvious. Intel not getting into the discrete GPU business, and big chips = more money... which of these was ever a hot topic to the contrary?


You misinterpreted. The point is, an iGPU is always going to be crap compared to discrete (as long as discrete cards continue to be manufactured). No one would pay for the gargantuan die needed to make a decent iGPU as well as a decent CPU when the total package would cost far more than the separate parts.
 

Haserath

Senior member
Sep 12, 2010
They used to sell ~300 mm^2 dies for $200-300. I wonder if they would be able to do it again?

Ivy is a shrimp at 160 mm^2.
 

Fjodor2001

Diamond Member
Feb 6, 2010
ShintaiDK said:
No, I am not. You simply translate it into what you wish to read.

Llano and Trinity are a good example here. Both are heavily memory starved, yet Trinity is still faster due to a higher-performing GPU.

The Trinity iGPU is not that much faster than the Llano iGPU.

There's a point where increasing GPU performance without increasing RAM bandwidth does not yield much benefit.
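[Editor's note: the shape of this argument can be shown with a toy roofline-style model. This is my own sketch; the workload constants are invented for illustration, not measured from any real iGPU.]

```python
# Toy model of why a faster iGPU shows diminishing returns once a fixed
# memory bandwidth becomes the binding constraint. Constants are made up.

def frame_rate(gpu_gflops, bandwidth_gbs, gflop_per_frame=2.0, gb_per_frame=0.3):
    """Frame time is bounded by the slower of compute and memory traffic."""
    compute_time = gflop_per_frame / gpu_gflops   # seconds spent on shading
    memory_time = gb_per_frame / bandwidth_gbs    # seconds moving data
    return 1.0 / max(compute_time, memory_time)

for gflops in (100, 200, 400, 800):
    print(f"{gflops:4d} GFLOPS @ 25.6 GB/s -> {frame_rate(gflops, 25.6):.0f} fps")

# Output climbs from 50 fps to ~85 fps, then flatlines: extra compute stops
# helping once the memory term dominates, which is Fjodor2001's point.
# ShintaiDK's counterpoint is that the climb before the flatline is still
# worth having, as with Llano vs. Trinity.
```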
 

ShintaiDK

Lifer
Apr 22, 2012
Fjodor2001 said:
The Trinity iGPU is not that much faster than the Llano iGPU.

There's a point where increasing GPU performance without increasing RAM bandwidth does not yield much benefit.

It's still better, isn't it? You deal with the limits you've got. And for AMD, for example, the benefit was worth it.

Plenty of discrete cards at the lower end are heavily bandwidth starved, yet we still got those cards.
 

Ferzerp

Diamond Member
Oct 12, 1999
Haserath said:
They used to sell ~300 mm^2 dies for $200-300. I wonder if they would be able to do it again?

Ivy is a shrimp at 160 mm^2.


You wouldn't be willing to pay for it.

Die area without node information is meaningless: 160 mm^2 @ 22 nm is pretty much the same as 300 mm^2 @ 32 nm. Each shrink has massive capital costs, and as such we get a smaller die, because few are willing to pay the price of the same die area plus the premium needed to fund the node shrink and continued R&D.
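[Editor's note: the arithmetic behind that comparison, under ideal scaling. A quick sketch of my own; real-world shrinks scale worse than the ideal square law.]

```python
# Ideal die-area scaling across process nodes: linear dimensions scale with
# the feature-size ratio, so area scales with its square.

def equivalent_area_mm2(area_mm2, from_nm, to_nm):
    """Area the same design would occupy after an ideal shrink (or un-shrink)."""
    return area_mm2 * (to_nm / from_nm) ** 2

# Ivy Bridge's ~160 mm^2 at 22 nm, 'un-shrunk' back to a 32 nm node:
print(f"{equivalent_area_mm2(160, 22, 32):.0f} mm^2")  # ~339 mm^2, i.e. 300 mm^2-class
```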
 

Fjodor2001

Diamond Member
Feb 6, 2010
ShintaiDK said:
It's still better, isn't it? You deal with the limits you've got. And for AMD, for example, the benefit was worth it.

Plenty of discrete cards at the lower end are heavily bandwidth starved, yet we still got those cards.

Well, it's not like they have a lot of choice with Trinity. They don't have the die space to add substantial amounts of on-die RAM at 32 nm. That is, unless they would remove some cores or make the die ridiculously large and expensive, neither of which is acceptable.

Also, you said that Broadwell will be so severely memory bandwidth limited that there will not be a big graphics performance jump until we get DDR4 with Skylake.

That seems to contradict what the Intel AMA guy said. So based on that, how can you be so sure Intel will not add on-die dedicated iGPU RAM in Broadwell?
 

2is

Diamond Member
Apr 8, 2012
Ferzerp said:
You misinterpreted. The point is, an iGPU is always going to be crap compared to discrete (as long as discrete cards continue to be manufactured). No one would pay for the gargantuan die needed to make a decent iGPU as well as a decent CPU when the total package would cost far more than the separate parts.

OK... When was that ever under any real debate? Most people's expectation for the iGPU is that it will be good enough for casual gaming on a laptop when they're on the road or something. Very few people see it completely replacing the dGPU, particularly for PC gamers. You made it sound like you were the only voice of reason by somehow predicting iGPUs aren't going to be as powerful as dGPUs.
 

Ferzerp

Diamond Member
Oct 12, 1999
2is said:
OK... When was that ever under any real debate? Most people's expectation for the iGPU is that it will be good enough for casual gaming on a laptop when they're on the road or something. Very few people see it completely replacing the dGPU, particularly for PC gamers. You made it sound like you were the only voice of reason by somehow predicting iGPUs aren't going to be as powerful as dGPUs.

Read some of the threads on here. There are a disturbing number of people who actually believe that iGPUs are something other than a step backwards and will lead the way to faster graphics...

All they really do is lower the cost of the very, very low end, while the rest of us subsidize their development and are stuck with a rather significant die area that we will never get any use out of.
 

2is

Diamond Member
Apr 8, 2012
I've read plenty of threads on here; where you see a disturbing number, I see an extremely small minority.
 

Fx1

Golden Member
Aug 22, 2012
That's simply wrong. Samsung's smartphone division uses what they want. You would find that a lot of Samsung's smartphones don't use Samsung CPUs.

Only because Samsung has limited capacity and also doesn't really do the lower end.
 

ShintaiDK

Lifer
Apr 22, 2012
Fx1 said:
Only because Samsung has limited capacity and also doesn't really do the lower end.

They sure have a lot of lower end, don't they? Or is their capacity just so tiny?

The S3 Mini doesn't use Samsung. Neither does the flagship Galaxy S3 for the Japanese and North American markets, for that matter.
 

2is

Diamond Member
Apr 8, 2012
Ferzerp said:
I've got higher standards. I think even 5 people with that belief is a disturbing number.

Here is some reference material: http://forums.anandtech.com/showthread.php?t=2290676

edit: yes, they're a small minority, but it bothers me that they exist at all ;)

What you call high standards everyone else would merely call your opinion. Seems these delusions of grandeur aren't limited to figuring out what everyone else already knew. ;)
 

Exophase

Diamond Member
Apr 19, 2012
Ferzerp said:
Read some of the threads on here. There are a disturbing number of people who actually believe that iGPUs are something other than a step backwards and will lead the way to faster graphics...

Before CPU IGPs, what we got instead was a majority of computers using chipset IGPs. Intel having the greatest volume in GPUs is nothing new; it goes back several years.

The old IGPs were on old processes and barely got any attention; they pretty much started out just because there was pad-limited space left over. CPU IGPs share cutting-edge process technology and benefit from tighter integration with high-end CPU designs (so not just good for CPU communication, but they get things like shared cache). The benefit for power consumption is obvious.

Motherboard IGPs were marginalized even further once the memory controller moved on-die. You don't want to go back to those, do you? Or do you think the whole market should now embrace discrete GPUs? If you answer neither, then you shouldn't call CPU IGPs a step backwards.

Of course none of that is counter to your point that they're not going to start threatening serious discrete GPUs.

Ferzerp said:
All they really do is lower the cost of the very, very low end, while the rest of us subsidize their development and are stuck with a rather significant die area that we will never get any use out of.

They're not lowering the cost of the lowest-end discrete GPUs; there wasn't really any cost to cut there to begin with. No, they're getting rid of them altogether. The baseline for low-end discretes is soon going to be offering something tangibly better than the best APUs. Otherwise you just can't compete on price; for most people the IGP is going to be essentially free.

The problem is that discrete GPU vendors were using the income from low-end GPUs to fund development of the high-end ones. But AMD replaced that with APUs, and nVidia probably moved to feeding off of mobile SoC sales.