
Intel and eDRAM vs GDDR5


mrmt

Diamond Member
Aug 18, 2012
3,974
0
76
I will be surprised if there end up being any enthusiast boards with GDDR5. If the rumors are true I think we are talking about OEMs like HP and Lenovo buying GDDR5 in bulk.

Bulk? I don't think you have a clue of the magnitude of the problem. Today we should have something like 3.5 million Trinity systems per quarter, or roughly one million per month.

If we assume that half of those systems will be GDDR5 APUs, and that there will be four big OEMs buying the memory, that's roughly 150K kits per OEM per month. I don't think 150K kits per month would give any kind of leverage to whoever is trying to buy this from a memory maker.
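For reference, here is a back-of-envelope sketch of where that ~150K figure comes from; the per-quarter volume, the 50% GDDR5 share, and the four-OEM split are just the assumptions stated above, not market data.

Code:
# Back-of-envelope check on the kit volumes quoted above.
trinity_per_quarter = 3_500_000   # "something like 3.5 million Trinity systems per quarter"
gddr5_share = 0.5                 # assume half of those ship as GDDR5 APUs
big_oems = 4                      # assume four big OEMs split the buying

systems_per_month = trinity_per_quarter / 3
kits_per_oem_per_month = systems_per_month * gddr5_share / big_oems
print(f"{kits_per_oem_per_month:,.0f} GDDR5 kits per OEM per month")  # ~146,000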
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
By that logic, couldn't we also ask why eDRAM wasn't used in Sandy Bridge or Ivy Bridge?

To my knowledge, the IGP in neither of those products is sorely lacking RAM bandwidth. The extra bandwidth simply hasn't been needed until now.

But it is fairly common knowledge among enthusiasts and review sites that AMD's APUs, since Llano anyway, do benefit from higher bandwidth and need all the help they can get there.

On the flip-side, what AMD is doing is rather obvious given what they have to work with. They are basically trying to turn their CPUs into GPUs, and the motherboard into a GPU PCB to go along with it...because they know how to make discrete video cards.

Not that there is anything wrong with that; go with what works until it stops working, right? Do we really care if our N+2 gaming computers look like a glorified video card? I don't care how it looks; if the price/performance is there, then why not run with it?
 

joshhedge

Senior member
Nov 19, 2011
601
0
0
IDC, what do you think the probability is that Apple manages to get a custom dual-core SoC with eDRAM for their 13-inch Retina Pro, given that the TDP headroom in the 13-inch isn't high enough for the quad core with eDRAM?
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
Intel sells components; if they see potential for Apple with a dual core + GT3e, they would turn that around and sell it to everybody else, because it's stupid to limit a chip to a device that sells a few hundred thousand units or so.

That's exactly what they did when they made "custom" Core 2 Penryn chips for Apple. Soon enough, it became an official product.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
IDC, what do you think the probability is that Apple manages to get a custom dual-core SoC with eDRAM for their 13-inch Retina Pro, given that the TDP headroom in the 13-inch isn't high enough for the quad core with eDRAM?

I'd say 50/50.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
If there's one thing they don't do, it's explicitly limit certain chips to a single customer. I mean, come on, the only reason we have the Xeon E series chips is because they are basically hand-me-downs from the E5 folks. :)
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
According to Tom's Hardware, the custom eDRAM in Haswell is a 1.2GHz part. That implies 77GB/s of bandwidth.
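That ~77GB/s only falls out if you assume a 512-bit interface to the eDRAM; a quick sketch of the arithmetic (the bus width is an assumption on my part, not something the Tom's Hardware piece states):

Code:
# Rough bandwidth check for the rumored 1.2GHz eDRAM.
edram_clock_hz = 1.2e9    # the rumored 1.2GHz figure
bus_width_bits = 512      # assumed interface width; not stated in the source
bandwidth_gb_s = edram_clock_hz * (bus_width_bits / 8) / 1e9
print(f"{bandwidth_gb_s:.1f} GB/s")  # 76.8 GB/s, i.e. the ~77GB/s quoted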
 

Lepton87

Platinum Member
Jul 28, 2009
2,544
9
81
According to Tom's Hardware, the custom eDRAM in Haswell is a 1.2GHz part. That implies 77GB/s of bandwidth.

From what I heard they are shooting for around a 650M/660M performance target. I get over 100GB/s from my 660M when overclocked, 80GB/s at stock, so I don't see how they could achieve that performance. On the other hand, my brother has a DDR3 650M and it's only 20% slower; over 100GB/s for 384 Kepler cores is a waste. The 650M has what, 28.8GB/s of bandwidth? We compared overclocked cards, 34GB/s or so (I don't remember the exact memory clock) vs. over 100GB/s, and the increase in performance was just 20%. The GPU clock was the same; both manufacturers limited the OC to a +135MHz offset, and the default GPU clock was the same. Shame my Titan can't even do that :D What a dud I got, because of a dishonest seller. He selected the cards he had.
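For reference, roughly where those bandwidth numbers come from; the 128-bit bus widths and effective memory rates are my assumptions about the mobile Kepler cards, not figures taken from the post:

Code:
def bandwidth_gb_s(bus_width_bits, effective_rate_gbps):
    # bandwidth = bytes per transfer * effective transfer rate
    return (bus_width_bits / 8) * effective_rate_gbps

print(bandwidth_gb_s(128, 5.0))  # GTX 660M, 128-bit GDDR5 at 5 Gbps -> 80.0 GB/s (stock)
print(bandwidth_gb_s(128, 1.8))  # GT 650M DDR3 variant, 128-bit at 1.8 Gbps -> 28.8 GB/s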
 

podspi

Golden Member
Jan 11, 2011
1,982
102
106
Bulk? I don't think you have a clue of the magnitude of the problem. Today we should have something like 3.5 million Trinity systems per quarter, or roughly one million per month.

If we assume that half of those systems will be GDDR5 APUs, and that there will be four big OEMs buying the memory, that's roughly 150K kits per OEM per month. I don't think 150K kits per month would give any kind of leverage to whoever is trying to buy this from a memory maker.

I do have a clue of the magnitude of the problem. What constitutes bulk to you? Keep in mind we can pull up dGPU sales and the numbers aren't that high (especially considering not all of them use GDDR5, and most have much less than 4GB of memory). I do not think the price of GDDR5 will be a problem.
 

Roland00Address

Platinum Member
Dec 17, 2008
2,196
260
126
Can we really trust all the leaks saying the only quad-core Haswells with eDRAM will be in the 47W+ club? It is perfectly possible we are only looking at the best CPUs instead of the mainstream.

All the leaked info we have seen about these 47W CPUs shows maximum turbos of 3.5 to 3.9GHz. For comparison (the closer you get to 4GHz, the more you have to raise the TDP; keep it closer to 3GHz and you get much better performance/watt scaling), here are the current mobile quads, with a quick ratio sketch after the list:

The i7-3632QM has a TDP of 35 watts. It is a 2.2GHz quad with a max turbo of 2.9GHz for all 4 cores, 3.1GHz max turbo dual-core, and 3.2GHz max turbo single-core.

The i7-3630QM has a TDP of 45 watts. It is a 2.4GHz quad core with a max turbo of 3.2GHz for all 4 cores, 3.3GHz max turbo dual-core, 3.4GHz max turbo single-core. Up to 28% more TDP for about 13% more performance (45W vs 35W). (Note the 45W and 35W versions have the same tray price; who knows what the final price is for the OEMs.)

The i7-3940XM (Extreme Edition) has a TDP of 55 watts. It is a 3.0GHz quad core with a max turbo of 3.7GHz for all 4 cores, 3.8GHz max turbo dual-core, 3.9GHz max turbo single-core. Up to 57% more TDP for about 28% more performance (55W vs 35W). (Note the 55W part has a tray price about triple that of the 35W version; you are paying for that Extreme Edition.)
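A minimal sketch of that scaling, using only the max all-core turbo as the performance proxy (that is my simplification; the ~13% and ~28% figures above presumably also weigh base clocks and lighter-threaded turbo):

Code:
# Quick ratio check on the 35W/45W/55W Ivy Bridge mobile quads listed above.
chips = {
    "i7-3632QM": {"tdp_w": 35, "all_core_turbo_ghz": 2.9},
    "i7-3630QM": {"tdp_w": 45, "all_core_turbo_ghz": 3.2},
    "i7-3940XM": {"tdp_w": 55, "all_core_turbo_ghz": 3.7},
}
base = chips["i7-3632QM"]
for name, c in chips.items():
    tdp_gain = c["tdp_w"] / base["tdp_w"] - 1
    perf_gain = c["all_core_turbo_ghz"] / base["all_core_turbo_ghz"] - 1
    print(f"{name}: +{tdp_gain:.0%} TDP for +{perf_gain:.0%} all-core turbo")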
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
From what I heard they are shooting for around a 650M/660M performance target. I get over 100GB/s from my 660M when overclocked, 80GB/s at stock, so I don't see how they could achieve that performance. On the other hand, my brother has a DDR3 650M and it's only 20% slower; over 100GB/s for 384 Kepler cores is a waste. The 650M has what, 28.8GB/s of bandwidth? We compared overclocked cards, 34GB/s or so (I don't remember the exact memory clock) vs. over 100GB/s, and the increase in performance was just 20%. The GPU clock was the same; both manufacturers limited the OC to a +135MHz offset, and the default GPU clock was the same. Shame my Titan can't even do that :D What a dud I got, because of a dishonest seller. He selected the cards he had.

Don't forget that that is GPU-only bandwidth. The CPU also requires bandwidth.

If you test at higher resolutions or with AA, the delta goes up significantly.
 

erunion

Senior member
Jan 20, 2013
765
0
0
Buying Intel isn't stupid; paying $50 extra for a little eDRAM because you want to play games on an Intel integrated GPU, when you could have an actual GPU for the same money, is.

Actually, if GT3e really hits 650M-level performance, then Intel is either going to eat Nvidia's and AMD's lunches now, or will within the next iteration or two.

Every dollar (a few for GT3, a few more for eDRAM) that Intel can get for its high-end iGPUs comes right out of the pockets of Nvidia and AMD.

That is really what Intel is competing with at this point. APUs aren't a serious contender.
 

krumme

Diamond Member
Oct 9, 2009
5,956
1,596
136
Before you doom NV or AMD, let's try to game on these CPUs in real-world scenarios: latency, min frames, quality...
 

NTMBK

Lifer
Nov 14, 2011
10,455
5,842
136
That is really what Intel is competing with at this point. APUs aren't a serious contender.

Intel is selling APUs, silly. :p

Honestly though, why would APUs not be serious contenders? If we're measuring the gaming performance of the APU alone, not with a dGPU, then that is the one place where AMD becomes genuinely competitive again. We've not seen benches for Richland or Haswell yet, so we can't say for sure, but it should be an interesting fight.
 

Arkadrel

Diamond Member
Oct 19, 2010
3,681
2
0
Before you doom NV or AMD, let's try to game on these CPUs in real-world scenarios: latency, min frames, quality...


When was the last time you saw a review site look at the image quality of Intel's IGPs? Or at their latency or min frames?

They don't even bother comparing IGPs with low-performance discretes.
You'll have a hell of a hard time finding a review of IGPs that has a 5570, 440, 6670, or 640 in it.

When the IGPs in Kaveri and Haswell come out, I'll be disappointed if all they compare them to is the IGPs of older CPUs/APUs.
 

erunion

Senior member
Jan 20, 2013
765
0
0
Intel is selling APUs, silly. :p

Honestly though, why would APUs not be serious contenders?

APU is an AMD brand. I was using it in its proper sense.

They aren't competitors because they have negligible market share. Intel isn't releasing $300+ CPUs to take market share from $120 Richlands. They are doing it to increase ASP and their % of the BoM on Intel machines by cutting out the GPU vendors.
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
Sweepr already posted this from the Anandtech Pipeline article on Haswell from earlier this week:

"Based on leaked documents, the embedded DRAM will act as a 4th level cache and should work to improve both CPU and GPU performance. In server environments, I can see embedded DRAM acting as a real boon to multi-core performance."

In my opinion this is much more interesting than how it affects the iGPU. This means Haswell GT3e even without using the iGPU should see better CPU performance. L4 cache -- this is pretty significant.

A curious observation: with a Haswell GT3e rig and an SSD with SRT you could potentially have to pass through registers, uOp cache, L1, L2, L3, L4, DRAM and SSD caches before you even hit HDD. Just glad I don't have to write code that manages that kind of memory...
 

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
A curious observation: with a Haswell GT3e rig and an SSD with SRT you could potentially have to pass through registers, uOp cache, L1, L2, L3, L4, DRAM and SSD caches before you even hit HDD. Just glad I don't have to write code that manages that kind of memory...

No one has to; almost all of that memory management is transparent to software. The only parts that aren't would be register usage for the application (usually managed by a compiler) and disk caching, which is usually handled by the OS.
 

lagokc

Senior member
Mar 27, 2013
808
1
41
Sweepr already posted this from the Anandtech Pipeline article on Haswell from earlier this week:

"Based on leaked documents, the embedded DRAM will act as a 4th level cache and should work to improve both CPU and GPU performance. In server environments, I can see embedded DRAM acting as a real boon to multi-core performance."

In my opinion this is much more interesting than how it affects the iGPU. This means Haswell GT3e even without using the iGPU should see better CPU performance. L4 cache -- this is pretty significant.

A curious observation: with a Haswell GT3e rig and an SSD with SRT you could potentially have to pass through registers, uOp cache, L1, L2, L3, L4, DRAM and SSD caches before you even hit HDD. Just glad I don't have to write code that manages that kind of memory...

Now if only Intel would use the eDRAM in its 10-core Xeons where it makes sense, not merely in laptops where it doesn't.
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
No one has to; almost all of that memory management is transparent to software. The only parts that aren't would be register usage for the application (usually managed by a compiler) and disk caching, which is usually handled by the OS.

It was merely an observation. You don't need to tell me; I'm a programmer by trade. I'm simply remarking on how much I enjoy high-level languages compared to the morass it could have been otherwise...

While I'm sure it will be most useful in server chips, I wonder what kind of real-world CPU improvements the laptop versions will see over the eDRAM-less varieties.
 