Intel and eDRAM vs GDDR5


ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
eDRAM is nice because Intel can control production on marginal cost vs. marginal revenue terms using its excess production capacity. eDRAM is a perfect product for that.

eDRAM is also perfect because OEMs can use the same platform with or without eDRAM and save cost. 2 solutions, 1 platform = more profit.
 

mrmt

Diamond Member
Aug 18, 2012
3,974
0
76
AMD isn't going to be buying it, motherboard/laptop makers will be buying it at good prices and integrating it into their boards.

Good prices?

AMD is a nobody in the memory market, and the MB makers that will buy even smaller volumes will get good prices?

I smell blood here.
 

lagokc

Senior member
Mar 27, 2013
808
1
41
Are you forgetting the huge CPU performance difference?

The point is if you're willing to pay the extra $200 for an i7's CPU performance, you're probably going to be willing to pay another $100 for a standalone GPU rather than use Intel's joke of a GPU. If you want a laptop that can play games, AMD can do that; Intel cannot.

If you're only concerned about CPU performance and not games then why bother with the GT3e when the GT3 is $50 cheaper?
 

lagokc

Senior member
Mar 27, 2013
808
1
41
Good prices?

AMD is a nobody in the memory market, and the MB makers that will buy even smaller volumes will get good prices?

I smell blood here.

Good prices means better than consumer retail prices which is what you're used to paying for DIMMs.
 

mrmt

Diamond Member
Aug 18, 2012
3,974
0
76
Good prices means better than consumer retail prices which is what you're used to paying for DIMMs.

The DDR3 DIMM market is far bigger than the market for GDDR5 chips for AMD's niche laptops.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
The point is if you're willing to pay the extra $200 for an i7's CPU performance, you're probably going to be willing to pay another $100 for a standalone GPU rather than use Intel's joke of a GPU. If you want a laptop that can play games, AMD can do that; Intel cannot.

If you're only concerned about CPU performance and not games then why bother with the GT3e when the GT3 is $50 cheaper?

Your statement doesn't match the sales numbers. Ever thought about why?
 

lagokc

Senior member
Mar 27, 2013
808
1
41
Oh yes, the classic excuse for why everyone doesn't buy AMD?

Buying Intel isn't stupid, paying $50 extra for a little eDRAM because you want to play games on an Intel integrated GPU when you could have an actual GPU for the same money is.
 

Sweepr

Diamond Member
May 12, 2006
5,148
1,143
136
Buying Intel isn't stupid, paying $50 extra for a little eDRAM because you want to play games on an Intel integrated GPU when you could have an actual GPU for the same money is.

You could say something similar about AMD's IGP+GDDR5 vs. a reasonable discrete GPU, though. There are IB-based laptops with a GT 730M for less than US$600 now. I think the point is bringing higher graphical performance to models that don't have room for a dGPU. Intel should release a 37W TDP 2-4C Haswell with GT3e for 13.3'' notebooks like the Retina MBP.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Buying Intel isn't stupid, paying $50 extra for a little eDRAM because you want to play games on an Intel integrated GPU when you could have an actual GPU for the same money is.

Careful there, you are poking holes in AMD's entire "the future is fusion" silliness ;)
 

beginner99

Diamond Member
Jun 2, 2009
5,318
1,763
136
So you're really looking at:
eDRAM: $50 for eDRAM + $30 for 4GB memory
GDDR5: $55 for 4GB memory

Also, you're completely ignoring the fact that the i7 isn't going to be a little more expensive than the A10; the i7 is going to be MASSIVELY more expensive than the top A10. If Intel were going to use the GT3e in their i3 CPUs it might make sense, but an integrated GPU in a hugely expensive i7 is absurd.

Well, 4 GB of non-expandable main memory shared with the GPU is IMHO pretty limiting. It should rather be 8 GB, like on the PS4... and then the price doubles.
 

R0H1T

Platinum Member
Jan 12, 2013
2,583
164
106
Well, 4 GB of non-expandable main memory shared with the GPU is IMHO pretty limiting. It should rather be 8 GB, like on the PS4... and then the price doubles.
Well, the GDDR5 could be used for the GPU only, like the sideport memory in the case of the 890GX chipset, & there is absolutely no indication that AMD is going down that road, because they'll probably be using DDR4 in the not-too-distant future!
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
Well the GDDR5 could be used for GPU only, like sideport memory incase of 890GX chipset, & there is absolutely no indication that AMD is going down that road cause they'll probably be using DDR4 in the not too distant future !

DDR4 won't double bandwidth right away. We'll see DDR4-2133 modules first and higher speeds later.

Anyway, the GT3e target for Haswell is perfect for smaller quad-core machines like the Asus UX51VZ, Clevo's 11-inch, and of course the MacBooks. Right now those are significant engineering efforts, but Haswell might allow them to become more mainstream.
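The claim that DDR4 won't double bandwidth right away can be sanity-checked with peak-bandwidth arithmetic (a rough Python sketch; DDR4-2133 is the speed named above, while the dual-channel, 64-bit-per-channel layout is a standard assumption):

```python
# Peak theoretical bandwidth = transfer rate * bus width * channel count.
def peak_bandwidth_gbs(mt_per_s, bus_width_bits=64, channels=2):
    """Peak bandwidth in GB/s for a DDR-style memory configuration."""
    return mt_per_s * 1e6 * (bus_width_bits / 8) * channels / 1e9

ddr3 = peak_bandwidth_gbs(1600)  # dual-channel DDR3-1600: 25.6 GB/s
ddr4 = peak_bandwidth_gbs(2133)  # first-wave DDR4-2133: ~34.1 GB/s
print(f"DDR3-1600: {ddr3:.1f} GB/s, DDR4-2133: {ddr4:.1f} GB/s "
      f"(+{ddr4 / ddr3 - 1:.0%})")
```

At launch speeds the jump is roughly a third, not a doubling, which is the poster's point.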
 

R0H1T

Platinum Member
Jan 12, 2013
2,583
164
106
DDR4 won't double bandwidth right away. We'll see DDR4-2133 modules first and higher speeds later.

Anyway, the GT3e target for Haswell is perfect for smaller quad-core machines like the Asus UX51VZ, Clevo's 11-inch, and of course the MacBooks. Right now those are significant engineering efforts, but Haswell might allow them to become more mainstream.
I never implied such a thing. I merely stated that GDDR5 would be used for the GPU only, starting with Kaveri; however, depending on how the platform (& DDR4) evolves, they may dump the dual memory structure, which the early HSA products could/would be based upon.

This is all mere speculation of course, because for all we know there might not be GDDR5 in Kaveri!
 

lagokc

Senior member
Mar 27, 2013
808
1
41
I never implied such a thing. I merely stated that GDDR5 would be used for the GPU only, starting with Kaveri; however, depending on how the platform (& DDR4) evolves, they may dump the dual memory structure, which the early HSA products could/would be based upon.

This is all mere speculation of course, because for all we know there might not be GDDR5 in Kaveri!

I don't think they have enough pins to do a separate GDDR5 memory bus in addition to a DDR3 memory bus. If they did, they may as well do a quad-channel DDR3 controller.
 

tweakboy

Diamond Member
Jan 3, 2010
9,517
2
81
www.hammiestudios.com
Get a video card; an iGPU is pointless for so many! Then there are the cheapos with a built-in iGPU, and it's still slow, Haswell or Howard, whichever way you look at it.
 

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
It's typically a lot more efficient than DDR3 thanks to running at lower voltages.

GDDR5 is specified at 1.5V and 1.35V just like DDR3 (the latter is often called DDR3L and is becoming more common even in laptops). You can also find both overvolted in some parts.

I don't think there's anything to support it using more W/bit/s (power for a given bandwidth); the problem is if it's providing twice the bandwidth all the time. You end up with twice the data per pin, and it uses more power for bandwidth you often won't need. This will generally only apply when the memory is in use, but it's still an impact.

As far as I'm aware, normal PCs with either DDR3 or GDDR5 system memory are not going to employ dynamic frequency scaling, much less dynamic voltage. Someone please correct me if I'm wrong about this, but I don't think this is something you will find in the memory controller.
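The W/bit/s argument above can be illustrated with a toy calculation: if the energy cost per bit transferred is the same, total memory power scales with the bandwidth actually delivered. The 20 pJ/bit figure below is purely an illustrative assumption, not a measured number:

```python
# If energy per bit is equal, power scales linearly with bandwidth used.
def memory_power_w(bandwidth_gbs, pj_per_bit):
    """Power (W) to sustain a given bandwidth at a given energy per bit."""
    bits_per_s = bandwidth_gbs * 1e9 * 8
    return bits_per_s * pj_per_bit * 1e-12

PJ_PER_BIT = 20.0  # assumed identical for DDR3 and GDDR5 at matched voltage
print(f"DDR3  at 25.6 GB/s: {memory_power_w(25.6, PJ_PER_BIT):.1f} W")
print(f"GDDR5 at 51.2 GB/s: {memory_power_w(51.2, PJ_PER_BIT):.1f} W")
```

Twice the pin rate means twice the bits moved and roughly twice the power whenever the extra bandwidth is actually used, even with identical efficiency per bit.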

Idontcare said:
The only question you have to ask yourself is "why wasn't this the plan for Llano?"

I give AMD credit for knowing enough about what they do as to assume that they knew in advance that Llano, and its successors, would be critically dependent on ram bandwidth (and limited as such by DDR3).

Not sure if that was supposed to be a rhetorical question, but Kaveri will need the bandwidth way more than Llano, because its compute unit, TMU, etc. counts increase far more than the system bandwidth increase from using faster DDR3. Llano was bandwidth-limited a lot (most? all??) of the time, but if using faster memory only gets you 10% more performance there and 100% more performance here, then it becomes a very different scenario...

As I recall the actual performance of Rambus, especially on northwood P4 systems, was actually really good, and if it wasn't for the stupid licensing issues and price I think it would have worked out fine.

It looked good compared to SDR SDRAM which was the first alternative offered for lower end platforms (Celerons) but the real comparison was with DDR, when those chipsets inevitably came out.

RDRAM had better bandwidth but the latency was worse, and as usual for general CPU problems latency was more sensitive. This was especially true for Pentium 4 with its high clock speeds and typically low IPC. So RDRAM looked great on synthetic memory bandwidth tests but for real world applications wasn't nearly as useful and sometimes even worse.

Here's a good comparison:
http://techreport.com/review/3231/pentium-4-ddr-chipsets-compared

You can see the top positions in the benchmarks are usually taken by SiS's chipset, although the differences between the DDR chipsets and the i850, and often even the i845, are usually not that big. You can see in hindsight that RDRAM wouldn't have been a good decision even if it had been cheaper. It's true that the i845D wasn't really that impressive, but my guess is that Intel was scrambling to get it to market once they realized i850 + RDRAM was a bust and SiS and VIA were eating their lunch with cheaper and better alternatives. Had Intel stuck with DDR from the start, they would have had time to do a better chipset. Not sure what DDR availability was for the Pentium 4's launch; I just know that AMD's DDR-supporting 760 chipset launched the same month, so it should have been realistic.

Intel already got burned by i820 on Pentium 3, and with Pentium 4 they had to put in a big financial stake subsidizing it just to avoid subjecting people to the big retail price Rambus wanted (they had bundles including memory with the CPU).
 

Chiropteran

Diamond Member
Nov 14, 2003
9,811
110
106
RDRAM had better bandwidth but the latency was worse, and as usual for general CPU problems latency was more sensitive. This was especially true for Pentium 4 with its high clock speeds and typically low IPC. So RDRAM looked great on synthetic memory bandwidth tests but for real world applications wasn't nearly as useful and sometimes even worse.

Here's a good comparison:
http://techreport.com/review/3231/pentium-4-ddr-chipsets-compared

You can see the top positions in the benchmarks are usually taken by SiS's chipset, although the differences between the DDR chipsets and the i850, and often even the i845, are usually not that big. You can see in hindsight that RDRAM wouldn't have been a good decision even if it had been cheaper. It's true that the i845D wasn't really that impressive, but my guess is that Intel was scrambling to get it to market once they realized i850 + RDRAM was a bust and SiS and VIA were eating their lunch with cheaper and better alternatives. Had Intel stuck with DDR from the start, they would have had time to do a better chipset. Not sure what DDR availability was for the Pentium 4's launch; I just know that AMD's DDR-supporting 760 chipset launched the same month, so it should have been realistic.

[image: book1_16297_image001.gif — benchmark chart]


This is what I was thinking of. It seemed like, at the time, Rambus + Northwood was significantly faster than SDR + Northwood. Remember, Rambus was around before DDR became a thing. If the price and licensing issues weren't so absurd, I think it would have caught on; you don't need DDR with performance like that.


Or here is a comparison with DDR; the RAMBUS system is still faster:

[image: quake3_normal.gif — Quake 3 benchmark chart]
 

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
This is what I was thinking of. It seemed like at the time the rambus + northwood was significantly faster than sdr + northwood. Remember rambus was around before DDR became a thing. If the price and licensing issues weren't so absurd I think it would have caught on, don't need DDR with performance like that.

RDRAM may have been available before DDR, but it doesn't matter: AMD released DDR-compatible hardware at the same time the Pentium 4 was released, so launching the Pentium 4 with a DDR chipset instead of an RDRAM one should have been feasible. That makes the whole SDRAM comparison moot. There would still have been SDRAM chipsets, but intended for lower price segments; and in this case the SDRAM chipset in question may have been lower quality than it could have been, or even intentionally crippled.

A small advantage in Quake 3 when it's already getting >200 FPS doesn't matter, and that advantage shrinks to close to nothing at higher resolutions. Note that your comparison was also limited to the i845D, which was often behind the already-released SiS and even VIA chipsets. Had Intel focused all their energy on a quality DDR chipset from the start, the story could have been a bit different.

You'll always be able to find bandwidth bound situations like this, but they're pretty few and far between and definitely not worth using a proprietary memory technology which would have always been more expensive even if Rambus's pricing got more reasonable.

Intel hedged a lot of bets with RDRAM and probably stuck with it longer than they should have because of various obligations.
 

cytg111

Lifer
Mar 17, 2008
26,234
15,646
136
All I hear is "faster"... thus I want it. Not in my main rig though; have to max out the channels on that one.
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
Although 128MB is not much vRAM to brag about, it will be a game changer. Dedicated memory on a 512-bit bus, which gives a memory bandwidth of 64GB/s, is a pretty big deal.

Tom's Hardware did a review of the 4770K, which has the HD 4600, and saw a big increase in game performance over Ivy Bridge. Now, that is the IGP without dedicated vRAM, and with 20 EUs. The CPUs with eDRAM will have 40 EUs (double the amount) plus big bandwidth on the vRAM. That's gonna be interesting to see.
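The 64 GB/s figure in the post is consistent with a 512-bit link at a modest transfer rate; a quick arithmetic check (the 1 GT/s rate is inferred to match the quoted numbers, not an official spec):

```python
# bandwidth = bus width (bytes) * transfer rate
bus_width_bits = 512
transfer_rate_gts = 1.0  # gigatransfers/s, inferred to match the quoted 64 GB/s
bandwidth_gbs = (bus_width_bits / 8) * transfer_rate_gts
print(f"{bandwidth_gbs:.0f} GB/s")  # 64 GB/s
```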
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
Does anyone know what the difference is between eDRAM latency and GDDR5 latency?

I'm guessing much, much lower (an order of magnitude or better). The eDRAM on Haswell is basically being used as an L4 cache, so it's gotta be very low latency compared to system RAM. If you look purely at the distances, the RAM banks are probably more than 10 times as far away from the CPU as the eDRAM. I'd say at least two orders of magnitude, but I'm not sure.
 

podspi

Golden Member
Jan 11, 2011
1,982
102
106
The only question you have to ask yourself is "why wasn't this the plan for Llano?"

By that logic, couldn't we also ask why eDRAM wasn't used in Sandy Bridge or Ivy Bridge? Assuming Kaveri actually uses GDDR5 (I haven't seen that confirmed anywhere), it will probably be the first APU whose GPU needs it to avoid being significantly bottlenecked by memory bandwidth.

Good prices?

AMD is a nobody in the memory market, and the MB makers that will buy even smaller volumes will get good prices?

I smell blood here.

I will be surprised if there end up being any enthusiast boards with GDDR5. If the rumors are true I think we are talking about OEMs like HP and Lenovo buying GDDR5 in bulk.

I'm really interested in seeing how the eDRAM will affect CPU performance. I'm thinking it won't have much of an effect (otherwise why not slap it on the unlocked enthusiast SKUs?), but I'd love to be wrong. It could add a lot of potency to some DTR laptops...
 