GDDR5 RAM vs On-Package Cache RAM to improve IGP performance?


ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
I've read that it's part of the dev kit for PS4 (it was available on the PS3, IIRC), but support for OpenCL is already there for AMD's APUs, so I expect that some developers will choose to use it. Why have an APU and a GFX card unless you have some plan to use the iGPU on the APU?

The entire ecosystem needs to support it. OpenCL has about as much to do with the hardware as Java and .NET do.
 

Piroko

Senior member
Jan 10, 2013
905
79
91
I don't know exactly what the performance profile was, but performance isn't going to drop from 60+ FPS to unplayable for a current IGP with shared memory, even if it were to go from using the local memory well to not using it at all. I was thinking more along the lines that it wasn't a new hurdle software-wise, but if it was so botched as to be useless in the past then it might not make a difference.
Of course Intel will not see such a hit with Crystalwell because main memory is still fairly low latency; I was talking about discrete GPUs. But IMHO that local framebuffer either has to be targeted explicitly or its performance improvement will not be that large, because it will be thrashed, e.g. by textures. And I just don't see game developers going to that extent for such a small target group. And depending on Intel's driver team is, well...

Then why do GDDR5 versions have higher TDP ratings? Here's another blurb from another AT article: "Typically we see GDDR5 cards sport a higher TDP thanks to the memory’s higher power consumption, and this would be further driven up by the fact that the GTX 650 is clocked higher than the GT 640."
http://www.anandtech.com/show/6289/nvidia-launches-geforce-gtx-650-gk107-with-gddr5
I think AT is wrong on this one. There aren't many cards with similar memory size and core clock, but those that are comparable (like the DDR3 GT 550M vs. the GDDR5 GT 555M) sport the same TDP. And actual power consumption tests (if you can find any) don't favor the DDR3 cards either, especially if you take the performance difference into account.

Isn't it pretty typical for the same HSF to cover both the GPU and the RAM? Even if the GDDR5 really is bare, that doesn't mean there's no power consumption issue; after all, that power tends to be distributed over a lot of chips...
There are also cards with uncovered RAM chips on their back, and I've torn apart some cards, like an HIS icecool 5770, whose heatsink seemed to cover the chips but had no thermal pad on them and a gap of about 1 mm above them. The card was still running fine, even overclocked.
Also, DDR3 distributes its heat over a similar number of chips (I think DDR3 is one step ahead in package density, but that's it).

A more than 2x cost for RAM is pretty huge. Maybe it'd be acceptable if you only cared about 4 GB, but I wouldn't even consider a laptop with only 4 GB of RAM. And it's not like there aren't plenty of Trinity laptops with 8 GB, so that market does exist.

MCM with on-package RAM isn't the only alternative to what AMD is doing. They could have had separate DDR3 and GDDR5 buses. That'd have increased the cost and complexity of the APU but decreased the cost and excess power consumption of the RAM. I don't really know if it would have been worth it or not.
Well, you have to compare that cost to the alternatives. The price difference between 'low-speed' 3.6 GHz GDDR5 and 2.13 GHz DDR3 can't be very high, since AMD and Nvidia have even started to switch entry-level cards from 1.8 GHz DDR3 to 4 GHz GDDR5.
You could also aim for a single-channel GDDR5 configuration that would still be faster than dual-channel DDR3, as the sketch below illustrates. And you could delay the need to introduce DDR4 until its price comes down.
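As a rough back-of-the-envelope check on that bandwidth claim (a sketch with assumed bus widths and data rates, not any announced configuration):

```python
# Peak theoretical bandwidth = (bus width in bits / 8) * transfer rate.
def peak_bandwidth_gbs(bus_bits: int, rate_gtps: float) -> float:
    """Peak bandwidth in GB/s for a bus_bits-wide bus at rate_gtps GT/s."""
    return bus_bits / 8 * rate_gtps

# Dual-channel DDR3-2133: two 64-bit channels at 2.133 GT/s effective.
print(peak_bandwidth_gbs(128, 2.133))  # ~34.1 GB/s

# Single 64-bit GDDR5 channel at an assumed 5.5 GT/s data rate.
print(peak_bandwidth_gbs(64, 5.5))     # 44.0 GB/s
```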

GDDR5 really isn't a bad option as an intermediate step between bandwidth-starved DDR3 solutions and high-capacity MCMs (which will probably take over, but not within the next 2-3 years due to complexity and cost). It's low complexity for the amount of bandwidth it provides, it's well understood, abundant and fairly flexible.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
I think AT is wrong on this one. There aren't many cards with similar memory size and core clock, but those that are comparable (like the DDR3 GT 550M vs. the GDDR5 GT 555M) sport the same TDP. And actual power consumption tests (if you can find any) don't favor the DDR3 cards either, especially if you take the performance difference into account.

[Attached image: specs02.jpg]


That's a pretty big difference.
 

Piroko

Senior member
Jan 10, 2013
905
79
91
The 5550 and 5570 have the same TDP no matter the memory configuration, and TDP for other cards is undisclosed or varies with target market and form factor. Also, tests of the GDDR5 6570 put its actual consumption at 42 W or lower. Besides, 12 W would be near impossible to dissipate from four BGA RAM chips without heatsinks, and in the images they shot you can't see any remains of thermal grease or thermal pads on the RAM chips. That difference is just market segmentation, not a technical reason.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
The only difference between those two cards is DDR3 vs. GDDR5, and it's 1800 MHz DDR3 vs. 4 GHz GDDR5. Yet the board TDP is 36% higher under maximum load and 10% higher at idle. Assume the DDR3 draws 4 W and the GDDR5 draws 20 W under peak load.

GDDR5 is simply a power hog. Speed over power consumption.
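Spelled out, the arithmetic behind that 4 W / 20 W split looks like this (a sketch using the 60 W / 44 W board-power pair cited later in the thread; the 4 W DDR3 baseline is an assumption, not a measurement):

```python
# Board power figures from AMD's spec sheets, as quoted in this thread.
tdp_gddr5 = 60.0  # W, GDDR5 board
tdp_ddr3 = 44.0   # W, DDR3 board

print(tdp_gddr5 / tdp_ddr3 - 1)  # ~0.36 -> the "36% higher" under load

# Attribute the entire delta to memory and assume 4 W for the DDR3.
delta = tdp_gddr5 - tdp_ddr3     # 16 W
ddr3_mem = 4.0                   # W, assumed
gddr5_mem = ddr3_mem + delta     # 20 W implied for the GDDR5
print(delta, gddr5_mem)          # 16.0 20.0
```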
 

Piroko

Senior member
Jan 10, 2013
905
79
91
The only difference between those two cards is DDR3 vs. GDDR5, and it's 1800 MHz DDR3 vs. 4 GHz GDDR5. Yet the board TDP is 36% higher under maximum load and 10% higher at idle. Assume the DDR3 draws 4 W and the GDDR5 draws 20 W under peak load.

GDDR5 is simply a power hog. Speed over power consumption.
http://www.anandtech.com/show/2906/14
Explain this. And note that the GDDR5 version is doing between 20 and 50% more work (higher FPS).
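To make the perf-per-watt point concrete, here's a toy comparison (the numbers are made up purely to illustrate the metric; see the linked page for actual measurements):

```python
# Hypothetical cards: the GDDR5 one draws more power but does more work.
ddr3_fps, ddr3_w = 30.0, 44.0    # assumed DDR3 card
gddr5_fps, gddr5_w = 40.0, 54.0  # assumed GDDR5 card, ~33% faster

print(ddr3_fps / ddr3_w)    # ~0.68 FPS per watt
print(gddr5_fps / gddr5_w)  # ~0.74 FPS per watt -> GDDR5 comes out ahead
```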
 

mikk

Diamond Member
May 15, 2012
4,302
2,383
136
That's cool. CUDA will likely dominate the high-end professional markets for some time, but I think OpenCL will be the tool of choice on the desktop. If Intel is still on track to introduce a new iGPU architecture for Broadwell, I imagine they will likely provide support for OpenCL 1.2. Good news all around.


Intel already supports OpenCL 1.2 on GPU and CPU with Ivy Bridge.
 

krumme

Diamond Member
Oct 9, 2009
5,956
1,596
136
Bingo. Intel chose the option they did because they control their fabs and can make it happen. AMD no longer has its fabs, so it must go for the more "commodity" option, which is buying GDDR5 from a source such as Samsung.

AMD was lucky this time, because given the option they would surely have spent valuable and scarce resources developing something that is far better handled outside their business.

GDDR5 is a no-brainer. It's risk-free, scalable, doesn't burden the organization, takes no time to develop, and builds on existing IP and competences.

The added variable cost of the solution is easily offset by the above. Plus, getting the entire solution soldered on is the only way forward, as it leaves the OEM free to spend time on something more interesting than mundane technical decisions.
 

NTMBK

Lifer
Nov 14, 2011
10,448
5,831
136
Sorry 'bout that, I didn't phrase that very well. Technically it's perfectly plausible; it's already been done on multiple discrete cards, after all. I was just expressing my uncertainty over whether that single 'report' from BSN of Kaveri supporting both DDR3 and GDDR5 is accurate, in part due to the increased complexity of such designs. The differences between DDR3 and GDDR5 are more than enough that they'd have to give their memory controller a pretty hefty overhaul; it's not as simple as just reusing the discrete GPU design. They could very well feel that they need to do so in order to remain competitive, though.

They already have a combined x86-CPU-and-GCN-GPU GDDR5 memory controller: it's used in the PS4.
 

lagokc

Senior member
Mar 27, 2013
808
1
41
So by Black Friday 2013, AMD should finally have ultrabooks capable of running games at reasonable speed without needing to spend space on a standalone GPU?

Sure, there are plenty of drawbacks to the solution and it isn't particularly elegant, but for that particular market, where people don't expect to be able to upgrade the ultrabook anyway, this might actually work well.
 

Piroko

Senior member
Jan 10, 2013
905
79
91
Besides the obvious memory amount and PCB difference?
Ayup. According to your earlier calculation based on TDP, that card should have no chance at all of running within 10 watts of its DDR3 sibling, especially considering it's doing a lot more work while running hotter due to a different fan profile. Want to pin all of that on 512 MB of GDDR5?

Also, by your own calculation a GeForce Titan would use 20 W times 6 (6 GB) times 1.5 for its higher clocks (6 GHz vs. 4 GHz), for a rough estimate of 180 W on RAM alone, against a TDP of 250 W.
I'm not even pulling up the GTX 650 vs. GT 640 comparison posted earlier in this thread. GDDR5 isn't a miracle of power efficiency, but your estimate is just off base.
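Writing that extrapolation out (it simply scales the disputed 20 W/GB figure, which is the point of the reductio):

```python
# Scale the claimed 20 W per GB (at 4 GT/s) up to a GeForce Titan.
w_per_gb = 20.0        # figure claimed earlier in the thread
vram_gb = 6.0          # Titan's frame buffer
clock_mult = 6.0 / 4.0 # 6 GT/s Titan memory vs. the 4 GT/s baseline

print(w_per_gb * vram_gb * clock_mult)  # 180.0 W on RAM alone
```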
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
Ayup. According to your earlier calculation based on TDP, that card should have no chance at all of running within 10 watts of its DDR3 sibling, especially considering it's doing a lot more work while running hotter due to a different fan profile. Want to pin all of that on 512 MB of GDDR5?

Also, by your own calculation a GeForce Titan would use 20 W times 6 (6 GB) times 1.5 for its higher clocks (6 GHz vs. 4 GHz), for a rough estimate of 180 W on RAM alone, against a TDP of 250 W.
I'm not even pulling up the GTX 650 vs. GT 640 comparison posted earlier in this thread. GDDR5 isn't a miracle of power efficiency, but your estimate is just off base.

My calculation? You mean AMD's, in this case.

When AMD sets the board power to 60 W on a GDDR5 card and 44 W on an identical DDR3 card, what uses those 16 W? Or is AMD wrong?
 

lagokc

Senior member
Mar 27, 2013
808
1
41
My calculation? You mean AMD's, in this case.

When AMD sets the board power to 60 W on a GDDR5 card and 44 W on an identical DDR3 card, what uses those 16 W? Or is AMD wrong?

Think about it. A GPU sitting around waiting on memory before it can get to work consumes a lot less power than a GPU working hard because it's actually being fed by its memory.
 

Piroko

Senior member
Jan 10, 2013
905
79
91
Since when is TDP a mandatory power consumption figure? They might have kept the TDP intentionally high to allow for a 2 GB card or a higher-clocked BIOS (see the 7950).

I've given enough evidence that GDDR5 does not use a ridiculous amount of power like 20 W per GB of 4 GHz chips. A Mobility Radeon 5750 has a 25 W TDP despite 1 GB of 3200 MHz GDDR5. Some of the DDR3 variants are actually able to shave 2-3 W off the TDP by using DDR3, but at the same time they perform quite a bit worse. It's hard to attribute the lower consumption to DDR3 when overall utilization also drops.
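The same sanity check applied to the mobile part (again just scaling the disputed 20 W/GB figure):

```python
# If GDDR5 really drew ~20 W/GB at 4 GT/s, scale to 1 GB at 3.2 GT/s.
w_per_gb = 20.0
mem_gb = 1.0
clock_mult = 3.2 / 4.0

print(w_per_gb * mem_gb * clock_mult)  # 16.0 W -- of a 25 W card TDP
```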
 

Ajay

Lifer
Jan 8, 2001
16,094
8,114
136
The entire ecosystem needs to support it. OpenCL has about as much to do with the hardware as Java and .NET do.

Not quite sure what you are saying here. The dev kit is the ecosystem for the PS4. And OpenCL is much closer to the metal than platforms that run on a VM are.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
Not quite sure what you are saying here. The dev kit is the ecosystem for the PS4. And OpenCL is much closer to the metal than platforms that run on a VM are.

OpenCL gets translated on the fly by the driver for the hardware, just like Java and .NET, or DirectX, DirectCompute and so on. On nVidia, OpenCL also sits on top of the CUDA stack.

It's certainly not closer to bare metal.

[Attached image: AMD_IL_APP.jpg]
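A minimal sketch of that point using pyopencl (assuming pyopencl and a working OpenCL runtime are installed): the kernel ships as source text, and the vendor's driver compiles it at run time, much like a JIT.

```python
import pyopencl as cl

KERNEL_SRC = """
__kernel void scale(__global float *buf, const float factor) {
    buf[get_global_id(0)] *= factor;
}
"""

# Pick whatever OpenCL device the installed runtime exposes.
ctx = cl.create_some_context()

# This is the "translated on the fly" step: the vendor driver compiles
# the source for the actual hardware behind the context (e.g. AMD IL /
# GCN ISA, or PTX on top of NVIDIA's CUDA stack) at run time.
program = cl.Program(ctx, KERNEL_SRC).build()
```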
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
Regarding this specific article, what is it you think is not true?

A BSN article isn't worth my time to even click on. That's how much of a joke they are. But if it assists your confirmation bias then have at it.
 

Fjodor2001

Diamond Member
Feb 6, 2010
4,224
589
126
A BSN article isn't worth my time to even click on. That's how much of a joke they are. But if it assists your confirmation bias then have at it.

Stick to the facts instead. Please tell us what part of the info I quoted you think is incorrect. Because that's what matters.
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
Stick to the facts instead. Please tell us what of the info I quoted you think was incorrect. Because that's what matters.

It just doesn't make financial sense to use GDDR5 as system RAM. Look in the memory and storage forum: people are freaking out about standard DDR3 memory prices rising from their historic lows. People aren't going to pay several times current prices to put memory in a budget system.

Do you remember when the P4 required RDRAM? That didn't work out so well for Intel. If Intel can't push expensive memory, AMD certainly can't.

And how is AMD going to sell this to their customers? "Hey Dell, HP, Lenovo, build systems using our low-priced CPU. Ignore the fact that the price difference, plus more, is eaten up by the cost of the memory. Just pass it on to your customers; they'll be fine with it because it has faster graphics than Intel."

If AMD tries to go down this route they will just hasten their decline.