I don't know exactly what the performance profile was, but performance isn't going to drop from 60+ FPS to unplayable for a current IGP with shared memory, even if it went from using the local memory well to not using it at all. I was thinking more along the lines that it wasn't a new hurdle software-wise, but if it was so botched as to be useless in the past, then it might not make a difference.
Of course Intel won't see such a hit with Crystalwell, because main memory is still fairly low latency; I was talking about discrete GPUs. But IMHO that local framebuffer either has to be targeted explicitly or its performance improvement won't be that large, because it will get thrashed, e.g. by textures. And I just don't see game developers going to that extent for such a small target group. And having to depend on Intel's driver team is, well...
Then why do the GDDR5 versions have higher TDP ratings? Here's another blurb from another AT article: "Typically we see GDDR5 cards sport a higher TDP thanks to the memory’s higher power consumption, and this would be further driven up by the fact that the GTX 650 is clocked higher than the GT 640."
http://www.anandtech.com/show/6289/nvidia-launches-geforce-gtx-650-gk107-with-gddr5
I think AT is wrong on this one. There aren't many cards with similar memory size and core clock, but those that are comparable (like the DDR3 GT 550M vs. the GDDR5 GT 555M) sport the same TDP. And actual power consumption tests (if you can find some) don't favor the DDR3 cards either, especially if you take the performance difference into account.
Isn't it pretty typical for the same HSF to cover both the GPU and the RAM? Even if the GDDR5 really is bare, that doesn't mean there's no power consumption issue; after all, that power tends to be distributed over a lot of chips...
There are also cards with uncovered RAM chips on their backs, and I've torn apart some cards, like an HIS icecool 5770, where the heatsink seemed to cover the chips but there was no thermal pad on them and a gap of about 1 mm in between. The card still ran fine, even overclocked.
Also, DDR3 distributes its heat over a similar number of chips (I think DDR3 is one step ahead in package density, but that's it).
A >2x cost for RAM is pretty huge. Maybe it's tolerable if you only care about 4GB, but I wouldn't even consider a laptop with only 4GB of RAM. And it's not like there aren't plenty of Trinity laptops with 8GB, so that market does exist.
An MCM with on-package RAM isn't the only alternative to what AMD is doing. They could have had separate DDR3 and GDDR5 buses. That would have increased the cost and complexity of the APU but decreased the cost and excess power consumption of the RAM. I don't really know whether it would have been worth it or not.
Well, you have to compare the cost to the alternatives. The price difference between 'low speed' 3.6 GHz GDDR5 and 2.13 GHz DDR3 can't be very high, given that AMD and Nvidia have even started switching entry-level cards from 1.8 GHz DDR3 to 4 GHz GDDR5.
You could also aim for a single-channel GDDR5 configuration that would still be faster than dual-channel DDR3. And you could delay the need to introduce DDR4 until its price comes down.
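Just as a rough back-of-the-envelope sketch of that claim (the data rates and bus widths below are assumed, illustrative figures, not taken from any specific APU design): peak bandwidth is simply data rate times bus width, so one 64-bit channel of GTX 650-class 5 GT/s GDDR5 already edges out 128-bit dual-channel DDR3-2133.

```python
def peak_bw_gbs(data_rate_gts, bus_width_bits):
    """Peak theoretical bandwidth in GB/s: transfers per second x bytes per transfer."""
    return data_rate_gts * bus_width_bits / 8

# Assumed, illustrative numbers (not from any specific product):
gddr5_single_channel = peak_bw_gbs(5.0, 64)     # 5 GT/s GDDR5 on one 64-bit channel  -> 40.0 GB/s
ddr3_dual_channel    = peak_bw_gbs(2.133, 128)  # DDR3-2133 on two 64-bit channels    -> ~34.1 GB/s

print(f"single-channel GDDR5: {gddr5_single_channel:.1f} GB/s")
print(f"dual-channel DDR3:    {ddr3_dual_channel:.1f} GB/s")
```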
GDDR5 really isn't a bad option as an intermediate step between bandwidth-starved DDR3 solutions and high-capacity MCMs (which will probably take over, but not within the next 2-3 years due to complexity and cost). It's low complexity for the amount of bandwidth it provides, it's well understood, abundant and fairly flexible.