
WCCF: AMD Carrizo APU on the 28nm Node Will Have Stacked DRAM On Package

There is also a night-and-day difference in the cache design capabilities of AMD vs Intel. Plus they use a different cache design as well. So it's silly to make a direct comparison.

So are you saying that 32 MB will be sufficient for Intel? I.e. they won't run into the problems described by NTMBK and bunnyfubbles?
 
128MB modules tho.

The problem isn't the modules. It's the integration.

And with 128MB HBMs, it's pretty certain that 1GB, for example, isn't going to happen.

For the type of IGPs we have now, i.e. HD7750 level, 512MB is more than enough.

Even 256MB of fast VRAM would still show a benefit over using slower system RAM, plus I suspect a company could use an HM-like arrangement if more VRAM is needed, but at lower performance levels.
 
128MB modules tho.

The problem isn't the modules. It's the integration.

And with 128MB HBMs, it's pretty certain that 1GB, for example, isn't going to happen.
It sounds more like a typo to me, as the product code otherwise states 8Gb, assuming the scheme is consistent.
 
Did I say that?

I thought all I said was you can't compare apples and oranges.

You did notice the question mark at the end of the sentence in my post, yes?

Regardless, are you saying that 32 MB will be sufficient for the iGPU on Intel mainstream desktop CPUs/APUs or not? Because previously in other threads you've said it should be sufficient, but looking at your latest comments now I'm not so sure what you think anymore. :hmm:
 
You did notice the question mark at the end of the sentence in my post, yes?

Regardless, are you saying that 32 MB will be sufficient for the iGPU on Intel mainstream desktop CPUs/APUs or not? Because previously in other threads you've said it should be sufficient, but looking at your latest comments now I'm not so sure what you think anymore. :hmm:

Intel have said 32MB is enough, but they are using 128MB as an extra safety margin and for the future.

Microsoft also thinks 32MB is enough, hence the 32MB eSRAM in the Xbox One.
 
It sounds more like a typo to me, as the product code otherwise states 8Gb, assuming the scheme is consistent.

Exactly. Hynix has so far only talked about 2Gbit chips in a 4-Hi config for 8Gbit (1 GByte) in all their presentations. Hynix's part decoder also uses 8G for 8Gbit, so that is an 8Gbit DRAM chip.

http://pc.watch.impress.co.jp/docs/column/kaigai/20140428_646233.html

http://www.microarch.org/micro46/files/keynote1.pdf

See page 44 of the keynote presentation.
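The capacity arithmetic behind this is easy to sanity-check. A small illustrative Python sketch (the helper name is mine, not Hynix's; the figures are the 2Gbit dies in a 4-Hi configuration from Hynix's presentations):

```python
# Illustrative sketch of the HBM stack-capacity arithmetic discussed above.
# The function name is hypothetical; only the arithmetic matters.

def stack_capacity_gbyte(die_gbit: float, stack_height: int) -> float:
    """Total stack capacity in GBytes (8 bits per byte)."""
    return die_gbit * stack_height / 8

print(stack_capacity_gbyte(2, 4))  # 2Gbit dies x 4-Hi = 8Gbit = 1.0 GByte
```

So an "8G" in the part code reads naturally as 8Gbit per stack, consistent with the decoder.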
 
Intel have said 32MB is enough, but they are using 128MB as an extra safety margin and for the future.

Microsoft also thinks 32MB is enough, hence the 32MB eSRAM in the Xbox One.

eSRAM and eDRAM aren't the same thing. And Microsoft's usage of eSRAM proved a nightmare for developers, by the way.
 
128MB, not 64MB.

Source AnandTech:

It turns out that for current workloads, Intel didn't see much benefit beyond a 32MB eDRAM; however, it wanted the design to be future-proof. Intel doubled the size to deal with any increases in game complexity, and doubled it again just to be sure.
 
Source AnandTech:

It turns out that for current workloads, Intel didn't see much benefit beyond a 32MB eDRAM; however, it wanted the design to be future-proof. Intel doubled the size to deal with any increases in game complexity, and doubled it again just to be sure.

What happens if you double something, then double it again? It gets four times bigger: that's 128MB.
 
Adding a cache in this way goes against the concept of HSA. Why bother integrating at all? Why not just put the HBM on a gpu, on a proper video card, mounted in a PCIe slot? That is where I think we will first see HBM. It makes no sense to have it on an APU unless you can use it to eliminate the need for a rather costly external DRAM bus. And that means 4GB minimum. The good news is that cost reduction efforts for the PS4 are going to drive HBM into the APU eventually. There is just so much money to be saved by getting that 8GB of GDDR5 onto the package. But that might take 3 more years...
 
Adding a cache in this way goes against the concept of HSA. Why bother integrating at all? Why not just put the HBM on a gpu, on a proper video card, mounted in a PCIe slot? That is where I think we will first see HBM. It makes no sense to have it on an APU unless you can use it to eliminate the need for a rather costly external DRAM bus. And that means 4GB minimum. The good news is that cost reduction efforts for the PS4 are going to drive HBM into the APU eventually. There is just so much money to be saved by getting that 8GB of GDDR5 onto the package. But that might take 3 more years...

How does HBM even affect HSA? hUMA allows both the CPU and GPU to access main memory. Nowhere does it say that the type of memory disables or reduces this feature.
 
Adding a cache in this way goes against the concept of HSA. Why bother integrating at all? Why not just put the HBM on a gpu, on a proper video card, mounted in a PCIe slot? That is where I think we will first see HBM. It makes no sense to have it on an APU unless you can use it to eliminate the need for a rather costly external DRAM bus. And that means 4GB minimum. The good news is that cost reduction efforts for the PS4 are going to drive HBM into the APU eventually. There is just so much money to be saved by getting that 8GB of GDDR5 onto the package. But that might take 3 more years...

PS4 is completely unrelated to HBM.
 
You did notice the question mark at the end of the sentence in my post, yes?

Regardless, are you saying that 32 MB will be sufficient for the iGPU on Intel mainstream desktop CPUs/APUs or not? Because previously in other threads you've said it should be sufficient, but looking at your latest comments now I'm not so sure what you think anymore. :hmm:
As a cache, 32-128MB is fine. Nowhere has anyone argued otherwise.
 
Adding a cache in this way goes against the concept of HSA. Why bother integrating at all? Why not just put the HBM on a gpu, on a proper video card, mounted in a PCIe slot? That is where I think we will first see HBM. It makes no sense to have it on an APU unless you can use it to eliminate the need for a rather costly external DRAM bus. And that means 4GB minimum. The good news is that cost reduction efforts for the PS4 are going to drive HBM into the APU eventually. There is just so much money to be saved by getting that 8GB of GDDR5 onto the package. But that might take 3 more years...

You might think so. But, to the best of my knowledge, HSA might allow sharing of a massive L4 cache on-die (or in-package) just as easily as it shares main memory. Considering that PCI-e devices are already dealing with the latency of the APU's memory controller under HSA, forcing them to go to the die to read from/write to an L4 cache isn't going to be any worse. It will be an improvement for on-die/in-package devices, just as it should improve latency for your PCI-e compute devices. So, it's a win all around where shared memory space is concerned.

If Carrizo is to be smaller than Kaveri, it may be that Carrizo will have fewer shaders than the 7850K. Maybe 256 or 384 of them?
 
Adding a cache in this way goes against the concept of HSA. Why bother integrating at all? Why not just put the HBM on a gpu, on a proper video card, mounted in a PCIe slot? That is where I think we will first see HBM. It makes no sense to have it on an APU unless you can use it to eliminate the need for a rather costly external DRAM bus. And that means 4GB minimum. The good news is that cost reduction efforts for the PS4 are going to drive HBM into the APU eventually. There is just so much money to be saved by getting that 8GB of GDDR5 onto the package. But that might take 3 more years...

http://www.google.com/patents/US20130346695

This is suggesting the potential for an APU that combines the L3 cache and memory controller, so ~1GB of HBM as a massive pool of cache, and you then still have DRAM to supplement it.

I'd think it would be pretty obvious why AMD would want to do this now instead of waiting until they can get 4+GB. The sooner AMD can put out APUs that challenge midrange dGPUs, the sooner they have a product that Intel really can't compete with in its intended area, all while cutting nVidia out of even having a chance to compete, because there is no longer a need for a dGPU (outside of the high end). It's not unlike how companies like nVidia and VIA lost out on business when the memory controller was integrated into the CPU, effectively destroying the chipset market as we knew it by eliminating the primary purpose of the northbridge.
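One way to see why a large HBM pool in front of DRAM pays off even as "just" a cache is a crude hit-rate-weighted bandwidth blend. This is a sketch with made-up numbers: the 128 GB/s HBM figure, the 25.6 GB/s dual-channel DDR3 figure, and the 90% hit rate are all assumptions for illustration, not measurements.

```python
# Crude model: effective bandwidth as a blend of HBM and DRAM, weighted
# by the fraction of accesses that hit the in-package HBM pool.
def effective_bandwidth(hit_rate: float, hbm_gbps: float, dram_gbps: float) -> float:
    """Hit-rate-weighted average of HBM and DRAM bandwidth, in GB/s."""
    return hit_rate * hbm_gbps + (1.0 - hit_rate) * dram_gbps

# With ~1GB of cache, most of a game's hot working set should hit in HBM:
print(effective_bandwidth(0.9, 128.0, 25.6))  # ~117.8 GB/s vs 25.6 GB/s DRAM-only
```

Even well short of a full 4+GB frame buffer, a high hit rate moves the effective bandwidth most of the way to the HBM figure, which is the whole argument for shipping it early as a cache.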
 