WCCF: AMD Carrizo APU on the 28nm Node Will Have Stacked DRAM On Package

Page 3

Fjodor2001

Diamond Member
Feb 6, 2010
3,989
440
126
There is also a day-and-night difference in the cache design capabilities of AMD vs Intel. Plus they use a different cache design as well. So it's silly to make a direct comparison.

So are you saying that 32 MB will be sufficient for Intel? I.e. they won't run into the problems described by NTMBK and bunnyfubbles?
 

USER8000

Golden Member
Jun 23, 2012
1,542
780
136
128MB modules, though.

The problem isn't the modules. It's the integration.

And with 128MB HBMs, it's pretty clear that 1GB, for example, isn't going to happen.

For the type of iGPUs we have now, i.e. HD7750 level, 512MB is more than enough.

Even 256MB of fast VRAM would still show a benefit over using slower system RAM. Plus I suspect a company could use an HM-like arrangement if more VRAM is needed, but at lower performance levels.
 

bunnyfubbles

Lifer
Sep 3, 2001
12,248
3
0
Start adding in things like Z buffers and G buffers, and see how far that 32MB gets you...

Not very far, which is why stacked memory is going to be much bigger and more than just a cache or 2D buffer.
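The point about Z buffers and G buffers can be put in rough numbers. The sketch below uses a hypothetical but typical deferred-shading G-buffer layout (albedo, normals, material, depth) — the specific formats are an assumption for illustration, not from any particular engine — and shows that even at 1080p the render targets alone already overflow a 32MB cache:

```python
# Back-of-the-envelope estimate of render-target memory at 1080p.
# The G-buffer layout is a hypothetical, typical deferred-shading
# setup; the formats are illustrative assumptions.

WIDTH, HEIGHT = 1920, 1080

# (name, bytes per pixel)
render_targets = [
    ("albedo (RGBA8)",        4),
    ("normals (RGBA16F)",     8),
    ("material (RGBA8)",      4),
    ("depth/stencil (D24S8)", 4),
]

def gbuffer_mb(width, height, targets):
    """Total footprint in MB of all render targets at the given resolution."""
    total_bytes = sum(bpp * width * height for _, bpp in targets)
    return total_bytes / (1024 * 1024)

mb = gbuffer_mb(WIDTH, HEIGHT, render_targets)
print(f"G-buffer at 1080p: {mb:.1f} MB")  # prints "G-buffer at 1080p: 39.6 MB"
```

And that is before textures, shadow maps, or any post-processing buffers — hence the argument that a 32MB pool only works as a cache, not as the whole graphics memory.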
 
Last edited:

pTmdfx

Member
Feb 23, 2014
85
0
0
128MB modules, though.

The problem isn't the modules. It's the integration.

And with 128MB HBMs, it's pretty clear that 1GB, for example, isn't going to happen.
It sounds more like a typo to me, as the product code otherwise states 8Gb, assuming the scheme is consistent.
 

Fjodor2001

Diamond Member
Feb 6, 2010
3,989
440
126
Did I say that?

I thought all I said was you can't compare apples and oranges.

You did notice the question mark at the end of the sentence in my post, yes?

Regardless, are you saying that 32 MB will be sufficient for the iGPU on Intel mainstream desktop CPUs/APUs or not? Because previously in other threads you've said it should be sufficient, but looking at your latest comments now I'm not so sure what you think anymore. :hmm:
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
You did notice the question mark at the end of the sentence in my post, yes?

Regardless, are you saying that 32 MB will be sufficient for the iGPU on Intel mainstream desktop CPUs/APUs or not? Because previously in other threads you've said it should be sufficient, but looking at your latest comments now I'm not so sure what you think anymore. :hmm:

Intel have said 32MB is enough, but they are using 128MB as an extra safety margin and for the future.

Microsoft also thinks 32MB is enough, hence the 32MB eSRAM in the Xbox One.
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
It sounds more like a typo to me, as the product code otherwise states 8Gb, assuming the scheme is consistent.

Exactly. Hynix has so far only talked about 2Gbit chips in a 4-Hi config for 8Gbit (1 GByte) in all their presentations. The Hynix part decoder also uses 8G for 8Gbit, so that is an 8Gbit DRAM chip.

http://pc.watch.impress.co.jp/docs/column/kaigai/20140428_646233.html

http://www.microarch.org/micro46/files/keynote1.pdf

see page 44 of keynote presentation
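The bit/byte arithmetic in the post works out as follows (a trivial sketch; the 2Gbit die and 4-Hi stack figures are the ones quoted from the Hynix material above):

```python
# Capacity of one first-generation HBM stack as described by Hynix:
# four 2Gbit DRAM dies stacked ("4-Hi").

GBIT_PER_DIE = 2   # 2Gbit per DRAM die
STACK_HEIGHT = 4   # "4-Hi" = four dies per stack

total_gbit = GBIT_PER_DIE * STACK_HEIGHT   # 8 Gbit per stack
total_gbyte = total_gbit / 8               # 8 bits per byte -> 1 GByte
print(f"{total_gbit} Gbit per stack = {total_gbyte:.0f} GByte")
```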
 
Last edited:

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
Intel have said 32MB is enough, but they are using 128MB as an extra safety margin and for the future.

Microsoft also thinks 32MB is enough, hence the 32MB eSRAM in the Xbox One.

But Intel doubled to 64MB to be future-proof.
 

PPB

Golden Member
Jul 5, 2013
1,118
168
106
Intel have said 32MB is enough, but they are using 128MB as an extra safety margin and for the future.

Microsoft also thinks 32MB is enough, hence the 32MB eSRAM in the Xbox One.

eSRAM and eDRAM aren't the same. And Microsoft's usage of eSRAM proved a nightmare for developers, by the way.
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
128MB, not 64MB.

Source AnandTech:

It turns out that for current workloads, Intel didn't see much benefit beyond a 32MB eDRAM; however, it wanted the design to be future proof. Intel doubled the size to deal with any increases in game complexity, and doubled it again just to be sure.
 

NTMBK

Lifer
Nov 14, 2011
10,297
5,289
136
Source AnandTech:

It turns out that for current workloads, Intel didn't see much benefit beyond a 32MB eDRAM; however, it wanted the design to be future proof. Intel doubled the size to deal with any increases in game complexity, and doubled it again just to be sure.

What happens if you double something, then double it again? It gets 4 times bigger... it's 128MB.
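Spelled out as a trivial sketch of the doubling in the quote:

```python
# 32 MB doubled once is 64 MB; doubled again it is 128 MB.
size_mb = 32
for _ in range(2):  # "doubled the size ... and doubled it again"
    size_mb *= 2
print(size_mb)  # prints 128
```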
 

sm625

Diamond Member
May 6, 2011
8,172
137
106
Adding a cache in this way goes against the concept of HSA. Why bother integrating at all? Why not just put the HBM on a gpu, on a proper video card, mounted in a PCIe slot? That is where I think we will first see HBM. It makes no sense to have it on an APU unless you can use it to eliminate the need for a rather costly external DRAM bus. And that means 4GB minimum. The good news is that cost reduction efforts for the PS4 are going to drive HBM into the APU eventually. There is just so much money to be saved by getting that 8GB of GDDR5 onto the package. But that might take 3 more years...
 

monstercameron

Diamond Member
Feb 12, 2013
3,818
1
0
Adding a cache in this way goes against the concept of HSA. Why bother integrating at all? Why not just put the HBM on a gpu, on a proper video card, mounted in a PCIe slot? That is where I think we will first see HBM. It makes no sense to have it on an APU unless you can use it to eliminate the need for a rather costly external DRAM bus. And that means 4GB minimum. The good news is that cost reduction efforts for the PS4 are going to drive HBM into the APU eventually. There is just so much money to be saved by getting that 8GB of GDDR5 onto the package. But that might take 3 more years...

How does HBM even affect HSA? hUMA allows both the CPU and GPU to access main memory. Nowhere does it say that the type of memory disables or reduces this feature.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Adding a cache in this way goes against the concept of HSA. Why bother integrating at all? Why not just put the HBM on a gpu, on a proper video card, mounted in a PCIe slot? That is where I think we will first see HBM. It makes no sense to have it on an APU unless you can use it to eliminate the need for a rather costly external DRAM bus. And that means 4GB minimum. The good news is that cost reduction efforts for the PS4 are going to drive HBM into the APU eventually. There is just so much money to be saved by getting that 8GB of GDDR5 onto the package. But that might take 3 more years...

PS4 is completely unrelated to HBM.
 

Homeles

Platinum Member
Dec 9, 2011
2,580
0
0
You did notice the question mark at the end of the sentence in my post, yes?

Regardless, are you saying that 32 MB will be sufficient for the iGPU on Intel mainstream desktop CPUs/APUs or not? Because previously in other threads you've said it should be sufficient, but looking at your latest comments now I'm not so sure what you think anymore. :hmm:
As a cache, 32-128MB is fine. Nowhere has anyone argued otherwise.
 

DrMrLordX

Lifer
Apr 27, 2000
21,991
11,541
136
Adding a cache in this way goes against the concept of HSA. Why bother integrating at all? Why not just put the HBM on a gpu, on a proper video card, mounted in a PCIe slot? That is where I think we will first see HBM. It makes no sense to have it on an APU unless you can use it to eliminate the need for a rather costly external DRAM bus. And that means 4GB minimum. The good news is that cost reduction efforts for the PS4 are going to drive HBM into the APU eventually. There is just so much money to be saved by getting that 8GB of GDDR5 onto the package. But that might take 3 more years...

You might think so. But, to the best of my knowledge, HSA might allow sharing of a massive L4 cache on-die (or in-package) just as easily as it shares main memory. Considering that PCI-e devices are already dealing with the latency of the APU's memory controller under HSA, forcing them to go to the die to read from/write to an L4 cache isn't going to be any worse. It will be an improvement for on-die/in-package devices, just as it should improve latency for your PCI-e compute devices. So it's a win all around where shared memory space is concerned.

If Carrizo is to be smaller than Kaveri, it may be that Carrizo will have fewer shaders than the 7850K. Maybe 256 or 384 of them?
 

bunnyfubbles

Lifer
Sep 3, 2001
12,248
3
0
Adding a cache in this way goes against the concept of HSA. Why bother integrating at all? Why not just put the HBM on a gpu, on a proper video card, mounted in a PCIe slot? That is where I think we will first see HBM. It makes no sense to have it on an APU unless you can use it to eliminate the need for a rather costly external DRAM bus. And that means 4GB minimum. The good news is that cost reduction efforts for the PS4 are going to drive HBM into the APU eventually. There is just so much money to be saved by getting that 8GB of GDDR5 onto the package. But that might take 3 more years...

http://www.google.com/patents/US20130346695

This is suggesting the potential for an APU that combines the L3 cache and memory controller, so ~1GB of HBM could act as a massive pool of cache while you still have DRAM to supplement it.

I'd think it would be pretty obvious why AMD would want to do this now instead of waiting until they can get 4+GB. The sooner AMD can put out APUs that can challenge midrange dGPUs, the sooner they have a product that Intel really can't compete with in its intended area, all while cutting nVidia out of even having a chance to compete, because there is no longer a need for a dGPU (outside of the high end). It's not unlike how companies like nVidia and VIA lost out on business when the memory controller was integrated into the CPU, effectively destroying the chipset market as we knew it by eliminating the primary purpose of the northbridge.