"Hybrid memory cube" - on-CPU memory (sort of)

Page 2

rhfish

Junior Member
Feb 1, 2012
1
0
0
www.venraytechnology.com
There are other ways around the Memory Wall.

Regards,
Russell

Self-promotion is not allowed at the AnandTech Forums, sorry.

Administrator Idontcare
 
Last edited by a moderator:

sm625

Diamond Member
May 6, 2011
8,172
137
106
The problem is that the core CPU microarchitecture needs to be completely redesigned to take advantage of the kind of memory bandwidth this enables. You're talking about eliminating the need for an L3 cache, since this would become your L3 cache, possibly even your L2 cache. Throw a GPU in there and it gets even more complicated, since both the CPU and GPU will be sharing this memory. Games would need to be completely redesigned so that more data can be fed straight to the GPU, and so that the CPU and GPU could work on the same data sets without copying or moving the data around.

I wish some tech site would do a "The Life of a Texture" article so we could see just how much time is wasted copying and moving data vs how much actual processing is done.
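In the spirit of that "Life of a Texture" idea, here is a back-of-envelope Python sketch of how long a texture spends being copied between hops on a typical desktop. Every bandwidth figure below is an illustrative assumption, not a measurement:

```python
# Back-of-envelope: time a 4 MiB texture spends in transit.
# All bandwidth figures are illustrative assumptions, not measurements.

TEXTURE_BYTES = 4 * 1024 * 1024          # 4 MiB texture

# Assumed sustained bandwidths, in bytes per second
DISK_TO_RAM  = 100e6        # ~100 MB/s hard-drive read
RAM_COPY     = 10e9         # ~10 GB/s memcpy inside system RAM
PCIE_TO_VRAM = 6e9          # ~6 GB/s effective PCIe x16 transfer

def transfer_time(nbytes, bandwidth):
    """Seconds to move nbytes at the given sustained bandwidth."""
    return nbytes / bandwidth

hops = {
    "disk -> system RAM":  transfer_time(TEXTURE_BYTES, DISK_TO_RAM),
    "driver staging copy": transfer_time(TEXTURE_BYTES, RAM_COPY),
    "PCIe -> VRAM":        transfer_time(TEXTURE_BYTES, PCIE_TO_VRAM),
}

total_copy = sum(hops.values())
for hop, t in hops.items():
    print(f"{hop:22s} {t * 1e3:8.3f} ms")
print(f"{'total copying':22s} {total_copy * 1e3:8.3f} ms")
```

Even with these rough numbers, the slow first hop dominates, and the PCIe copy is the part a shared CPU/GPU memory pool would remove entirely.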
 

bronxzv

Senior member
Jun 13, 2011
460
0
71
It looks like it's a three-dimensional silicon structure in which you take flash memory and DRAM and stack them onto a CPU like a miniature silicon chip sandwich, then connect them with a technology called through-silicon vias (TSVs).

I just received this link: Wanted: 3-D IC standards within six months

It looks like there is a lot of activity lately on 3D-stacked RAM and high-bandwidth TSV links. Page 2 says that "Jedec has two groups working on next-generation Wide I/O standards" (terabit-per-second rates), so there are already standardization efforts going on beyond the mobile applications that use stacked RAM today.

One question is the timeframe in which we will get such solutions on desktop chips. Is it something we will already get next year with Haswell? If yes, what kind of bandwidth can we expect?
 
Last edited:

lol123

Member
May 18, 2011
162
0
0
HMC is a new way of making DRAM chips, not integrated memory. It could (and probably will) be added to CPUs as a separate layer of silicon, but so could the standard DRAM that we have today. The main application for HMC memory will probably be traditional DIMM sticks.

There's also no real reason that HMC would require a complete redesign of CPUs and software. It's just a bigger leap in bandwidth and power consumption than what we have seen in the transition from DDR to DDR2 and from DDR2 to DDR3, for example.
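To put a number on the size of that leap, here's a quick Python comparison using commonly quoted figures. The 128 GB/s value is what Micron claimed for early HMC prototypes; treat all numbers as ballpark assumptions:

```python
# Rough peak-bandwidth comparison: desktop DDR3 vs. an early HMC claim.
# Figures are commonly quoted values, used here as assumptions.

def ddr_peak_gbs(mt_per_s, bus_bits=64):
    """Peak bandwidth of one DDR channel in GB/s (transfers/s * bus width)."""
    return mt_per_s * 1e6 * (bus_bits // 8) / 1e9

ddr3_1600    = ddr_peak_gbs(1600)   # one DDR3-1600 channel: 12.8 GB/s peak
dual_channel = 2 * ddr3_1600        # typical desktop setup: 25.6 GB/s peak

hmc_demo = 128.0                    # GB/s claimed for early HMC prototypes

print(f"DDR3-1600 dual channel:  {dual_channel:.1f} GB/s")
print(f"HMC prototype (claimed): {hmc_demo:.1f} GB/s")
print(f"ratio: {hmc_demo / dual_channel:.1f}x")
```

A 5x jump in one generation is far bigger than the roughly 2x per step we got from DDR to DDR2 to DDR3, which is the point being made above.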
 

cotak13

Member
Nov 10, 2010
129
0
0
lol123 said:
HMC is a new way of making DRAM chips, not integrated memory. It could (and probably will) be added to CPUs as a separate layer of silicon, but so could the standard DRAM that we have today. The main application for HMC memory will probably be traditional DIMM sticks.

There's also no real reason that HMC would require a complete redesign of CPUs and software. It's just a bigger leap in bandwidth and power consumption than what we have seen in the transition from DDR to DDR2 and from DDR2 to DDR3, for example.

Exactly. HMC is more like moving the PHY and controller out of the CPU (or whatever ASIC uses the memory), multiplying them by a huge number, and stacking them under the DRAM chips. What you get is a sudden jump in bandwidth, because its 3D nature allows much more parallelism. It also has a much better energy profile than existing "flat" DRAM. However, don't expect it to grow by leaps and bounds over and over. It's a way forward, but underneath you are still stuck with the old DRAM technology that has been the bottleneck for a while. In many ways you can think of HMC as taking multi-channel DDR RAM to the logical conclusion of how much more parallel you can make it.
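That "multi-channel taken to its conclusion" idea can be sketched as a toy model: treat each HMC vault as an independent channel, so aggregate bandwidth scales with the vault count even though each vault's DRAM is ordinary. The vault count and per-vault rate below are made-up illustrative numbers:

```python
# Toy model: HMC's bandwidth gain comes from parallelism, not faster DRAM.
# Each vault behaves like an independent channel; the per-vault rate here
# is an illustrative assumption, not a spec value.

PER_VAULT_GBS = 10.0   # assumed bandwidth of one vault's DRAM stack

def aggregate_bandwidth(vaults, per_vault=PER_VAULT_GBS):
    """Total bandwidth if all vaults transfer in parallel, in GB/s."""
    return vaults * per_vault

# From dual-channel DDR territory up to an HMC-like vault count
for n in (2, 4, 16):
    print(f"{n:2d} channels/vaults -> {aggregate_bandwidth(n):6.1f} GB/s")
```

The scaling is linear in the channel count, which is exactly why stacking many vaults pays off even while the underlying DRAM cells stay the same.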