Intel Readies Adoption of AMD HBM Stacked RAM Tech


Tsavo

Platinum Member
Sep 29, 2009
2,645
37
91
Parallel has failed to show vast speed gains in the past...

Context is key.

http://imgur.com/8uvW5sD
 

III-V

Senior member
Oct 12, 2014
678
1
41
Yeah, it really depends. Memory's gotten way more parallel -- as evidenced by DDR4 SO-DIMMs' 260 pins vs. DDR3's 204, and by stacked memory. So many pins...

...but it's running out of steam. HMC is serial. The "next DDR" is expected to be serial. There's a company called MoSys with a really interesting memory tech called Bandwidth Engine -- also serial, with a tRC of 2.7 ns!
Exactly, that's why everything has moved to serialized interfaces. Clocking parallel interfaces is too slow.
It's not so much that it's too slow; it's that it's more difficult, which leads to slowness. Crosstalk becomes a PITA, and you have to limit wire lengths more and more as you bump up clock speed. But that's semantics, really.
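For a rough feel for the wide-and-slow vs. narrow-and-fast trade-off being argued here, a back-of-envelope Python sketch (the pin counts and per-pin rates are approximate, commonly cited 2015-era figures I've filled in, not numbers from the posts above):

# Peak bandwidth is roughly (data pins) x (per-pin data rate).
# Parallel DDR keeps per-pin rates modest; HBM goes very wide and slow per pin;
# HMC goes narrow with fast serial lanes. Figures are illustrative only.

def peak_gbs(data_pins, gbit_per_pin):
    """Peak bandwidth in GB/s from data-pin count and per-pin rate in Gbit/s."""
    return data_pins * gbit_per_pin / 8

interfaces = {
    "DDR3-1600 (one channel)": (64, 1.6),     # wide-ish bus, slow per pin
    "DDR4-2400 (one channel)": (64, 2.4),
    "HBM gen1 (one stack)":    (1024, 1.0),   # very wide, very slow per pin
    "HMC (4 links, TX lanes)": (64, 15.0),    # few lanes, very fast per lane
}

for name, (pins, rate) in interfaces.items():
    print(f"{name:25s} ~{peak_gbs(pins, rate):6.1f} GB/s "
          f"({pins} pins @ {rate} Gbit/s each)")

The wide parallel bus gets its bandwidth from pin count at low per-pin clocks, while the serial links get theirs from per-lane speed -- which is exactly the crosstalk/wire-length trade-off described above.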
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
Is that your entire rationale?

HMC:
SK Hynix, Micron, Samsung.

HBM:
SK Hynix

Did I miss anything?

Reminds me of the Video Format War:

Studios supporting Blu-ray:
Everyone minus Universal

Studios supporting HD DVD:
Universal and WB

Conclusion: HD DVD is going to win. [Note: yes, the above is an oversimplification, but it makes the same point.]
 

Khato

Golden Member
Jul 15, 2001
1,251
321
136
No matter the outcome... I want to be able to push tiny memory blocks into my mboard... not some huge sticks belonging to an old standard that has shown only questionable performance gains over the years.

Haha, that's certainly an amusing notion. With any of the above options though it's more along the lines of Intel gaining yet another segmenting opportunity, "Oh, you want 32 GB of system memory instead of 16 GB? Okay, here's your i7 since the i5 tops out at 16GB." Because they're pretty much all on-package type memory technologies - I believe HMC is the most tolerant of distance and I doubt that even it would care much for having to go through a socket and motherboard traces.

Of course I doubt that'll happen any time soon, and it may well be the case that Intel will continue down the route of using an L4 cache for graphics bandwidth requirements rather than switch the entire system ram over.
 

Tuna-Fish

Golden Member
Mar 4, 2011
1,649
2,472
136
I want to be able to push tiny memory blocks into my mboard

You absolutely won't be able to. Much of the point of the performance increase is the massive reduction in bump size and capacitance. This means that all memory must be very tightly aligned (to better than 0.01 mm) and soldered to the device it's attached to. A future of HBM is a future of "memory is an integral part of the CPU".
 

oobydoobydoo

Senior member
Nov 14, 2014
261
0
0
"Hybrid Memory Cube"...


What is the significance of the "cube" part of this? Is it not, in fact, merely cuboid and would more correctly be termed a hexahedron? I don't see anything cubic about the marketing photos. Moreover, I don't see any reason why its shape is significant at all.



Why Intel must invent its own terminology for everything and then make it so manifestly obscure and irrelevant in nature is beyond me. It does not reflect well on their marketing team or the company as a whole, and it makes everyone confused. I would never have guessed in a thousand years that they would take the vaguely squarish shape of a "chip" and call it a "cube", or why it even matters. It is clearly the exact same technology AMD has already developed.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
"Hybrid Memory Cube"...


What is the significance of the "cube" part of this? Is it not, in fact, merely cuboid and would more correctly be termed a hexahedron? I don't see anything cubic about the marketing photos. Moreover, I don't see any reason why its shape is significant at all.



Why intel must invent its own terminology for everything and then make it so manifestly obscure and irrelevant in nature is beyond me. It does not reflect well on their marketing team or the company as a whole and it makes everyone confused. I would never have guessed in a thousand years, that they would take the vaguely squarish shape of a "chip" and call it a "cube", or why it even matters. It is clearly the exact same technology AMD has already developed.

What does Intel have to do with it?

You should take a look here and reconsider who you want to "blame":
http://hybridmemorycube.org/about.html

 

Fjodor2001

Diamond Member
Feb 6, 2010
4,109
537
126
Lots of talk about this for years. When will we see it in actual products?

AMD Zen, Intel Cannonlake, ...?
 

bronxzv

Senior member
Jun 13, 2011
460
0
71
"Hybrid Memory Cube"...

What is the significance of the "cube" part of this? Is it not, in fact, merely cuboid and would more correctly be termed a hexahedron? I don't see anything cubic about the marketing photos.

a stack of rectangular dies *is* a cuboid

FYI, neither AMD nor Intel invented stacked RAM with TSVs
 

Shehriazad

Senior member
Nov 3, 2014
555
2
46
Lots of talk about this for years. When will we see it in actual products?

AMD Zen, Intel Cannonlake, ...?

AMD's next GPU series is supposed to have it (H2 2015?)... and after that, who knows.
AMD would be stupid not to use it in future APU generations (after Carrizo), since it's way faster than DDR3 or even DDR4; putting 1-3 GB of it on your APU would probably give it the bandwidth boost it needs.

As for Intel... who knows. If in the future they decide to make their iGPU more important, then I can see it happening for sure.

Nvidia plans to use HBM in its 2016 GPU series, Pascal.

And anything about ARM using it would just be speculation at this point... but if it's successful, then I'd guess it's going to be used everywhere by 2018+ (including some possible attempts at using it as system memory). Whether it becomes a lasting standard just depends on whether people adopt it, and whether something way better gets released and used by the market anytime soon.
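As a rough sanity check on the bandwidth claim, here is a small Python sketch comparing a typical dual-channel APU setup against a single first-generation HBM stack (the figures are approximate 2015-era values I've filled in, not Shehriazad's numbers):

# Peak bandwidth ~ bus width (bits) x per-pin data rate (Gbit/s) / 8.
# Figures are approximate and used only for illustration.

def gbs(bus_bits, gbit_per_pin):
    return bus_bits * gbit_per_pin / 8

options = {
    "Dual-channel DDR3-2133": gbs(128, 2.133),  # ~34 GB/s
    "Dual-channel DDR4-2400": gbs(128, 2.4),    # ~38 GB/s
    "One HBM gen1 stack":     gbs(1024, 1.0),   # ~128 GB/s
}

for name, bw in options.items():
    print(f"{name:24s} ~{bw:5.1f} GB/s")

# Even a single stack offers several times the bandwidth of dual-channel DDR,
# which is why a small amount of on-package HBM could relieve an iGPU.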
 

el etro

Golden Member
Jul 21, 2013
1,584
14
81
Are we back to DDR vs. RDRAM again?

No. HBM, Wide I/O (which is similar to HBM but smaller and focused on low-power devices), and HMC target different power-consumption, size, and cost points. :thumbsup:

In order:

Wide I/O: mobile devices; replaces LPDDR3 and LPDDR4, but can also be used in mobile and desktop PCs;

HBM: high-end graphics cards; replaces GDDR5, and can also replace DDR3/DDR4 in PCs;

HMC: top-end CPUs/GPUs; targets devices with huge power budgets where memory subsystem demand is the highest.


IMO, Wide I/O will "win" in the long run.
 

Khato

Golden Member
Jul 15, 2001
1,251
321
136
Will they be that big? I.e. big enough to make other system RAM (DDR3/4) unnecessary?

There's certainly the potential for such given the time frame. Note that I'd expect this to be ~3 years out on the Intel side at least, since their eDRAM solution negates any immediate need for graphics bandwidth. And by then I wouldn't be surprised if a single HMC/HBM module gets up to the 4-8 GB range, at which point it's just 4 modules per package. Note that Intel's Knights Landing, which will supposedly be available this year, is making use of up to 16 GB of what's effectively HMC memory - it will be quite interesting to see what that configuration looks like.

But it's far enough out still that the landscape may well change dramatically. Could be that one of the NVM replacements for NAND takes off and we then have a 4 GB or 8 GB HMC/HBM 'cache' on the CPU with our 1 TB main memory/storage plugged into a pair of DIMM slots ;)
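To put the capacity guess in perspective, a quick Python sketch (the stack heights and die densities are my assumptions, not Khato's figures):

# Package capacity = stacks x dies per stack x density per die.
# 2015-era HBM gen1 used 4-high stacks of 2 Gbit dies (1 GB per stack);
# the 8-high / 8 Gbit case below is a hypothetical future configuration.

def package_capacity_gb(stacks, dies_per_stack, gbit_per_die):
    return stacks * dies_per_stack * gbit_per_die / 8

print(package_capacity_gb(stacks=4, dies_per_stack=4, gbit_per_die=2))  # 4.0 GB
print(package_capacity_gb(stacks=4, dies_per_stack=8, gbit_per_die=8))  # 32.0 GB

Four stacks in the 4-8 GB range would indeed cover a mainstream system's RAM, which is what makes the segmentation worry above plausible.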
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
"Hybrid Memory Cube"...


What is the significance of the "cube" part of this? Is it not, in fact, merely cuboid and would more correctly be termed a hexahedron? I don't see anything cubic about the marketing photos. Moreover, I don't see any reason why its shape is significant at all.



Why intel must invent its own terminology for everything and then make it so manifestly obscure and irrelevant in nature is beyond me. It does not reflect well on their marketing team or the company as a whole and it makes everyone confused. I would never have guessed in a thousand years, that they would take the vaguely squarish shape of a "chip" and call it a "cube", or why it even matters. It is clearly the exact same technology AMD has already developed.

Yeesh, I bet all this talk of Samsung's "14nm" process node must really set you off! :D :p
 

imported_ats

Senior member
Mar 21, 2008
422
63
86
Because they're pretty much all on-package type memory technologies - I believe HMC is the most tolerant of distance and I doubt that even it would care much for having to go through a socket and motherboard traces.

FYI, HMC handles sockets and motherboard traces just fine. In fact, the number one user of it ATM is using sockets and motherboard traces: Fujitsu's high-end supercomputers.
 

flash-gordon

Member
May 3, 2014
123
34
101
Since they are tied to the chips and not sold at retail, I don't see why both can't co-exist. Also, they have different implementations and will be offered in different densities.

It's totally different from the media-format examples you guys posted. Not everything is "versus" stuff...
 

III-V

Senior member
Oct 12, 2014
678
1
41
Lots of talk about this for years. When will we see it in actual products?

AMD Zen, Intel Cannonlake, ...?
Over the course of the next year. 14 nm Xeon Phi uses it, Nvidia's Pascal uses it, AMD's Pirate Islands use it...
 
Aug 11, 2008
10,451
642
126
Over the course of the next year. 14 nm Xeon Phi uses it, Nvidia's Pascal uses it, AMD's Pirate Islands use it...

I don't understand why this is showing up first on dGPUs. Don't they have plenty of bandwidth already? It would seem like APUs are where this is really needed.
 

III-V

Senior member
Oct 12, 2014
678
1
41
I don't understand why this is showing up first on dGPUs. Don't they have plenty of bandwidth already? It would seem like APUs are where this is really needed.
APUs do need it, but it's too expensive. dGPUs can make use of it too -- the number of ROPs a GPU can utilize efficiently is bounded by memory bandwidth.
I imagine it's going to be pricey and (relatively) power hungry.
Power should actually be pretty close to what DDR3 uses.
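A rough sketch of the ROP/bandwidth argument in Python (the ROP counts, clocks, and bus figures are illustrative values I've chosen, not from the post):

# If every ROP writes one 4-byte pixel per clock, the raw color-write traffic
# alone can approach or exceed the memory bus's peak bandwidth, so beyond a
# point extra ROPs just sit idle waiting on memory.

def rop_write_traffic_gbs(rops, clock_ghz, bytes_per_pixel=4):
    return rops * clock_ghz * bytes_per_pixel

gddr5_bw = 384 / 8 * 7.0     # 384-bit GDDR5 @ 7 Gbit/s -> ~336 GB/s
hbm_bw = 4 * 1024 / 8 * 1.0  # four HBM gen1 stacks @ 1 Gbit/s -> ~512 GB/s

for rops in (32, 64, 96):
    demand = rop_write_traffic_gbs(rops, clock_ghz=1.0)
    print(f"{rops} ROPs @ 1 GHz want ~{demand:.0f} GB/s of write bandwidth "
          f"(GDDR5 ~{gddr5_bw:.0f} GB/s, HBM ~{hbm_bw:.0f} GB/s)")

Blending and texture reads add further traffic on top of the raw writes, so in practice the bandwidth ceiling bites even earlier than this simple count suggests.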
 

DownTheSky

Senior member
Apr 7, 2013
800
167
116
HBM is the replacement for GDDR5 and will be used in the mobile segment as well. HMC will come after DDR4.