Samsung GDDR6 @ CES

EXCellR8

Diamond Member
Sep 1, 2010
3,982
839
136
Samsung's GDDR6 K4ZAF325BM-HC14 is a 16 Gb module operating at 1.35 V. This is Samsung's first GDDR6 module. The chip pictured below is an engineering sample provided for the sole purpose of this award. Officially the module is still 'under development', but that didn't stop the CES awards committee from giving it one.

CES takes place next January. NVIDIA has already announced a keynote for January 7th. It is either there or at the GPU Technology Conference in March that Jensen will unveil Volta-based GeForce models. GDDR6 is likely to make an appearance with the GeForce 2000 series.

Source

Get yer wallets primed next spring
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
Samsung says its GDDR6 RAM is already hitting 16 GT/s (presumably per pin), and assuming GDDR6 maintains GDDR5's 32-bit-wide data interface per chip, that squares with Samsung's claim of 64 GB/s transfer rates per chip.

For perspective, if Samsung is already producing 16 GT/s GDDR6, its chips are performing quite a bit better than expected at this stage. Back at Hot Chips, the company said to expect 14 GT/s from its GDDR6 memory chips when it begins mass production of the new graphics RAM next year. GPUs using the new RAM could enjoy an easy doubling of bandwidth compared to today's 8 GT/s GDDR5 at the same capacity.
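For anyone who wants to sanity-check that arithmetic, here's a quick Python sketch; the 32-bit interface per chip is the assumption carried over from GDDR5 in the quote above:

Code:
# Per-chip bandwidth from per-pin transfer rate (GT/s ~ Gb/s per pin)
data_rate_gbps = 16      # Samsung's claimed GDDR6 speed per pin
bus_width_bits = 32      # data pins per chip, assumed same as GDDR5
per_chip_gbs = data_rate_gbps * bus_width_bits / 8
print(per_chip_gbs)      # 64.0 GB/s, matching Samsung's figure

# Today's 8 GT/s GDDR5 chip on the same 32-bit interface:
print(8 * 32 / 8)        # 32.0 GB/s, i.e. GDDR6 doubles it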

http://techreport.com/news/32821/samsung-teases-higher-than-expected-speeds-from-its-gddr6-ram

HBM needs a couple more years in the oven.
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
I don't understand why various sites today thought they had found amazing secret news buried in Samsung's recent article, when it was reported over a year ago that Samsung would offer 14 and 16 Gbps GDDR6.

https://www.theinquirer.net/inquirer/news/2469037/samsung-gddr6-memory-to-arrive-on-gpus-in-2018

Nvidia will continue to offer traditional graphics cards with GDDR6 chips spread around the board for the foreseeable future. That's not going to change. HBM preachers thought it was the gold standard, but the difference between GDDR6 and HBM is 100% moot when you can get 384-bit cards with ~750 GB/s of bandwidth from GDDR6.
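A quick back-of-the-envelope check on that 384-bit figure, assuming 16 Gbps GDDR6 chips (the claim above rounds the result down):

Code:
# Aggregate bandwidth of a hypothetical 384-bit GDDR6 card
bus_width_bits = 384
data_rate_gbps = 16                         # per pin
print(bus_width_bits * data_rate_gbps / 8)  # 768.0 GB/s
print(bus_width_bits * 14 / 8)              # 672.0 GB/s with 14 Gbps chips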
 

EXCellR8

Diamond Member
Sep 1, 2010
3,982
839
136
I don't understand why various sites today thought they had found amazing secret news buried in Samsung's recent article, when it was reported over a year ago that Samsung would offer 14 and 16 Gbps GDDR6.

That was pre-development, IIRC... they actually have the modules now, tested.
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
I sure wonder how much GDDR6 will end up costing over GDDR5 & GDDR5X.
Hynix already announced GDDR6 back in May (https://www.anandtech.com/show/1139...am-gddr6-added-to-catalogue-gddr5-gets-faster), so Sammy has been slow to adopt it.
Meh, dirt cheap like GDDR5.
The memory on the R9 290X, 16 chips in total, had a combined cost of about $30.
You can see from this example why Nvidia goes with GDDR6 unless it's a premium GPU.
That said, they make a lot of money per GPU sold (this example doesn't include the card itself, though).
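A trivial sketch of what that works out to per chip, assuming the quoted $30 total is accurate:

Code:
# Implied per-chip cost from the R9 290X example (assumed $30 total)
total_cost_usd = 30
chip_count = 16
print(round(total_cost_usd / chip_count, 2))  # ~1.88 USD per chip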


[Attached image: Sys-plus-1.jpg]
 

itsmydamnation

Platinum Member
Feb 6, 2011
2,770
3,144
136
That's not going to change. HBM preachers thought it was the gold standard, but the difference between GDDR6 and HBM is 100% moot when you can get 384-bit cards with ~750 GB/s of bandwidth from GDDR6.

I hate stupid logic like this:
1. HBM is available and shipping in products; GDDR6 isn't.
2. HBM and GDDR are both DRAM; any improvements in DRAM architecture, protocol, or encoding can be applied to both in future iterations.
3. HBM uses less power per GB/s of bandwidth.
4. There will be future versions of HBM, just like there is about to be a future version of GDDR (shock horror, I know!).

Consider that foundry 7 nm is a big jump in both transistor density and performance: a GP104 equivalent (~300 mm²) could easily see an 80-100% performance improvement. Getting all excited about a 45% memory bandwidth uplift seems misplaced. As a result, as you said, you need to go to 384-bit, and now you are spending more of your power budget on memory relative to the previous gen just to keep the thing fed.

Compare that to a hypothetical Navi with 80-100% more performance than Vega. With HBM you can reduce the stack height and increase the number of stacks (4x 2-Hi), and you have doubled your memory bandwidth (1024 GB/s) for a given memory size at almost the same power cost (you are powering the same number of DRAM dies); you just add the extra power of the buffer dies and the physical transmit. The cost increase of a slightly bigger interposer is going to be less than the extra memory/PCB/substrate costs of going from a 256-bit to a 384-bit bus (obviously absolute HBM packaging/memory costs are higher).
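A rough sketch of that stack math; the 2 Gbps per-pin rate is an assumption about where HBM2 tops out, and the Vega figure is its stock ~484 GB/s:

Code:
# Hypothetical 4x 2-Hi HBM2 configuration from the paragraph above
stacks = 4
bus_width_per_stack = 1024   # bits per stack, per the HBM spec
pin_rate_gbps = 2            # assumed top HBM2 speed per pin
print(stacks * bus_width_per_stack * pin_rate_gbps / 8)  # 1024.0 GB/s

# Vega 64 today: 2 stacks at ~1.89 Gbps per pin
print(2 * 1024 * 1.89 / 8)   # ~484 GB/s, so roughly double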

For some reason I'm not seeing the awesome sauce here...
 

casiofx

Senior member
Mar 24, 2015
369
36
61
I want Volta with 16 GB of VRAM. Combine that with an overclocked 8700K, and Blender would run awesome.
 

Yotsugi

Golden Member
Oct 16, 2017
1,029
487
106
I hate stupid logic like this:
1. HBM is available and shipping in products; GDDR6 isn't.
2. HBM and GDDR are both DRAM; any improvements in DRAM architecture, protocol, or encoding can be applied to both in future iterations.
3. HBM uses less power per GB/s of bandwidth.
4. There will be future versions of HBM, just like there is about to be a future version of GDDR (shock horror, I know!).

Consider that foundry 7 nm is a big jump in both transistor density and performance: a GP104 equivalent (~300 mm²) could easily see an 80-100% performance improvement. Getting all excited about a 45% memory bandwidth uplift seems misplaced. As a result, as you said, you need to go to 384-bit, and now you are spending more of your power budget on memory relative to the previous gen just to keep the thing fed.

Compare that to a hypothetical Navi with 80-100% more performance than Vega. With HBM you can reduce the stack height and increase the number of stacks (4x 2-Hi), and you have doubled your memory bandwidth (1024 GB/s) for a given memory size at almost the same power cost (you are powering the same number of DRAM dies); you just add the extra power of the buffer dies and the physical transmit. The cost increase of a slightly bigger interposer is going to be less than the extra memory/PCB/substrate costs of going from a 256-bit to a 384-bit bus (obviously absolute HBM packaging/memory costs are higher).

For some reason I'm not seeing the awesome sauce here...
The main problem with HBM is that 2.5D integration is not cheap or easy or whatever.
Si interposers are ass.
 

USER8000

Golden Member
Jun 23, 2012
1,542
780
136
I hate stupid logic like this:
1. HBM is available and shipping in products; GDDR6 isn't.
2. HBM and GDDR are both DRAM; any improvements in DRAM architecture, protocol, or encoding can be applied to both in future iterations.
3. HBM uses less power per GB/s of bandwidth.
4. There will be future versions of HBM, just like there is about to be a future version of GDDR (shock horror, I know!).

Consider that foundry 7 nm is a big jump in both transistor density and performance: a GP104 equivalent (~300 mm²) could easily see an 80-100% performance improvement. Getting all excited about a 45% memory bandwidth uplift seems misplaced. As a result, as you said, you need to go to 384-bit, and now you are spending more of your power budget on memory relative to the previous gen just to keep the thing fed.

Compare that to a hypothetical Navi with 80-100% more performance than Vega. With HBM you can reduce the stack height and increase the number of stacks (4x 2-Hi), and you have doubled your memory bandwidth (1024 GB/s) for a given memory size at almost the same power cost (you are powering the same number of DRAM dies); you just add the extra power of the buffer dies and the physical transmit. The cost increase of a slightly bigger interposer is going to be less than the extra memory/PCB/substrate costs of going from a 256-bit to a 384-bit bus (obviously absolute HBM packaging/memory costs are higher).

For some reason I'm not seeing the awesome sauce here...

Nvidia also uses HBM2 for its graphics cards! You would think they would have stuck with GDDR5X if HBM2 were the failure it is made out to be.
 

USER8000

Golden Member
Jun 23, 2012
1,542
780
136
How does one hate a memory standard?

I dunno - I remember years ago loads of people started hating on GDDR5, since ATI was the first to use it, and were saying GDDR3 was better; it all suddenly went away when Nvidia started using it. In this case Nvidia also uses HBM2, and did so before AMD started using it. Even Intel has jumped on it for their new CPU/discrete GPU mishmash.

So, I am uncertain why people on forums seem determined to write off HBM2 as a failure when all three of the main companies we buy computer parts from have products with it. By extension, GDDR5X has seen far more limited usage, and now it seems to have been bypassed by GDDR6 in quick succession, so I do wonder if that was down to cost too.
 

Yotsugi

Golden Member
Oct 16, 2017
1,029
487
106
So, I am uncertain why people on forums seem determined to write off HBM2 as a failure
You have to be completely brain-damaged to say that.
Out of all the TSV memories, only HBM survived, became popular, and is seeing industry-wide adoption.
Kudos to Macri and his team.
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
I hate stupid logic like this:
1. HBM is available and shipping in products; GDDR6 isn't.
2. HBM and GDDR are both DRAM; any improvements in DRAM architecture, protocol, or encoding can be applied to both in future iterations.
3. HBM uses less power per GB/s of bandwidth.
4. There will be future versions of HBM, just like there is about to be a future version of GDDR (shock horror, I know!).

Consider that foundry 7 nm is a big jump in both transistor density and performance: a GP104 equivalent (~300 mm²) could easily see an 80-100% performance improvement. Getting all excited about a 45% memory bandwidth uplift seems misplaced. As a result, as you said, you need to go to 384-bit, and now you are spending more of your power budget on memory relative to the previous gen just to keep the thing fed.

Compare that to a hypothetical Navi with 80-100% more performance than Vega. With HBM you can reduce the stack height and increase the number of stacks (4x 2-Hi), and you have doubled your memory bandwidth (1024 GB/s) for a given memory size at almost the same power cost (you are powering the same number of DRAM dies); you just add the extra power of the buffer dies and the physical transmit. The cost increase of a slightly bigger interposer is going to be less than the extra memory/PCB/substrate costs of going from a 256-bit to a 384-bit bus (obviously absolute HBM packaging/memory costs are higher).

For some reason I'm not seeing the awesome sauce here...

HBM2, which is what Nvidia uses for their premium products, consumes MORE power than GDDR5...
GDDR6 consumes less power than GDDR5...

The bandwidth of GDDR5X is enough for the highest end of gaming GPUs.
GDDR6 offers more bandwidth than GDDR5X. Do the math.
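Doing that math with a concrete reference point - the GTX 1080 Ti's 352-bit GDDR5X bus is the known baseline here, and the GDDR6 speeds are Samsung's announced ones:

Code:
# Same 352-bit bus (GTX 1080 Ti) populated with faster chips
bus_width_bits = 352
for name, rate_gbps in [("GDDR5X @ 11 Gbps", 11),
                        ("GDDR6  @ 14 Gbps", 14),
                        ("GDDR6  @ 16 Gbps", 16)]:
    print(name, bus_width_bits * rate_gbps / 8, "GB/s")
# GDDR5X @ 11 Gbps 484.0 GB/s  (1080 Ti today)
# GDDR6  @ 14 Gbps 616.0 GB/s
# GDDR6  @ 16 Gbps 704.0 GB/s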

GDDR6 doesn't require special packaging like HBM2. It uses the exact same PCB design we have been using for many, many years now.

Tiny gaming cards already exist today with GDDR5, just like the Nano with HBM. The PCB may be smaller with HBM, but when they can make such tiny cards with GDDR5, you lose the whole 'HBM takes less space' argument.

GDDR6 is cheaper and will be readily available to cover Nvidia's needs, since by its nature it is scaled up in huge quantities and used across many types of products.

Logic may be 'stupid' when it doesn't fit your view, but the writing is on the wall:
GDDR6 will be heavily adopted by Nvidia for the reasons above, just like Nvidia chose GDDR5X and GDDR5 for Pascal.

And the 'HBM is available now, GDDR6 isn't' argument is a pretty weak one.
We are talking about Volta and future cards, not cards that exist today.

HBM sounded so promising when it arrived, but next to GDDR5X and now GDDR6, it's pretty MEH.
 

USER8000

Golden Member
Jun 23, 2012
1,542
780
136
You have to be completely brain-damaged to say that.
Out of all the TSV memories, only HBM survived, became popular, and is seeing industry-wide adoption.
Kudos to Macri and his team.

Look on many forums, dude; people are hating on HBM2 for some reason. It's quite obvious why AMD uses it: it saves power, allows a smaller physical chip, and mitigates certain issues with the actual design of their high-end GPUs. If they had stuck with GDDR5 or GDDR5X, they would need to integrate a larger memory controller into the physical GPU, and it would draw more power. By using HBM2, more of the board power and TDP can be used for the GPU, and it cuts down the physical size of their chips. It even simplifies the design of their PCBs to a degree.

I mean, it sucks for them that SK Hynix's delivery schedules went a bit pear-shaped, and they even had to second-source from Samsung, but I expect that once more capacity appears, costs should come down.
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
Well, that is more than questionable, dude.
The cartel will milky-milky nVidia to death.
The chips will be so widely adopted compared to HBM that the cost is guaranteed to go down very fast.

In addition, you can be sure that Nvidia has carved out a pretty sweet deal with Micron, which has dedicated a production line just for Nvidia's graphics cards. Micron is pretty much guaranteed to sell a crapload of chips, while Nvidia can use that to negotiate better prices, just like Apple does for the parts that go into its phones and tablets.
 

Yotsugi

Golden Member
Oct 16, 2017
1,029
487
106
GDDR6 will be heavily adopted by Nvidia for the reasons above.
There's one reason: sweet, sweet margins.
nVidia's motto is milking the userbase to death.
The chips will be so widely adopted compared to HBM
HBM is already widely adopted.
that the cost is guaranteed to go down very fast
That's questionable. The cartel will milk nVidia as long as the demand is there.
sweet deal with Micron, which has dedicated a production line just for Nvidia's graphics cards.
I don't remember Micron saying anything about near-future availability of 16 Gbps chips. Only 14 Gbps ones.
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
HBM is already widely adopted.

That's questionable. The cartel will milk nVidia as long as the demand is there.

I don't remember Micron saying anything about near-future availability of 16 Gbps chips. Only 14 Gbps ones.

LOL, HBM is not widely adopted. It's a niche product, not even a fraction of what GDDR5 is.
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
It's used by nVidia, Intel, AMD, NEC and Google.
It is widely adopted, come on.
Denial won't change that.

Please list the products that use HBM. Like I said, it's a niche product used only in a few AMD products and some premium Nvidia products.

'Widely adopted' is a term that can be loosely debated.
It's not even close to the volume of GDDR chips manufactured and used across so many products.