
Nvidia's Pascal GP104 GPU may opt for GDDR5 over HBM *Updated GDDR5X JEDEC Specs

cm123

Senior member
Nvidia's Pascal GP104 GPU may opt for GDDR5 over HBM

http://www.techspot.com/news/63436-nvidia-pascal-gp104-gpu-gddr5-hbm.html

AMD seems to be betting the farm on HBM, while Nvidia seems not so excited to make the change. AMD always has a full plate; catching up to Intel and holding ground with Nvidia in 2016 is going to be interesting in my opinion (even though I'm an Nvidia guy, I hope they do).

**** Update Info ****

GDDR5X is your standard GDDR5 memory; however, as opposed to delivering 32 bytes per access to the memory cells, this is doubled to 64 bytes per access. That in theory could double graphics card memory bandwidth. Early indications according to the presentation show the memory capable of doing up to 10 to 12 Gbps, and in the future 16 Gbps. High-end graphics cards these days hover at, say, 400 GB/s. With GDDR5X that could increase to 800~1000 GB/s, and thus these are very significant improvements; they are actually competitive enough with HBM.
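To put those numbers in context, here is a quick back-of-the-envelope sketch: peak bandwidth is just bus width times per-pin data rate. The baseline figures below (384-bit GDDR5 at 7 Gbps, roughly 980 Ti class) are my own illustrative assumptions, not from the article:

```python
# Peak memory bandwidth in GB/s: bus width (bits) x per-pin data rate (Gbps) / 8 bits per byte.
def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits * data_rate_gbps / 8

print(peak_bandwidth_gbs(384, 7))   # 336.0 GB/s - today's 384-bit GDDR5 at 7 Gbps
print(peak_bandwidth_gbs(384, 12))  # 576.0 GB/s - same bus width at GDDR5X's 12 Gbps
print(peak_bandwidth_gbs(512, 16))  # 1024.0 GB/s - 512-bit bus at the future 16 Gbps rate
```

So the 800~1000 GB/s figure really only materializes with a wide bus plus the future 16 Gbps rate; at today's bus widths, 10-12 Gbps GDDR5X lands in the 480-576 GB/s range.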

JEDEC:

Derived from the widely adopted GDDR5 SGRAM JEDEC standard, GDDR5X specifies key elements related to the design and operability of memory chips for applications requiring very high memory bandwidth. With the intent to address the needs of high-performance applications demanding ever higher data rates, GDDR5X is targeting data rates of 10 to 14 Gb/s, a 2X increase over GDDR5. In order to allow a smooth transition from GDDR5, GDDR5X utilizes the same, proven pseudo open drain (POD) signaling as GDDR5.

“GDDR5X represents a significant leap forward for high end GPU design,” said Mian Quddus, JEDEC Board of Directors Chairman. “Its performance improvements over the prior standard will help enable the next generation of graphics and other high-performance applications.”


http://www.guru3d.com/news-story/jedec-announces-publication-of-gddr5x-graphics-memory-standard.html
 
I don't think so; otherwise they would need a 512-bit bus or something, plus an even better memory compression algorithm, to compete. Without those they won't be able to feed the GPU.
 
I don't think so; otherwise they would need a 512-bit bus or something, plus an even better memory compression algorithm, to compete. Without those they won't be able to feed the GPU.

Nah, GM204 only has a 256-bit bus. GP104 could probably get away with GDDR5X on a 384-bit bus, even if it has double the performance.
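As a rough sanity check on the bus-width question, here's a sketch assuming the GTX 980's 256-bit bus at 7 Gbps (224 GB/s) as the GM204 baseline, and assuming bandwidth should roughly double along with performance (both assumptions mine):

```python
# Which bus width / GDDR5X data rate combinations reach ~2x GM204's bandwidth?
target_gbs = 2 * (256 * 7 / 8)  # 2 x 224 GB/s (GTX 980 reference) = 448 GB/s
for bus_bits in (256, 384, 512):
    for rate_gbps in (10, 12, 14):  # GDDR5X data rates from the JEDEC announcement
        bw_gbs = bus_bits * rate_gbps / 8
        if bw_gbs >= target_gbs:
            print(f"{bus_bits}-bit @ {rate_gbps} Gbps -> {bw_gbs:.0f} GB/s")
```

A 384-bit bus clears that bar at any of the announced GDDR5X rates, and even 256-bit just reaches it at 14 Gbps, so no 512-bit bus would be required.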
 
Is GP104 a high-end part, though? The Polaris part AMD showed was compared to a GTX 950. HBM is overkill for those parts.

That's a good point - GP100 is a higher-end part than GP104. Factor in GDDR5X, as pointed out above, and the middle to lower end cards should still get a nice bump in performance.

I suppose production volumes of HBM could also be a problem if all parts used it, at least currently.
 
I'm pretty sure Nvidia has confirmed higher-end parts will have up to 16 or 32 GB of HBM2 memory. The lower-end stuff will still come with GDDR5.

http://wccftech.com/nvidia-pascal-hbm2-16gb-1-tbs-2016/

This article indicates up to 16 GB, with the potential to use 32 GB on the refresh.

Just because HBM2 will support 8GB stacks doesn't mean they will be used. There is still a huge cost issue with putting 2-4x more memory than any game needs.
 
Just because HBM2 will support 8GB stacks doesn't mean they will be used. There is still a huge cost issue with putting 2-4x more memory than any game needs.

But it does mean you can build a workstation version of an HBM card.
 
GP104 is mid-range, hence it's not too surprising it won't have HBM, especially since NV cards need less bandwidth than AMD's. Hopefully this is also reflected in the price and not just NV's margins.
 
Nvidia might have gone with GDDR5X for GP104 and might be using HBM2 for GP100, which will likely release much later in 2017 as yields improve. I don't expect any 400+ sq mm FinFET GPU in 2016, as yields are still very difficult for die sizes above 200 sq mm. The best case would be 300-350 sq mm for GP104 and flagship Polaris.
 
Just because HBM2 will support 8GB stacks doesn't mean they will be used. There is still a huge cost issue with putting 2-4x more memory than any game needs.

Of course, that is why I said up to 16 GB. I'd expect consumer versions to have 4-8 GB and the Quadro/Tesla versions to fill out to 16 and eventually 32 GB.
 
Nah, GM204 only has a 256-bit bus. GP104 could probably get away with GDDR5X on a 384-bit bus, even if it has double the performance.

A 256-bit bus with 12 Gbps GDDR5X = 384 GB/s, a ~70% increase in bandwidth over GM204. A 256-bit bus would be just fine.
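For reference, the arithmetic behind that claim (taking the GTX 980's 256-bit, 7 Gbps GDDR5 as the GM204 baseline):

```python
gm204_gbs = 256 * 7 / 8       # 224.0 GB/s - GTX 980 (GM204)
gp104_gbs = 256 * 12 / 8      # 384.0 GB/s - same bus width, 12 Gbps GDDR5X
print(gp104_gbs / gm204_gbs)  # ~1.71, i.e. the ~70% bandwidth increase
```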

A 70% increase in performance (guessing) along with a 70% increase in bandwidth? Interestingly, that would make it only about 7-12% faster than the Gigabyte water-cooled GTX 980 Ti at 1440p and 4K. Hopefully there is plenty of headroom in Pascal, or people with overclocked 980 Tis will be laughing at how small the performance gain is for the next "high end" SKU from Nvidia.
 
A 256-bit bus with 12 Gbps GDDR5X = 384 GB/s, a ~70% increase in bandwidth over GM204. A 256-bit bus would be just fine.

A 70% increase in performance (guessing) along with a 70% increase in bandwidth? Interestingly, that would make it only about 7-12% faster than the Gigabyte water-cooled GTX 980 Ti at 1440p and 4K. Hopefully there is plenty of headroom in Pascal, or people with overclocked 980 Tis will be laughing at how small the performance gain is for the next "high end" SKU from Nvidia.

People dumped the 780/Ti for the 970/980. Apart from the power savings there wasn't much improvement, except for the lack of optimizations for newer games on the older cards. Hopefully they don't do that again to current card owners.
 
People dumped the 780/Ti for the 970/980. Apart from the power savings there wasn't much improvement, except for the lack of optimizations for newer games on the older cards. Hopefully they don't do that again to current card owners.

If mid-range Pascal is 5-10% faster than the current top GPU, that's a pretty good result, because we're expecting massive improvements in perf/W - that would give you OC 980 Ti performance at ~125 W, nothing to scoff at.
 
If mid-range Pascal is 5-10% faster than the current top GPU, that's a pretty good result, because we're expecting massive improvements in perf/W - that would give you OC 980 Ti performance at ~125 W, nothing to scoff at.

If you had a 980 Ti, would you pay $550 just to use less power with no visual improvement?
 
If you had a 980 Ti, would you pay $550 just to use less power with no visual improvement?

No, but lots would. The same thing happened with the 970 and 980 - they sold extremely well, even beyond NV's expectations.

What were they, 5-10% above 780/Ti at launch?

The perf/W leap on Pascal and Polaris will be extreme compared to the above.

With GameWorks, NV can add features that are optimized to run much faster on Pascal, and in those games, the gap will be much higher. That would certainly give an incentive for gamers to upgrade. It happened already and was very well received. If it works, keep doing it.

Think about that scenario, then ask yourself: why do NV or AMD need to invest in producing a big-die gaming chip on an expensive node where, so far, all signs point to issues with big chips? They wouldn't, unless they were pushed to do so by competition.

That's Intel in the HPC space, and hopefully AMD/NV will continue to push each other in the consumer space so we gamers can benefit.
 