[TT] Pascal rumored to use GDDR5X..

Page 4 - Seeking answers? Join the AnandTech community: where nearly half-a-million members share solutions and discuss the latest tech.

railven

Diamond Member
Mar 25, 2010
6,604
561
126
You're speculating on future hardware. Why they do it now, and why they may not in the future, matters.

Seems to be all that ATF does.



Sometimes I wonder if I'm reading the same thread as other people. It went from discussing a possible interim memory technology, IF HBM1/HBM2 really has the alleged issues, to defending HBM1 against this new tech.

Am I the only one who thinks that, if the HBM issues are true, both AMD and Nvidia might use this new memory as a crutch?
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Even at 16Gbps @ 384-bit bus, that's 768GB/sec, well short of 1TB/sec of HBM2 and that's just the 1st revision of HBM2.

Not sure how you missed me talking about HBM1.

But OK, unlike HBM2, GDDR5X is a real product today with 8Gbit chips. Who knows if we'll even see HBM2 in 2016 with the rumoured delay. Also, HBM2 may start out at 1.6GHz, or 800GB/sec in your calculation.
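The peak-bandwidth figures being traded here all follow from one formula. A quick sketch, assuming the standard four-stack, 4096-bit HBM2 interface:

```python
# Peak bandwidth (GB/s) = bus width (bits) / 8 * per-pin data rate (Gbps)
def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Peak theoretical memory bandwidth in GB/s."""
    return bus_width_bits / 8 * data_rate_gbps

# GDDR5X at 16Gbps on a 384-bit bus
print(bandwidth_gbs(384, 16))     # 768.0 GB/s

# HBM2: four 1024-bit stacks = 4096-bit interface
print(bandwidth_gbs(4096, 2.0))   # 1024.0 GB/s, i.e. ~1TB/s
print(bandwidth_gbs(4096, 1.6))   # 819.2 GB/s, i.e. ~800GB/s
```

Both quoted numbers check out: 768GB/sec for a 16Gbps GDDR5X card, and roughly 800GB/sec for HBM2 starting out at 1.6GHz.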

 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
Not sure how you missed me talking about HBM1.

But OK, unlike HBM2, GDDR5X is a real product today with 8Gbit chips. Who knows if we'll even see HBM2 in 2016 with the rumoured delay. Also, HBM2 may start out at 1.6GHz, or 800GB/sec in your calculation.

<snip>

Reading his post and this thread, I honestly don't know where he picked his strawmen from.

No one in this thread said any of the things he's arguing against.
 

jpiniero

Lifer
Oct 1, 2010
16,494
6,994
136
If Big Daddy GP100, the true flagship Pascal, has HBM2, then that in itself would be 100% confirmation from NV themselves that GDDR5X is technologically inferior in all key aspects other than price.

My thinking originally was that they were going to release GP100 (and only GP100) with Pascal/GDDR5X ASAP so they could refresh Tesla... and only do the full lineup once HBM2 becomes available. But if they are talking about doing the entire lineup, it really does sound like HBM2 is delayed into 2017.

I'm also assuming Arctic Islands is using HBM2 only... maybe it's backwards compatible with HBM1? No idea. AMD could be screwed if they have to wait until HBM2 is available.
 

PPB

Golden Member
Jul 5, 2013
1,118
168
106
AtenRA has been, by far, the most vocal AMD troll/shill on this site for quite some time. Everyone is best served not feeding the troll.

And interestingly, all the white knighting in this thread has been made by some Nvidia fanboy (or should we say, anti-AMD fanboy?).

GDDR5X being used is a sign that HBM2 won't be ready by the time Pascal chips are. Also, no one here has bothered to speculate on what the power consumption of GDDR5X would be if it ran at a slower 8-9GHz in the lower-end desktop and mobile variants.

HBM2 will be ready for next-gen high-end GPUs. HBM is already ready for 4GB-or-less cards, which is actually 80%+ of the market. The problem is margins for Nvidia. Rather than pushing the envelope on form factor, performance and efficiency, milking their loyal fanbase with yet another round of GDDR5 (the X is just marketing gimmickry; the underlying tech and implementation is almost the same) seems the wiser solution for their shareholders' pockets. People defending the stagnation of tech with this move just prove that Nvidia is right in the thinking described above, or else we should start thinking that people posting here have underlying intentions regarding the commercial welfare of Nvidia (i.e. we have disguised shareholders/shills).

Nvidia competed, and beat, AMD at 28nm in performance and efficiency without HBM. I'm sure they will do just fine with Pascal 1.0.

Nvidia has gathered the low-hanging fruit with Kepler and Maxwell via their software scheduler solution for better efficiency. Now they have to cover the ground lost in compute efficiency and performance. And considering they will get good efficiency gains from the new node, it won't be surprising to see a more GCN-like uarch with Pascal. By this I mean one that is better all-around, but worse at gaming efficiency. AMD, on the other hand, has everything to gain on the power-efficiency side: their uarch is already robust on all fronts; they just need to apply better power gating and clock regulation techniques (some seen in bdver3 and 4) and they will regain most of the lost ground in the efficiency competition. Nvidia will, sometime in the future, rethink whether a software scheduler solution is the best option in the new API landscape, and we know their last hardware-scheduler uarch was a power hog (Fermi v1/v2).

Also, it's laughable that people are defending GDDR5X as a commonplace technology while painting HBM/2 as vaporware. How many GDDR5X GPUs can I buy today? Oh wait.
 

stahlhart

Super Moderator Graphics Cards
Dec 21, 2010
4,273
77
91
And interestingly, all the white knighting in this thread has been made by some Nvidia fanboy (or should we say, anti-AMD fanboy?)

Shut up with the trolling and get back on topic, or you're going to get shut up.
-- stahlhart
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
Didn't see this one coming, although it's probably not surprising given the high cost/low maturity of HBM technology. It makes sense to develop an alternative technology (short term, ~5 years) while HBM matures.

The cost and yield levels are probably the biggest drawbacks of HBM at the moment, so it's nice to have a cheaper alternative (other than just widening the memory bus) before fully making the jump to such expensive technology.
 
Feb 19, 2009
10,457
10
76
On the HBM2 delays: it may not be the chips themselves but the stacking and TSV process; there was a recent article from the only company doing it saying they only recently managed to reach volume.

I doubt they can keep up with AMD's and NV's combined demand if HBM2 expands to mid-range products.

Teslas and $1K consumer SKUs would no doubt have it; low latency and very high bandwidth at lower power usage is too big an advantage to give up.

For AMD, they would be impacted the same, so I doubt we will see mid-range SKUs with HBM2 from Arctic Islands.
 

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
Why design cards that will be obsolete next year when we can make them obsolete on release? Guarantees an upgrade when real SOTA cards come out.

To each his own, but I actually like new tech. GDDR5X is definitely a step back from HBM, which we already have. I prefer we move forward if possible. I'm concerned that might be the issue. It might not be possible yet to move forward with HBM.

You'd prefer? What difference does it truly make if it puts framerates on your puter? You don't truly care about the memory technology being used, because that would make no sense. What would make sense is for a gamer like you to prefer high framerates. And if those framerates are delivered via HBM or GDDR5X, it should make no difference to you.
 

iiiankiii

Senior member
Apr 4, 2008
759
47
91
You'd prefer? What difference does it truly make if it puts framerates on your puter? You don't truly care about the memory technology being used, because that would make no sense. What would make sense is for a gamer like you to prefer high framerates. And if those framerates are delivered via HBM or GDDR5X, it should make no difference to you.

Yeah. Who cares. Performance is performance. However, HBM allows for smaller form factors and lower power consumption compared to gddr5. Those are all positive things that we should strive towards.
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
On the HBM2 delays: it may not be the chips themselves but the stacking and TSV process; there was a recent article from the only company doing it saying they only recently managed to reach volume.

I doubt they can keep up with AMD's and NV's combined demand if HBM2 expands to mid-range products.

Teslas and $1K consumer SKUs would no doubt have it; low latency and very high bandwidth at lower power usage is too big an advantage to give up.

For AMD, they would be impacted the same, so I doubt we will see mid-range SKUs with HBM2 from Arctic Islands.

I think your point is quite sensible. No need to hamstring the mid/high-range SKUs with HBM1/2, especially when the power savings (one of the primary benefits of going with HBM in the first place) are going to be had with the new process node anyway.

BTW, what's the maximum memory capacity for HBM2? This could also play a factor, seeing as large VRAM (>16GB) is always a good thing for GPGPU applications.
 

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
Yeah. Who cares. Performance is performance. However, HBM allows for smaller form factors and lower power consumption compared to gddr5. Those are all positive things that we should strive towards.

Absolutely, I agree. There is a smaller-form-factor 970 that uses GDDR5, so SFF is not HBM-exclusive AFAIK.
 

Techhog

Platinum Member
Sep 11, 2013
2,834
2
26
You'd prefer? What difference does it truly make if it puts framerates on your puter? You don't truly care about the memory technology being used, because that would make no sense. What would make sense is for a gamer like you to prefer high framerates. And if those framerates are delivered via HBM or GDDR5X, it should make no difference to you.

Yeah. Things like memory type, power consumption, and heat really don't matter so long as you're getting great performance.
 

iiiankiii

Senior member
Apr 4, 2008
759
47
91
Absolutely, I agree. There is a smaller-form-factor 970 that uses GDDR5, so SFF is not HBM-exclusive AFAIK.

Yeah. Things like memory type, power consumption, and heat really don't matter so long as you're getting great performance.



Yes, SFF is not exclusive to HBM. Notice I said SMALLER form factor. Right now, HBM GPUs' size is mainly restricted by their massive coolers. Hopefully, 16nm GPUs will allow for much lower TDP so massive coolers aren't needed.

I really think the advantage of HBM is more important in mobile and smaller-form-factor PCs. Considering that HBM consumes roughly ~40% less energy than GDDR5, it'll be more efficient. Lower TDP means smaller PCs, less heat, and more efficient GPUs. More efficient GPUs mean Nvidia/AMD can convert the power budget spared by HBM into further performance (that's basically what happened with Fiji). Those are the benefits.

AMD is in a unique position to create a powerful APU that might be a perfect fit for mobile. Hopefully Zen is a leap forward in IPC and TDP. Imagine a Zen-based CPU with an HBM Arctic Islands-based GPU on the same die. That sounds like a potentially powerful combination. But this is AMD we're talking about. So, one can only be hopeful.

With that said, I believe HBM is less important to high end enthusiasts because power consumption and form factor are less important to us. Most of us have gigantic PCs with massive PSUs to support the over the top overclocking and performance. We're in a different category.

Right now, HBM's bandwidth might not be needed. Maybe the next generation of GPUs (Arctic Island and Pascal) will take advantage of HBM's massive bandwidth.
 

gamervivek

Senior member
Jan 17, 2011
490
53
91
You hit the nail on the head.

Even if Arctic Islands is 2x more efficient than the Fury cards, it will still be competing with Pascal, which is 2x more efficient than Maxwell, which in turn is far more efficient than the Fury cards.

So while both are improving rapidly, AMD needs to be at least 3x as efficient with Arctic Islands as with the Fury GPUs. And without HBM that will be very difficult. But a 4GB card in 2016 will be a hard sell for enthusiasts; Fury already got a lot of heat for it.

The big question is whether the rumors about AMD getting preferential access to SK Hynix's HBM production actually bear fruit. If they do, even for the high end, that would be a big selling point for AMD. Then again, a lot of people thought HBM would be huge for this generation, yet the 980 Ti crushed Fury in everything except 4K benchmarks, which is a niche even for enthusiasts.

What nonsense. The 980 Ti isn't far more efficient than the Fury X, and if AMD keeps getting games like Battlefront, where it's 20% ahead at 4K, it wouldn't even be worse than the competing Maxwell.

Pascal will most likely have to give up some power efficiency if Nvidia is going for higher compute usage and async compute.

As for your big question, this rumor itself might be due to Nvidia not securing enough HBM2 and having to start looking elsewhere. And HBM wouldn't have helped with the CPU bottleneck, so Fury X doing well at 4K with 64 ROPs against the 980 Ti's 96, at a lower clockspeed, is huge for AMD.

And it's not the GM204 cards that are struggling in DX12 benchmarks; it's the Fiji cards not scaling well, or else they'd be on par with or faster than overclocked 980 Ti cards.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
Three points:

1. The SW:BF beta isn't DX12. It's DX11. The full game (probably) will be DX12.

Yes, imagine the Fiji in DX-12 now that it is faster than Maxwell in DX-11.

2. BF3 and BF4 were NV-tilted games, especially at lower resolutions. Now DICE seems to have swung the other way. AMD cards have done systematically better than NV cards in that game; the Fury vs 980 Ti comparison is not relevant here, since it's a top-to-bottom pattern. If you think the VRAM has anything to do with performance in that chart, you're misguided.

No, I was responding to the 4GB HBM vs 6-8GB GDDR5X point. We can clearly see from SW Battlefront that the Fury X with only 4GB of HBM is 20% faster than the GTX 980 Ti with 6GB of GDDR5 at 3840x2160. So even at very high resolutions, at the current time 4GB of HBM looks to be fine. For a mid-range 14/16nm GPU next year aimed at the 1080p/1440p market, 4GB of HBM1 is more than enough.

3. In the two DX12 benchmarks we've seen, the 980 Ti has been doing as well as, and sometimes better than, the Fury. That was true in AotS and in Fable Legends. It's really the GM204 cards and below that have struggled most.

For Fable we haven't seen any reviews with the new 15.9.1 beta driver; I'm sure performance will change tremendously in favor of the Fiji cards.
 

Erenhardt

Diamond Member
Dec 1, 2012
3,251
105
101
For Fable we haven't seen any reviews with the new 15.9.1 beta driver; I'm sure performance will change tremendously in favor of the Fiji cards.

It should, contrary to what many want you to believe:

Fury X should be more than 17% faster than a throttling 290X...


It is worth noting that the 980 Ti is 50% faster than the 970 in games, and the same applies to the Fable benchmark.
Fury X should then be 50% faster than the 290X, which means it should score around 38 FPS, roughly 18% faster than the 980 Ti's score.
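That extrapolation can be sketched as follows. The 290X and 980 Ti Fable scores here are hypothetical placeholders chosen to be consistent with the figures quoted in the post, not official benchmark numbers:

```python
# Placeholder Fable Legends scores, back-calculated from the post's claims.
fps_290x = 25.3    # assumed 290X score (hypothetical)
fps_980ti = 32.2   # assumed 980 Ti score (hypothetical)
scaling = 1.5      # 980 Ti is ~50% faster than a 970 in games

# If Fury X scaled over the 290X the way the 980 Ti scales over the 970:
fps_furyx = fps_290x * scaling
print(round(fps_furyx, 1))                       # ~38 FPS
print(round((fps_furyx / fps_980ti - 1) * 100))  # ~18 (% over the 980 Ti)
```

The point of the sketch is only the arithmetic: a 50% uplift over the assumed 290X score lands near 38 FPS, about 18% above the assumed 980 Ti score.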
 

crisium

Platinum Member
Aug 19, 2001
2,643
615
136
Fury X should be more than 17% faster than a throttling 290X...

Absolutely. Fury X just doesn't have the lead you would expect there.

It is worth noting that the 980 Ti is 50% faster than the 970 in games, and the same applies to the Fable benchmark.
Fury X should then be 50% faster than the 290X, which means it should score around 38 FPS, roughly 18% faster than the 980 Ti's score.

Buuut I would not expect it to always be 50% faster. A Fury X has 45% more shaders and 60% more bandwidth, but GPUs virtually never scale 1:1 with shader count and bandwidth. More importantly, it has the same ROP count. A 980 Ti has 70% more of everything (shaders, ROPs, and bandwidth) compared to the 970, so the improvement will naturally be larger. AMD needs to improve Fiji's lead over Hawaii and its per-shader and per-bandwidth efficiency, since it has more of both than its Maxwell equivalents. But considering Fiji isn't as big a jump as Big Maxwell, you can't expect everything.
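The spec ratios in this argument can be checked against the public launch numbers. A rough sketch; for the 970's bandwidth it uses the full-speed 3.5GB segment (224-bit @ 7Gbps, roughly 196GB/s) rather than the combined figure:

```python
# Public launch specs (shader count, ROPs, memory bandwidth in GB/s).
specs = {
    "Fury X": {"shaders": 4096, "rops": 64, "bw": 512},
    "290X":   {"shaders": 2816, "rops": 64, "bw": 320},
    "980 Ti": {"shaders": 2816, "rops": 96, "bw": 336},
    "970":    {"shaders": 1664, "rops": 56, "bw": 196},  # full-speed segment
}

def advantage(a, b, key):
    """Fractional advantage of card a over card b on a given spec."""
    return specs[a][key] / specs[b][key] - 1

for a, b in (("Fury X", "290X"), ("980 Ti", "970")):
    print(a, "vs", b + ":",
          ", ".join(f"+{advantage(a, b, k):.0%} {k}"
                    for k in ("shaders", "rops", "bw")))
```

This reproduces the post's figures: Fury X has ~45% more shaders and 60% more bandwidth than the 290X with identical ROPs, while the 980 Ti holds a roughly 70% advantage over the 970 across all three.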
 

Erenhardt

Diamond Member
Dec 1, 2012
3,251
105
101
Absolutely. Fury X just doesn't have the lead you would expect there.



Buuut I would not expect it to always be 50% faster. A Fury X has 45% more shaders and 60% more bandwidth, but GPUs virtually never scale 1:1 with shader count and bandwidth. More importantly, it has the same ROP count. A 980 Ti has 70% more of everything (shaders, ROPs, and bandwidth) compared to the 970, so the improvement will naturally be larger. AMD needs to improve Fiji's lead over Hawaii and its per-shader and per-bandwidth efficiency, since it has more of both than its Maxwell equivalents. But considering Fiji isn't as big a jump as Big Maxwell, you can't expect everything.

Well... I gave you a graph showing average performance across multiple games. Hardware specs aside, the FPS scaling is there:
http://www.computerbase.de/2015-08/...karten-von-r7-360-bis-r9-390x-im-vergleich/3/

Anyway, it is not the topic of this discussion.
 

Good_fella

Member
Feb 12, 2015
113
0
0
INB4 AMD counters with HBM1 in the lowest tiers and HBM2 in the higher tiers.

GDDR5X is not good enough against HBM, and if nVIDIA is going down that path... it won't end well for them.

Also, there is the Intel menace behind them; moving to slower tech is not a good idea after all.

"NVIDIA rumored to use both HBM2 and GDDR5X, the successor to GDDR5, on their next-gen video cards"

Couldn't agree more. :awe:

Nvidia have a lot to learn from AMD when it comes to mobile GPUs.

It's hard to find any gaming/workstation laptop with an AMD GPU. As for mobile HBM1, that's a plot for Mission: Impossible 6.