Micron offers 2X GDDR5 Speed in 2016


tviceman

Diamond Member
Mar 25, 2008
Heck, with GDDR5X speeds initially reaching 12 Gbps, Nvidia could ship a Gen 1 GP100 configured with a 384-bit bus and 576 GB/s of bandwidth just to get it out the door ASAP. I've said it before and I'll say it again: I don't care how big the die is, how wide the memory bus is, or what kind of RAM it uses. All I care about are the performance metrics that matter to me.
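The 576 GB/s figure follows directly from the standard peak-bandwidth formula; a minimal sketch (function name is mine, figures are the ones from the post):

```python
# Peak memory bandwidth = bus width (bits) x per-pin data rate (Gbps) / 8 bits per byte.
def peak_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Return peak memory bandwidth in GB/s."""
    return bus_width_bits * data_rate_gbps / 8

# A 384-bit bus with 12 Gbps GDDR5X, as in the post:
print(peak_bandwidth_gb_s(384, 12))  # 576.0 GB/s
```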

I doubt I'll be upgrading to Pascal anyway. I'll probably hold out till 2017 / Volta / GCN 3.0. I want 2.25x GTX 980 speed in a 200-watt-or-less power envelope for $500 or less. With multiple kids running around and my wife working just as much as I do, I've been playing so few games in the past few months that I can live happily off my current Steam backlog for two years, negating the need to upgrade.
 

RussianSensation

Elite Member
Sep 5, 2003
I've said it before and I'll say it again: I don't care how big the die is, how wide the memory bus is, or what kind of RAM it uses. All I care about are the performance metrics that matter to me.

But you do know that these metrics are often intertwined with die size and memory bandwidth, right? Those determine the maximum number of shaders/TMUs/ROPs, the size of the memory controller, and how many valuable transistors are left for geometry units, L2 cache, etc. While it's true that performance doesn't necessarily scale linearly with larger dies and higher memory bandwidth, generally speaking (based on 780 Ti vs. 680, or 980 Ti vs. 980) we know that larger-die chips smoke smaller-die chips of the same architecture, even though they may have worse perf/mm2, perf/watt, and perf/transistor. At the end of the day, all three of those metrics are less important than price/performance and absolute performance.

Some examples @ 1440P:

- The 780 smashes the 290 in perf/watt, but as a product the 780 was a failure because it cost $500-650 when the 290 cost $400. Today the 780 even loses to the 290.
- The 960 beats the 780 Ti in perf/watt, but the 780 Ti is a good card and the 960 is crap.
- The 980 beats the 970 and 980 Ti in perf/watt, but it was a worse buy than either of them for most of 2015.
- Fury destroys the R9 290X/390 in perf/watt, but as a product the nearly $500 Fury is a failure compared to a $280 390.

Or this one: the 750 Ti is the 2nd-best perf/watt card, but is it actually a good gaming card? Nope.
https://www.techpowerup.com/reviews/ASUS/R9_380X_Strix/24.html
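The pattern in those examples can be made concrete. A sketch with illustrative numbers only (the prices are from the post; the performance index and wattages are assumed ballpark figures, not measurements):

```python
# Rough launch-era numbers for the 780 vs. 290 example above.
# perf is a relative performance index; watts and price are assumed/quoted values.
cards = {
    "GTX 780": {"perf": 100, "watts": 230, "price": 550},
    "R9 290":  {"perf": 100, "watts": 275, "price": 400},
}

for name, c in cards.items():
    perf_per_watt = c["perf"] / c["watts"]
    perf_per_dollar = c["perf"] / c["price"]
    print(f"{name}: {perf_per_watt:.3f} perf/W, {perf_per_dollar:.3f} perf/$")
# The 780 wins perf/W while the 290 wins perf/$: the two metrics rank the cards differently,
# which is exactly why perf/watt alone doesn't identify the better product.
```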

IMO, price/performance and absolute performance are still the most important metrics for a good desktop gaming card, even though perf/watt is winning the marketing battle. The reason the 980 Ti dominated the Fury X this generation is exactly that; the same goes for the GTX 280/285/480/580/780 Ti.

I doubt I'll be upgrading to Pascal anyway. I'll probably hold out till 2017 / Volta / GCN 3.0. I want 2.25x GTX 980 speed in a 200-watt-or-less power envelope for $500 or less.

I think consumer Volta is launching only in 2018. It makes sense for Pascal to be on a two-year cadence, much like NV's previous architectures. I know there has been some talk about NV moving Volta up to 2017, but I don't believe it. Plus, NV's latest roadmap has Volta in 2018.

Since you want 2.25x the performance of a 980 in 200W or less, I don't see any next-gen cards hitting that until 2018 and beyond. That's beyond the capabilities of a full node shrink plus a new architecture; not even 560 Ti -> 680 was that good, and you want it for $500 or less on top of that. I think 2018 is more realistic, but as you said, if you are busy and have a backlog of games, there's no need to upgrade to play older/less demanding 2012-2015 games if your card satisfies you.
 

Mondozei

Golden Member
Jul 7, 2013
I've said it before and I'll say it again, I don't care how big the die is, how wide the memory bus, or what kind of ram it uses. All I care about are the various performance metrics that matter to me.

Literally you, just a few days ago:

Fury X is only 13% faster than a GTX 980 at 1080p and 22% faster at 1440p. It's taking Fiji 50% more die space and 71% more transistors to only squeak out 13% and 22% vs. a GTX 980. Surely (not 100% sure, but confident) Pascal will feature a better perf/transistor than Maxwell, making Fiji all the more out of place even with a die shrink.

So which version are you subscribing to now? You seem to be changing your "principles" based on which GPUs win.

And by the way, we all know AMD does worse at lower resolutions for architectural reasons, so mixing in 1080p is misleading. And who would buy a Fury X to play at 1080p anyway? So even taking 1080p into account is yet another indication of someone with an agenda.

BTW, the Fury X does 23.1% better than the 980 at 1440p.

[image: perfrel_2560_1440.png (relative performance chart at 2560x1440)]


Oh, and the 980 Ti only does 24.5% better than the 980 at 1440p!

Yet we are led to believe that those 1.4 points (24.5% - 23.1%) are a huge difference? (Overclocking is a wash here, because the 980 and the 980 Ti both overclock about the same, so the relative difference holds.)
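The arithmetic behind that point, spelled out (variable names are mine; both percentages are relative to a GTX 980 baseline, per the chart cited above):

```python
# Relative performance vs. a GTX 980 = 1.0 baseline at 1440p.
fury_x = 1.231   # Fury X: 23.1% faster than the 980
gtx_980ti = 1.245  # 980 Ti: 24.5% faster than the 980

gap_points = (gtx_980ti - fury_x) * 100        # percentage-point gap vs. the 980
gap_relative = (gtx_980ti / fury_x - 1) * 100  # 980 Ti's actual lead over the Fury X

print(f"{gap_points:.1f} points, {gap_relative:.1f}% relative")  # 1.4 points, 1.1% relative
```

Note that the headline "1.4%" gap shrinks to about 1.1% when measured card-against-card rather than against the shared baseline.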

I really dislike when people purposely take a dishonest/misleading statistic to narrowly fit an agenda.
 

Mondozei

Golden Member
Jul 7, 2013
Anyone who says the extra bandwidth is an immediate game-changer should just look at how the Fury X didn't stomp all over the 980 Ti; they're fairly equal, with a slight edge to the 980 Ti. HBM makes a lot of sense for lower power, but can it pay that back with its higher cost?

Let's set pure performance aside for a moment and look at where memory bandwidth is the most pressing need. Continuing with GDDR5 created no bottleneck for NV.

Piroko comes closest to the real reason for GDDR5X's introduction.

Piroko said:
Higher bandwidth per chip = fewer chips needed = cheaper to produce.

Also, since NV already has more efficient memory-bandwidth compression technology than AMD, continuing with GDDR5X over HBM for most of their GPU lineup next year, with the exception of the very high-end/flagship GPU, makes even more sense.

I wouldn't be surprised to see a re-run of 2015 next year if AMD decides to embrace HBM2 beyond just the top-line flagship and push it into the mid-high segment as well: negligible performance gains over GDDR5X, with no real bottlenecks solved, because there weren't any to begin with in what most people play (and at the resolutions they play at).

Yes, of course, over the long run, HBM will inevitably be the standard. But people overestimate the current needs today.
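Piroko's "fewer chips" point can be sketched with simple arithmetic. The per-chip figures here are assumptions for illustration (a 32-bit interface per chip, 8 Gbps GDDR5 vs. 12 Gbps GDDR5X), not board specifications:

```python
import math

def chips_needed(target_gb_s: float, pin_rate_gbps: float, chip_width_bits: int = 32) -> int:
    """Chips required to reach a target bandwidth, given per-chip interface width and speed."""
    per_chip_gb_s = chip_width_bits * pin_rate_gbps / 8  # bandwidth one chip contributes
    return math.ceil(target_gb_s / per_chip_gb_s)

target = 384  # GB/s, an arbitrary mid-range target
print(chips_needed(target, 8))   # GDDR5 at 8 Gbps: 12 chips
print(chips_needed(target, 12))  # GDDR5X at 12 Gbps: 8 chips, a simpler/cheaper board
```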
 

3DVagabond

Lifer
Aug 10, 2009
HBM makes sense for lower power, but anyone who says the extra bandwidth is an immediate game-changer should just look at how the Fury X didn't stomp all over the 980 Ti; they're fairly equal, with a slight edge to the 980 Ti.

Let's set pure performance aside for a moment and look at where memory bandwidth is the most pressing need. Continuing with GDDR5 created no bottleneck for NV.

Piroko comes closest to the real reason for GDDR5X's introduction.



Also, since NV already has more efficient memory-bandwidth compression technology than AMD, continuing with GDDR5X over HBM for most of their GPU lineup, except the very, very high-end, makes even more sense.

I wouldn't be surprised if we saw a re-run of 2015 next year, if AMD decided to embrace HBM2 for more than just the top-line flagship and NV showed that it didn't make much difference against GDDR5X in the mid-high space.

So, should we wait until we are VRAM-bound before we introduce HBM? Remember, nVidia hung on to GDDR3 until Fermi, and their first try at GDDR5, a node shrink, and a new uarch all at once didn't go so well; Fermi's memory controller is famously bad. Meanwhile, AMD switched to GDDR5 a generation earlier on their flagship part and then transitioned smoothly into it for their whole lineup. I'm watching to see if nVidia fares any better transitioning to HBM.

Since they are trying an alternative to HBM next gen (GDDR5X), I'd say nVidia knows we need more bandwidth too. AMD is just ahead of the curve and driving the market.
 

tviceman

Diamond Member
Mar 25, 2008
Literally you just a few days ago.



So which version are you subscribing to now? You seem to be changing your "principles" based on which GPUs win.

And by the way, we all know AMD does worse at lower resolutions for architectural reasons, so mixing in 1080p is misleading. And who would buy a Fury X to play at 1080p anyway? So even taking 1080p into account is yet another indication of someone with an agenda.

The context of that conversation was about shrinking Fiji and using it as a mid-range 1080p card. Everything I said in that regard is 100% valid. And if you actually read my post, you'll notice that I talked about 1440p as well. Context is everything, my good friend.
 

tviceman

Diamond Member
Mar 25, 2008
Since they are trying an alternative to HBM next gen (GDDR5X), I'd say nVidia knows we need more bandwidth too. AMD is just ahead of the curve and driving the market.

AMD could not possibly have stuck with GDDR5 for Fiji; it would have been a complete disaster otherwise. They're "ahead of the curve" out of absolute necessity. Hawaii needs 65% more memory bandwidth to stay competitive with the GTX 980 while also drawing anywhere from 75-100% more power (depending on the card and review). The 380X beats the 960 by 30%, but takes 70% more power and 60% more bandwidth to do it (Tonga is also 60% larger in die size). Fiji needs 33% more bandwidth and is some 20% less efficient than the Titan X (except on price, of course; the comparison is only made to illustrate full chip vs. full chip). The Fury X, even with a water cooler and power-saving HBM, is still less efficient than any GM200 iteration and doesn't have even half the headroom of any GM200-based card. If Fiji had drawn 20 more watts by switching to GDDR5, with a 10-15% performance drop on top of that, it would have been an unstoppable train wreck.
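Those percentage claims can be turned into relative perf/watt with one division. A sketch using the 380X vs. 960 figures quoted above (the helper name is mine):

```python
def relative_perf_per_watt(perf_ratio: float, power_ratio: float) -> float:
    """If card A is perf_ratio times faster while drawing power_ratio times the
    watts of card B, this is A's perf/watt relative to B (1.0 = equal)."""
    return perf_ratio / power_ratio

# 380X vs. 960: 30% faster on ~70% more power, per the post.
print(round(relative_perf_per_watt(1.30, 1.70), 2))  # 0.76 -> roughly 24% worse perf/W
```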
 

sontin

Diamond Member
Sep 12, 2011
Anyone who says the extra bandwidth is an immediate game-changer should just look at how the Fury X didn't stomp all over the 980 Ti; they're fairly equal, with a slight edge to the 980 Ti. HBM makes a lot of sense for lower power, but can it pay that back with its higher cost?

Fiji can't use all of its theoretical bandwidth:
[image: b3d-bandwidth.gif (Beyond3D bandwidth test results)]

http://techreport.com/review/28513/amd-radeon-r9-fury-x-graphics-card-reviewed/4

Only about 2/3 is usable, and nVidia's color compression reduces the gap between the two cards.
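The general idea (effective bandwidth depends on achievable utilization and compression, not just the spec sheet) can be sketched as below. The 2/3 utilization comes from the TechReport result above; the GDDR5 utilization and compression factors are my assumptions for illustration, not measured values:

```python
def effective_bandwidth(peak_gb_s: float, utilization: float, compression: float = 1.0) -> float:
    """Usable bandwidth = theoretical peak x achieved utilization x compression gain."""
    return peak_gb_s * utilization * compression

# Fury X: 512 GB/s HBM peak, ~2/3 achievable per the cited test.
fury_x = effective_bandwidth(512, 2 / 3)
# 980 Ti: 336 GB/s GDDR5 peak; utilization and ~1.3x color compression are assumed.
gtx_980ti = effective_bandwidth(336, 0.9, 1.3)

print(round(fury_x), round(gtx_980ti))  # the spec-sheet gap largely disappears
```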
 

el etro

Golden Member
Jul 21, 2013
1,584
14
81
HBM tech is not viable in the midrange because stacking is still expensive. Prices will get cheaper as the tech matures, and then GDDR will be done.