Micron offers 2X GDDR5 Speed in 2016

dark zero

Platinum Member
Jun 2, 2015
2,655
140
106
Micron offers 2X GDDR5 Speed in 2016



Micron has promised to launch a new kind of memory that is twice the speed of mainstream GDDR5. It will be released during 2016 and will be the company's answer to HBM.
With speeds of 10 to 14 Gb/s, the memory will outpace the existing 7.0 Gb/s of Micron's 4Gb GDDR5 memory chips. Even the new, larger GDDR5 chips with 8Gb density will top out at an 8.0 Gb/s data rate. The information has been confirmed by Kristopher Kido, Director of Micron's global Graphics Memory Business.
With speeds from 10 to 14 Gb/s, the next-generation memory will be much faster and provide much-needed bandwidth. Micron could easily call this memory GDDR6, as we have heard people from the graphics industry already using the term.
The new memory will continue to use traditional component form factor, similar to GDDR5, reducing the burden and complexity of design and manufacturing.
Source: http://www.fudzilla.com/news/graphics/39450-micron-to-offer-2x-gddr5-speed-in-2016

---------------
Knowing that AMD will go full HBM1 at least, maybe GDDR6 will appear on nVIDIA mid tier cards.

So...
- Low Tier: GDDR5 (X maybe?) - AMD and nVIDIA

- Mid Tier: HBM1 - AMD and GDDR6 - NVIDIA
- High Tier: HBM2 AMD and nVIDIA
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Where did the GDDR6 nonsense come from?

And wasn't there a thread or two on GDDR5X already? GDDR5X spans 10 to 16 Gb/s.

[Attached slides: micron-gddr5x-leak-1.jpg, micron-gddr5x-leak-2.jpg]
 
Last edited:

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
10-12 Gbps will be plenty for mid-range dies, at least on Nvidia's architectures. 12 Gbps on a 256-bit bus would give a chip 384 GB/s of bandwidth, about 70% more than the GTX 980.
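A quick back-of-the-envelope check of that math, assuming the usual data rate x bus width formula (the GTX 980 baseline is its stock 7 Gbps GDDR5 on a 256-bit bus):

```python
# Memory bandwidth = per-pin data rate (Gb/s) * bus width (bits) / 8 bits per byte
def bandwidth_gb_per_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    return data_rate_gbps * bus_width_bits / 8

gtx_980  = bandwidth_gb_per_s(7, 256)   # 224.0 GB/s (stock GDDR5)
gddr5x12 = bandwidth_gb_per_s(12, 256)  # 384.0 GB/s (12 Gbps on the same bus)

print(f"GTX 980: {gtx_980:.0f} GB/s, 12 Gbps @ 256-bit: {gddr5x12:.0f} GB/s")
print(f"Uplift: {gddr5x12 / gtx_980 - 1:.0%}")  # ~71%, roughly the 70% quoted above
```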

Knowing that AMD will go full HBM1 at least, maybe GDDR6 will appear on nVIDIA mid tier cards.

So...
- Low Tier: GDDR5 (X maybe?) - AMD and nVIDIA

- Mid Tier: HBM1 - AMD and GDDR6 - NVIDIA
- High Tier: HBM2 AMD and nVIDIA

Neither AMD nor Nvidia will use HBM on anything but their highest-end chips on first-gen FinFETs. The extra cost on top of the already expensive node jump will be too prohibitive. AMD cannot afford any further margin pressure, seeing as they are currently selling bigger chips with more expensive PCBs and sometimes 2x the RAM for LESS than the competing products.
 

Mondozei

Golden Member
Jul 7, 2013
1,043
41
86
Fudzilla is confused. GDDR6 doesn't exist apart from GDDR5X; it may be a rebrand, but it's essentially the same thing.

That being said, it remains highly debatable whether mid-range GPUs in 2016 will even need a 2X bandwidth increase. GDDR5 would cover the performance needs of a midrange GPU with 980 Ti-equivalent performance just fine. But I guess there's a need to have something new for novelty's sake, which is never a good idea.
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
This is where things start going to crap. "Adds an ultra high speed operating mode."
Sounds like a hack or a trick. You gain operating speed, but at what expense?
It seems to me that when they cannot truly jump a generation, or are unable to advance the technology any further, they slap something onto the old to make it seem newer when it really isn't.
Like the 3G, 4G, 4G LTE BS.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Neither AMD nor Nvidia will use HBM on anything but their highest-end chips on first-gen FinFETs. The extra cost on top of the already expensive node jump will be too prohibitive. AMD cannot afford any further margin pressure, seeing as they are currently selling bigger chips with more expensive PCBs and sometimes 2x the RAM for LESS than the competing products.

I wouldn't be so sure. AMD Fury already went on sale for $470-480 and the Nano is already hitting $510. While not guaranteed, it's in the realm of possibility that a shrunken Fury/Nano level card could still use HBM1 and cost $379-399 next gen. AMD already did the HBM controller design and the Fiji chip design. I think for AMD it makes a lot more sense to just keep using HBM1, because the alternative is redesigning the Fiji chip with a new GDDR5X memory controller (a huge unnecessary expense) or dropping Fiji entirely from the next gen GPU line-up (which sounds way too expensive for AMD to just do). AMD is not the old ATI - they don't have the funds to do a full top-to-bottom line-up anymore.

7950/7970/7970Ghz (High-end) -> next gen mid-range R9 280/280X
R9 290/290X (High-end) -> next gen mid-range R9 390/390X

It stands to reason that Fury/Nano/Fury X -> next gen refreshed mid-range R9 490/490X, possibly refreshed on 16nm.

I could be wrong, of course, but AMD has been using this strategy since 2012.

Don't forget that AMD already made the argument that 4GB of HBM1 isn't a bottleneck on a single Fury X for 4K and they are right. Even Fury X CF isn't bottlenecked in 99% of games @ 4K. What is stopping them from selling a 4GB mid-range HBM1 card targeting 1080P/1440P in 2016?

For NV, however, it's totally different, since they have efficient memory compression that wouldn't require HBM1/2 for the 970/980 successors. NV also has a better track record in recent memory of hitting higher GDDR5 overclocks/speeds with their cards than AMD could with the HD7970/R9 290X/390X.

This is where things start going to crap. "Adds an ultra high speed operating mode."
Sounds like a hack or a trick. You gain operating speed, but at what expense?
It seems to me that when they cannot truly jump a generation, or are unable to advance the technology any further, they slap something onto the old to make it seem newer when it really isn't.

Just in case you missed the article:

"The new memory will continue to use traditional component form factor, similar to GDDR5, reducing the burden and complexity of design and manufacturing."


I don't understand why a lot of people on this forum have a tendency to always crap on/downplay next generation memory tech. Whether it's DDR2 -> DDR3, DDR3 -> DDR4, GDDR3 -> GDDR5, or GDDR5 -> HBM, it seems we can always count on people who are ready to crap on new memory technology without giving it a chance.

I recall one certain poster who no longer posts on here who kept trash-talking DDR4 from day one, and now we have 16GB DDR4 3000 kits selling for $90, DDR4 3200 kits selling for $105, and DDR4 3600 kits selling for $170 (about half the price of launch DDR4 2800 kits).

There are plenty of logical reasons why GDDR5X/6 could make sense for NV's next gen mid-range chips. We still don't know if it will happen or not, but defending outdated GDDR5 gets the industry nowhere in terms of progress.
 
Last edited:

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
I wouldn't be so sure. AMD Fury already went on sale for $470-480 and the Nano is already hitting $510.

Do you not see AMD's margins every time they report their financials? They're not making money, and they are not viable as a company if they continue on their current financial trajectory. It's better to move product at slim margins (price cuts) than to sit forever on product that won't sell, but their margins are currently not enough to sustain their overhead. Even Nvidia, with 55% margins, is only making ~$150m per quarter on $4+ billion in annual revenue. Is Fiji selling in any pro cards? I don't think so, but I'm not sure. And while I'm also not sure of the engineering costs of one Fury X card vs. one GTX 980 Ti card, I'd be willing to bet that Nvidia's costs are lower, even if the comparison leaves out the cooling solutions.

While not guaranteed, it's in the realm of possibility that a shrunken Fury/Nano level card could still use HBM1 and cost $379-399 next gen. AMD already did the HBM controller design and the Fiji chip design. I think for AMD it makes a lot more sense to just keep using HBM1, because the alternative is redesigning the Fiji chip with a new GDDR5X memory controller (a huge unnecessary expense) or dropping Fiji entirely from the next gen GPU line-up (which sounds way too expensive for AMD to just do). AMD is not the old ATI - they don't have the funds to do a full top-to-bottom line-up anymore.

I've wondered left and right why we haven't seen any GPU shrinks since Nvidia shrank GT200 to GT200b. I think the answer is fairly obvious, though: shrinking existing products to new nodes does not reap the full benefit in transistor savings and perf/W savings. I think AMD can build a next-gen GCN product on FinFETs with better performance metrics in perf/W and perf/transistor than a shrunken Fiji. On top of that, the current GCN architecture is tapped out: there just isn't any noticeable headroom left, and a node change isn't going to solve that problem.

Don't forget that AMD already made the argument that 4GB of HBM1 isn't a bottleneck on a single Fury X for 4K and they are right. Even Fury X CF isn't bottlenecked in 99% of games @ 4K. What is stopping them from selling a 4GB mid-range HBM1 card targeting 1080P/1440P in 2016?

Just further shows me how dumb AMD was in tacking on the extra cost of 8GB of VRAM on their 390/390X. They can't make good business/financial decisions.
 
Last edited:

Actaeon

Diamond Member
Dec 28, 2000
8,657
20
76
Is memory bandwidth a problem for current GPUs? I won't complain about getting more bandwidth, but it seems like the GPU is the primary limitation. I recall a review of the Titan X or 980 Ti where they overclocked the memory and GPU separately to measure gains, and the gains from memory were quite low.

Maybe on really, really fast GPUs faster memory will help, but with the 350-400 GB/s bandwidths we are getting today it doesn't seem like a problem.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
That being said, it remains highly debatable if mid-range GPUs in 2016 will even need a 2X bandwidth increase. GDDR5 would cover the performance needs of a midrange GPU with the 980 Ti-equivalent of performance just fine.

I don't think that will remain true. GTX 980s show gains with memory OC, and Nvidia has demonstrated for three generations in a row that their mid-range die comes with a 256-bit bus. I don't see Nvidia complicating their mid-range die with a larger bus, and neither do I see 224 GB/s being enough for >= Titan X performance, so the only thing left besides HBM is GDDR5X/GDDR6. AMD is an even worse offender when it comes to bandwidth requirements: their chips need more memory bandwidth than a 980 Ti just to keep up with the GTX 980.
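Turning that around, a quick sketch of the per-pin rate needed to match a Titan X-class 336 GB/s on a 256-bit bus (336 GB/s is the Titan X's stock bandwidth, 7 Gbps on 384 bits):

```python
# Per-pin data rate needed to hit a bandwidth target on a given bus width
def required_gbps(target_gb_per_s: float, bus_width_bits: int) -> float:
    return target_gb_per_s * 8 / bus_width_bits

# Titan X ships 336 GB/s (7 Gbps on a 384-bit bus); matching that on 256 bits:
print(required_gbps(336, 256))  # 10.5 Gb/s per pin -- past plain GDDR5, inside the 10-14 Gb/s range
```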
 
Last edited:

Tuna-Fish

Golden Member
Mar 4, 2011
1,480
2,000
136
Where did the GDDR6 nonsense come from?

GDDR5X is an internal name for the product. If it ends up being standardized by JEDEC, it will likely be called GDDR6. Whether it will is another question -- if Micron remains the only manufacturer, it probably won't.
 

Vesku

Diamond Member
Aug 25, 2005
3,743
28
86
Have Micron said what the power characteristics of such highly clocked GDDR will be?

Edit: I see the plan is to reduce the voltage to 1.35 V from 1.5 V, so there should be a modest reduction in power unless there are other power changes they haven't spelled out.
 
Last edited:

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
Have Micron said what the power characteristics of such highly clocked GDDR will be?

Edit: I see the plan is to reduce the voltage to 1.35 V from 1.5 V, so there should be a modest reduction in power unless there are other power changes they haven't spelled out.

There is another slide that says it will have the same power envelope as GDDR5.
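For what it's worth, that squares with a first-order estimate. A rough sketch, assuming dynamic power scales roughly with V squared times data rate and ignoring termination, background power, and any signaling changes; the 8 and 10 Gb/s operating points are just illustrative:

```python
# First-order scaling only: dynamic power ~ C * V^2 * f; termination and
# background power are ignored, so this is a rough sanity check, not a model.
v_gddr5, v_gddr5x = 1.5, 1.35        # volts, per the leaked slides
rate_gddr5, rate_gddr5x = 8.0, 10.0  # Gb/s per pin, illustrative operating points

relative_power = (v_gddr5x / v_gddr5) ** 2 * (rate_gddr5x / rate_gddr5)
print(f"~{relative_power:.2f}x the power")  # ~1.01x: about the same envelope despite the higher speed
```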
 

Piroko

Senior member
Jan 10, 2013
905
79
91
This is where things start going to crap. "Adds an ultra high speed operating mode."
Sounds like a hack or a trick.
It is, sort of, to my understanding. It barely changes anything. 16 Gbps transfer speeds will likely have worse tradeoffs than 8 Gbps (2 GHz link speed!) on GDDR5.

Is memory bandwidth a problem for current GPUs?
Higher bandwidth per chip = fewer chips needed = cheaper to produce.
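A small sketch of that trade-off, assuming the usual 32-bit interface per GDDR5-class chip and a fixed bandwidth target (the 384 GB/s target is just an example):

```python
import math

# Chips needed to reach a bandwidth target when each chip drives a 32-bit interface
def chips_needed(target_gb_per_s: float, per_pin_gbps: float, pins_per_chip: int = 32) -> int:
    total_pins = target_gb_per_s * 8 / per_pin_gbps
    return math.ceil(total_pins / pins_per_chip)

for rate in (8, 12, 14):
    print(f"{rate} Gb/s per pin -> {chips_needed(384, rate)} chips for 384 GB/s")
# 8 Gb/s -> 12 chips, 12 Gb/s -> 8 chips, 14 Gb/s -> 7 chips
```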

GDDR5X is an internal name for the product. If it ends up being standardized by JEDEC, it will likely be called GDDR6. Whether it will is another question -- if Micron remains the only manufacturer, it probably won't.
GDDR5X is a JEDEC standard; GDDR6 is an invention of Fudzilla.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Do you not see AMD's margins every time they report their financials? They're not making money, and they are not viable as a company if they continue on their current financial trajectory. It's better to move product at slim margins (price cuts) than to sit forever on product that won't sell, but their margins are currently not enough to sustain their overhead. Even Nvidia, with 55% margins, is only making ~$150m per quarter on $4+ billion in annual revenue. Is Fiji selling in any pro cards? I don't think so, but I'm not sure. And while I'm also not sure of the engineering costs of one Fury X card vs. one GTX 980 Ti card, I'd be willing to bet that Nvidia's costs are lower, even if the comparison leaves out the cooling solutions.

Yeah, but take into account the cost of designing an all-new GPU. Didn't NV throw around numbers like it cost them $3-4B to design Maxwell, and that it took 3-4 years to do? I can't find the exact source, but I recall reading that a new top-to-bottom GPU architecture costs billions.

That means the cost of designing a top-to-bottom 16nm stack for AMD and scrapping all the Fiji designs could be more prohibitive than just shrinking Fiji and adding HDMI 2.0/DP1.3. I am just speculating, going by their past.

I've wondered left and right why we haven't seen any GPU shrinks since Nvidia shrank GT200 to GT200b. I think the answer is fairly obvious though; shrinking existing products to new nodes do not reap the full benefit in transistor savings and perf/w savings.

How did you arrive at that idea for NV? Since NV operates completely differently from AMD - they have new architectures roughly every 2 years - what exactly would they have been shrinking? Kepler 600/700 is 1st and 2nd gen 28nm, Maxwell is 3rd gen 28nm. I don't understand how you arrived at the conclusion that it wasn't financially viable to do die shrinks anymore, because there has been no option for die shrinks since 2012. Since NV had Kepler in 2012, it made no sense to shrink Fermi either. So the fact that NV didn't shrink anything since 2010 isn't a meaningful data point; it's expected, based on how NV operates.

I think AMD can build a next-gen GCN product on finfets with better performance metrics in perf/w and perf/transistor than shrinking Fiji. On top of that, current GCN architecture is tapped out with headroom. There just isn't any noticeable headroom left and a node change isn't going to solve that problem.

They probably could, but do they have the $ and engineering resources to do that?

Look at this on HD5000 series:

"4 chips in 6 months.

This is the schedule AMD’s GPU engineering teams committed themselves to for the launch of the Evergreen family. The entire family from top to bottom would be launched in a 6 month period."

Since Raja already announced just 2 new chips for AI, it doesn't seem like AMD is going to be doing a clean slate for the entire stack. I predict they will re-use something from today's gen, and the most logical candidate to me is Fiji, since its performance level is already equal to next gen's mid-range. The shrink gives them the reduction in power usage, and a slight redesign to add a more modern UVD and HDMI 2.0/DP1.3 is all they need.

Maybe I am wrong so we'll see.

Just further shows me how dumb AMD was in tacking on the extra cost of 8GB of VRAM on their 390/390X. They can't make good business/financial decisions.

Yup. Worse yet, they could have launched the R9 380/380X/390/390X as early as January 2015 and sold them alongside the R9 200 series.

Now, unlike NV buyers who couldn't care less about the 3.5GB of VRAM on the 970, I bet any money that if AMD pulls the stunt of putting Fiji 4GB HBM1 cards against 6-8GB 970/980 successors with GDDR5X, they are going to get run over from a marketing perspective. Any advantage on the AMD side with respect to VRAM is always downplayed, while any advantage on the NV side is always doubled down on as the most important thing for that gen.

That's why AMD could be screwed if they go with shrunken Fiji HBM1 4GB chips. But if they go GDDR5X, make all-new GPUs, and throw away all the Fiji designs, that means they wasted a lot of money on Fiji, since it was a failed design from a sales point of view and the costs of its design wouldn't even be recouped in a future gen. What do you think are the chances of that happening? AMD reused the Pitcairn, Tahiti and Hawaii designs for more than one generation, but now they are going to throw away the entire Fiji HBM1 design? I am not buying it.

I think AMD may just have to shrink Fiji HBM1 and use higher GPU clocks to get some price/performance and performance advantage over what is likely to be a 6-8GB GDDR5X competitor, or they might need to figure out some way to add 8GB of VRAM with 2nd gen HBM1 tech.
 

SolMiester

Diamond Member
Dec 19, 2004
5,330
17
76
Now, unlike NV buyers who couldn't care less about the 3.5GB of VRAM on the 970, I bet any money that if AMD pulls the stunt of putting Fiji 4GB HBM1 cards against 6-8GB 970/980 successors with GDDR5X, they are going to get run over from a marketing perspective. Any advantage on the AMD side with respect to VRAM is always downplayed, while any advantage on the NV side is always doubled down on as the most important thing for that gen.

That's probably because NV isn't as sensitive to memory bandwidth and amount as AMD historically has been.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
How did you arrive at that idea for NV? Since NV operates completely differently from AMD - they have new architectures roughly every 2 years - what exactly would they have been shrinking? Kepler 600/700 is 1st and 2nd gen 28nm, Maxwell is 3rd gen 28nm. I don't understand how you arrived at the conclusion that it wasn't financially viable to do die shrinks anymore, because there has been no option for die shrinks since 2012. Since NV had Kepler in 2012, it made no sense to shrink Fermi either. So the fact that NV didn't shrink anything since 2010 isn't a meaningful data point; it's expected, based on how NV operates.

I arrived at my "idea" because neither camp has shrunk an existing GPU onto a more advanced node since Nvidia did with GT200 in January 2009. That was nearly 7 years ago, an eternity in the ultra fast moving GPU world. Has AMD ever shrunk an existing GPU to a more advanced node? History has shown both companies have lately been unwilling to shrink any existing GPUs to live on alongside new GPUs. I don't see that changing, and I particularly doubt Fiji is a viable candidate. Fury X is only 13% faster than a GTX 980 at 1080p and 22% faster at 1440p. It's taking Fiji 50% more die space and 71% more transistors to only squeak out 13% and 22% over a GTX 980. Surely (not 100% sure, but confident) Pascal will feature better perf/transistor than Maxwell, making Fiji all the more out of place even with a die shrink.
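To put numbers on that efficiency argument, a quick sketch using only the ratios quoted above (relative performance, die area and transistor count are the post's figures, not official ones):

```python
# Relative efficiency of Fiji (Fury X) vs GM204 (GTX 980), using the ratios quoted above
die_area = 1.50     # Fiji die size relative to GM204
transistors = 1.71  # Fiji transistor count relative to GM204

for label, perf in (("1080p", 1.13), ("1440p", 1.22)):
    print(f"{label}: perf/area {perf / die_area:.2f}x, perf/transistor {perf / transistors:.2f}x vs GM204")
# 1080p: ~0.75x and ~0.66x; 1440p: ~0.81x and ~0.71x
```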
 
Last edited:

MrTeal

Diamond Member
Dec 7, 2003
3,611
1,813
136
I arrived at my "idea" because neither camp has shrunk an existing GPU onto a more advanced node since Nvidia did with GT200 in January 2009. That was nearly 7 years ago, an eternity in the ultra fast moving GPU world. Has AMD ever shrunk an existing GPU to a more advanced node? History has shown both companies have lately been unwilling to shrink any existing GPUs to live on alongside new GPUs. I don't see that changing, and I particularly doubt Fiji is a viable candidate. Fury X is only 13% faster than a GTX 980 at 1080p and 22% faster at 1440p. It's taking Fiji 50% more die space and 71% more transistors to only squeak out 13% and 22% over a GTX 980. Surely (not 100% sure, but confident) Pascal will feature better perf/transistor than Maxwell, making Fiji all the more out of place even with a die shrink.

I believe 65nm to 55nm was a pretty special case, simply due to the process. My recollection (which could certainly be wrong) is that an existing design could pretty much be directly ported to 55nm with almost no re-engineering. I'm not even positive you needed a new mask set. Simple shrinks like that don't seem to be possible anymore, so if you need to completely redo the physical design for the new node you might as well rejig the design to incorporate your newest features.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
I believe 65nm to 55nm was a pretty special case, simply due to the process. My recollection (which could certainly be wrong) is that an existing design could pretty much be directly ported to 55nm with almost no re-engineering. I'm not even positive you needed a new mask set. Simple shrinks like that don't seem to be possible anymore, so if you need to completely redo the physical design for the new node you might as well rejig the design to incorporate your newest features.

55nm was a half-node process of 65nm. It was almost 90% the same as 65nm, with the same libraries, and you only had to port the design from 65nm to 55nm without redesigning it.

I believe the same applies to GloFo's 28nm, as it is a half-node process over its 32nm.

But going from 28nm planar to 16nm FinFET requires a 100% redesign, because those two processes are completely different, with different rules and libraries.
So it is more efficient to bring a new GPU architecture to 16nm FF than to redesign the old one from scratch for the 16nm FF node.
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
I arrived at my "idea" because neither camp has shrunk an existing GPU onto a more advanced node since Nvidia did with GT200 in January 2009. That was nearly 7 years ago, an eternity in the ultra fast moving GPU world. Has AMD ever shrunk an existing GPU to a more advanced node? History has shown both companies have lately been unwilling to shrink any existing GPUs to live on alongside new GPUs. I don't see that changing, and I particularly doubt Fiji is a viable candidate. Fury X is only 13% faster than a GTX 980 at 1080p and 22% faster at 1440p. It's taking Fiji 50% more die space and 71% more transistors to only squeak out 13% and 22% over a GTX 980. Surely (not 100% sure, but confident) Pascal will feature better perf/transistor than Maxwell, making Fiji all the more out of place even with a die shrink.

Nvidia did shrink Fermi to 28nm. GF117.
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
Call me a sheep, but the name GDDR6 entices me far more than GDDR5X.
 

Blitzvogel

Platinum Member
Oct 17, 2010
2,012
23
81
If there are actual technical changes, beyond a node shrink and speed increase, then I'm ok with it being called GDDR6. That looks to be the case with the doubled prefetch.
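A rough illustration of how a doubled prefetch buys the extra speed, assuming GDDR5's 8n prefetch, a 16n prefetch for the new memory, and a representative 875 MHz array clock (the clock value is an assumption, not from the article):

```python
# Per-pin data rate = prefetch depth * memory array clock
array_clock_ghz = 0.875  # representative array clock, an assumption (not from the article)

gddr5_rate  = 8  * array_clock_ghz   # 7.0 Gb/s with GDDR5's 8n prefetch
gddr5x_rate = 16 * array_clock_ghz   # 14.0 Gb/s with a 16n prefetch at the same array speed
print(gddr5_rate, gddr5x_rate)
```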

"Gee Dee Dee Ar Six" is also one less syllable than "Gee Dee Dee Ar Five Ecks" ;)

I couldn't help it.
[attached image]
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
You're right and it more than likely will be called GDDR6 because of this.

December 16, 2015:

"Earlier this week, reports began circulating surrounding the existence of GDDR6 memory for future graphics cards. The news originated from news site, Fudzilla, which claimed to be in contact with sources familiar with the matter but it seems that right now, Micron is looking to squash any talk about GDDR6 and make its plans for 2016 perfectly clear.

While the original report claimed that we would see GDDR6 in 2016, Micron has sent out a statement to various press outlets (including KitGuru) to clarify that it will only be launching GDDR5X next year, which is currently tipped to be used on future Nvidia Pascal graphics cards and some of AMD’s future GPUs as well, in addition to HBM 2.

“The new memory advancements coming from Micron in 2016 are going to be called GDDR5X, not GDDR6. GDDR5X and GDDR6 are not the same product and Micron has not announced any plans involving GDDR6.”

The memory maker also went on to say that while GDDR5X is coming, it is not intended to be a competitor to HBM: “GDDR5X is intended to provide significant performance improvements to designs that are currently using GDDR5, therefore giving system designers the option of delivering enhanced performance without dramatically altering current architectures”.

KitGuru:

That last part makes a lot of sense, particularly since we have been hearing rumours that HBM 2 will be reserved for future high-end graphics cards, like the next Fury or Titan, while GDDR5X will be used to provide a boost on lower-tier cards that would have been designed for GDDR5 anyway.

Source