[TweakTown] HBM2 could be delayed.

SimianR

Senior member
Mar 10, 2011
609
16
81
If Arctic Islands and Pascal were both designed around HBM2, would they simply delay the release of the cards? With the memory sitting right next to the GPU on the interposer, I can't imagine you can simply redesign the chip to use regular GDDR5. I also can't imagine AMD wants to release more cards limited to 4GB in 2016 if they had to stick with HBM1.
 

tential

Diamond Member
May 13, 2008
7,348
642
121
I wouldn't be surprised if it was delayed, or by anything about next-gen chips being delayed. I've planned a whole year of gaming in 2016 around my current GPU, so I'm not too worried, but if everything goes well and there's a card I want, I'll get it in 2016. I've got about 10-12 games lined up for 2016 that will last me far longer than the whole year.

Delays seem to be the norm with these node shrinks being far harder than ever before.
 

Azix

Golden Member
Apr 18, 2014
1,438
67
91
meh. Lower-end cards were probably going to end up with HBM1 for both AMD and Nvidia anyway.
 

dark zero

Platinum Member
Jun 2, 2015
2,655
140
106
If HBM2 is delayed, NVIDIA would be forced to go with GDDR5... And seeing that Hynix is in trouble with HBM2, Samsung will face some delays too, which also means there will be a serious shortage of cards.

Oh boy... This is a really bad scenario for nVIDIA and AMD. Intel has a golden chance to take them out of the equation, since both AMD and nVIDIA will suffer a big HBM2 shortage and their new uarchs depend heavily on that tech...
 

MrTeal

Diamond Member
Dec 7, 2003
3,916
2,700
136
That article makes some pretty strange claims. If there are issues with HBM2 yields and delays, why would that cause major issues for AMD while not impacting Pascal which is also "rocking HBM2"?

Their claim that 4GB is simply not enough for 4K gaming is a little odd as well, when they had framebuffer issues in tri-fire at 11,520 x 2160. That's not much better than saying the Fury X isn't enough for 1440p gaming because it bogs down if I try to drive a wall of 6 panels at 8640x2560.
 

Good_fella

Member
Feb 12, 2015
113
0
0
If HBM2 is delayed, NVIDIA would be forced to go with GDDR5... And seeing that Hynix is in trouble with HBM2, Samsung will face some delays too, which also means there will be a serious shortage of cards.

Oh boy... This is a really bad scenario for nVIDIA and AMD. Intel has a golden chance to take them out of the equation, since both AMD and nVIDIA will suffer a big HBM2 shortage and their new uarchs depend heavily on that tech...

According to this rumor, HBM2 isn't delayed.

Our source reached out to us today, saying that they "wouldn't count on [AMD] using HBM2 next year" (2016), but wouldn't elaborate further. This is an interesting rumor, because if it were true, it would mean that the use of HBM2 would shift primarily to NVIDIA (2016).
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
Soooo....what?

A few weeks ago it was something along the lines of "AMD has the whole HBM2 allocation and isn't going to give jack to Nvidia."

Now it's AMD isn't going to use HBM2 and Nvidia has first dibs?

Man, so many rumors and junk. Hard to keep up.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
That article makes some pretty strange claims. If there are issues with HBM2 yields and delays, why would that cause major issues for AMD while not impacting Pascal which is also "rocking HBM2"?

That's a huge point, unless NV went in and bought out almost all the HBM2 allocation for Pascal, since they have nearly 80% market share and the financial ability to pull something like that off.

Their claim that 4GB is simply not enough for 4k gaming is a little odd as well, when they had framebuffer issues in tri-fire at 11,520 x 2160.

Ya, and it also contradicts real world testing.

Benchmarks in 11,520 × 2,160:

980Ti SLi is only 6% faster than Fury X CF.

If 4GB was such an issue, wouldn't 980Ti SLi be winning by massive amounts on average?

Not to mention the article contradicts itself by saying that AMD might be able to source HBM2 for flagship cards but that's it; low-end and mid-range cards could work fine with just HBM1 anyway.

I actually think if AMD did a 16nm shrink of Fury/Nano/Fury X and priced those at $299-399, they could give themselves 6-8 months for 8GB HBM2 availability to come online. I am pretty sure at those prices those cards would sell in 2016.

Now that I think about it, look at what happened since 2011:

$500-550 7970/7970Ghz flagship -> became mid-range $299 R9 280X
$250-450 HD7850/HD7870/7950 mid-range to upper mid-range -> became low-end $150-250 R7 265/R9 270/270X/280.

If AMD keeps repeating this strategy, we could see Fury/Nano/Fury X become next generation's mid-range to upper-mid-range cards. From a risk point of view and cost, it makes little sense to just scrap Fury/Nano/FuryX designs since with a node shrink alone their perf/watt could be improved substantially.

The reality is I don't see AMD competing with Pascal next generation, since even with HBM1 they still couldn't beat Maxwell in perf/watt, and Maxwell had just GDDR5. It seems AMD has already used up the huge perf/watt advantage of HBM1, so HBM2 should help them far less than it helps NV, which moves from GDDR5 to HBM2 with Pascal in one go. The only way I see AMD regaining some market share next year is if they launch products earlier than Pascal and have some kind of cohesive mobile dGPU strategy.
 
Last edited:

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
I actually think if AMD did a 16nm shrink of Fury/Nano/Fury X and priced those at $299-399, they could give themselves 6-8 months for 8GB HBM2 availability to come online. I am pretty sure at those prices those cards would sell in 2016.

I have wondered why we haven't seen any GPU die shrinks from either company in a long while. I think the last product Nvidia shrank was the GTX280 (GT200) to the GTX285 (GT200B); I'm not sure about AMD. It is quite possible that the economics of shrinking an existing chip are on shaky ground with respect to the rising costs of new nodes. Since existing chips and/or older architectures are optimized for the nodes they were made on, a straight shrink (i.e., minimal R&D) without major overhauls to better utilize the existing architecture on a more advanced node (i.e., noticeable R&D invested back into an older architecture) would not realize the full benefit of moving to the new node, wiping out much (or all) of the cost savings.

The reality is I don't see AMD competing with Pascal next generation, since even with HBM1 they still couldn't beat Maxwell in perf/watt, and Maxwell had just GDDR5. It seems AMD has already used up the huge perf/watt advantage of HBM1, so HBM2 should help them far less than it helps NV, which moves from GDDR5 to HBM2 with Pascal in one go. The only way I see AMD regaining some market share next year is if they launch products earlier than Pascal and have some kind of cohesive mobile dGPU strategy.

Su said FinFET will give them a 2x perf/watt increase over their current GCN architecture (said in reference to Hawaii, before Fiji came out). 2x perf/watt over Hawaii (without factoring in power consumption from memory) plus the additional power savings of HBM2 will hopefully allow them to be competitive. However, you could be right. Maxwell could be the beginning of Nvidia's higher and more focused R&D development paying off, with Pascal snowballing the results. :/
 
Last edited:

Techhog

Platinum Member
Sep 11, 2013
2,834
2
26
So, no new cards next year?

Can AMD sell their next GPU architecture to anyone? Fiji is a failure, so being stuck with it for another year and a half means that AMD won't have enough money to market or release the next generation of cards. It's over for them. $1000 GP104 incoming.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I have wondered why we haven't seen any GPU die shrinks from either company in a long while.

Cost is probably a major factor. The other is simply the lack of lower nodes. The HD7970 was announced December 2011 on 28nm, and we are still on 28nm with December 2015 approaching. What could they have shrunk their products to?

In the case of Fermi's GTX480->580, I don't think 28nm was even available. Once it was, NV did move from 40nm to 28nm with Kepler. So I am not sure where a stop-gap 28nm shrink would have made sense; it wouldn't have made any sense for NV to shrink Fermi to 28nm when they had already designed Kepler for that node.

Su said FinFET will give them a 2x perf/watt increase over their current GCN architecture (said in reference to Hawaii, before Fiji came out). 2x perf/watt over Hawaii (without factoring in power consumption from memory) plus the additional power savings of HBM2 will hopefully allow them to be competitive.

Leaving aside that we should never take AMD's marketing estimates at face value: even if we do, AMD needs more than that. Look at an after-market 980Ti vs. 290X in power usage (https://www.techpowerup.com/reviews/AMD/R9_Nano/28.html). An after-market 980Ti still uses less.

Now the performance difference:

59% faster at 1440P, 59% faster at 4K


Let's assume AMD gets 2X perf/watt over 290X with R9 490X and NV delivers 2X the perf/watt with Pascal. What do you get? Game over AMD.

AMD would need to improve perf/watt 3.18X over 290X to match Pascal's 2X over 980Ti.

Trying to equate power usage closer to be fair to NV:
R9 290X = 63% with 2X perf/watt => 126%
R9 490X = 63% of 290X with 3.18X perf/watt => 200%
after-market GTX980Ti = 100% with 2X perf/watt => 200%

Just how bad is it?

It means if NV improves perf/watt just 30% over 980Ti, it's already enough to at least match AMD's 2X perf/watt increase over 290X. :eek:

That means AMD may need to aim for 2X perf/watt over Fury X, not 290X, or we are talking < 15% market share and competing on price/perf again. :biggrin:
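The arithmetic above can be sketched quickly. This is just a back-of-envelope check of the numbers already quoted (the 1.59x performance ratio from the review; the 63%, 3.18x, and ~26-30% figures all fall out of it):

```python
# Perf/watt math from the figures above: an after-market 980Ti is
# 59% faster than a 290X at roughly equal-or-lower power.
perf_ratio = 1.59                      # 980Ti perf relative to 290X

# Normalize perf/watt so the after-market 980Ti = 1.0 (100%).
r9_290x_ppw = 1 / perf_ratio           # ~0.63, the "63%" above

pascal_ppw = 1.0 * 2.0                 # NV: 2x perf/watt over 980Ti
amd_needed = pascal_ppw / r9_290x_ppw  # multiplier AMD needs over 290X

print(round(r9_290x_ppw, 2))   # 0.63
print(round(amd_needed, 2))    # 3.18

# Break-even: what NV needs over the 980Ti just to match AMD's
# hoped-for 2x improvement over the 290X.
nv_breakeven = r9_290x_ppw * 2.0
print(round(nv_breakeven, 2))  # 1.26 -> a mere ~26-30% bump suffices
```

So a 2x jump on AMD's side only lands them where a ~26% Pascal improvement would; that's the whole "aim for 2x over Fury X, not 290X" point.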

However, you could be right. Maxwell could be the beginning of Nvidia's higher and more focused R&D development paying off, with Pascal snowballing the results. :/

Esp. if NV follows through with an official split of Compute vs. Graphics (see Titan X's review section on FP64 strategy for NV). If NV continues to make pure graphics chips, Pascal will be a monster. I see 0 chance for AMD winning next generation, literally 0.

In fact, with Maxwell, NV has the worst sub-$280 line-up in its history, and it's completely destroying AMD's cards.

Out of curiosity, I would love to see the R9 390 at $99, R9 390X at $149, Fury at $249, Nano at $299, Fury X at $349. Even with those prices effective tomorrow, AMD would not get to 50% market share in 6 months, guaranteed.

The only way for AMD to make big moves happen is to have a new mobile dGPU strategy, and both outperform + undercut every single NV card on the desktop. That's not going to happen!

We all should have seen it coming though, when AMD offered hands down the best price/performance for something like 5 consecutive generations and it didn't even make a dent in NV's market share. Even when AMD delivered faster products with the HD5870, HD7970GHz, HD6990/7990, and R9 295X2, it still didn't matter.

If AMD is behind Pascal in rolling out next generation, they are screwed, because next year the wave of 2011-2012 GPU owners will finally upgrade for sure, and that's the hold-outs on the 6xxx/7xxx series. If AMD fails to convert those upgraders, their next chance to attract them is 2.5-4 years from 2016, given that gamers nowadays don't seem to upgrade their GPUs as often as in the past.

IMO, since I predict AMD has 0 chance of matching Big Pascal anyway, their best bet is to launch early, aka the HD5850/5870 strategy. But that won't work either, because there is no way NV will be 6 months behind, and neither AMD nor NV can accelerate launching 16nm GPUs since both are at the mercy of TSMC's 16nm roll-out. So no advantage for AMD imo.
 
Last edited:
Feb 19, 2009
10,457
10
76
HBM1 has faced major hurdles, so what makes people think HBM2 will be smooth sailing?

Samsung is on board but they have NO experience. Good luck with that being on time.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Really? Do you know much about Arctic Islands' design?

With lower power usage than the Fury X, an after-market 980Ti is 25% faster at 1440P and 19% faster at 4K, and that's with ancient and inefficient GDDR5. I bet NV's engineering teams spent 3-4 years designing Pascal, going by JHH's own estimates of how long past architectures took to design.

NV is coming in with a history of flagship wins going back to G80 and a proven track record of making massive-die flagship cards, vs. AMD's first ever with Fiji. NV has ~80% market share, which means it will cost them less to buy more wafers than it would for AMD (i.e., it'll cost NV less to manufacture a chip of similar size thanks to better economies of scale in the supply chain). That makes it much easier for NV to lower prices should they need to, or alternatively to push larger die sizes, since they can afford to. NV also has most of the mobile dGPU market/OEMs locked in. AMD will need to produce something truly special to entice OEMs to start selling AMD graphics cards in laptops: look at the insane lead the 980M/980 have in laptops over the R9 M295X. AMD would need a 3-4X increase in perf/watt over the R9 M295X just to catch up.

With the 980Ti, NV didn't even need HBM1 or an AIO CLC (which also lowers power usage) to win. Assuming Pascal and Arctic Islands have similar IPC, NV should still win, since they are moving from GDDR5 to HBM2 while AMD is only moving from HBM1 to HBM2.

Plus, look at the massive deficit in tessellation that Fury X still has compared to even the 780Ti.

[attached chart: tessmark.gif]


XDMA advantage may also disappear as I expect NV to fully transition to bridge-less SLI.
 

Magee_MC

Senior member
Jan 18, 2010
217
13
81
Let's assume AMD gets 2X perf/watt over 290X with R9 490X and NV delivers 2X the perf/watt with Pascal. What do you get? Game over AMD.

[...]

IMO, since I predict AMD to have 0 chance of matching Big Pascal anyway, their best chance is to launch early aka HD5850/5870 strategy, but this strategy won't work because there is no way NV will be 6 months behind and both AMD/NV are not going to be able to accelerate launching 16nm GPUs since they are at the mercy of TSMC's 16nm roll-out. So no advantage for AMD imo.

This doesn't make sense to me. What we've found out recently is that a large part of the reason GCN has been less power-efficient than the NV offerings is that large parts of the GCN architecture go unused under DX11. With the advent of DX12, however, GCN is finally starting to show its legs. The preliminary results so far show GCN gaining up to 20% performance compared to Maxwell, and while preliminary, this does suggest GCN has more in the tank for perf/watt improvements.

For Pascal to be on the same level as GCN for DX12, I would expect they would have to go to hardware solutions for things like async compute, which would increase their power expenditure. If they don't, they will possibly be giving up significant performance to GCN.

Look at it this way: for your scenario to come true, NV would have to pull off the node shrink, a new architecture, and the switch to HBM2 without any hiccups in any of those challenges. AMD, on the other hand, only has to make the node shrink, since Arctic Islands is just another extension of GCN and they already have experience rolling out HBM.

I don't think that AMD is going to have a cakewalk, but I think that it's way too early to start playing Taps for their next generation.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
In fact, with Maxwell, NV has the worst sub-$280 line-up in its history, and it's completely destroying AMD's cards.

You're speaking in respect to perf/$, right? If you're talking technological terms, I think right now is the first time in FOREVER that Nvidia is winning in perf/mm2 and perf/transistor in the performance/low end segment.

We all should have seen it coming though when AMD offered hands down the best price/performance for like 5 consecutive generations and that didn't even make a dent in NV's market share. Even when AMD delivered faster products with DH5870, HD7970Ghz, HD6990/7990, R9 295X2, it still didn't matter.

I would like to point out that of all those cards, only the HD5870 was in the unique position of being a mass-market card with an ample performance lead and a release-time advantage. The HD6990/7990 were roughly equal to their competition and launched at prices that weren't compelling enough to move large volume, and the R9 295X2 was also in a price range that didn't matter to 99.999% of the market.

Perhaps the HD5870 suffered its relegated fate because of the reputation Nvidia had accumulated with the 8800 GTX and subsequent G92/G92b cards. Had potential buyers known at the HD5870's release that Fermi was going to be late, hot, and only 15% faster for another $150, they likely would not have waited.

If AMD is behind Pascal to roll-out next generation, they are screwed

AMD is likely screwed then. I don't see AMD beating Nvidia out of the gate to FF.
 
Mar 10, 2006
11,715
2,012
126
HBM1 has faced major hurdles, so what makes people think HBM2 will be smooth sailing?

Samsung is onboard but hey have NO experience. Good luck with that being on time.

Yeah, the world's largest and most successful memory company will trip over itself building HBM ;)
 

dark zero

Platinum Member
Jun 2, 2015
2,655
140
106
Yeah, the world's largest and most successful memory company will trip over itself building HBM ;)
When a tech is totally new and they didn't even learn from the previous gen, do you expect something great?

Anyways... it will benefit Intel a LOT. They can show them that HMC is the future, not HBM.
 
Mar 10, 2006
11,715
2,012
126
When a tech is totally new and they didn't even learn from the previous gen, do you expect something great?

Anyways... it will benefit Intel a LOT. They can show them that HMC is the future, not HBM.

Micron is the one that sells HMC, not Intel.
 

zlatan

Senior member
Mar 15, 2011
580
291
136
http://www.tweaktown.com/news/47773/amd-trouble-sourcing-hbm2-next-gen-video-cards/index.html

I pretty much ignored the speculative part of the article. The short version seems to be that HBM2 may be delayed into 2017.

We still have to see more sources on this, but I wouldn't be surprised if HBM2 slipped 2 quarters or more. It was previously expected around Q3 2016.

Going by the current roadmaps, HBM2 will be available in Q1 2016. But I think this is a very aggressive plan, and I'd say it will only be available around the middle of the year.
The harder part is the interposer. For HBM2 it must be clocked to at least 800MHz, which will be really hard for the bigger chips. Even if HBM2 is available in time, there is a huge chance that the interposers won't be.