
[Kitguru] AMD’s partners cannot get enough Radeon R9 Fury graphics cards

Speaking of AMD potentially not having enough cards... nobody knows where the issue lies: is it Hynix, poor yields, or does it just take longer to validate the cards?

http://www.digitimes.com/news/a20150714PD212.html
Taiwan Semiconductor Manufacturing Company (TSMC) has slashed its prices on 28nm and 20nm process technologies by 5-10% in order to secure orders from major clients including Qualcomm and MediaTek, according to industry sources.

Since AMD uses TSMC for all its GPUs, you would think AMD would be buying all the extra wafers it can get its hands on to meet demand for its Fury line of cards. But I doubt they would pass the savings on to the consumer.

Nvidia could also make more cards and then drop prices by 5-10% to put the squeeze on AMD.

Which means the next two months could look very interesting, with possible price cuts coming.
 
Could it be that the reason NVidia is delaying adopting HBM is that they simply don't need it at the moment? The GTX980/980Ti with GDDR5 are pretty competitive with the Fury/Fury X, with less risk and better margins, and oftentimes being last to market offers the chance to do it right.

Maxwell was supposed to have it, but it got pushed back to Pascal. It's likely because either the tech just wasn't ready yet or nVidia wasn't in a technical position to be able to use it. Remember, AMD has intimate knowledge of HBM; nVidia, not so much.
 
Maxwell was supposed to have it, but it got pushed back to Pascal. It's likely because either the tech just wasn't ready yet or nVidia wasn't in a technical position to be able to use it. Remember, AMD has intimate knowledge of HBM; nVidia, not so much.

You're probably right. How about HBM2?

http://www.legitreviews.com/nvidia-pascal-gpu-with-hbm2-taped-out-gp100_165483

http://www.kitguru.net/components/g...ly-taped-out-on-track-for-2016-launch-rumour/

http://www.techpowerup.com/213254/nvidia-tapes-out-pascal-based-gp100-silicon.html

Up to 32GB of HBM2 sounds sweet.
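For anyone wondering where that 32GB figure comes from, here's a rough back-of-the-envelope sketch. It assumes 4 stacks on the package (like Fury X uses for HBM1) and the announced HBM2 numbers of up to 8-high stacks of 8Gbit dies; `stack_capacity_gib` is just a helper name for the illustration, not anything official:

```python
def stack_capacity_gib(dies_per_stack, die_density_gbit):
    """Capacity of one HBM stack in GiB (8 Gbit = 1 GiB)."""
    return dies_per_stack * die_density_gbit / 8

# HBM1 as shipped on Fury X: 4-high stacks of 2 Gbit dies, 4 stacks
hbm1_total = 4 * stack_capacity_gib(dies_per_stack=4, die_density_gbit=2)

# HBM2 per the announced spec: up to 8-high stacks of 8 Gbit dies, 4 stacks
hbm2_total = 4 * stack_capacity_gib(dies_per_stack=8, die_density_gbit=8)

print(hbm1_total)  # 4.0  -> the 4GB cap everyone complains about
print(hbm2_total)  # 32.0 -> the "up to 32GB" headline number
```

So the jump from 4GB to 32GB is just taller stacks times denser dies; the stack count stays the same.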
 
Interesting, thanks.


Could it be that the reason NVidia is delaying adopting HBM is that they simply don't need it at the moment? The GTX980/980Ti with GDDR5 are pretty competitive with the Fury/Fury X, with less risk and better margins, and oftentimes being last to market offers the chance to do it right.

Could be that. Or maybe HBM wasn't going to be ready in the quantities needed by the time Maxwell dropped. Or maybe the first-gen limitation of 4GB also got in the way of their planned lineup. It sounds like HBM2 solves the capacity issue and will be used in Pascal.
 