
Nvidia cuts prices on all their GPUs in the 900 Series

Page 3 - Seeking answers? Join the AnandTech community: where nearly half-a-million members share solutions and discuss the latest tech.
Remember this article?
http://www.gamersnexus.net/gg/2114-chris-roberts-star-citizen-on-dx12-vulkan-and-tech
"Both Dx12 and Vulkan, formerly “OpenGL Next,” benefit games of large-scale nature by improving lower-end hardware performance. For the time being, games will still be split between Dx11 and Dx12 support, so we may not see the full effect of migration for a while yet. Asking Roberts about this, the CIG CEO confirmed for us that pipeline tuning for Dx12 would still yield performance gains for Dx11 users – users on older Windows installs and hardware:"

DX12 won't magically change the world right now.

"“It's pretty easy [to integrate Dx12] if you do it the same way you did with Dx11, but you're not going to get the full power. The issue is that most game engines were not really written with a massively parallel architecture in mind, and that's fundamentally what you need to do to really get the best benefit out of the next-generation of graphics APIs. [That next generation] is just that you can be feeding lots of stuff at the same time to the graphics card and you're not bottlenecked by just one thread."
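A back-of-the-envelope way to see the single-thread bottleneck Roberts describes: model the CPU cost of building a frame's draw calls and divide it across recording threads, as DX12/Vulkan command lists allow. This is a toy model with invented numbers, not real engine code; the function name is my own.

```python
def frame_cpu_time_ms(draw_calls, cost_per_call_ms, record_threads):
    """CPU time to record one frame's command lists.

    Under a DX11-style model, record_threads is effectively 1: every
    draw funnels through a single render thread. DX12/Vulkan let
    several threads record command lists independently, so the
    recording cost divides across them (ideal scaling assumed).
    """
    return draw_calls * cost_per_call_ms / record_threads

# 10,000 draws at 0.002 ms of CPU work each (made-up figures):
single = frame_cpu_time_ms(10_000, 0.002, record_threads=1)  # DX11-style
multi = frame_cpu_time_ms(10_000, 0.002, record_threads=4)   # DX12-style

print(f"1 thread:  {single:.1f} ms of CPU submission work per frame")
print(f"4 threads: {multi:.1f} ms of CPU submission work per frame")
```

With these invented numbers the single-threaded path eats 20 ms of CPU time per frame, already blowing a 60 fps budget, while four recording threads bring it down to 5 ms.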

So we're not just waiting for DX12 games for the Fury X to become some magically amazing GPU. We're waiting for well-written DX12 engines that play to the Fury X's strengths.

*Looks at Assassin's Creed Unity and Ubisoft games in general*

If you think Fury X is going to change the world when games like these are still being made just because DX12 is now out (and not even in any fully released games yet), then just lol.

Well, that's because I just read that none of AMD's cards are being shown at their full potential. That classic "Wait and see" trademark lives on.

Eventually, these cards will shine! So everyone should gobble them up day one at their highest MSRP so poor AMD can report a win. Ignore that today they aren't that polished; but maybe tomorrow, you'll see!
 
It does matter for people who keep their cards for more than 2 years. 😀 There are plenty of PC builders on reddit who ask for a gaming machine to last 4-5 years. In those cases, if they don't want to spend $600 on a 980 Ti, I always tell them to go for an AMD card. There is a super high chance of the 970 becoming crap within the year.
 
...there is a super high chance of 970 becoming crap within the year.

So you openly admit you have absolutely no idea what you are talking about then.

Your prerogative.

Warning issued for Personal attack. -Shmee
 
Fury X shouldn't be priced near custom 980Ti simply because it's not competitive on performance in single-GPU setups.

That much is clear and AMD should price it accordingly.
 
And it turned out to be BS. Even the AMD-sponsored AOTS shows Nvidia in front. Hard times for the ADF.

Which part is BS? The part where NV GPUs don't actually parallel-process compute & graphics queues? Cos that's 100% fact, with GPU-viewing software used by developers confirming it takes the scheduled AC call and runs it in... serial mode.

Now, the other part that's BS? Like how Oxide publicly said they disabled Async Compute in AOTS at NV's request? There are plenty of sources on this topic if you want to inform yourself.

Or the part where, in Fable, the devs said ~5% of the compute is running in AC mode? That at best translates into a 5% uplift for GPUs that can do it compared to ones that cannot.
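The ~5% bound here is just Amdahl-style arithmetic: if a fraction of the frame's work can be hidden behind other work, the best case is that it vanishes entirely. A minimal sketch (the function name is mine, not from any of the posts):

```python
def best_case_async_uplift(overlappable_fraction):
    """Upper bound on frame-rate uplift if the given fraction of a
    frame's work is hidden entirely behind other work via async
    compute. Normalized frame time is 1.0 before the overlap."""
    new_frame_time = 1.0 - overlappable_fraction
    return 1.0 / new_frame_time - 1.0

# ~5% of the frame's work running async, fully hidden in the best case:
print(f"best-case uplift: {best_case_async_uplift(0.05) * 100:.1f}%")
```

Strictly, hiding 5% of the work gives a 1/0.95 ≈ 5.3% frame-rate gain, which the post rounds to "a 5% uplift".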

I'm amazed at how some people can think DX12 won't benefit AMD hardware when they were the ones behind all the next-gen APIs.

Just like a while ago, there was so much denialism from people claiming NV GPUs can do AC, or that they've got 32 queues and are better at it than GCN... what a sad joke. Even AnandTech was fed lies from NV when they published their article on it, mis-claiming GCN only supports 8 queues compared to Maxwell's 32.

Let me just ask you people in the anti-AMD camp.

What is the normal performance stack for AMD vs NV GPU tiers in games made with Unreal Engine 4, the one engine that is NV-sponsored with full PhysX integration? Is it the same, i.e. 390X = 980? Or is it similar to Project Cars, where the R290X/390X is 25-50% behind the 980?

Fact: It's the latter. Every single PC game released on UE4 so far has shown a major performance deficit for AMD GPUs.

Then along comes the first DX12 build of UE4, in Fable... and bam, suddenly the R290X/390X is beating the 980. The Fury X, which is normally 10-15% slower than the 980Ti (reference!) at 1080p, now ties it.

No advantage for AMD with DX12? Yeah right.

In about six months, when the first AAA DX12 titles arrive, I will be proven correct. Just like months ago, when I was one of the few who claimed AT's Async Compute article was wrong: Maxwell does not support 32 graphics + compute queues, just 1 graphics OR 32 compute, and GCN doesn't have just 1 graphics + 8 compute but in fact 1 graphics + 64 compute. They still haven't corrected their misleading article.
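The scheduling claim in this post (serial execution of scheduled async calls vs genuine overlap on separate queues) can be sketched as a toy timing model. The workload durations are invented, and the assumption that idle shader capacity fully absorbs the compute work is the best case for the concurrent path:

```python
def frame_time_ms(graphics_ms, compute_ms, concurrent):
    """Time to finish one frame's graphics + compute work.

    concurrent=False: compute is serialized after graphics, which is
    what the post claims happens when Maxwell receives a scheduled
    async-compute call. concurrent=True: compute overlaps graphics on
    separate hardware queues (GCN-style ACEs), best case assumed.
    """
    if concurrent:
        return max(graphics_ms, compute_ms)
    return graphics_ms + compute_ms

serial = frame_time_ms(14.0, 3.0, concurrent=False)     # 17.0 ms
overlapped = frame_time_ms(14.0, 3.0, concurrent=True)  # 14.0 ms
print(f"serial: {serial} ms, overlapped: {overlapped} ms")
```

In this sketch the overlapping GPU finishes the frame in the time of the graphics work alone, while the serializing GPU pays for both back to back.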
 

That's all well and good, building for the future, but the problem is people are buying GPUs for NOW. So whoop, AMD's current lineup might have an advantage over Maxwell in DX12, but there are new GPUs coming out in a few months and I'd bet that Nvidia will have them optimised for the new DX12 stuff.

Nvidia is crushing the market because they market their products well and they build for what people want today, not in 5 years' time.
 
And it turned out to be BS. Even the AMD-sponsored AOTS shows Nvidia in front. Hard times for the ADF.

I wouldn't expect the AOTS end result to be common:

1. They were not using what makes AMD strong in DX12 to any great extent.
2. It took a while, with devs working with Nvidia, to reach the point where it was doing marginally better in AOTS. Requiring that much effort just to catch up in a benchmark is not a good sign. When actual games come out next year with heavy async usage, even a 980Ti at max OC would lose or barely win.

Most of AMD's gains in Ashes were from reduced overhead, according to the devs. It was down to API efficiency, not async.
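That distinction (cheaper submission vs async overlap) is easy to see as arithmetic: if the Ashes win came from reduced per-draw overhead, it shows up as a lower CPU cost per draw at the same draw count. All numbers below are invented for illustration:

```python
def cpu_frame_cost_ms(draw_calls, per_draw_overhead_ms):
    """CPU-side cost of validating and submitting one frame's draws."""
    return draw_calls * per_draw_overhead_ms

# Same scene and draw count; only the per-draw API overhead differs:
thick = cpu_frame_cost_ms(8_000, 0.004)  # thick-driver (DX11-style) path
thin = cpu_frame_cost_ms(8_000, 0.001)   # thin (DX12-style) path

print(f"thick API: {thick:.1f} ms CPU per frame")
print(f"thin API:  {thin:.1f} ms CPU per frame")
```

None of the gain in this model involves the GPU running anything concurrently; it is purely submission overhead, which matches the devs' stated explanation.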

I think it's sad people are buying the Nvidia cards, because of how I think about value etc., but that's their problem. It's not like you immediately notice you got shafted anyway. You don't run benchmarks every day to compare, and most Nvidia owners won't realize they could have had better performance when their cards start to suck.
 
@littleg
My mention of DX12 is to put the current complaints in perspective.

However, setting that aside, are AMD GPUs not competitive in DX11? Or is it just the Fury X vs the 980Ti in single configs? Because I already agreed the Fury X doesn't compete well and thus should be cheaper.

Across the rest of the stack, AMD GPUs are more than competitive. I said at the start of the year, when the R290 was still behind the 970, that by around this time it would equal it. It has. Back then, custom R290s running cool & quiet were going for $220 to $250, when custom 970s were going for $340+.

It doesn't change the fact that, despite being strong in performance and maturing better, AMD's market share is very poor. But to say AMD builds only for the future is wrong when their performance has been outstanding in past years.

Did you forget that Hawaii was made to compete with Kepler, Titan and 780/Ti based GPUs? It did so very well and is now faster, to the point where it competes with Maxwell. Will it surprise you if, next year, with more DX12 games, Hawaii is suddenly even faster than Maxwell? Such as the 290X/390X beating the 980, which is a much more expensive GPU.

As someone who bought R290/Xs, they have served me very well and will continue to do so until real next-gen 14/16nm parts arrive, especially if the DX12 gains seen in Fable/Ashes hold (or go even higher with more AC usage). That, by definition, is building a uarch that delivers excellent performance now and keeps improving in the future. What could you complain about?
 
Fury X shouldn't be priced near custom 980Ti simply because it's not competitive on performance in single-GPU setups.

That much is clear and AMD should price it accordingly.

Look at the relative performance of the reference 980 Ti vs the Fury X; how can you say it's not competitive?
[Chart: relative performance summary at 3840×2160]
 
Look at the relative performance of the reference 980 Ti vs the Fury X; how can you say it's not competitive?
[Chart: relative performance summary at 3840×2160]

Dunno about other 980 Ti users, but I was getting better clocks than the 980 Ti Waterforce's day-1 clocks.

Core Clock: 1216 MHz+
Mem Clock: 1801 MHz

I'd still add 5-10% more to the Waterforce score personally, but then 3Dvagabond would call it cheating :/

EDIT: Ouch, I even paid less overall and got a Watercooled 980 Ti in the process.
 
It does matter for people who keep their cards for more than 2 years. 😀 There are plenty of PC builders on reddit who ask for a gaming machine to last 4-5 years. In those cases, if they don't want to spend $600 on a 980 Ti, I always tell them to go for an AMD card. There is a super high chance of the 970 becoming crap within the year.
On that front, rather than making a dx12 claim blah blah blah I keep it simple.

Do you play the major gameworks franchises on launch day? If yes, gtx 970. If no, the r9 390 is the clear winner.

It's really a scenario flowchart to decide. I thought of posting one earlier; I probably should. For a working adult on a decent wage, the flowchart would easily work. I can't comment for those who have hard budgets for no reason and can't save a little more, or those who are students or something and are stuck with whatever gift money they got.
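The two-branch flowchart described above is small enough to write down directly; the function and argument names are my own labels for the poster's criterion:

```python
def recommend_gpu(plays_gameworks_titles_day_one):
    """The poster's decision rule: major Gameworks franchises on
    launch day -> GTX 970; otherwise -> R9 390."""
    return "GTX 970" if plays_gameworks_titles_day_one else "R9 390"

print(recommend_gpu(True))   # GTX 970
print(recommend_gpu(False))  # R9 390
```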

Tbh I think getting a FreeSync or G-Sync monitor is the next major upgrade on every gamer's list. The vendor lock does suck, but I'd rather have smooth gameplay.
 
Fury X shouldn't be priced near custom 980Ti simply because it's not competitive on performance in single-GPU setups.

That much is clear and AMD should price it accordingly.

Look at the relative performance of the reference 980 Ti vs the Fury X; how can you say it's not competitive?
[Chart: relative performance summary at 3840×2160]

Silverforce SPECIFICALLY SAYS AIB COOLERS.

So your post proves his point. The AIB-cooled card just wrecks the Fury X. And yes, both chips OC, but the 980Ti OCs far better than the Fury X. This isn't even a debate; it's not up for one. The 980Ti is in a performance bracket where it stands alone.

INB4 Pascal ends up being a disaster thanks to Gameworks' motto of using well-known but really old tech. Yeah, GDDR5.

So outdated tech in the 980Ti is not only beating the Fury X, but utterly trouncing it? It's not all about shoving the latest tech into a device; it's about making a device that actually performs.
 
Also: supposedly the Fury X will end up faster than the 980Ti, according to many AMD fans.

So where are you guys with your Fury X? I've seen tons of people who are self-reported AMD fans defend the Fury X, and Fiji's release in general. I've seen ZERO of these people own a Fiji chip. In fact, the Fiji owners on here are few and far between. I think there are more 980Ti SLI users than Fury X owners.....
 
I was one of the few who claimed AT's Async Compute article was wrong, that Maxwell does not support 32 graphics + compute queues, just 1 graphics OR 32 compute and GCN doesn't just have 1 graphics + 8 compute, but in fact, 1 graphics + 64 compute. They still haven't corrected their misleading article.

Might I request a source for this claim?
 
On that front, rather than making a dx12 claim blah blah blah I keep it simple.

Do you play the major gameworks franchises on launch day? If yes, gtx 970. If no, the r9 390 is the clear winner.

I look at Kepler and Gameworks, and then at your post, and think: nah. It is either a 980 Ti with a healthy OC, or AMD for everything lower. :thumbsup:
 
I look at Kepler and Gameworks, and then at your post, and think: nah. It is either a 980 Ti with a healthy OC, or AMD for everything lower.
Haha, I may make that change, but until we have more reference points than just Kepler, I can't. If Maxwell and Kepler tank compared to Pascal for no reason, then I'll revise that recommendation.

We can't assume Maxwell and Pascal will tank once new GPUs are released because of Gameworks, just because of one instance: Kepler.
 
Haha, I may make that change, but until we have more reference points than just Kepler, I can't. If Maxwell and Kepler tank compared to Pascal for no reason, then I'll revise that recommendation.

We can't assume Maxwell and Pascal will tank once new GPUs are released because of Gameworks, just because of one instance: Kepler.
Yes we can, because my argument is the only one backed up by precedent 🙂 not purely speculation or assumption. It is up to NV to change my mind, not the other way around. Until they do, I stand by my choices in advising others. 😀
 
If the 3.5GB of VRAM was a big deal, Radeon would not have released the Fury series with 4GB.

And it is total speculation that the 970 will become obsolete, especially at 1080p. At 1440p, yeah, I can see some issues down the road, but even then the extra 500MB of the 980 and Fury cards won't matter that much. Also, a 970 can overclock to almost 980 performance, and can be had for $299. It uses less power and dumps less heat into your case.

The 390 is a good card, and so is the 390X, but while you are entitled to your opinion, making out like the 970 and 980 are terrible cards that should be avoided at all costs is bad advice, given the sales that are going on now. The 970 @ 1080p will be more than powerful enough for some time to come, and the 980 @ 1440p will hold its own as well. The 980Ti will fare better, of course, but given the price, and that a 1060 or 1070 will match its performance next year, you have to buy with the notion that your $600 will be cut in half in one year.
 
If the 3.5gb of VRAM was a big deal, Radeon would have not released the Fury series with 4GB.


What do you have to say about this review?
https://www.youtube.com/watch?v=k9cKZiJw6Pk

This isn't done with the latest Crimson drivers either, where AMD would have even better performance. The GTX 970 certainly does better in review suites with Gameworks games. If those are games you play, the GTX 970 may be your GPU. I suggest checking HardOCP reviews to see what settings the GTX 970 would need vs the R9 390 to get playable performance in Gameworks games, as HardOCP reviews use a lot of them.

If you don't play a lot of gameworks games, then the R9 390 is clearly your better choice hands down.

But either GPU really isn't that far apart. That's why I think the HardOCP reviews are so crucial, and I wish HardOCP would expand their capacity to cover more games and cards, and review in more depth in general.

For example:
[Chart: frame-rate comparison across cards]

This is so critical for me. If I go from 30 FPS to 45 FPS in a graph, that's great, but I'm not going to play at those settings; if that happens I'll try to get to 60 FPS. So a review like this is very useful. It lets me know that the GTX 960, R9 380X, and GTX 970 are three different tiers of cards in this game. I can then check that against Nvidia's graphics review of the game (or another source if there is one) to see whether it's worth upgrading for that extra level of graphical fidelity.

It's too bad they don't use more cards and do more reviews.
 
This is why we as gamers and enthusiasts should welcome competition.

No one can afford to compete with Nvidia though; no one has for a LONG time. You can't magically start a company to compete with Nvidia in this day and age. It's too risky of an investment. They've got all the talent, the patents, the know-how.
 
A GTX 950 for $129 sounds really good. Too bad these price cuts are probably US-only. Last I checked, the GTX 950 is selling in my country for $220 onwards; a 750 Ti can be had for $150. These inflated prices are due to weak currency and import duties. I'll have to stick with my 3-year-old HD 7750, which I bought for $130.
 
What do you have to say about this review?
https://www.youtube.com/watch?v=k9cKZiJw6Pk


But either GPU really isn't that far apart.

I've seen that JayZ review over and over. You have to remember that those 970s, even though OC models by vendor, were not overclocked to their full potential. You have to do that as a review site to retain credibility, because you have to test the cards as shipped; if the cards are overclocked, you have to state it explicitly. Jay just did an overclock on the Fury with the new drivers/tweaks and got nada from it. Same goes for the 390/390X. With the 970, you can bank on a 1450/1500 core overclock and a VRAM overclock of 7800/8000, which will put it within 10% or so of a 980 (see H's review of the ASUS 970 Strix). Same goes for a 980 (1450/1500, 7800/8000).

As stated, both the 390 and 390X are good cards, and both are very close in performance to the 970/980. With the ASUS 970 Strix OC @ $299 and the MSI 980 Gaming @ $440, both of these cards hold solid value in comparison to the 390 and 390X. Both the 970 and 980 overclock better, which will give you a slight edge in performance, but neither makes a mockery of the 390/390X... so picking which card you get really just comes down to brand preference (and perhaps power efficiency, heat, and so on) more than anything else. However, neither card is going to become obsolete in a year or two, relatively speaking, although Pascal is probably going to be a monster. The reason Maxwell has made better gains via drivers than Kepler is that Maxwell is a new architecture and it is better than Kepler (though Kepler is not bad). Radeon just spit out the same card for the most part, which is why the 290 and 290X aftermarket cards are such good deals if you can find one that has not been Bitcoined to near death.
 
Silverforce SPECIFICALLY SAYS AIB COOLERS.

So your post proves his point. The AIB cooler card just wrecks Fury X. And yes both chips OC, but the 980Ti OCs far better than the Fury X. This isn't even a debate, it's not up for it. The 980TI is in a performance bracket where it stands alone.

From the review:
With a retail price of $720, it is more affordable than competing products, like the MSI GTX 980 Ti Lightning.
So, it's not the same price. That doesn't negate the point that they are competitive. If you want to point out that O/C'd cards are faster, that of course is fine. Anyone who says someone has to be an idiot, or borderline retarded, or some of the other attacks I've read, is just misled or insincere (and likely insecure too, seeing as they feel the need to put others down to justify their own decisions).

Just as a side note: that card is already OOS @ Newegg. So it's likely another ringer card, where they have a few chips stable at those clocks under water and that's it. EVGA has been doing stuff like that forever, and it works out well for them.

Also, so supposedly Fury X will end up faster than the 980Ti according to many AMD fans.

So where are you guys with your Fury X? I've seen tons of people who are self reported AMD fans defend Fury X, and Fiji's release in general. I've seen ZERO of these people own a Fiji chip. In fact, the Fiji owners on here are few and far between. I think there are more 980Ti SLI users than Fury X owners.....

Why does it matter to you, and why would you care what people buy? How much do you get for it? Because unless you are getting paid somehow, where the money goes is irrelevant.
 
If the 3.5gb of VRAM was a big deal, Radeon would have not released the Fury series with 4GB.

The big deal is that nVidia blatantly lied and misled consumers with false claims to reviewers. People paid their money and then found out afterwards they hadn't bought the product that was pitched to them. It went on for months and they never corrected it; not until they were caught did they say anything, and then not even an apology, just that it was an internal communication mistake. If you have an issue, contact the retailer you bought it from. If they won't help you, contact the AIB. Don't bother calling nVidia, the perpetrators of the misinformation, because they won't do squat for you. And it wasn't just VRAM either; they blatantly misrepresented the specs. So, even though you don't seem to think it's a big deal, nVidia obviously disagrees and is willing to use false marketing claims to hide it.
 