[TT]AMD continues to cut spending on R&D, down 40% in the last five years

Jaydip

Diamond Member
Mar 29, 2010
3,691
21
81
http://www.tweaktown.com/news/47107/amd-continues-cut-spending-down-40-last-five-years/index.html

"...Furthermore, AMD has been scaling back R&D spending over the last five years, with Pacific Crest analyst Mike McConnell chiming in, with the following: "When I talk to investors about AMD, there's some concern - I mean, we've seen a decline by close to 40% versus levels we were at in the beginning of the decade". AMD CTO Mark Papermaster has said that the PC market is shrinking, and that AMD is putting less R&D effort into that part of the business...."
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
Could have sworn this thread had some posts when I last looked at it?

I can only imagine how much AMD has to spend on GPU R&D.

GCN being in consoles tells me it could probably stick around at least for the length of the current consoles.
 

Jaydip

Diamond Member
Mar 29, 2010
3,691
21
81
Could have sworn this thread had some posts when I last looked at it?

I can only imagine how much AMD has to spend on GPU R&D.

GCN being in consoles tells me it could probably stick around at least for the length of the current consoles.

I believe you are talking about my other thread, rail :D
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Could have sworn this thread had some posts when I last looked at it?

I can only imagine how much AMD has to spend on GPU R&D.

Mathematically speaking, I've been saying it for a long time now -- it should be impossible for AMD to beat either NV or Intel in any of their businesses, since AMD spends less on R&D and has fewer resources than either of those firms. Therefore, there should be no possibility at all for AMD to make a better product than either Intel or NV unless they actually screw up (aka FX5000 series, Pentium 4, etc.).

GCN being in consoles tells me it could probably stick around at least for the length of the current consoles.

Ya, and how is that going to work out for them? Sounds like a disaster waiting to happen. They had to use HBM and they still can't touch the perf/watt of Maxwell on GDDR5. Next gen (2016-2017), NV will have an all-new architecture -- the 3rd successive new architecture (Kepler --> Maxwell --> Pascal) in the same period AMD continues to use GCN (!) -- plus HBM2 and a node shrink. AMD has already used up the HBM advantage. Even if we believe AMD's marketing hype of 2x the perf/watt over existing GCN 1.0-1.2 parts, if NV also doubles the perf/watt from Maxwell, once again AMD is behind.
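
To put rough numbers on that last point -- a minimal sketch, where the 0.7x starting gap is purely an illustrative assumption, not a measured figure:

maxwell_perf_per_watt = 1.00  # normalized baseline (assumption)
gcn_perf_per_watt = 0.70      # assume GCN trails Maxwell by ~30% (illustrative)

pascal = maxwell_perf_per_watt * 2.0  # NV doubles perf/watt next gen (claimed)
next_gcn = gcn_perf_per_watt * 2.0    # AMD doubles perf/watt next gen (marketing claim)

print(next_gcn / pascal)  # 0.7 -- the relative gap is unchanged

If both sides double, the ratio is invariant; AMD only catches up if their multiplier is bigger than NV's.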

Since AMD doesn't have a strong professional graphics division, it's going to be far harder for them to design a 550mm2 16nm HBM2 flagship card. Once again NV should have a die size advantage here. Historically speaking, there are more generations with good-overclocking NV cards than there are with ATI/AMD cards. I would give the overclocking headroom edge to NV as well, because ever since GeForce 8, all their generations overclock well, some spectacularly like Maxwell or the GTX460. In the last 8 AMD generations -- HD2000/3000/4000/5000/6000/7000/R9 200/300 -- only HD7000 had stellar overclockers.

Since Lisa Su has abandoned the price/performance strategy, I would give yet another advantage to NV since at the same price and similar performance, most consumers will pick NV anyway.

I think Pascal generation could be NV's 9700Pro moment. NV was able to win with a cut-down 980Ti and with just using GDDR5, which means NV wasn't even pushed to the limits as they could have easily released a 1300mhz 3072 shader 980Ti Black/Platinum Edition/Ultra. I do not see AMD competing well next gen in the high-end GPU space. Since Lisa Su isn't interested in pricing AMD's flagship cards at $399-449, I do not see how AMD will have any chance to beat GP100.
 

atticus14

Member
Apr 11, 2010
174
1
81
GCN being in consoles tells me it could probably stick around at least for the length of the current consoles.


I don't think they are tied to GCN for the consoles' sake; their job, I would think, is pretty much done -- just pump out the chips/APUs and move to 14nm eventually. IMO they will want a new, more competitive arch to sell to Sony and MS for the next gen. Despite being different architectures, you would think backwards compatibility would still be way easier than in the past, if they even wanted that.
 
Feb 19, 2009
10,457
10
76
I think Pascal generation could be NV's 9700Pro moment. NV was able to win with a cut-down 980Ti and with just using GDDR5, which means NV wasn't even pushed to the limits as they could have easily released a 1300mhz 3072 shader 980Ti Black/Platinum Edition/Ultra. I do not see AMD competing well next gen in the high-end GPU space. Since Lisa Su isn't interested in pricing AMD's flagship cards at $399-449, I do not see how AMD will have any chance to beat GP100.

Too many doom and gloomers. Remember the months leading up to Fury X? Many here were claiming a 4000 SP GCN GPU would be a 400W+ monster (esp with the water cooler -- power hungry, so it NEEDS water! lol.. Asus Fury on air, 213W...), and that AMD could never be competitive on performance, etc.

When was the last time AMD's generation beat NV's generation uarch at the high end? Never.

Now Fury X is matching the 980Ti at 4K; all it takes is a few newer AMD-favorable games to arrive (lately it's all GameWorks, don't forget), and AMD's GE titles are incoming. Then DX12...

But especially for the ultra high end, Fury X CF > 980Ti SLI. In raw performance and in frame times. At stock, faster than a factory OC 980Ti SLI that nearly hit 1.4ghz boost, with 22% better frame smoothness overall. o_O

http://www.techspot.com/review/1033-gtx-980-ti-sli-r9-fury-x-crossfire/ (Note Witcher 3 has HairWorks on, it would be a bigger win for Fury X if GameWorks features were disabled).
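
Quick aside on what that 99th-percentile "frame smoothness" number actually measures -- a minimal Python sketch with made-up frame-time traces (not data from the review): sort the per-frame render times and take the value that 99% of frames stay under, so a low average can still hide nasty spikes.

def frame_time_percentile(times_ms, pct=99.0):
    # Sort frame times and return the value that pct% of frames stay under.
    ordered = sorted(times_ms)
    idx = min(len(ordered) - 1, int(len(ordered) * pct / 100.0))
    return ordered[idx]

smooth = [25.0] * 100               # steady 25ms frames (~40fps)
stutter = [24.0] * 97 + [80.0] * 3  # nearly the same average, but 3 big spikes

print(frame_time_percentile(smooth))   # 25.0
print(frame_time_percentile(stutter))  # 80.0 -- the stutter the average hides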

When was the last time AMD managed to beat NV? This is the first time. Some people will say "meh, 4K doesn't matter" or "multi-GPU doesn't matter".. but goalposts, they can move them, I don't care. At the TOP, AMD is faster and smoother, and that's in the DX11 era where their hardware is running crippled. In a year's time, you watch as Fury X smacks the 980Ti silly.

What they accomplish with such a limited R&D budget is outstanding.

As for your doom & gloom that AMD can't compete with Pascal: don't be daft. AMD with each generation has gotten closer to NV on performance**, to finally beating them at the top. Next gen will continue the trend; as they have more HBM experience, HBM2 + Arctic Islands will shine.

http://www.kitguru.net/components/g...gpus-for-2016-greenland-baffin-and-ellesmere/

Also, if there's any truth to the Hynix HBM2 exclusivity for AMD, you can look forward to a demolition of Pascal + GDDR5.

** 5870 vs 480, 6970 vs 580, 7970 vs 680, R290X vs 780Ti -- note the performance gap shrinks each time. The R290X was slower than the 780Ti on release at all resolutions. Fury X at least is competitive at 4K, albeit single GPUs aren't playable at 4K, so the real 4K battle always comes down to multi-GPU.
 
Last edited:
Aug 20, 2015
60
38
61
Too many doom and gloomers. Remember the months leading up to Fury X? Many here were claiming a 4000 SP GCN GPU would be a 400W+ monster (esp with the water cooler -- power hungry, so it NEEDS water! lol.. Asus Fury on air, 213W...), and that AMD could never be competitive on performance, etc.

When was the last time AMD's generation beat NV's generation uarch at the high end? Never.

Now Fury X is matching the 980Ti at 4K; all it takes is a few newer AMD-favorable games to arrive (lately it's all GameWorks, don't forget), and AMD's GE titles are incoming. Then DX12...

But especially for the ultra high end, Fury X CF > 980Ti SLI. In raw performance and in frame times. At stock, faster than a factory OC 980Ti SLI that nearly hit 1.4ghz boost, with 22% better frame smoothness overall. o_O

http://www.techspot.com/review/1033-gtx-980-ti-sli-r9-fury-x-crossfire/ (Note Witcher 3 has HairWorks on, it would be a bigger win for Fury X if GameWorks features were disabled).

When was the last time AMD managed to beat NV? This is the first time. Some people will say "meh, 4K doesn't matter" or "multi-GPU doesn't matter".. but goalposts, they can move them, I don't care. At the TOP, AMD is faster and smoother, and that's in the DX11 era where their hardware is running crippled. In a year's time, you watch as Fury X smacks the 980Ti silly.

What they accomplish with such a limited R&D budget is outstanding.

As for your doom & gloom that AMD can't compete with Pascal: don't be daft. AMD with each generation has gotten closer to NV on performance**, to finally beating them at the top. Next gen will continue the trend; as they have more HBM experience, HBM2 + Arctic Islands will shine.

http://www.kitguru.net/components/g...gpus-for-2016-greenland-baffin-and-ellesmere/

Also, if there's any truth to the Hynix HBM2 exclusivity for AMD, you can look forward to a demolition of Pascal + GDDR5.

** 5870 vs 480, 6970 vs 580, 7970 vs 680, R290X vs 780Ti -- note the performance gap shrinks each time. The R290X was slower than the 780Ti on release at all resolutions. Fury X at least is competitive at 4K, albeit single GPUs aren't playable at 4K, so the real 4K battle always comes down to multi-GPU.


Please tell me this was sarcasm? It was, right? Because otherwise... :confused:

From your own link:

Everything changes when overclocking comes into play. The GTX 980 Ti offers loads of overclocking headroom whereas the Radeon R9 Fury X offers almost none.

As a result, when comparing average frame rates once overclocked, the GTX 980 Ti graphics cards became 11% faster on average. Games where the GTX 980 Ti SLI cards were previously slower, such as Battlefield 4 and Watch Dogs, now favored the green team.

That isn't entirely surprising as overclocking saw SLI performance boosted by 15% on average, whereas the Crossfire configuration gained just a percent or two. The frame time data now also favored Nvidia by 5%.

When it comes to power consumption there were times when the R9 Fury X Crossfire system consumed over 700 watts whereas the GTX 980 Ti SLI setup never broke 600 watts, at least before any overclocking took place. That said, even when heavily overclocked, the GTX 980 Ti SLI cards still consumed considerably less than the Fury X Crossfire cards.

If we go back and look at the average frame rate performance of each game while also taking note of the minimum frame rates, we see that the GTX 980 Ti SLI setup delivered very playable performance in seven of the 10 games; the Fury X Crossfire cards, on the other hand, provided what we consider to be very playable performance in six of the 10 games while remaining playable in the rest.

Gamers wanting to play at 4K will be happy with either setup overall, but we feel Nvidia offers a more consistent gaming experience while allowing for an additional 15% performance bump through overclocking. Normally we don't place so much emphasis on overclocking, but we feel those seeking an enthusiast multi-GPU setup are probably able and willing to enjoy the benefits of overclocking.


And this is the Fury X's best-case scenario (OC Xfire at 4K)... but it still gets trounced by a crippled GM200 (not even the full chip), on air (not at max frequency), using GDDR5 (not HBM), with fewer enabled transistors (<8B vs 8.9B), all while sucking down less power, having more VRAM, and generally getting quicker SLI support from Nvidia. Go back to considering reality, and there are also plenty of enthusiast single-GPU and/or sub-4K setups where the 980 Ti's lead only grows. Again, crippled GM200, not even the full chip. AMD didn't win an overall performance lead here; they're being mercilessly crushed by the reject GM200s Nvidia farted out months after they were ready, and that's the unbiased truth. Just a few charts throwing a wrench in the "AMD beating Nvidia at the high end" thing (again, from your link; did you read it?):

[Benchmark charts from the linked TechSpot review: Metro, Tomb Raider, Battlefield 4, Watch Dogs, Hitman, GTA V, Civilization]
P.S. HBM's a JEDEC standard and Samsung just confirmed production of it. And AMD's die sizes shot up significantly between a few years ago and now; plus, they swapped places with Nvidia, the latter having taken the efficiency lead from AMD. GCN's not the same beautiful tech AMD had at their disposal against Fermi; it's much less efficient in both relative perf/mm^2 (against Nvidia) and perf/watt. Both Fiji and GM200 are at TSMC's reticle limit (~600 mm^2), with the former being a denser design with more transistors plus die size/power savings from HBM and AIO cooling, yet it still doesn't win.
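
For concreteness, the density arithmetic behind that point, using the commonly quoted die sizes and the transistor counts cited in this thread (illustrative only, not a performance measurement):

fiji = {"die_mm2": 596, "transistors_b": 8.9}   # Fury X, per the thread's figures
gm200 = {"die_mm2": 601, "transistors_b": 8.0}  # 980 Ti / Titan X chip

for name, d in (("Fiji", fiji), ("GM200", gm200)):
    density = d["transistors_b"] * 1000 / d["die_mm2"]  # millions of transistors per mm^2
    print(name, round(density, 1), "M/mm^2")  # Fiji ~14.9 vs GM200 ~13.3

Denser silicon delivering only equal performance is what "less efficient per transistor" cashes out to here.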

If you want to argue about it being impressive given AMD's shoestring budget, that's one thing. To argue it's beating GM200 is another, very laughable thing. Commence goal-post moving, but there it is. Fiji pulls ahead in a few highly niche scenarios, but there's nothing conclusive in that review showing those niches cement an overall lead in its favor. It's more the opposite, as summed up by the article's writer.
 
Last edited:

tential

Diamond Member
May 13, 2008
7,348
642
121
I'm still happily sitting with my HD7950 RS. I removed all my drive bays from my R4 case and my temps dropped like a rock, and I've been able to make a LOT more games playable and mods work because of the OCs I've been able to hit -- not that the clocks are even above average, it's just a huge boost from where the card shipped.

I do wish Gsync would make it to more monitors. It's because of the limited Gsync availability that I'm going to want a Fury.

All of the games I want to play will work fine on a freesync monitor 4k with Fury, and when I'm ready to play more modern games, I'll upgrade.

My problem is I'm worried that if AMD doesn't significantly improve next gen, I'll be stuck with a 4K freesync monitor that I only got for the freesync capability.
 
Feb 19, 2009
10,457
10
76
Stock Fury X CF vs SLI OC 980Ti model.

The average frame rate data saw the Fury X cards come out 4% ahead of the GTX 980 Tis based on the 10 games that we tested at 4K.

Now for the interesting part, typically we expect Nvidia to have the edge when looking at frame time (99th percentile) performance, but this wasn't the case here. The R9 Fury X Crossfire cards were on average 22% faster when comparing the 99th percentile data.

Notice I did not bring OC into the equation, comparing stock results across the generations. Stock v stock, 5870 was far behind 480, as was 6970, as was R290X vs 780Ti.

Now we're getting Stock Fury X destroying STOCK ref 980Ti and beating OC 980Ti models.

That is an improvement. Give credit where it's due.

Against a max OC 980Ti, 15% OC on top of the OC model, putting it at >1.5ghz.

As a result, when comparing average frame rates once overclocked, the GTX 980 Ti graphics cards became 11% faster on average.

11% faster than a gimped (5%) OC Fury X. Check TPU's OC numbers with vcore: a >1.2ghz Fury X with a vram OC will equalize that 11% delta.

[TPU chart: Fury X performance scaling with memory OC]


So no, max OC SLI 980Ti isn't faster than max OC Fury X CF. The Techspot article used a gimped Fury X OC, 5%.
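
A rough sanity check on those numbers, assuming performance scales linearly with core clock -- optimistic, since real GPU scaling is sub-linear and the memory OC contributes too:

fury_x_stock = 1050                # MHz, Fury X reference core clock
techspot_oc = fury_x_stock * 1.05  # the ~5% OC used in the TechSpot review

# If a 5% OC Fury X trails the max-OC 980Ti SLI by 11%, the clock needed to tie:
needed = techspot_oc * 1.11
print(round(needed))  # ~1224 MHz -- roughly the ">1.2ghz with vcore" figure above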

Also, reliance on a max OC to beat something is NOT a good result. What if you don't have silicon lottery luck? What if your 980Ti can't reach a stable daily-use >1.5ghz?

Out of the box, CF Fury X is supreme over even OC custom 980Ti SLI, with 22% smoother frame times to boot! That's major progress from AMD.

Now, wanna compare the ULTRA enthusiast? How about 3-4 GPUs? Fury X destroys Titan X/980Ti.

https://www.youtube.com/watch?v=d8hKhlbrhQ4

https://www.youtube.com/watch?v=G1EoFWrD3lE

When was the last time AMD had an ULTRA setup victory over NV? Never. This gen is the first time they've done it. Next gen, I fully expect them to continue this progress and take the overall lead, clearly, single-GPU and multi-GPU, as we enter the DX12 era next year.
 
Last edited:

rgallant

Golden Member
Apr 14, 2007
1,361
11
81
@ghost


What does the clock-for-clock comparison look like?
Maybe if AMD did a respin for higher clocks, where Fury [X] does 1500mhz, what would the mighty 980ti look like then??
 
Last edited:
Aug 20, 2015
60
38
61
Stock Fury X CF vs SLI OC 980Ti model.



Notice I did not bring OC into the equation, comparing stock results across the generations. Stock v stock, 5870 was far behind 480, as was 6970, as was R290X vs 780Ti.
Stock clocks mean nothing. Any company can take a part, throw a water block on it, and overclock it to near its practical limits as a stock part. Which AMD did. Are we going to compare FX 9590s to stock Sandy Bridges now? Stock clocks are as meaningless there as they are here. Some microprocessors have 30-40% OC headroom or more; others have 10% or less. Stock isn't what anyone should be looking at, precisely because Nvidia was so non-threatened they could use a horrible blower cooler and a woefully-underclocked GM200 reject while letting AIBs figure out the rest.

Maxwell overclocks extremely well with minimal voltage increases, and it's easily something your typical ultra high-end enthusiast will do. To pretend stock speeds mean anything in light of that would be as criminal as comparing CPUs to 2500Ks at the 3.3-3.5 GHz or whatever they come with stock. Or better yet, it would be like pretending the 290X's low reference clocks and throttling meant anything about Hawaii's capabilities.

Now we're getting Stock Fury X destroying STOCK ref 980Ti and beating OC 980Ti models.
No, your posted link already established that a Fury X's max OC without going nuclear does not beat OC 980 Ti models even in their best-case scenario. And those, again, are crippled GM200 chips themselves, still on air and at reasonable voltages.

11% faster than a gimped (5%) OC Fury X. Check TPU's OC numbers with vcore: a >1.2ghz Fury X with a vram OC will equalize that 11% delta.

[TPU chart: Fury X performance scaling with memory OC]


So no, max OC SLI 980Ti isn't faster than max OC Fury X CF. The Techspot article used a gimped Fury X OC, 5%.
A gimped OC? That's the Fury X's max reasonable overclock, as per their sample. Weren't you saying, about two posts up, that the Fury X 400W estimates were ridiculous?

[TPU chart: Fury X power draw with increased vcore]



Because that's exactly what needs to happen for the Fury X even to reach the OC level you're discussing and somewhat match a hugely more-efficient 980 Ti OC.
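
That follows from how dynamic power scales -- roughly P ~ C * V^2 * f -- so vcore increases hit power quadratically. A quick sketch with purely illustrative percentages:

def rel_power(dv, df):
    # Dynamic power ~ C * V^2 * f, so relative power = (1 + dv)^2 * (1 + df)
    return (1 + dv) ** 2 * (1 + df)

# A 10% clock bump that needs a 10% voltage bump costs ~33% more power, not 10%:
print(round((rel_power(0.10, 0.10) - 1) * 100))  # ~33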

You know, the last time AMD's best chip could only match Nvidia's gimped high-end chip was the 6970 versus the 570. The difference back then was that AMD had a much smaller and more efficient die, still running on air and using comparable technologies outside the GPU (like GDDR5), whereas this time they're literally pushing TSMC's manufacturing limits and hold a few advantages over Nvidia (HBM, XDMA, AIO), at least one of which is going to evaporate next time around:

[TPU chart: overall performance summary]



I'm not sure why you keep ignoring this fact, but TSMC cannot manufacture larger GPUs. Fiji was literally their best effort, with the best cooler they could find and their cutting-edge HBM advantage. As for Xfire, that's most likely XDMA's doing, and it's still not conclusively superior to -- ad nauseam -- Nvidia's gimped GM200 chip on air. AMD aren't gaining on Nvidia; they're desperately pulling out all the stops they historically refused to pull just to try and somewhat keep up. Like the big die, the significance of which seriously cannot be overstated. They've done better before with less.
 
Last edited:

mxnerd

Diamond Member
Jul 6, 2007
6,799
1,103
126
Maybe AMD should drop the CPU business and focus on graphics like NVIDIA?

It can't compete with Intel in CPUs anyway.
 
Feb 19, 2009
10,457
10
76
No, your posted link already established that a Fury X's max OC without going nuclear does not beat OC 980 Ti models even in their best-case scenario.

Stock Fury X CF already beats SLI OC 980Ti models, which run with a 15% OC over reference and which many sites found boosting to 1.2ghz (a ref 980Ti boosts higher than a Titan X). 22% faster frametimes than a 15% OC 980Ti. o_O

You can slice it however you want, but a 30% OC 980Ti SLI at >1.5ghz boost barely beating a 5% OC Fury X CF is a great showing for the Fury X.

Look at the clocks with a small +24mV

[TPU chart: Fury X clock scaling at +24mV]


Performance gain at 1160/560 is already decent, twice that of the Techspot OC.
[TPU chart: Fury X performance scaling with memory OC]


Power at +24mV is reasonable.
[TPU chart: Fury X power draw at +24mV]


If anything, it shows a max OC 980Ti SLI being on par with a minor-OC Fury X CF (you don't even need >1.2ghz; 1160/560mhz is enough to tie it). -_-

You haven't even bothered to talk about 3-4 GPUs; even Titan X, the full GM200, is gimped compared to Fury X in top configs. o_O So again, it's the first time in a LONG time for AMD/ATI to reclaim the performance crown at the top. Halo.

Even more amazing when there are so many GameWorks titles of late (which Techspot tested with HairWorks ON) and AMD's supposedly gimped DX11 drivers. What would the results look like with the next wave of AMD GE titles added to the mix? Or a year from now with a bunch of DX12 games? Hmm. Looks to me like AMD, with their limited budget, is punching well above their weight.
 
Last edited:

raghu78

Diamond Member
Aug 23, 2012
4,093
1,476
136
It is now obvious that AMD invested the least amount of money possible at 28nm (in GPUs especially) once they knew 20nm was unusable and 14nm/16nm FinFET would take a lot more time to be available. The GCN architecture in its current form is not efficient in terms of perf/watt, perf/sq mm, or perf/transistor. Fury with its HBM advantage is losing to GM200 with GDDR5.

Next gen we will see both Nvidia and AMD with FinFET GPUs with HBM2. Nvidia will improve upon an already impressive Maxwell. AMD meanwhile has a massive multi-generation gap to overcome. Right now I am not optimistic about AMD's chances on both the CPU and GPU fronts -- Skylake vs Zen and GCN2 vs Pascal.

The rate at which R&D has been cut is appalling. AMD's computing and graphics revenue is falling with no bottom in sight, and the situation is pathetic. AMD's computing and graphics revenue in Q2 2015 is less than their graphics revenue alone in Q2 2010. Let that sink in for a moment. This is a company which has made mistake after mistake and continues to fall further behind its primary competitors. Anyway, the situation is so bad that AMD won't get another chance. If Zen and GCN2 fail to compete then AMD is dead, plain and simple.
 
Feb 19, 2009
10,457
10
76
ok, so GCN is here for the next 5 years, time to buy a GCN card

That's what I did: R290s at launch and recently an R290X cool & quiet, faster than the gimped 970 for less $. I'm betting on DX12 making GCN crush Maxwell & destroy Kepler. Should be fine until the 14nm FF next gen.
 
Mar 10, 2006
11,715
2,012
126
I think Pascal generation could be NV's 9700Pro moment. NV was able to win with a cut-down 980Ti and with just using GDDR5, which means NV wasn't even pushed to the limits as they could have easily released a 1300mhz 3072 shader 980Ti Black/Platinum Edition/Ultra. I do not see AMD competing well next gen in the high-end GPU space. Since Lisa Su isn't interested in pricing AMD's flagship cards at $399-449, I do not see how AMD will have any chance to beat GP100.

I think AMD's priorities lie in chasing the server market rather than the gaming GPU market.
 
Mar 10, 2006
11,715
2,012
126
It is now obvious that AMD invested the least amount of money possible at 28nm (in GPUs especially) once they knew 20nm was unusable and 14nm/16nm FinFET would take a lot more time to be available. The GCN architecture in its current form is not efficient in terms of perf/watt, perf/sq mm, or perf/transistor. Fury with its HBM advantage is losing to GM200 with GDDR5.

Next gen we will see both Nvidia and AMD with FinFET GPUs with HBM2. Nvidia will improve upon an already impressive Maxwell. AMD meanwhile has a massive multi-generation gap to overcome. Right now I am not optimistic about AMD's chances on both the CPU and GPU fronts -- Skylake vs Zen and GCN2 vs Pascal.

The rate at which R&D has been cut is appalling. AMD's computing and graphics revenue is falling with no bottom in sight, and the situation is pathetic. AMD's computing and graphics revenue in Q2 2015 is less than their graphics revenue alone in Q2 2010. Let that sink in for a moment. This is a company which has made mistake after mistake and continues to fall further behind its primary competitors. Anyway, the situation is so bad that AMD won't get another chance. If Zen and GCN2 fail to compete then AMD is dead, plain and simple.

Some cold hard truth right here. Spot on. :thumbsup:
 
Aug 20, 2015
60
38
61
You're not even addressing what's being written, nor finding any actual numbers to prove what you're saying. The numbers you did link/reference aren't supporting your points at all. The linked article gave the Fury X the best OC they could, and the 980 Tis too, and they compared them only to find the Fury X lacking. Now you're ironically goalpost-moving to three-four SLI/Xfire setups, where GM200, being on air, will naturally be heavily limited thermally, and pretending that has anything to do with Nvidia's GPU tech (and not, you know, the AIO). And pretending the stock speeds are anything but an arbitrary starting point (which literally means nothing to ultra enthusiasts). When AMD had the VRAM advantage, you could force unrealistic scenarios with a 6970 versus a 580 (let alone a 570) too:

[chart: 6970 vs 580 in a VRAM-limited scenario]




They don't even list an OC speed for reference, yet you've concluded their Fury X is at a gimped OC. You're extrapolating numbers from a separate overclocking scenario, filtering out the horrendous parts (like a 400W+ single-GPU result), and nebulously using imaginary numbers to vaguely hand-wave away the Fury X losing in the actual review. If you want to prove your points, by all means find the review article's clock speeds and mathematically adjust them with the OC findings, or find an actual review showing what you mean.

You insist AMD are catching up and doing something unprecedented while ignoring all the advantages they have (including the significant extra wattage that any OC above the review article's will bring) and the fact that they're still being compared to GM200 rejects.

The "catching up" came alongside record-high die sizes and thermals for AMD. It is a fact that they've gotten as close to Nvidia before with much more efficient tech. I'm not sure why you expect AMD will push beyond Nvidia next time around based on this "trend" considering AMD can't make a bigger die than Nvidia's by any significant margin (considering the latter has been consistently getting close to TSMC's reticle limit for years) and they certainly aren't going to have the HBM advantage next year. They're definitely not having an R & D advantage. How, exactly, do you expect them to surpass Nvidia? More cherrypicking of thermally-limited scenarios? Any ultra enthusiast buying at that level uses waterblocks, assuming they're even insane enough to try doing tri/quad GPU setups.

You're wrong about AMD's relative positioning versus the past, including the 5870 and 6970 not being competitive in the exact way you insist the Fury X is. The only actual evidence you can use to support your points is from when the GM200 products are suffocating in their own heat. What happens when you put them on water blocks and overclock the living daylights out of them?
 
Last edited:
Feb 19, 2009
10,457
10
76
In Summary:

The article I linked is the latest review, with the latest drivers. They listed the % OC for both: 5% for the Fury X, and for the 980Ti an OC model 15% above reference, which they then OC'd 15% further to get its max OC.

Their data shows a stock Fury X CF is faster than the 15% OC model 980Ti SLI by 4% -- not much -- but with 22% faster frame times overall. A big win.

Their data shows a 15+15% OC 980Ti SLI beating a 5% OC Fury X CF by 11% in average frame rate, with 5% faster frame times.
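
Note the two steps compound, which is where the ~30% total OC figure comes from -- plain arithmetic, no benchmark data:

reference = 1.00
factory_oc = reference * 1.15  # OC model ships 15% above reference
manual_oc = factory_oc * 1.15  # then overclocked a further 15%
print(round((manual_oc - 1) * 100))  # ~32 (% above reference clocks)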

I showed you TPU's data: with a small +24mV they can do a decent OC without blowing up the power, and that would bring up the numbers.

Also note they tested with HairWorks on in Witcher 3.

As said, it's amazing what AMD can manage with their limited budget: in DX11, where AMD performs worse than in DX12, with GameWorks titles running GW features, Fury X still stands tall against the 980Ti at 4K and eventually wins as we move into quad-GPU setups, the home of the ultra enthusiast.

This is actually the best-case scenario for NV, reference vs custom. Compared to a reference 980Ti or Titan X, it's not even a close contest at 4K, CF vs SLI.

So my statement about AMD progressing stands.

ps. Some of you claimed, when CF Fury X destroyed SLI 980Ti, that it wasn't fair because it was a reference vs reference comparison!

[screenshot: forum comments claiming the reference-vs-reference comparison was unfair]


So now we have OC models of 980Ti SLI v reference Fury X (including PCPER using a hybrid 980Ti!) and it loses, badly in frametimes, and it's still not fair!

It's not a manual MAX OC... so I show you what can be achieved with a small vcore bump on the Fury X, and you say it's not fair!! GM200 is suffering in the heat, it's not on water.. you want a max vcore OC on water? Why stop there? Why not a max vcore OC on liquid nitrogen? Ok, NV wins there, you can have that.
 
Last edited:
Aug 20, 2015
60
38
61
In Summary:

The article I linked is the latest review, with the latest drivers. They listed the % OC for both: 5% for the Fury X, and for the 980Ti an OC model 15% above reference, which they then OC'd 15% further to get its max OC.

Their data shows a stock Fury X CF is faster than the 15% OC model 980Ti SLI by 4% -- not much -- but with 22% faster frame times overall. A big win.

Their data shows a 15+15% OC 980Ti SLI beating a 5% OC Fury X CF by 11% in average frame rate, with 5% faster frame times.

I showed you TPU's data: with a small +24mV they can do a decent OC without blowing up the power, and that would bring up the numbers.

Also note they tested with HairWorks on in Witcher 3.

As said, it's amazing what AMD can manage with their limited budget: in DX11, where AMD performs worse than in DX12, with GameWorks titles running GW features, Fury X still stands tall against the 980Ti at 4K and eventually wins as we move into quad-GPU setups, the home of the ultra enthusiast.

This is actually the best-case scenario for NV, reference vs custom. Compared to a reference 980Ti or Titan X, it's not even a close contest at 4K, CF vs SLI.

So my statement about AMD progressing stands.

And you still seem to be under the impression that stock speeds (even on factory-overclocked models) mean anything.

While the +24mV OC only pushes upward another ~5%.

And the 6970's VRAM gave it advantages only in extreme multi-GPU scenarios too.

And ignoring the heavy thermal throttling that naturally comes with stuffing 4 air-cooled 250-300W GPUs in a box (akin to the 6970's multi-GPU advantages).

All while the 6970, though giving Fermi just as much of a run for its money, did it with a far smaller die size and power usage, whereas AMD have only gotten "closer" (read: stayed exactly where they were) by ballooning the die size, throwing power efficiency out the window, being late to the competition, using their one-time HBM card, and being pitted against reject GM200s on air (which are akin to the 570 the 6970 competed with).



And no, the best-case scenario for GM200 would be if you took the full chip, put it on water, gave it the Fury X stock-clock treatment (near its limits), and compared in a single-GPU battle at 1440P or so; especially when you can only point to a thermally-crippled multi-GM200 setup to find any decisive win for the Fury X. Which is in its own best-case scenario by virtue of being aggressively clocked, in full (uncut) form, on an AIO, in its best (multi-GPU) scenario, and aided by XDMA crossfire scaling.




There's nothing more to say. All you're doing is cherrypicking stock-clocked multi-GPU/thermal throttling scenarios with GM200 and goalpost-moving.
 
Last edited:
Feb 19, 2009
10,457
10
76
Indeed, when the comparison was ref v ref, people complained it wasn't fair, that you could get a better out-of-the-box experience with a custom 980Ti... it turns out Fury X still trounces it at 4K, and the goal post has moved to manual max OC v a gimped no-vcore-OC Fury X. Nice try.

AMD has definitely won the 4k halo crown, until NV decides to make a full GM200 custom cooled variant that is faster OUT OF THE BOX. We'll revisit it then if it happens.

We'll also revisit this in 2016 and see if my prediction of AMD widening the gap with DX12 comes true: Fury X will pwn the 980Ti, even more than Hawaii pwns Kepler.

Peace.
 

5150Joker

Diamond Member
Feb 6, 2002
5,549
0
71
www.techinferno.com
That's what I did: R290s at launch and recently an R290X cool & quiet, faster than the gimped 970 for less $. I'm betting on DX12 making GCN crush Maxwell & destroy Kepler. Should be fine until the 14nm FF next gen.

Haha, DX 12? Are you basing all these hopes off Ashes of the Singularity? If so, I think you're going to be in for a rude awakening when real DX 12 games do finally show up.


It is now obvious that AMD invested the least amount of money possible at 28nm (in GPUs especially) once they knew 20nm was unusable and 14nm/16nm FinFET would take a lot more time to be available. The GCN architecture in its current form is not efficient in terms of perf/watt, perf/sq mm, or perf/transistor. Fury with its HBM advantage is losing to GM200 with GDDR5.

Next gen we will see both Nvidia and AMD with FinFET GPUs with HBM2. Nvidia will improve upon an already impressive Maxwell. AMD meanwhile has a massive multi-generation gap to overcome. Right now I am not optimistic about AMD's chances on both the CPU and GPU fronts -- Skylake vs Zen and GCN2 vs Pascal.

The rate at which R&D has been cut is appalling. AMD's computing and graphics revenue is falling with no bottom in sight, and the situation is pathetic. AMD's computing and graphics revenue in Q2 2015 is less than their graphics revenue alone in Q2 2010. Let that sink in for a moment. This is a company which has made mistake after mistake and continues to fall further behind its primary competitors. Anyway, the situation is so bad that AMD won't get another chance. If Zen and GCN2 fail to compete then AMD is dead, plain and simple.


This is the unfortunate reality of things for AMD. If they don't find some way to beat Pascal in terms of perf/watt, release timing, and driver/developer support, they can kiss the next generation goodbye as well. NVIDIA right now is executing flawlessly, and they aren't even pushing 100% -- as others pointed out, they're basically holding back. Knowing JHH, he might just go for the jugular next round and push AMD to the brink.

As for Zen, if it is priced right and can come close to Skylake in terms of performance and heat, it should do fine. Intel is hitting a wall with performance gains and die shrinks; it's not like the old days where each successive die shrink brought huge gains in performance. There's a chance for AMD to create a powerful Zen-based APU (maybe mid-tier graphics) that eats up a few hundred watts for the desktop market. I think if they pursued that strategy, a LOT of people would buy a Zen just because they're too cheap or lazy to buy a GPU. If AMD took it a step further and marketed pre-made gaming PCs with OEMs featuring powerful Zen APUs, they could create a nice little niche for themselves to survive on.
 
Last edited:
Feb 19, 2009
10,457
10
76
Haha, DX 12? Are you basing all these hopes off Ashes of the Singularity? If so, I think you're going to be in for a rude awakening when real DX 12 games do finally show up.

Yep, because Oxide doesn't make real DX12 games... it looks like you swallowed the NV PR line: somehow, a game built from the ground up on Mantle/DX12 isn't a real DX12 game! Huh. You'd have better luck claiming "it's not representative cos it's still in alpha" or "NV drivers aren't ready for DX12 yet".. etc. But to claim a showcase of DX12 tech in a DX12 game.. isn't a DX12 game?! Wow. Desperation. Almost as bad as NV lying about the MSAA bug & blaming Oxide.. when the bug is IN THEIR DRIVERS.

ps. What if I'm right about DX12 giving AMD a huge advantage? Are you gonna come here and move the goalposts to some bizarre metric, or are you gonna MAN UP and give AMD credit where it's due?
 
Last edited:
Aug 20, 2015
60
38
61
Indeed, when the comparison was ref v ref, people complained it wasn't fair, that you could get a better out-of-the-box experience with a custom 980Ti... it turns out Fury X still trounces it at 4K, and the goal post has moved to manual max OC v a gimped no-vcore-OC Fury X. Nice try.

AMD has definitely won the 4k halo crown, until NV decides to make a full GM200 custom cooled variant that is faster OUT OF THE BOX. We'll revisit it then if it happens.

We'll also revisit this in 2016 and see if my prediction of AMD widening the gap with DX12 comes true: Fury X will pwn the 980Ti, even more than Hawaii pwns Kepler.

Peace.
I don't think you're getting it.

First off, yes, reference versus reference (including stock clocks on aftermarket models, which are still relatively conservative) is a horrendous way to compare microarchitectural development -- and that comparison is the crux of your argument that AMD are somehow closer to Nvidia than they ever were. This is because the Fury X is aggressively clocked out of the box and simply doesn't have much headroom left in it, whereas Maxwell is different (even aftermarket models still have more OC headroom without voltage increases than Fiji). It's the same with all microprocessors, including Hawaii. I never implied otherwise, despite your accusations.

Your own linked review showed both cards in a dual-GPU configuration with max overclocks short of skyrocketing voltage, and the 980 Ti SLI came out ahead by ~11%. You claim the Fury X was only overclocked by ~5%, but I checked the whole article and they never said by how much. It's safe to estimate it ended up in the ~1125-1150mhz range like other Fury Xs without voltage control.

There's only another ~10% maximum left in the Fury X after that OC (including memory and core) regardless, and it involves shooting voltage and power draw through the roof... just to theoretically match (not beat) those 980 Ti numbers from the review you linked while relatively sucking down huge amounts of power. Both architecturally and as an end product, that's not a good thing for Fiji. It doesn't put AMD ahead of where they were architecturally (hence the thread's topic about R&D) relative to Nvidia at all. You could scale back on the voltage and clocks as you suggested, but then it would be losing. Alternatively, you could pump up Maxwell's volts and squeeze a little more juice out of it to keep the 980 Ti slightly ahead. The 6970 was superior in this regard: smaller, cheaper to manufacture, and more efficient, yet comparably good in performance against the 570 (GF110's cut-down part, the 980 Ti's analogue of its day), with some niche scenarios of it pulling ahead of even 580 SLI (like the Fury X).

The 6970 was close to a 570, winning in niche multi-GPU scenarios just as the Fury X does versus the 980 Ti. But the 6970 was significantly smaller, cheaper, and less power-hungry than GF110, whereas the Fury X is the same size as, more expensive than, and more power-hungry than GM200. Architecturally, and as a consumer product, that's relatively worse.


The entire premise of your argument is that AMD have pulled themselves further ahead, when your own presented data (not your dismissive statements with no basis in documented reality), coupled with the 6970's showing and the significant manufacturing drawbacks and thermal signature Fiji needed to get there, suggests the complete opposite: that AMD have regressed relative to Nvidia. The market's pricing also reflects that reality, as does the role reversal since those days in power efficiency and VRAM. Plus, Nvidia's release schedule has been significantly ahead of AMD's these past couple of years in a way it wasn't back when the 6970 was released.

The ONLY scenario you've presented where the Fury X takes any crown is when you pit it, with its aggressively-clocked factory defaults, against thermally-restricted reference GM200 chips. That proves nothing architecturally, nor does it reflect how the ultra-enthusiast market actually uses its cards. Very few even go beyond dual-GPU, let alone with reference cooling or reference clocks. Even at the dual-GPU stage, enthusiasts don't run stock clocks, which are also, again, a pointless form of measurement for relative microarchitectural development.



Cherry-picking thermally-crippled and conservatively-clocked results for one product simply doesn't support your assertion that AMD are relatively ahead in uarch development, period. Nor does it support the assertion that AMD have taken a performance crown, given how ridiculously unrealistic it is for people to run their cards like that. But mostly, this is about microarchitectural development.




The actual information (which YOU listed) doesn't support your argument in the slightest, so you've changed it into pure conjecture about DX12 and cherry-picked reference results, trying to justify it by presuming this is some green-versus-red mudslinging session and that I've ever been a part of that. I specifically stated the 290X's reference cooler isn't a good way to measure Hawaii's microarchitectural or actual market capabilities as well, so please read what's actually being said instead of projecting a fan war onto posts that contradict yours with facts. There is no goalpost-moving on my end, nor is there any agenda other than to discuss the facts.
 
Last edited: