JPR Q4 graphics marketshare


PPB

Golden Member
Jul 5, 2013
I see it as a matter of R&D expenditure on different die configurations.

The 7000 series was mostly GCN 1.0 with just one GCN 1.1 SKU (the 7790). Competition from AMD in the dGPU space was strong at that time, even with NV on full offense, its PR force exploiting the CrossFire frame-pacing issues.

On the other hand, the 200 series comprised only two new die configurations, one GCN 1.1 (Hawaii) and one GCN 1.2 (Tonga, in the 285), while the rest of the lineup was still rebrands of the 7000 series. This is where NV started its big swing in market share, filling more gaps with cut-down dies than AMD (NV has historically offered three SKU variations per die at the top end versus AMD's two). NV didn't actually have a fresher lineup than AMD here, with GM107 being the only totally new die (the kind of move AMD makes quite frequently, as with the 4770, the 7790 and the 285).

And then GM204 came along and left AMD's entire lineup feeling dated. By itself GM204 wasn't an impressive launch performance-wise, but between a PR strategy that now favors perf/watt over absolute performance at the top end, and the driver-side shafting of the Kepler SKUs, GM204 indeed looked great.

I think three mistakes are biting AMD hard here:

- Their uarch progression strategy right after Llano was not only inherited from the CPU division but also heavily shaped by the APU release schedule. The GPU division ended up dragged along by the CPU division's philosophy on uarch progression and release cadence (remember the famous slide promising yearly APU releases with the latest GPU tech in every new APU's iGPU?), which pushed them into today's incremental GCN 1.x style of progression instead of big uarch changes on a two-year cadence. One could argue the roles have since reversed (the CPU division now following the GPU division's two-year cadence), but in reality the CPU side has simply stalled and AMD's R&D budget has shrunk quite abruptly.
- Bad sales forecasts left AMD with a HUGE inventory problem on Hawaii/Tahiti (and probably Pitcairn too).
- They probably failed to predict the process-node stall (28nm for what, four years now?) while sticking to their design philosophy that each big uarch change arrives on a new process node. Their big uarch change should have happened midway through 28nm, not a few months before 28nm finally seems to be on its way out.

All three left AMD where it is today: forced to release uarch updates in tiny increments, not only because that is what the APU schedule mandates, but also because today's R&D budget won't stretch to a full GCN 1.1 lineup for the 200 series and a full 1.2/1.3 lineup for the 300 series. Their current release schedule would fit a bigger company, one capable of designing four die variants of the same updated GCN generation each year; sadly that isn't AMD right now. Designing new dies costs a ton of money, so what would probably serve AMD best is to adopt NV's strategy: make big uarch changes every two or three years, tweak the rebranded SKUs that fill the gaps via drivers/BIOS, and make bigger leaps in every metric each time.

AMD started to show its smaller R&D budget the moment it released a design as cramped as Hawaii. The thermals traded away, combined with the god-awful reference cooler, left them with a tainted image that still hasn't been shaken to this day (how long have we seen users recommend the 290/290X Tri-X, only to be dismissed because all anyone remembers, conveniently for NV's PR force, is the loud and hot reference cooler?)
 

wand3r3r

Diamond Member
May 16, 2008
You're wrong, plain and simple. 4K H.264/HEVC decoding is done fully on GM206's fixed-function decoder, so it's not worthless for 4K video decoding, and it will outlive Pitcairn and Tahiti once G-Sync monitors start using ASIC hardware instead of the FPGA currently used.

GTX 960 4GB cards will be out in March, and neither Pitcairn nor Tahiti can handle 4K gaming either. And stop comparing it to the 290/290X; that card is worthless now, with all the miners selling off their Hawaii cards after the mining bubble burst back in 2014.

http://www.channelregister.co.uk/2015/02/12/amd_stops_chip_shipments/

This is the only reason you can buy Hawaii dirt cheap, because it's not selling at all.

Without the extreme bias, you would see that the 290 and 280X blow away the 960 in 4K gaming. Tahiti blows the doors off the 960. 290s are dirt cheap and make the 960 look like the overpriced ripoff it is in 2015.

Exactly how a video codec is supposed to help a gamer is beyond me; pitching it as a selling point is more than a little biased.

[Chart: relative performance at 3840x2160]

Somehow you are trying to spin the 960 as superior to the 280X/290, although it's inferior for gaming...

[Not happening]

Member callouts and accusations of bias will not be tolerated here.

-Rvenger
 
Last edited by a moderator:
Feb 19, 2009
The reason I brought up consoles was to show you that low volume + high margin is not the only way for AMD to make money. To get back to my point: do you honestly think a $400+ R9 390/390X on the desktop will save AMD? How many of those cards will sell? Historically the breakdown of flagship AMD vs. NV cards above $400 is at least 70% in favour of NV. Even if the 390X beats GM200 in everything, it won't matter for sales. We know the 390/390X won't anyway, because it will be limited to 4GB of VRAM, and chances are GM200 will overclock better, judging by the great overclocking of the 960/970/980. AMD needs to focus more on the $75-400 segments for laptops and desktops. I guess you and I will have to agree to disagree, but if it were my firm, I would make sure there are great replacements for Pitcairn, Tahiti and Hawaii. If I had money left over, only then would I spend it on the $600 performance crown.

The console situation is unique, so that comparison doesn't fly. The typical volumes involved are an order of magnitude above mid-range GPUs.

Both NV and AMD have low-volume HPC/workstation products that generate disproportionate revenue and profits. Likewise for their top range. Think back to the 7970 release: they were raking in the $ selling it at $550, and their financials were in the black for a while before NV hammered them.

Aiming for great perf/$ and targeting the mid-range while losing the crown has NOT worked for AMD since day 1. AMD only mattered when they took the performance crown, with the Athlons and Opterons. The same for ATI: they only mattered when they had the absolute lead with the 9700 series.

Also, there's no possibility for them to compete on the low end and mid-range this generation, because efficiency matters more there, with many gamers on low-end rigs with weak PSUs and cooling.

But on the high end? A 390X selling for $600 with 50% perf gains and similar power use to the 290X, while being cool and quiet, will sell out utterly. That's money in the bank.
 

monstercameron

Diamond Member
Feb 12, 2013
Good points, but I'd argue that the low-to-mid-end numbers could easily swing in favor of AMD. I believe a good marketing strategy and a massive perf/$ advantage could gain them market share.

Us cheapos who buy 260Xs and GTX 650s really buy whatever we are told is better, via forums, peers, halo products, etc.

The right promotion could do wonders, e.g. an 860K+260X bundle with an AMD-exclusive League of Legends skin or Riot Points.
 
Feb 19, 2009
That's the halo effect. It washes down through the entire range for the average uninformed consumer.

That's why car companies spend so much money on racing development and release supercars that sell in minuscule volumes, making no dent in their market share or financials. But it gives them the halo effect, otherwise known as "street cred".

NV has held the halo crown since AMD acquired ATI. AMD's plan to compete by pitting two small dies against one big NV die has NOT worked. Multi-GPU users are a small niche, so that approach was never going to fly.

Currently AMD is offering excellent perf/$ with the R9 290/X, but it's not helping them much. Consumers know it's old, hot (the damage was done by the reference blowers!) and power hungry. They would gladly fork over extra $ for a 970/980. In the mid-range, the 280/X/285 are getting hammered even worse by the 960, which is similar in price and performance and much more efficient. In the low end, AMD has nothing to compete with GM107.

So how do they reverse the situation? Start from the low end or mid-range? With what? On 28nm they had two new chips planned: Tonga and a 550mm2 big die. Tonga isn't going to compete well against the 970/980 (which will be dropped into the mid-range segment). Their only hope is that the monster 550mm2 HBM 390X is smoking fast.
 
Feb 19, 2009

Outside of a few months' lead on new nodes, NV has beaten them on raw performance every generation, perhaps with the exception of the 4870/4890 vs. GTX 275/280 (too close to call).

Fermi, while late, pwned Evergreen. The GTX 580 dominated the 6970.

When the 680 launched it did beat the 7970, prompting the release of the 7970 GHz Edition, but that was quickly overshadowed by the Titan, 780 and 780 Ti. The R9 290/X was slower than the 780/780 Ti when it mattered: around launch, for reviews and benchmarks.

In recent times the R9 290/X has managed to surpass the 780/780 Ti, but it doesn't matter, because NV has moved on with Maxwell while AMD is still selling outdated GPUs. This is why the 390X delay has hurt them so much.
 
Feb 19, 2009
How come it doesn't count when Evergreen was pummeling Nvidia? And it did it for quite a long time.

It did count; the 5800 series was one of AMD's best. It also gave them a major boost in market share, and they were in the black. But it didn't last once Fermi rolled in, particularly with the 460 after the disaster of the 480.

But as a whole, that generation AMD lost the crown once the 480 landed, and it was all downhill from there.
 

DooKey

Golden Member
Nov 9, 2005
AMD is on the verge of being a bit player in dGPU. The 3XX series needs to be revolutionary, and AMD had better get out there and trumpet it long and loud. I hope AMD really blows the doors off with their next GPU series. NV is going to pull an Intel on AMD (dominate the market and give us minimal performance gains) if 3XX is more of the same ol' AMD.
 

nvgpu

Senior member
Sep 12, 2014
Those HD5000/6000 purchasers are regretting it now too: Fermi gets DX12 compatibility support, while HD5000/6000 gets no DX12 support and not even AMD's own Mantle API, which is DOA anyway now that MS is giving Win7 users free upgrades.

Same with the HD4000 users: they bought those cheap, but AMD killed their driver support four years later, while GeForce 8/9/200 users are supported until 2016.

Nvidia may be more expensive than AMD, but your investment is well protected years later.
 

MrK6

Diamond Member
Aug 9, 2004
There's been quite a lot of talk about Nvidia dropping Kepler support already. Also, since when does new API support matter for 5- and 8-year-old hardware? I'm not sure where you get your info, but all of the above is wrong or misinformed.
 

nvgpu

Senior member
Sep 12, 2014
You're the one that's misinformed and clearly wrong; everything I wrote is backed up by facts.

http://nvidia.custhelp.com/app/answers/detail/a_id/3473

The Release 340 drivers will continue to support these products until April 1, 2016, and the NVIDIA support team will continue to address driver issues for these products in driver branches up to and including Release 340.
http://blogs.nvidia.com/blog/2014/03/20/directx-12/

NVIDIA will support the DX12 API on all the DX11-class GPUs it has shipped; these belong to the Fermi, Kepler and Maxwell architectural families.
http://forums.anandtech.com/showthread.php?t=2420380
 
Last edited:
Feb 19, 2009
You're the one that's misinformed and clearly wrong; everything I wrote is backed up by facts.

http://nvidia.custhelp.com/app/answers/detail/a_id/3473

"After Release 340, any subsequent Windows driver release starting with Release 343 will cease to support the products listed in this section."

It means that after 340 there are no more optimizations for those EOL products. That is by definition dropping support, because any new features, game fixes or optimizations will no longer apply going forward.

It even says so explicitly:

"However, future driver enhancements and optimizations in driver releases after Release 340 will not support these products."
 

MrK6

Diamond Member
Aug 9, 2004
You're the one that's misinformed and clearly wrong; everything I wrote is backed up by facts.

http://nvidia.custhelp.com/app/answers/detail/a_id/3473

http://blogs.nvidia.com/blog/2014/03/20/directx-12/

There's no Kepler driver support drop; that thread starter is clueless and somehow confused GeForce 6/7 Curie hardware with GeForce 600/700 Kepler. Just like you posting misinformation about this.

http://forums.anandtech.com/showthread.php?t=2420380
That's a personal attack (x2) and not tolerated in these forums. Otherwise see the sources above.
 

Genx87

Lifer
Apr 8, 2002
"After Release 340, any subsequent Windows driver release starting with Release 343 will cease to support the products listed in this section."

It means after 340, there's no more optimizations for those EOL products. That by definition is dropping support because any new feature or game fixes or optimizations will no longer apply moving forward.

It even specifies so:

"However, future driver enhancements and optimizations in driver releases after Release 340 will not support these products."

The list is of old products. The 8800 series, which was released over eight years ago, is finally seeing driver support end. Is this a big deal? Fermi and newer products don't have an EOL announced right now. Chances are Fermi will be dropped around 2021 if the same time frame holds as it did for the 8800 series (launched late 2006, supported into 2016: a roughly ten-year window that puts Fermi's 2010 launch at about 2020-2021).
 
Feb 19, 2009
There's no deal, big or small. It's in the context of nvgpu's post dissing Radeons (like all of his posts, judging by his history) by claiming older NV stuff is still supported until 2016. He obviously did not understand his own source, because from 340 onwards there's no more support.

It's quite normal for older hardware to not receive updates or optimizations.
 

IlllI

Diamond Member
Feb 12, 2002
AMD is in a world of hurt. Kind of sad b/c I don't want to see them go under
 

AtenRa

Lifer
Feb 2, 2009
Well, without a new product line for more than 12 months, and with good competition from NV in Maxwell v1/v2, it was expected.

Let's see how things go in 2015, when NV will only release a single new GPU (GM200), and what damage, if any, they take from the GTX 970 affair.

But even with all that I don't expect NV to lose a lot of ground; they will still sell more volume (mainly low-end desktop/laptop SKUs). I'm also curious to see whether Carrizo will have any impact in that area, since it seems to have adequate iGPU performance to be used alone, without any dGPU.
And don't forget that Skylake will also eat a lot of the low-end dGPU market share in H2 2015, because we don't expect any new entry-level dGPUs from NV.

So at the end of the year things may look very different.
 
Apr 20, 2008
Why would anyone buy an outdated, 3-year-old Pitcairn/Tahiti that doesn't support variable refresh rate for gaming, doesn't support 4K H.264 decoding, doesn't support 4K HEVC decoding, and is far less power efficient than the GM206-based GTX 960?

Nvidia released three full top-to-bottom lineups, Fermi, Kepler and Maxwell, while AMD could only rebrand Pitcairn and Tahiti not once but twice, as HD8000 and R9 200.

$130 vs. $200. Similar performance. Both have a single 6-pin PSU connector. DX12 compatible. Plays almost every game perfectly.

A rule of thumb is to spend as much on your monitor as on the video card, and how many monitors out there have variable refresh rates?

Only an idiot would pair a 960/270 with a 4K monitor for gaming. The features you listed are non sequiturs in that price bracket.
 

rennya

Junior Member
Nov 27, 2007
1. Modern systems such as an i7-4790K handle 4K video just fine. Not that it matters, because what fool will have a 4K monitor and a $200 960? You keep talking about this as if it's relevant for gamers. Even the 970/980 don't have native 4K decoding and they are selling well.

Interesting. Can you decode this clip, or this one, or the UHD sample (first clip) from this page, with your i7-4790K alone, without any aid from your GPU? I dare say you will not be able to. No single-socket CPU out there right now can decode these 4K clips without dropping frames.
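
For anyone who wants to run this test themselves, here is a minimal sketch, assuming ffmpeg is installed and that one of the samples above has been saved locally under the hypothetical name clip_4k_hevc.mkv. It runs a pure software decode as fast as possible and prints ffmpeg's decode-speed readout; anything below 1.0x means the CPU cannot sustain real-time playback and frames would drop.

Code:
import subprocess

# Software-only decode: no -hwaccel flag, output discarded via the null
# muxer. "-benchmark" additionally prints CPU time used at the end.
# "clip_4k_hevc.mkv" is a hypothetical local copy of one of the clips above.
result = subprocess.run(
    ["ffmpeg", "-benchmark", "-i", "clip_4k_hevc.mkv", "-f", "null", "-"],
    capture_output=True, text=True,
)

# ffmpeg writes progress to stderr; recent builds report "speed=N.NNx",
# i.e. decode throughput relative to real time.
progress = [l for l in result.stderr.splitlines() if "speed=" in l]
if progress:
    print(progress[-1].strip())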

High-end HTPC hobbyists are even more demanding than any AAA-grade game out there. Can you name any game, from the past, available now, or even coming this year or in 2016, that is more demanding than madshi's video renderer (madVR)? Protip: you can't.

That's why, in home theater circles, they call madVR "the new Crysis".

The GTX 970/980 do have 4K decoding, at least for H.264 and for 8-bit HEVC (hybrid).
 

RussianSensation

Elite Member
Sep 5, 2003
^ The 960's 4K decoding is worthless for gamers. Using your logic, the 285 is a better gaming card than a 295X2 because it has 4K decoding:
http://www.anandtech.com/show/8460/amd-radeon-r9-285-review/4

Many people have had no trouble encoding 4K on a 4790K:
http://forum.doom9.org/showthread.php?p=1694699

Try to follow the context of picking a video card for gaming purposes. The 290 is 50% faster than a 960, with double the VRAM, for $60 more. If you need an HTPC card, the 750 Ti is $110. No need to get a 960.

But I could easily rebut 4K encoding/decoding on a GPU anyway, because the IQ compared to CPU encoding is GARBAGE!

"For the time being, the best option for quick, high-quality video transcoding is unfortunately to buckle down, get yourself a fast CPU, and run the best software encoder you can find (which may be Handbrake)."
http://techreport.com/review/23324/a-look-at-hardware-video-transcoding-on-the-pc/6
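
Scripted, that CPU-first advice looks something like the sketch below, assuming an ffmpeg build with libx264 (the same software encoder HandBrake drives) and hypothetical file names; it is only an illustration of the software path, not a tuned recipe.

Code:
import subprocess

# Pure software (CPU) H.264 encode with libx264, no QuickSync/NVENC/VCE.
# "-preset slow" trades encode speed for quality; "-crf 18" targets
# near-transparent constant quality. Both file names are placeholders.
subprocess.run([
    "ffmpeg",
    "-i", "master_4k.mov",   # hypothetical source clip
    "-c:v", "libx264",       # software encoder
    "-preset", "slow",
    "-crf", "18",
    "-c:a", "copy",          # pass the audio through untouched
    "transcode_4k.mp4",      # hypothetical output file
], check=True)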

For someone who wants to transcode video for a tablet or a smartphone, QuickSync is faster than any GPU solution from AMD and NV. And 99.99% of smartphones and tablets are not even 4K! As for someone needing truly high-quality encoding, you will want a multi-core CPU; GPU quality doesn't compare. Therefore, when anyone mentions 4K encoding on a GPU as a selling point, it's like they pulled it straight out of some marketing document. In the real world it's not practical, not to mention: who is going to buy a $200 graphics card and a $1000 high-quality IPS 4K monitor?

Considering less than 1% of PC gamers on Steam have 4K monitors, and considering the 960 didn't even factor into NV's Q4 2014 market share gains, suggesting that 4K encoding was a big factor in gamers choosing NV over AMD has no merit or evidence whatsoever.
 
Last edited:

rennya

Junior Member
Nov 27, 2007
^ The 960's 4K decoding is worthless for gamers. Using your logic, the 285 is a better gaming card than a 295X2 because it has 4K decoding:
http://www.anandtech.com/show/8460/amd-radeon-r9-285-review/4

I'm not talking about video gaming here. What I'm talking about is high-end HTPC use, which is more demanding than playing any AAA game out there. Like I asked before: what game out there is more demanding on the GPU than madVR?

I believe that's what nvgpu was getting at with all those features he mentioned.

Many people have had no trouble encoding 4K on a 4790K:
http://forum.doom9.org/showthread.php?p=1694699

We are talking about 4K DECODING here (read your post). Even a Sandy Bridge i5 can do 4K encoding (AVC or HEVC) in software. Now I repeat the question: can you play the three clips I mentioned, using the CPU alone, without dropping frames?

But I could easily rebut 4K encoding/decoding on a GPU anyway, because the IQ compared to CPU encoding is GARBAGE!

"For the time being, the best option for quick, high-quality video transcoding is unfortunately to buckle down, get yourself a fast CPU, and run the best software encoder you can find (which may be Handbrake)."
http://techreport.com/review/23324/a-look-at-hardware-video-transcoding-on-the-pc/6

As we are talking about DECODING here, in the context of nvgpu's posts: go to the doom9 forums and read some threads, and you will find that software decoding and GPU decoding give out the exact same output.
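
That bit-exactness is easy to check yourself. A minimal sketch, assuming a Windows ffmpeg build with DXVA2 support and a hypothetical local file clip_4k_h264.mkv: it hashes the decoded frames from the software and hardware paths and compares them.

Code:
import subprocess

def decoded_md5(extra_args):
    # Decode the clip and hash the raw decoded frames with ffmpeg's
    # built-in md5 muxer. "-pix_fmt yuv420p" forces both paths into the
    # same (losslessly converted) frame layout, since hardware decoders
    # hand back NV12 while the software decoder yields planar YUV.
    out = subprocess.run(
        ["ffmpeg", *extra_args, "-i", "clip_4k_h264.mkv",
         "-pix_fmt", "yuv420p", "-f", "md5", "-"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

software = decoded_md5([])                     # CPU decoder
hardware = decoded_md5(["-hwaccel", "dxva2"])  # GPU fixed-function decoder
# Note: if the GPU path is unavailable, ffmpeg silently falls back to
# software decode (check stderr), which would trivially match.
print("bit-identical" if software == hardware else "outputs differ")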

For someone who wants to transcode video for a tablet or a smartphone, QuickSync is faster than any GPU solution from AMD and NV. And 99.99% of smartphones and tablets are not even 4K! As for someone needing truly high-quality encoding, you will want a multi-core CPU; GPU quality doesn't compare. Therefore, when anyone mentions 4K encoding on a GPU as a selling point, it's like they pulled it straight out of some marketing document.

You are behind the times, man; nVidia has already caught up to Intel in this regard. AMD should be doing the same thing with Carrizo. Oh, BTW, Intel does not have full-bitstream 4K HEVC decoding support yet.


In the real world it's not practical, not to mention: who is going to buy a $200 graphics card and a $1000 high-quality IPS 4K monitor?

Considering less than 1% of PC gamers on Steam have 4K monitors, and considering the 960 didn't even factor into NV's Q4 2014 market share gains, suggesting that 4K encoding was a big factor in gamers choosing NV over AMD has no merit or evidence whatsoever.

The answer is: people in HTPC circles. The ones who pair a $200 GPU with a $10k projector. People who will not tolerate AMD screwing up OpenCL GPU copy operations in their drivers for 14 consecutive releases.

Oh, BTW, HTPC users, at least the high-end ones anyway, do not buy IPS displays, because of their poor contrast ratio. If you think a 1500:1 static contrast ratio is good, you will be laughed out of the room. It should be at least double that.

As of now, the best GPU for madVR is the R9 290X with the 13.12 drivers. You cannot use newer drivers because of AMD's incompetence. If the 3xx GPUs come out and the OpenCL copy-operation bug isn't fixed (AMD already knows about it), HTPC users will not touch the product with a ten-foot pole.

You have to wake up to reality and see that high-end gaming is not the only good use for gaming GPUs. There are other uses for them too. High-end HTPC duty on 4K displays is one of them, with madVR being more demanding on the GPU than any game, present or future, that you will be able to name.

Now let's look at nvgpu's first post in this thread.

Why would anyone buy an outdated, 3-year-old Pitcairn/Tahiti that doesn't support variable refresh rate for gaming, doesn't support 4K H.264 decoding, doesn't support 4K HEVC decoding, and is far less power efficient than the GM206-based GTX 960?

Nvidia released three full top-to-bottom lineups, Fermi, Kepler and Maxwell, while AMD could only rebrand Pitcairn and Tahiti not once but twice, as HD8000 and R9 200.

He mentioned four features there. The first is about G-Sync and FreeSync, the next two are about video DECODING (not encoding), and the last is power consumption. All four of those features apply to video playback (yes, G-Sync and FreeSync can be used for video playback too), while only two apply to gaming (the first and the last).

He (or she) was talking about video decoding (not encoding) in the first place, and less about gaming. So let's keep the discussion of nvgpu's posts to video decoding only.