ATI overtakes nVidia in discrete graphics marketshare


busydude

Diamond Member
Feb 5, 2010
8,793
5
76
From HardOCP, an interesting breakdown:

Discrete desktop graphics: AMD only has 44.5% - but that's an 11-point gain, which came directly from nVidia's 11-point loss.

Discrete mobile graphics: AMD now has 56.3%, a result of a 2.4-point gain.

In light of this report, is it safe to say that nVidia's Optimus technology failed to help strengthen its bottom line?
 

jvroig

Platinum Member
Nov 4, 2009
2,394
1
81
In light of this report, is it safe to say that nVidia's Optimus technology failed to help strengthen its bottom line?
Hard to tell, really. For all we know, this is solely because of the DX11/40nm delay, and without Optimus they might have lost even more. Just looking at the figures alone, we can't determine if Optimus did nothing or was an absolute god-send, as there are other factors involved as well.
 

Mr. Pedantic

Diamond Member
Feb 14, 2010
5,027
0
76
In light of this report, is it safe to say that nVidia's Optimus technology failed to help strengthen its bottom line?
I still don't see a lot of Optimus notebooks around. Most people here don't know what it is, and most of those who do probably don't understand why it's so important. From that perspective it has yet to make a definite impact on the notebook market, and of course, by the time it does, AMD may have come up with their own equivalent. But I think it will make a big difference. For one thing, it almost certainly means the end of crappy Intel-only graphics in notebooks: if the only real cost of having a high-powered AMD or Nvidia GPU is monetary, users gain that flexibility with a minimal heat and power footprint.
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
It's like nVidia's 8800 era again, but with AMD, though I don't think it will be as dominant as nVidia was at that time. It was bound to happen, and that's why I love competition! It's quite strange that so much bad stuff is happening to nVidia: the Mac thing, the Fermi problems at first, lost market share, the Rambus crap, lower profits, bad sales of some of their SKUs. I think JHH's arrogance is driving nVidia down; he should step down and let fresh blood drive the company. I hope he doesn't turn into another Hector Ruiz, but he's following in his footsteps!

yeah, either it's jhh's arrogance...or, maybe it's because they were almost a year late to the game when one of the most popular os releases ever came out with a new dx. hmm, I think I'll take B. jhh has many problems, but as my old boss used to say, "you can't shine a turd". gf100 has been a huge disappointment, and honestly it was set up to fail from the beginning. they'll fix their problems, as they've already started to with gf 104, and will be much more competitive in the next couple of years.
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
Technically AMD is still using an old architecture for DX11, while NVIDIA is using a new architecture. So until AMD launches Northern Islands, I consider them to be behind.

Weren't you already warned once about derailing threads?
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
Why do you think ATi sold out to AMD?
Not because business was that good, I can tell you.
If ATi had continued as an independent company, they probably would not have survived the HD2900 fiasco.

AMD poured a truckload of money into ATi. They could do that because the CPU market is much larger and more profitable, so it's not that difficult for a CPU company to sustain a GPU company. The other way around would be impossible though.



That's what all those other companies said :)
You know, 3DFX, S3, Trident, Matrox, Tseng Labs etc.
Most of these video card companies were pretty big at one time, but they made one mistake, and they were out of the race.

nVidia is not doing that badly yet though. While us guys on forums have been worrying about the fact that nVidia didn't have a good DX11 lineup, lots of people still bought their DX10/10.1 products.
And nVidia seems to be getting the situation under control now. The GTX460 is very promising, and there will be some lower end models next month, which will probably be equally promising.

I think nVidia has the advantage of being the 'Intel' of the GPU world. They're bigger than AMD, and their brand is much stronger. So they'll still have big OEM deals and lots of people buying nVidia just for the brand, even if their product lineup isn't that flashy for a while.

So far AMD seems to have just about caught up in terms of marketshare. But I think this is as far as it will go. GTX460 and its lower end cousins will probably claw back marketshare from now on.

NOBODY is like intel in the gpu market. specifically in this case, amd has a significant process advantage over nvidia, which helped to keep them afloat during some of those lean years and now just helps to keep them on top. intel is killing amd on the cpu side in profits, marketshare AND process.

Don't get me wrong, I'm not counting nvidia out by any means. they ran all their rivals out of business (ati as mentioned would not be around if they hadn't been bought out) and have significant mindshare with customers still.
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
yeah, either it's jhh's arrogance...or, maybe it's because they were almost a year late to the game when one of the most popular os releases ever came out with a new dx. hmm, I think I'll take B. jhh has many problems, but as my old boss used to say, "you can't shine a turd". gf100 has been a huge disappointment, and honestly it was set up to fail from the beginning. they'll fix their problems, as they've already started to with gf 104, and will be much more competitive in the next couple of years.

I think there's more possibility of polishing a turd than of straightening out JHH. http://dsc.discovery.com/videos/mythbusters-polishing-a-turd.html

They don't, both outsource production to TSMC, and nVidia is the larger customer.

I think he's referring to manufacturing-process adoption. AMD has had the upper hand in being first to new manufacturing processes for a while now: it started with the X1800 series, first to 90nm, then first to 80nm with the RV570, first to 55nm, and the same again at 40nm with the RV740. For some reason, AMD is able to fit more transistors per mm2 than nVidia (higher transistor density), though Intel is the leader in that field.
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
I think Wreckage's point is something like this:
Where nVidia can go with a refresh for their next generation of DX11 cards, AMD will have to invest in a new architecture.
There is more risk involved there. AMD needs to come up with better tessellation and GPGPU logic to match or exceed nVidia's offerings. In doing that, they risk giving up their current advantages.

I agree, the big problem with that statement was not that it was made but that wreckage made it so everybody immediately discounted it.

Nvidia chose to take the hard path, bite the bullet and roll out their new architecture right now instead of taking an iterative approach with gtx 4xx. If amd had nothing new/interesting on the horizon then this would signal a big advantage for nvidia. unfortunately, amd has been steadily working on their new architecture and will have it out in a few months. unlike nvidia, amd does not have somebody trying to make their gpu be everything to everybody at once, so amd can focus on just building badass video cards for the consumer market. I think that we'll see SI as a strong, though probably not dominant, competitor vs fermi II.

Really, the biggest problem this gen was not the absolute performance delta (very similar to 4xxx vs gtx 2xx btw), but the fact that fermi was 10 months late. If nvidia had been 10 months early then all the power/heat/etc flaws wouldn't have meant jack shit to consumers and amd would have been vilified instead. Nvidia needs to come through with a timely release this time and they'll be fine imho.
 

Dribble

Platinum Member
Aug 9, 2005
2,076
611
136
unlike nvidia, amd does not have somebody trying to make their gpu be everything to everybody at once, so amd can focus on just building badass video cards for the consumer market. I think that we'll see SI as a strong, though probably not dominant, competitor vs fermi II..

Are you sure? I would have thought GPU compute will be the main focus for AMD. They make a fortune from CPUs in the high-end compute market. If Fermi makes most of them redundant and they don't have anything to compete with, they lose out big time.

The consumer video market is not growing; if anything it's shrinking. For that matter, x86 is looking like a slowly dying monster being eaten by billions of tiny ARM processors.

While nVidia with Fermi/Tesla/Tegra is not winning right now, they are at least taking that hit to position themselves better for the future. AMD needs to do the same - look at Intel, they've spent a fortune trying to do just that with Larrabee and Atom.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
AMD doesn't really have a choice anyway.
GPGPU is now a required part of the DX11 API, and if AMD wants to keep Apple as a customer, they better make sure that OpenCL works as well.

So far, AMD has been trying to frustrate the adoption of GPGPU in any way it could, but it's only a matter of time until that killer app arrives.
Perhaps Adobe's CS5 suite is already that killer app for GPGPU, we'll have to see.
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
I agree, the big problem with that statement was not that it was made but that wreckage made it so everybody immediately discounted it.

Nvidia chose to take the hard path, bite the bullet and roll out their new architecture right now instead of taking an iterative approach with gtx 4xx. If amd had nothing new/interesting on the horizon then this would signal a big advantage for nvidia. unfortunately, amd has been steadily working on their new architecture and will have it out in a few months. unlike nvidia, amd does not have somebody trying to make their gpu be everything to everybody at once, so amd can focus on just building badass video cards for the consumer market. I think that we'll see SI as a strong, though probably not dominant, competitor vs fermi II.

Really, the biggest problem this gen was not the absolute performance delta (very similar to 4xxx vs gtx 2xx btw), but the fact that fermi was 10 months late. If nvidia had been 10 months early then all the power/heat/etc flaws wouldn't have meant jack shit to consumers and amd would have been vilified instead. Nvidia needs to come through with a timely release this time and they'll be fine imho.

Actually, I would rather AMD NOT follow Nvidia's example and try to make a video card into something it's not. The fanboys can spew architecture BS all they want, but in the end you have a NV gpu that's a lot hotter, bigger, more expensive, and half a year late, with only a small performance advantage to show for it. As a consumer, it's exactly the opposite of what I'm looking for in a video card.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Actually, I would rather AMD NOT follow Nvidia's example and try to make a video card into something it's not. The fanboys can spew architecture BS all they want, but in the end you have a NV gpu that's a lot hotter, bigger, more expensive, and half a year late, with only a small performance advantage to show for it. As a consumer, it's exactly the opposite of what I'm looking for in a video card.

Yea, that's easy to say now.
But allow me to point out that the GeForce 8800 was the exact same story.
That's where Cuda started. An original GeForce 8800 can run DirectCompute, OpenCL and PhysX, alongside standard D3D and OpenGL workloads.

Compare that to the Radeon HD2900... It was half a year late, it was expensive, hot, slow, and it did NOT offer the extras.
The 3000-series wasn't capable of OpenCL/DirectCompute either.
4000-series can do OpenCL/DirectCompute, but performance is quite poor compared to its nVidia counterparts.
Only the 5000-series is really a viable option... but pretty much all the software was written for Cuda so far.

So it's not like nVidia's strategy is a recipe for disaster. On the contrary. Their previous attempt was a resounding success. Not only in graphics, but it also laid the groundwork for today's GPGPU frameworks and applications.
It took AMD 3 generations to catch up.

So, Fermi might not be as successful as the 8800 series was, but nVidia is quickly turning things around with the GF104.
I most certainly do not hope that they change their strategy. It's great that at least one company is still out there pushing boundaries and coming up with new ways of doing things (Cuda on a Fermi is now WAY ahead of anything OpenCL or DirectCompute, and is going to be even more of a threat to x86).
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Yea, that's easy to say now.
But allow me to point out that the GeForce 8800 was the exact same story.
That's where Cuda started. An original GeForce 8800 can run DirectCompute, OpenCL and PhysX, alongside standard D3D and OpenGL workloads.

Compare that to the Radeon HD2900... It was half a year late, it was expensive, hot, slow, and it did NOT offer the extras.
The 3000-series wasn't capable of OpenCL/DirectCompute either.
4000-series can do OpenCL/DirectCompute, but performance is quite poor compared to its nVidia counterparts.
Only the 5000-series is really a viable option... but pretty much all the software was written for Cuda so far.

So it's not like nVidia's strategy is a recipe for disaster. On the contrary. Their previous attempt was a resounding success. Not only in graphics, but it also laid the groundwork for today's GPGPU frameworks and applications.
It took AMD 3 generations to catch up.

So, Fermi might not be as successful as the 8800 series was, but nVidia is quickly turning things around with the GF104.
I most certainly do not hope that they change their strategy. It's great that at least one company is still out there pushing boundaries and coming up with new ways of doing things (Cuda on a Fermi is now WAY ahead of anything OpenCL or DirectCompute, and is going to be even more of a threat to x86).

Cuda and GPGPU are an integral part of Nvidia's business strategy, in no small part because they have no CPU business. AMD does have a CPU business, and is also working on bringing out a CPU/GPU Fusion product. That fusion technology, IMO, is the future of computing, as opposed to trying to force a GPU to do CPU tasks.

My main point, however, still rests on the fact that I only buy video cards for gaming, and not scientific computing. Therefore, whichever company offers the most bang/buck in gaming gets my money.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Cuda and GPGPU are an integral part of Nvidia's business strategy, in no small part because they have no CPU business. AMD does have a CPU business, and is also working on bringing out a CPU/GPU Fusion product. That fusion technology, IMO, is the future of computing, as opposed to trying to force a GPU to do CPU tasks.

I don't see Fusion becoming a big success in the next 5-10 years.
There is a fundamental memory bottleneck problem when integrating a CPU and GPU in the same package and sharing the memory controller + memory modules.

Aside from that, there are fundamental problems with combining a high-performance CPU with a high-performance GPU in a single package.

Therefore, I don't see how Fusion could become a threat to high-end discrete GPGPU solutions. It's interesting as a budget solution, but nothing more.

My main point, however, still rests on the fact that I only buy video cards for gaming, and not scientific computing. Therefore, whichever company offers the most bang/buck in gaming gets my money.

My point is that gaming is being redefined as we speak. GPGPU is not just 'scientific computing'. Heck, even a regular GPU already does mostly matrix and vector math operations, which were once strictly the field of 'scientific computing' as well.
Now we also have things like path finding, AI, physics, and post-processing, which are being offloaded to the GPU.
So the future may be that the best gaming GPU is the one that is also the best GPGPU. nVidia is doing their best to pursue this path... And Intel will likely follow nVidia if they ever get a competitive GPU on the market.
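To make the offloading idea concrete, here's a rough CUDA sketch of the kind of data-parallel step that gets moved off the CPU (a toy particle integration standing in for the physics/post-processing work mentioned above). The names and the toy math are made up for this post, not taken from any real engine.

Code:
// Minimal CUDA sketch of offloading a data-parallel "physics" step to the GPU.
// Everything here (names, the toy integration) is illustrative only.
#include <cstdio>
#include <cuda_runtime.h>

// One thread per particle: the kind of embarrassingly parallel work
// (physics, post-processing, flocking AI) that maps well onto a GPU.
__global__ void integrate(float *pos, float *vel, const float *force,
                          float dt, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        vel[i] += force[i] * dt;   // assume unit mass
        pos[i] += vel[i] * dt;
    }
}

int main()
{
    const int n = 1 << 20;                 // ~1M particles
    const size_t bytes = n * sizeof(float);

    float *pos, *vel, *force;              // device buffers (host-side setup omitted)
    cudaMalloc(&pos, bytes);
    cudaMalloc(&vel, bytes);
    cudaMalloc(&force, bytes);
    cudaMemset(pos, 0, bytes);
    cudaMemset(vel, 0, bytes);
    cudaMemset(force, 0, bytes);

    // The CPU stays in charge; the GPU acts as a co-processor for the hot loop.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    integrate<<<blocks, threads>>>(pos, vel, force, 0.016f, n);
    cudaDeviceSynchronize();

    printf("integrated %d particles on the GPU\n", n);

    cudaFree(pos);
    cudaFree(vel);
    cudaFree(force);
    return 0;
}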
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
A good point for what? When you boot up your PC, or play a game, do you actually notice "the architecture", or do you notice the performance factors: speed, image quality, loudness, perhaps maybe even heat?

It doesn't matter what's under the hood. People playing games won't care or notice. OEM's won't care or notice. All that's noticed are the performance characteristics.

If it takes a new architecture to achieve certain performance goals, then so be it, they better roll out a new one. If not, then they can rehash an old architecture. Nobody cares, and no consumer and OEM will really notice, as long as the performance goals are delivered.

What you are saying is true, but what happens when AMD needs to produce a new architecture on a new process? Doesn't that involve a steep learning curve?
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Scali, your bias toward nVidia is strong here. We all know GPGPU performance is important, but unlike nVidia, AMD has a CPU business, and it wouldn't make sense to build a GPU loaded with compute features that would eat into its own CPU market share. nVidia is a GPU company trying to become a CPU company; AMD doesn't need to do that. They can couple their best current CPUs, say a rack of Phenom II X6s, with a couple of Radeon HD 5x00 cards and be faster than any rack of Fermi-based workstations. Why? Because not all code is parallelizable and not all algorithms benefit from being ported to a GPU. GPUs are highly parallel but terrible general-purpose processors; nVidia lacks a CPU to cover that gap, while AMD can fill it with its CPUs. General performance matters more than specialized performance: look at the PS3 vs. the Xbox 360. Theoretically the PS3's processor is better, and yet the Xbox 360 sees fewer slowdowns and higher resolutions.
 

dguy6789

Diamond Member
Dec 9, 2002
8,558
3
76
Technically AMD is still using an old architecture for DX11, while NVIDIA is using a new architecture. So until AMD launches Northern Islands, I consider them to be behind.

So in other words, AMD's "old" architecture is better than Nvidia's "new" architecture and AMD's "new" architecture is right around the corner.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Scali, your bias toward nVidia is strong here.

Yea, it's always bias, isn't it? Bias this, bias that.

We all know GPGPU performance is important, but unlike nVidia, AMD has a CPU business, and it wouldn't make sense to build a GPU loaded with compute features that would eat into its own CPU market share.

I think it would make a lot of sense for AMD. They cannot compete on performance with Intel using regular CPUs.
By adding GPGPU capabilities they can gain an advantage. Apparently AMD thinks the same, hence Fusion.

nVidia is a GPU company trying to become a CPU company; AMD doesn't need to do that. They can couple their best current CPUs, say a rack of Phenom II X6s, with a couple of Radeon HD 5x00 cards and be faster than any rack of Fermi-based workstations. Why?

Makes no sense whatsoever. A Fermi-based workstation or server still needs CPUs. A GPGPU is not self-sufficient; it's a co-processor. Might as well be the same Phenom II X6, or even better, Xeons.
Something like this:
http://www.eweek.com/c/a/IT-Infrastructure/Nvidia-Unveils-FermiBased-Tesla-GPUs-600909/

You see the problem AMD has... These systems contain both CPUs and GPUs, but neither are AMD's.
 

faxon

Platinum Member
May 23, 2008
2,109
1
81
Scali, your bias toward nVidia is strong here. We all know GPGPU performance is important, but unlike nVidia, AMD has a CPU business, and it wouldn't make sense to build a GPU loaded with compute features that would eat into its own CPU market share. nVidia is a GPU company trying to become a CPU company; AMD doesn't need to do that. They can couple their best current CPUs, say a rack of Phenom II X6s, with a couple of Radeon HD 5x00 cards and be faster than any rack of Fermi-based workstations. Why? Because not all code is parallelizable and not all algorithms benefit from being ported to a GPU. GPUs are highly parallel but terrible general-purpose processors; nVidia lacks a CPU to cover that gap, while AMD can fill it with its CPUs. General performance matters more than specialized performance: look at the PS3 vs. the Xbox 360. Theoretically the PS3's processor is better, and yet the Xbox 360 sees fewer slowdowns and higher resolutions.
i would hardly call his position biased. in case you haven't noticed, scali is a professional software developer, and we all know how big nvidia is on software developer relations right now. like it or not, CUDA has evolved into an extremely powerful GPGPU platform over the last few years, and the tweaks added to fermi specifically for it are having a huge impact on performance in that respect. you do have a point, however, about the workload playing the biggest role. anyone who does distributed computing can tell you first hand how this works. i choose all ATI parts for my DC farms because the apps i'm running go up to 5 times faster on a 5870 than on a 480 (both at stock), since the mix of work is able to saturate ATI's 1+4 SPs incredibly well. it's rare to see applications take that much advantage of an architecture's peak throughput, but when coupled with a fast general purpose CPU and the right workload, cypress has proven that it can still stretch its legs and bring home the gold. it's just harder to extract that level of performance the same way you can for most applications using CUDA, since it requires a lot more developer optimization. ATI will no doubt still focus their next gen arch on improving in these areas, areas that nvidia has been focusing on for several generations now. the biggest disadvantage ATI has right now is that nvidia supports C++ code execution on their GPUs, which makes developing software for fermi many times easier, since developers are already very familiar with C++. they just need to learn the differences between coding C++ for x86 CPUs vs Fermi and then they can start optimizing away.
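to illustrate the C++-on-the-GPU point: with Fermi-era CUDA you can use ordinary C++ features like templates and operator overloading directly in device code. this is just a rough sketch with made-up types (Vec3 and an axpy-style kernel), not code from any real project.

Code:
// Sketch of C++ features running in CUDA device code on Fermi-class GPUs.
// The Vec3 type and the templated kernel are illustrative examples.
#include <cstdio>
#include <cuda_runtime.h>

// A small value type with operator overloading, usable on host and device.
struct Vec3 {
    float x, y, z;
    __host__ __device__ Vec3 operator+(const Vec3 &o) const {
        Vec3 r; r.x = x + o.x; r.y = y + o.y; r.z = z + o.z; return r;
    }
    __host__ __device__ Vec3 operator*(float s) const {
        Vec3 r; r.x = x * s; r.y = y * s; r.z = z * s; return r;
    }
};

// A templated kernel: y[i] = a * x[i] + y[i] for any T supporting * and +.
template <typename T>
__global__ void axpy(float a, const T *x, T *y, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = x[i] * a + y[i];
}

int main()
{
    const int n = 1 << 16;
    Vec3 *x, *y;
    cudaMalloc(&x, n * sizeof(Vec3));
    cudaMalloc(&y, n * sizeof(Vec3));
    cudaMemset(x, 0, n * sizeof(Vec3));
    cudaMemset(y, 0, n * sizeof(Vec3));

    const int threads = 256;
    axpy<<<(n + threads - 1) / threads, threads>>>(2.0f, x, y, n);
    cudaDeviceSynchronize();
    printf("ran a templated C++ kernel over %d Vec3 elements\n", n);

    cudaFree(x);
    cudaFree(y);
    return 0;
}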
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
bring on GTX475 vs HD6850, great times ahead

I don't think the GTX475 will stand any chance against the 6850. NV needs something to counter the 5670, 5750/5770 and 5830 in the <$200 price bracket to improve their profit margins.

The problem is the negative market perception their cards earned when they launched in the spring. While a GTX470 at $350 was not a good deal compared to 5850s for $290, today you can find 470s for $280, which makes 5870s completely overpriced. Still, most consumers won't spend $280 (and it's not like Best Buy sells these cards at these prices!). So on average, NV's selling prices are still not that low imo, with no new-gen cards for sale <$200.
 

golem

Senior member
Oct 6, 2000
838
3
76
So in other words, AMD's "old" architecture is better than Nvidia's "new" architecture and AMD's "new" architecture is right around the corner.

I'm not seeing this.

5970 stands alone for now, but...

GTX 480 > 5870
GTX 470> 5850
GTX 465 is mostly crap but so is 5830 (compared to cards above or below)
GTX 460 > 5830

5770 and below stands alone also. But for segments that Nvidia actually has cards for, ATI's "old" architecture is not as good.
 

MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
I'm not seeing this.

5970 stands alone for now, but...

GTX 480 > 5870
GTX 470> 5850
GTX 465 is mostly crap but so is 5830
GTX 460 > 5830

5770 and below stands alone also. But for segments that Nvidia actually has cards for, ATI's "old" architecture is not as good.
Something to consider: if AMD clocked its cards up to the TDPs that NVIDIA runs at, how do you think the cards would compare then? I think the Fermi architecture has potential, but its current implementation (besides the GTX 460) just plain sucks.
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
Yea, that's easy to say now.
But allow me to point out that the GeForce 8800 was the exact same story.
That's where Cuda started. An original GeForce 8800 can run DirectCompute, OpenCL and PhysX, alongside standard D3D and OpenGL workloads.

Compare that to the Radeon HD2900... It was half a year late, it was expensive, hot, slow, and it did NOT offer the extras.
The 3000-series wasn't capable of OpenCL/DirectCompute either.
4000-series can do OpenCL/DirectCompute, but performance is quite poor compared to its nVidia counterparts.
Only the 5000-series is really a viable option... but pretty much all the software was written for Cuda so far.

So it's not like nVidia's strategy is a recipe for disaster. On the contrary. Their previous attempt was a resounding success. Not only in graphics, but it also laid the groundwork for today's GPGPU frameworks and applications.
It took AMD 3 generations to catch up.

So, Fermi might not be as successful as the 8800 series was, but nVidia is quickly turning things around with the GF104.
I most certainly do not hope that they change their strategy. It's great that at least one company is still out there pushing boundaries and coming up with new ways of doing things (Cuda on a Fermi is now WAY ahead of anything OpenCL or DirectCompute, and is going to be even more of a threat to x86).

sounds like you drank the Kool-Aid, dude.

jhh is pushing all these extraneous features of discrete video cards because he sees that both intel and, especially, amd are rapidly approaching a point where cpu + gpu designs will render low/mid range gpus obsolete. The low end will go first with SB and llano. Even if they screw those up, certainly within 5 years the low end will be gone, and low/mid will follow soon afterwards. Is it a stretch to say that nvidia will be in deep shit at that point if they don't have something else to do? amd, otoh, can continue designing the low/mid integrated gpus for their cpu division and the discrete cards for their gpu division, so amd doesn't need to design a bunch of crap into their cards that won't be mainstream until the card's designed lifespan has long expired. did I say that cuda/directcompute/etc is bad? hell no, I personally like it a ton and use cuda to run seti on my computers. However, I recognize that I'm in the minority here. Gpus are still about making crysis run at 60 fps with 4xAA at 19x12, not about tesla mumbo jumbo. Give it a few years and yes, the market will continue to evolve, possibly in unexpected ways, just don't tell me that the 8800 series was awesome because I can buy one to run PhysX on my current rig.