What happens to nvidia?

Page 10 - Seeking answers? Join the AnandTech community: where nearly half-a-million members share solutions and discuss the latest tech.

GaiaHunter

Diamond Member
Jul 13, 2008
3,700
406
126
you really can't claim to divine "engineering superiority" based on end-user experience with the product.

Very true.

Although you can compare the final products' success, and that is what the engineering is about.

In this case, the end objective is to sell the product and make money from it.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Your post is rife with spin.

Counter spin. That is the point- you list off something ATi's parts do better, I list off something nV's parts do better. What were the engineers for each company given as design goals? Based on everything we heard leading up to the launch of both lines of products, the engineers developed what the companies were shooting for. That is my part in this discussion at the moment.

This is how they market the cards themselves; it makes sense to compare them as such?

Marketing people don't tend to be engineers' favorites. Sometimes the engineers get very lucky, but not most of the time.

Trying to claim NV's goal was to come out with the fastest multi card solution for gaming is a real stretch, especially given their history of producing multi-gpu single cards

Do you realize that the 295 didn't hit until after a die shrink for the GT200 line? Do you realize that the 9800 GX2 didn't hit until after a die shrink for the G80 line? Do you realize that the 7950 GX2 didn't hit until after a die shrink for the G70 line? Are you starting to see a trend? :)

I don't work in this industry, but I would think, from the perspective of those engineers, that quality engineering on their part would go beyond just performance and touch on things like heat output and power consumption.

The GF100 is *by far* the highest performing part in the HPC space on a per watt basis. Different engineers, different goals.

How many GeForces did NVIDIA sell for $5000?

http://www.berkcom.com/SuperMicro/6016GT-TF-TM2-22.9N-48R-3TSA.php

The $5K number was a rough estimate. Those are last generation Tesla setups, the new ones are still in the inquiry only pricing bracket.
 
Last edited:

bryanW1995

Lifer
May 22, 2007
11,144
32
91
Your post is rife with spin. AMD is not focused on GPGPU. Yes, it's relevant to this forum, but if you want to compare the two companies' cards, then feel free to compare their workstation products on the GPGPU aspect and their consumer cards on the gaming aspect.

This is how they market the cards themselves; it makes sense to compare them as such?

Two physical video cards are not a relevant comparison to one physical video card. For a lot of users out there, buying two video cards is a hurdle they won't jump over, but buying one is comfortable for them. Trying to claim NV's goal was to come out with the fastest multi card solution for gaming is a real stretch, especially given their history of producing multi-gpu single cards and the complete vacuum of said part in their current generation. Even the 460 looks to be too power-hungry and would break through the 300W power envelope in a dual-gpu single-card configuration.

I couldn't care less who sells what or makes what profit. That is not relevant to me or anyone else who is interested in getting a good-performing, general consumer card for gaming. This is what GTX 480s, 5970s and 5870s are for: gaming.

Fact is, it's going to be a huge win and a massive slam dunk if AMD's 6870 flagship comes in with lower power draw and heat output than a GTX 480, along with superior performance.

I don't work in this industry, but I would think, from the perspective of those engineers, that quality engineering on their part would go beyond just performance and touch on things like heat output and power consumption, especially as those qualities will inhibit performance levels. They certainly have inhibited what the GF100 has been able to offer the consumer.

We can check back in on the power draw, heat output and performance numbers of AMD's 6 series next month, and their flagship part in a couple of months.

As for NV's changes in their next generation, we don't know when we'll see those numbers; sometime mid to late next year, most likely, as they're somewhat behind AMD's release schedule currently. Again, it is not my field, so I don't know if AMD being ahead there is a superiority or not :)

if amd isn't looking at gpgpu then they are missing a very large and profitable boat. as ben mentioned before, amd and nvidia had different design priorities this gen. If nvidia had tried to make a gf104 high end this gen then we would likely have seen a much different battle. I don't expect them to make that mistake again, they'll probably spend a bit more differentiating development of professional from gaming cards in the future. if amd doesn't stay on their toes nvidia will be back on top again when 28nm comes out.
 

Grooveriding

Diamond Member
Dec 25, 2008
9,147
1,330
126
Counter spin. That is the point- you list off something ATi's parts do better, I list off something nV's parts do better. What were the engineers for each company given as design goals? Based on everything we heard leading up to the launch of both lines of products, the engineers developed what the companies were shooting for. That is my part in this discussion at the moment.

Perfectly fair, but you are listing functionality that is relevant to people who are not using GF100 for gaming. I think it's important to make that distinction. Perhaps we are coming from different perspectives, as that is what I am speaking towards:

AMD's superiority in delivering gaming video cards, not HPC products.


Marketing people don't tend to be engineers' favorites. Sometimes the engineers get very lucky, but not most of the time.

No one likes marketers, same as lawyers :D I guess I need to be more anal in my wording to leave fewer holes for exploitation.

The GTX lineup are cards for gaming; the Quadro line are the HPC cards you are discussing and wanting to compare to the Radeon lineup. That doesn't make sense; compare the GTX line to the Radeon line. The Quadro lineup is a different market.

Do you realize that the 295 didn't hit until after a die shrink for the GT200 line? Do you realize that the 9800 GX2 didn't hit until after a die shrink for the G80 line? Do you realize that the 7950 GX2 didn't hit until after a die shrink for the G70 line? Are you starting to see a trend? :)

I do realize that.

And with TSMC's next process being 28nm, NV will not have a counter to AMD's 6 series or a dual-GPU card until 28nm is ready, then?

That's a long way off in video card lifecycles. So we can expect something sometime in the third quarter of next year?

As a user of consumer level GPUs that kind of stinks for me when I look at AMD bringing me a new level of performance above what NV has in the coming months.

The GF100 is *by far* the highest performing part in the HPC space on a per watt basis. Different engineers, different goals.

So, this card is what the engineers were designing for and achieved those goals for HPC functionality ?

http://sabrepc.com/p-498-pny-nvidia-quadro-6000-6gb-pci-express-20-x16-retail.aspx

But this card is for gaming, and this card was an afterthought ?

http://ncix.com/products/?sku=51900&...nufacture=eVGA

If that is the case, then it makes sense that GF100 has not been too successful in the gaming market, if they are designing for gamers as an afterthought now, and it would explain why AMD is leaping ahead in the speed at which they are delivering higher-performing gaming products. Perhaps there is going to be a shift in the gaming market to AMD being the leader while NV puts more focus on the HPC market. It looks like GF100 put them in this position.

I was not really discussing the first card as you were; I'm speaking to the second card and how it contrasts with AMD's offerings. I can't enter a discussion of the first card, as I don't make use of anything that card is geared towards; I use the GF100 in its gaming iteration, the second card.

From the general consumer standpoint, a gaming user, the first card and what it can do is completely irrelevant, regardless of it sharing the same core as the second card. Its purpose, function and abilities hold no merit for the consumer market.

If the GF100 was designed solely as an HPC product then, going on what you said, it is a huge success. I have no context for the HPC functionality, so I will take your word for it.

But it has also been used as a product for entertainment and gaming purposes; there must have been some thought and engineering put into it for this purpose as well, I would assume, where it has not achieved the same success.

If we look at the whole picture and all possible functionality of both companies, it's no problem to state that Nvidia is dominating the HPC market and AMD is dominating the consumer level gaming market.

I can accept that. All evidence available to us certainly points to that.
 
Last edited:

bryanW1995

Lifer
May 22, 2007
11,144
32
91
amd isn't dominating anything, but they are certainly much more competitive in the discrete gpu market than they were a year ago. if amd wants to "dominate" then they should roll out a competitive software suite with 6xxx; you know, something less like CCC and more like nvidia's. I don't agree with the whole "amd drivers suck and nvidia drivers rock" argument, but in overall software support nvidia is clearly superior. In fact, many gamers this gen will probably hold out for nvidia again and/or pay more for a louder/hotter/slower card b/c they have a better overall ownership experience with nvidia. amd needs to spend some $$ and get their gpu marketing dept more in line with their gpu manufacturing.
 
Sep 9, 2010
86
0
0
amd isn't dominating anything, but they are certainly much more competitive in the discrete gpu market than they were a year ago. if amd wants to "dominate" then they should roll out a competitive software suite with 6xxx; you know, something less like CCC and more like nvidia's. I don't agree with the whole "amd drivers suck and nvidia drivers rock" argument, but in overall software support nvidia is clearly superior. In fact, many gamers this gen will probably hold out for nvidia again and/or pay more for a louder/hotter/slower card b/c they have a better overall ownership experience with nvidia. amd needs to spend some $$ and get their gpu marketing dept more in line with their gpu manufacturing.

A year ago? In September 2009 AMD released the HD 5870, and a year prior to that AMD was very competitive with their HD 4x00 series, which forced nVidia to drop the prices of their GT200 series instantly. If you mean 2007, then you are right.

Overall, I understand Grooveriding's point of view. AMD is designing video cards with GPGPU functionality, but they don't feel the urgency yet because they have a CPU business to handle general computing. Their vision of GPGPU computing is a platform with CPUs and GPUs working in tandem for a far better balanced solution: the CPU's flexibility for serial and branchy code and for workload assignment, and the GPU as a huge parallel-code monster (GPUs are notoriously bad at serial, dependent and branchy code).

nVidia did a Tesla card with gaming functionality, trying to cover a broader market with an all-in-one solution, because they don't have a CPU market.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Very true.

Although you can compare the final products' success, and that is what the engineering is about.

In this case, the end objective is to sell the product and make money from it.

Actually, that is entirely in the hands of the executive team, the marketing dept, and the accounting dept.

They set the engineering objectives and constraints, define the market opportunity, set the budget and timeline, etc.
 

Grooveriding

Diamond Member
Dec 25, 2008
9,147
1,330
126
amd isn't dominating anything, but they are certainly much more competitive in the discrete gpu market than they were a year ago. if amd wants to "dominate" then they should roll out a competitive software suite with 6xxx; you know, something less like CCC and more like nvidia's. I don't agree with the whole "amd drivers suck and nvidia drivers rock" argument, but in overall software support nvidia is clearly superior. In fact, many gamers this gen will probably hold out for nvidia again and/or pay more for a louder/hotter/slower card b/c they have a better overall ownership experience with nvidia. amd needs to spend some $$ and get their gpu marketing dept more in line with their gpu manufacturing.

I know there are arguments for and against using the Steam hardware survey to gauge the installed base of video cards.

I personally am for it and have yet to see a compelling argument for why there would be a huge variance between what the Steam survey shows and the overall picture. I think the Steam survey takes a large enough sample to give a somewhat valid picture of things.

It shows AMD with 88% of the installed base of DX11 cards and NV with 12%. In the most recent hardware generation, I think AMD has been doing better at pushing out cards to consumers than NV.

I 100% agree about software. I haven't had the stability issues people report with AMD drivers. I've used single and dual AMD card configs; they worked just fine and were stable, functional and performing as they should.

That said, NV's drivers are definitely more user-friendly, with a much more intuitive and pleasant UI, which I think is where they get a lot of their praise. How useful that is, is subjective.

But the built-in game profiles are a huge plus for NV's drivers, and their SLI scaling is much better than CrossFire's as well. I don't know how much of SLI/CrossFire scaling is at the whim of the drivers versus the hardware, but I assume the drivers play a large part in it.

It also seems like they think of a lot of what you would need in their drivers. For example with a system I have connected to an HDTV via an Nvidia card they allow the option to choose the display type you want the card to report to your display. This eliminated a problem I had previously with the TV thinking it was receiving a PC signal rather than a video signal and not allowing me to apply PQ settings.

If AMD could bring their software suite up to the level their hardware is at currently, they would have a solid lock, in my opinion. For some people their experience and feelings about drivers seem to be a deal-breaker, and this could win AMD that market segment.
 
Last edited:

Scali

Banned
Dec 3, 2004
2,495
0
0
Although you can compare the final products' success, and that is what the engineering is about.

I'm not too sure about that.
For example, the Pentium 4 wasn't an engineering success. It was plagued with leakage issues etc.
However, it still completely demolished its competitors in terms of market sales, making it a far more successful product than any of them.
Two main reasons for this success were:
1) The Pentium 4 piggy-backed on the strong names of Intel and the Pentium brand.
2) It only had two competitors: the Pentium 3 and the Athlon. Intel simply stopped selling the Pentium 3, and AMD, maxing out its production capacity completely, could only supply about 30% of the market.

On the other hand, the Commodore Amiga, with superior hardware and software at the time, eventually didn't make it. People still bought IBM and Microsoft, because that's what they knew.

So success in terms of sales isn't really an indication of engineering quality. It's more about marketing.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
If that is the case, then it makes sense that GF100 has not been too successful in the gaming market, if they are designing for gamers as an afterthought now

Why does it have to be an afterthought?
It's all a balancing act. It's a multipurpose chip design. It's not two separate sections of a chip, one for GPGPU and the other for graphics... No, it's one set of unified shaders, which can be applied either way. I would still say that if you were to use the word 'afterthought', then GPGPU is where it applies, as first and foremost, the architecture is still designed for graphics, not for general computing.

The lines between GPGPU and gaming are blurring anyway, with games using DirectCompute, Cuda, OpenCL and PhysX.
If this trend continues, we may need to stop thinking about videocard performance as 'graphics' or 'GPGPU', as the fastest card will be the card with the best combination of the two, depending on the game's demands.
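To make that "blurring" concrete, here is a toy emulation in plain Python of what a unified-shader dispatch amounts to (the `dispatch`/`darken`/`integrate` names are illustrative, not any real API): the same per-element execution model serves both a graphics kernel and a GPGPU kernel.

```python
# Toy emulation of a compute-style dispatch: the same unified shader
# cores run either a graphics kernel (one work-item per pixel) or a
# GPGPU kernel (one work-item per data element), which is why a single
# chip can serve both workloads.

def dispatch(kernel, global_size, *buffers):
    """Run `kernel` once per work-item, like an OpenCL NDRange dispatch."""
    for gid in range(global_size):
        kernel(gid, *buffers)

# "Graphics" kernel: darken one pixel.
def darken(gid, pixels):
    pixels[gid] = pixels[gid] // 2

# "GPGPU" kernel: one Euler integration step for one particle.
def integrate(gid, pos, vel, dt=0.1):
    vel[gid] += -9.8 * dt        # apply gravity to the velocity
    pos[gid] += vel[gid] * dt    # advance the position

pixels = [200, 100, 50]
dispatch(darken, len(pixels), pixels)    # pixels -> [100, 50, 25]

pos, vel = [0.0, 0.0], [0.0, 0.0]
dispatch(integrate, len(pos), pos, vel)  # vel -> approx [-0.98, -0.98]
```

On real hardware the loop body runs across thousands of threads in parallel, but the programming model games see through DirectCompute or OpenCL is essentially this.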
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,700
406
126
So success in terms of sales isn't really an indication of engineering quality. It's more about marketing.

It's about everything.

Different cases have different scenarios.

Let's look at the 4800 vs GTX 2xx and the 5800/5900 vs GTX 4xx.

In one scenario, the cards were released at similar times (within a few days or a couple of weeks of each other). The aggressive prices of AMD were met with aggressive price reductions by NVIDIA.

Now we have a situation where NVIDIA aggressively priced the GTX 470 and the GTX 460 and saw little or no reaction from AMD (especially on the 5850 front).

They are competing products: someone buys either a GTX 470 or a 5850/5870 to put in their PCI-E slot, so a sale for one company is a potential lost sale for the other (excluding some cases where people simply disregard a vendor for one reason or another).

So the fact that the GTX 470 went down in price so sharply, with barely any movement from the AMD cards, seems to imply it isn't selling well.

Why is that?

Performance is good.

The only variables are lateness to market and power consumption - those look quite related to engineering decisions (either TSMC's or NVIDIA's or both).
 

Scali

Banned
Dec 3, 2004
2,495
0
0
So the fact that the GTX 470 went down in price so sharply, with barely any movement from the AMD cards, seems to imply it isn't selling well.

Why is that?

Performance is good.

The only variables are lateness to market and power consumption - those look quite related to engineering decisions (either TSMC's or NVIDIA's or both).

I think supply problems may play a big role here.
The demand for AMD's cards appears to be much larger than the supply.
So if they lower the prices, they only cut into their own profit margins. They will sell out the complete supply anyway, even at these higher prices. So why bother lowering them?
 
Sep 9, 2010
86
0
0
I think supply problems may play a big role here.
The demand for AMD's cards appears to be much larger than the supply.
So if they lower the prices, they only cut into their own profit margins. They will sell out the complete supply anyway, even at these higher prices. So why bother lowering them?

Makes a lot of sense. After all, both AMD and nVidia are about profits. :)
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Makes a lot of sense. After all, both AMD and nVidia are about profits. :)

Yea, if it's true that they sell their complete production... then it's much like the Pentium 4 vs Athlon scenario I mentioned earlier.
You cannot grab more market share when you simply don't have the products to sell. AMD has only raised prices since they introduced the 5000 series, so that is a strong indication that demand is much bigger than supply.
Normally, prices on chips only go down over time: as production matures, yields increase, production costs go down (with scaling up), and more return on investment has been made.
Price increases are VERY rare, especially on GPUs and CPUs. Often you get large price cuts every few months.
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,700
406
126
Yea, if it's true that they sell their complete production... then it's much like the Pentium 4 vs Athlon scenario I mentioned earlier.
You cannot grab more market share when you simply don't have the products to sell. AMD has only raised prices since they introduced the 5000 series, so that is a strong indication that demand is much bigger than supply.

Except Intel didn't drop prices of their P4 products.

NVIDIA dropped prices - so that says something too.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Except Intel didn't drop prices of their P4 products.

Intel didn't have to drop prices because AMD could only supply about 30% of the market.
The other 70% would buy Intel by default.
They actually did undercut AMD's prices with the Pentium D however.

NVIDIA dropped prices - so that says something too.

It says that they're aggressively trying to regain market share.
The consumer is the winner here; both the GTX460 and GTX470 are now excellent value for money.
 

Grooveriding

Diamond Member
Dec 25, 2008
9,147
1,330
126
Why does it have to be an afterthought?
It's all a balancing act. It's a multipurpose chip design. It's not two separate sections of a chip, one for GPGPU and the other for graphics... No, it's one set of unified shaders, which can be applied either way. I would still say that if you were to use the word 'afterthought', then GPGPU is where it applies, as first and foremost, the architecture is still designed for graphics, not for general computing.

The lines between GPGPU and gaming are blurring anyway, with games using DirectCompute, Cuda, OpenCL and PhysX.
If this trend continues, we may need to stop thinking about videocard performance as 'graphics' or 'GPGPU', as the fastest card will be the card with the best combination of the two, depending on the game's demands.

Well that would be where Fermi's failures lie. Something that's designed to perform two functions will, in most cases, make sacrifices where a product dedicated to one function will not.

Regardless, it was not my statement that GF100 was designed foremost as an HPC product, that was BenSkywalker's, but I could see the validity of his statement in the shortcomings of GF100 as a gaming GPU.

DirectCompute, Cuda, OpenCL and PhysX are functions that have been available for years now. There is one game that makes use of Cuda and about 10 games that use GPU PhysX. These features are non-starters and marketing bullet points, not future technology we can expect to see adopted in games going forward. But this argument has been done to death in many a thread here.

Once DX12 or DX13 implements physics running on the GPU we'll have something real that will gain traction.

I highly doubt, and certainly hope, that we will never see a time when a card's performance and value are measured not by its ability to deliver higher framerates than another, but by its ability to run proprietary features.

I'd rather features be left as a heterogeneous implementation available to all comers who want to design video cards, not dictated by one company trying to corner the market. Fortunately, from the looks of the reception PhysX and Cuda have received in the gaming market, that looks to be what the market wants as well.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Well that would be where Fermi's failures lie. Something that's designed to perform two functions will, in most cases, make sacrifices where a product dedicated to one function will not.

And I would disagree with you on that.
Advances in API design (mostly pushed by nVidia's pioneering work with G80/Cuda) have made GPGPU a requirement in your hardware now (for both OpenCL and DX11).
So you're going to have to design hardware to perform both functions anyway.
I think especially GF104 has a better balance between graphics and gpgpu than AMD's offerings.

DirectCompute, Cuda, OpenCL and PhysX are functions that have been available for years now.

DirectCompute has been around as long as DX11, so a little under a year. OpenCL was introduced even later.

These features are non-starters and are marketing bullet points, not future technology we can expect to see adopted in games going forward.

And again I disagree with you on that.
Especially with DirectCompute, it is actually part of the DX11 API, and is being used in various DX11 games, for graphics effects such as SSAO (and used by AMD to promote their DX11 parts).
Why would that not be adopted by more games in the future?

Once DX12 or DX13 implements physics running on the GPU we'll have something real that will gain traction.

DX is never going to implement physics, because DX is an API, not middleware.
With DirectCompute, any company can step in *right now* and build physics middleware for DX11+.
You may want to ask yourself why that isn't happening.
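To sketch the API-versus-middleware layering being described (plain Python standing in for the real thing; `ComputeDevice` and `PhysicsMiddleware` are made-up names, not any real library): the compute API only knows how to run kernels over N work-items, and physics middleware is just a library written once against that interface, so it runs on any vendor's conforming device.

```python
# API layer: a vendor-neutral compute interface, standing in for what
# DirectCompute/OpenCL expose. It runs a kernel over n work-items.
class ComputeDevice:
    def run(self, kernel, n, *buffers):
        for i in range(n):
            kernel(i, *buffers)

# Middleware layer: physics logic written once against the API layer,
# with no knowledge of which vendor's device executes it.
class PhysicsMiddleware:
    def __init__(self, device):
        self.device = device

    def step(self, pos, vel, dt):
        def kernel(i, pos, vel):
            vel[i] += -9.8 * dt      # gravity
            pos[i] += vel[i] * dt    # advance position
        self.device.run(kernel, len(pos), pos, vel)

# Any device implementing the API can run the same middleware.
physics = PhysicsMiddleware(ComputeDevice())
pos, vel = [10.0], [0.0]
physics.step(pos, vel, 0.1)  # vel[0] -> approx -0.98, pos[0] -> approx 9.902
```

This is the sense in which DX is "an API, not middleware": the physics layer is a separate product that anyone could build on top of it.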

I highly doubt, and certainly hope, that we will never see a time when a card's performance and value are measured not by its ability to deliver higher framerates than another, but by its ability to run proprietary features.

Now when did 'proprietary' enter the discussion? DirectCompute and OpenCL are not proprietary in the first place, and although PhysX is proprietary now, it is highly likely that either PhysX starts supporting DirectCompute or OpenCL in the future, or another solution pops up in its place (probably either Havok or Bullet at this point).
Now, with the 'proprietary' removed from the discussion once again (thank god, I hate it when people bring politics into a technical discussion), we can once again conclude that GPGPU will be playing a role in "the ability to deliver framerates" in the future.
 
Last edited:

Grooveriding

Diamond Member
Dec 25, 2008
9,147
1,330
126
And I would disagree with you on that.
Advances in API design (mostly pushed by nVidia's pioneering work with G80/Cuda) have made GPGPU a requirement in your hardware now (for both OpenCL and DX11).
So you're going to have to design hardware to perform both functions anyway.
I think especially GF104 has a better balance between graphics and gpgpu than AMD's offerings.



DirectCompute has been around as long as DX11, so a little under a year. OpenCL was introduced even later.



And again I disagree with you on that.
Especially with DirectCompute, it is actually part of the DX11 API, and is being used in various DX11 games, for graphics effects such as SSAO (and used by AMD to promote their DX11 parts).
Why would that not be adopted by more games in the future?



DX is never going to implement physics, because DX is an API, not middleware.
With DirectCompute, any company can step in *right now* and build physics middleware for DX11+.
You may want to ask yourself why that isn't happening.



Now when did 'proprietary' enter the discussion? DirectCompute and OpenCL are not proprietary in the first place, and although PhysX is proprietary now, it is highly likely that either PhysX starts supporting DirectCompute or OpenCL in the future, or another solution pops up in its place (probably either Havok or Bullet at this point).
Now, with the 'proprietary' removed from the discussion once again (thank god, I hate it when people bring politics into a technical discussion), we can once again conclude that GPGPU will be playing a role in "the ability to deliver framerates" in the future.

I copied and pasted your line where you listed those features, but I was speaking to PhysX and Cuda, which I carried on with in that sentence as the features that are not going anywhere.

I wasn't aware that DirectCompute and OpenCL could be used to offer features like running in-game physics on the video card; thank you for that information. Hopefully things will start to evolve using those standards.

I don't think discussing proprietary features is really a political discussion?

DirectCompute and OpenCL being universal and offering these abilities on both platforms is actually exciting.

I look forward to seeing the Radeon 6 series capabilities with those features in its upcoming launch!
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,700
406
126
It says that they're aggressively trying to regain market share.
The consumer is the winner here; both the GTX460 and GTX470 are now excellent value for money.

But since there is no response from AMD, that seems to imply NVIDIA isn't undercutting AMD's sales or prices. And that is why NVIDIA keeps dropping prices.

After all, if AMD weren't able to sell their 5850s/5870s because of the GTX460/GTX470, they would have to drop prices - supply and demand.

If AMD is supply-constrained, there is really no incentive for NVIDIA to drop prices unless people are going into a shop and buying a 5850/5870 instead of a GTX460/470, meaning NVIDIA has cards sitting on shelves and not moving.