Charlie's thoughts on the Fermi derivative parts


Voo

Golden Member
Feb 27, 2009
1,684
0
76
They could just not enable SLI/Crossfire on the non-high-end cards
Which would mean you'd have to fake the Card ID... something which has absolutely never been done before... except for using PhysX with an ATI main card, or using the nvidia-only AA codepath... in the last three months ;)
 

v8envy

Platinum Member
Sep 7, 2002
2,720
0
0
You mean, like the GT240 and slower? NV is already on it.

It wouldn't surprise me to see SFR make a grand re-entrance once ATI puts sideport or its evolutionary successor back on the feature list, hopefully for NI.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
They could just not enable SLI/Crossfire on the non-high-end cards

What do you mean by this?

HD5750 is not a high-end card....but it is Crossfire enabled?

I have even seen HD4670s Crossfired.
 

ronnn

Diamond Member
May 22, 2003
3,918
0
71
One Charlie comment I have to point out-



The 480 is selling for $500 and he's saying they can't make a profit? That is so utterly moronic it has to be called out. If a magic fairy was delivering 5830 chips to ATi they would be taking a loss on them using that same logic. A $250 price rift between GPU cost and retail is huge.

The 5830 can't lose money, as they are just using wasted chips. Compare the 480 to the 5870 for more realistic figures.

This thread is really just flamebait - as Charlie sure loves to bait the faithful.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
The 5830 can't lose money, as they are just using wasted chips. Compare the 480 to the 5870 for more realistic figures.

This thread is really just flamebait - as Charlie sure loves to bait the faithful.

Yep, the HD5830 seems like a truly harvested bin. The math/bean counting on these selections must get really interesting. They actually had enough questionable chips that disabling half the ROPs and clocking 1120 cores @ 800 MHz ends up consuming more energy than the much higher-performing HD5850.
 

Dark4ng3l

Diamond Member
Sep 17, 2000
5,061
1
0
Yes, but they needed 5 months to pile up enough chips to release the 5830. The 5830 is a part they don't intend to keep selling as yields improve over time. They might cripple some good chips to make 5850s, but the 5830 is a card that is going to be completely gone in 4-5 months.
 

Ben90

Platinum Member
Jun 14, 2009
2,866
3
0
Which would mean you'd have to fake the Card ID... something which has absolutely never been done before... except for using PhysX with an ATI main card, or using the nvidia-only AA codepath... in the last three months ;)
ComputerBottleneck said:
What do you mean by this?

HD5750 is not a high-end card....but it is Crossfire enabled?

I have even seen HD4670s Crossfired.

No, what I mean is that for future generations they would not allow the low/mid-range cards to SLI/Crossfire, similar to how low-end Xeons cannot be used in multi-socket configurations.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I think they should chalk this round up to the red team and actually put a good card out next cycle.

They can too.

The thing is, they already artificially neutered FP64 performance on the current Fermi for GPGPU compute tasks. Why not throw out PhysX, CUDA and all that nonsense and sell a pure gaming card? Then include all the super GPGPU features in a standalone card for financial institutions, science applications, etc. Instead of selling those cards for $1-2 grand as they always do, sell the GPGPU card for $300-400 to entice businesses to upgrade. The GPGPU card should support 3-6 monitors. Then we have a winner on both fronts. As it stands, they are making a card with gaming capabilities that no one in the business world needs and a card with abundant GPGPU abilities that no one in the gaming world wants!
 

Lonyo

Lifer
Aug 10, 2002
21,938
6
81
They can too.

The thing is, they already artificially neutered FP64 performance on the current Fermi for GPGPU compute tasks. Why not throw out PhysX, CUDA and all that nonsense and sell a pure gaming card? Then include all the super GPGPU features in a standalone card for financial institutions, science applications, etc. Instead of selling those cards for $1-2 grand as they always do, sell the GPGPU card for $300-400 to entice businesses to upgrade. The GPGPU card should support 3-6 monitors. Then we have a winner on both fronts. As it stands, they are making a card with gaming capabilities that no one in the business world needs and a card with abundant GPGPU abilities that no one in the gaming world wants!

The GPGPU card is already different to the consumer card because it has more RAM and also supports ECC. That's quite a significant difference, and possibly a rather important one if you are doing anything mission critical.

GPGPU as you seem to be considering it is also not necessarily the same as workstation class. If I am a financial institution (running calculations) why do I need 3 to 6 monitor support?
Now if I am a financial analyst, I might want a wall of monitors on my workstation card...
If I am using something like Photoshop or a video editing package with CUDA extensions, then I might want many monitors as well as GPGPU capability.
And if I am an end user I might want to be able to run PhysX and accelerate my video transcoding (to let me play my files on my phone or iPad, upload a smaller video to YouTube, etc.).
NV are trying to give everyone the opportunity to use all the features if they need them, rather than having more of an "upsell" type system when it comes to features (rather than just speed). It also increases the chance of those features being widely adopted. And gives them an edge over the competition.
Plus designing a chip which doesn't have the features would cost money, and might make lower end parts more profitable in the short term, or at least cheaper to produce, but might not be a great long term idea.

It kind of screwed over some people when Intel disabled VT on low-end chips and then Windows 7 Pro's XP Mode turned out to need it; IIRC they ended up enabling it on lower-end chips anyway.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
If anyone thinks a company isn't turning a profit selling a video card for $500 using a chip that costs $250 they do not understand this market at all.

$250 is the variable cost for the company (never mind the memory, PCB, etc.). What about fixed costs? Add to this the R&D costs that were incurred in designing the GPU. Think about it: a car manufacturer can sell every single car at a profit and still be unprofitable, because once you add R&D costs, tooling, and other overhead, you are toast. You are just looking at the "variable manufacturing cost per chip". This is technically not the total cost of the GTX480.

In other words, the cost of the GTX480 is equal to: (1) fixed costs = the initial costs to make the product viable, testing costs including various respins, R&D spending, etc., plus (2) variable costs to manufacture the chip, market the chip, and add the PCB, memory, etc. Think of the manufacturing costs as Cost of Goods Sold and the rest as SG&A expenses (you can calculate these per card if you want). In aggregate, if you allocate the total costs involved in designing + manufacturing + marketing the graphics card, they are far more than $250 per unit (which is just the cost to make one chip). In fact, the fewer cards you sell, the larger the share of fixed costs in each unit's total cost. The article is misleading in suggesting that the total cost is $250.
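
To put rough numbers on that allocation argument, here is a minimal sketch. Only the $250 per-chip figure comes from the article being discussed; the board-cost, fixed-cost and volume numbers are invented purely for illustration.

```
# Per-unit cost allocation sketch (Python). The $250 per-chip variable cost is
# the figure quoted in the article; everything else below is a made-up placeholder.

def per_unit_cost(fixed_costs, variable_cost_per_card, units_sold):
    """Spread fixed costs (R&D, respins, tooling) across the units actually sold."""
    return variable_cost_per_card + fixed_costs / units_sold

CHIP_COST = 250            # variable cost per chip (from the article)
BOARD_EXTRAS = 135         # hypothetical PCB, memory, cooler, assembly
FIXED_COSTS = 500_000_000  # hypothetical R&D + respins + validation

for units in (500_000, 1_000_000, 2_000_000):
    cost = per_unit_cost(FIXED_COSTS, CHIP_COST + BOARD_EXTRAS, units)
    print(f"{units:>9,} cards sold -> effective cost ${cost:,.0f} per card")
```

The point is simply that the effective per-unit cost falls as volume rises, so a flat "$250 per chip" number says nothing about overall profitability on its own.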
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
... might make lower end parts more profitable in the short term, or at least cheaper to produce, but might not be a great long term idea.

The 5770 doesn't have double precision, the Mercedes C-class doesn't have night vision or carbon ceramic brakes, not every Intel CPU has HT, and Windows 7 comes in various versions. Every single successful company that continues to make $ always differentiates on both features and performance. Right now NV is only going to be differentiating on performance, since every GTX4xx series card will have PhysX, CUDA and other mostly useless bloat. If you want those features, pay $500 to get them in their best card. If you want to pay $200 for just graphics performance, you don't have that option at all - but ATI does have it. The result is a bloated chip for every segment. How in the world do you compete? You only add extra features when the cost of adding them is marginal (i.e., when ABS brakes and side curtain airbags became cheap enough, nearly every car shipped with them).

Soon enough, top-end technology features travel down to the lowest parts. NV's strategy is to add most high-end features to every price level right away. This makes no sense, since you can't recover those costs on the lower and mid-range cards. GTX480 is just too far ahead of the manufacturing process this time.
 

Magusigne

Golden Member
Nov 21, 2007
1,550
0
76
GTX480 is just too far ahead of the manufacturing process this time.

Yup. That's what I'm leaning toward. You can see it in the infrared pics, with the heat dumping not being adequate...

When NV does get it right, it will indeed be a beast. Until then, they might need to adjust their short-term business strategy, as ATI has been out-hustling them over the last cycle or two.

Cut the fat with PhysX and maybe even CUDA, and then once the manufacturing process can be reworked, bring them back in full force.
 

Voo

Golden Member
Feb 27, 2009
1,684
0
76
No, what I mean is that for future generations they would not allow the low/mid-range cards to SLI/Crossfire, similar to how low-end Xeons cannot be used in multi-socket configurations.
You'd have to stop that at the physical layer - no idea how easy/hard that'd be, and you could still use an X830 GPU instead of buying the high-end GPU.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Fermi was not made primarily with gamers in mind; it targets CUDA and the business sector (sort of like the i7, actually).
It allows a lot more in CUDA than before, and I have seen a benchmark showing it to have over 8x the ray tracing performance of nvidia's last gen as a demonstration of what it can do now.
The thing is, ray tracing isn't used in any games and isn't going to be any time soon... and CUDA is nice for scientists and businesses, but not useful for gamers as of right now.

That being said, a half Fermi would be fine... nvidia might not be able to take in 200% profits like it has gotten used to, but it can still price it competitively with AMD. I don't know what the deal is with all the doom and gloom sayers.
 

Ben90

Platinum Member
Jun 14, 2009
2,866
3
0
You'd have to stop that at the physical layer - no idea how easy/hard that'd be, and you could still use an X830 GPU instead of buying the high-end GPU.
I don't see how it could possibly be hard at all. I can disable any SLI/Crossfire configuration within seconds using a pair of scissors to cut that part of the PCB off. I wouldn't even need to take the card out of the case.

A more elegant solution would be to not design any SLI/Crossfire transistors into the chip itself, saving die space and power. That would take anywhere from a few minutes to a few weeks (months?), depending on whether they just leave those transistors off the silicon or design the new architecture around having that extra space.

Apparently Nvidia doesn't care if you SLI their lower-end G92 cards, because they didn't disable it, and ATI designed Juniper to be Crossfire-compatible, so all of this is moot without perfect SLI/Crossfire scaling. If we ever get perfect scaling, ATI/Nvidia can decide whether they want to enable or disable multi-card setups, so it's pretty pointless to debate this.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
In other words, the cost of the GTX480 is equal to: (1) fixed costs = the initial costs to make the product viable, testing costs including various respins, R&D spending, etc., plus (2) variable costs to manufacture the chip, market the chip, and add the PCB, memory, etc. Think of the manufacturing costs as Cost of Goods Sold and the rest as SG&A expenses (you can calculate these per card if you want). In aggregate, if you allocate the total costs involved in designing + manufacturing + marketing the graphics card, they are far more than $250 per unit (which is just the cost to make one chip). In fact, the fewer cards you sell, the larger the share of fixed costs in each unit's total cost. The article is misleading in suggesting that the total cost is $250.

When did nVidia buy TSMC? A lot of the costs you associate with a typical production-based business do not apply to a fabless chip company. In this market, you take your gross sales compared to manufacturing costs to get your margins; R&D and continuing operations (which include marketing, etc.) are kept in separate columns (as marketing for nVidia overwhelmingly doesn't apply to one part but to the brand). You also mix up some of what nVidia does and some of what BFG/eVGA et al. handle on their end. nVidia's business model and how it is accounted for is very straightforward.
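
For what it's worth, here is a minimal sketch of the accounting split being described: gross margin counts only the cost of building the cards, while R&D and SG&A sit below that line as operating expenses. Every dollar figure is invented for illustration and has nothing to do with nVidia's actual books.

```
# Gross margin vs. operating margin, with made-up numbers (Python).
# Gross margin looks only at revenue vs. cost of goods sold; R&D and SG&A
# are tracked separately and only hit the operating line.

cards_sold    = 100_000            # hypothetical volume
revenue       = 500 * cards_sold   # $500 per card
cost_of_goods = 385 * cards_sold   # hypothetical per-card build cost
r_and_d       = 8_000_000          # hypothetical R&D allocation
sga           = 2_500_000          # hypothetical marketing/overhead

gross_profit     = revenue - cost_of_goods
operating_profit = gross_profit - r_and_d - sga

print(f"gross margin:     {gross_profit / revenue:.1%}")
print(f"operating margin: {operating_profit / revenue:.1%}")
```

Both posters can be "right" depending on which line they are arguing about: the per-card gross margin can be healthy while the company-level fixed costs are accounted for elsewhere.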

Every single successful company that continues to make $ always differentiates on both features and performance. Right now NV is only going to be differentiating on performance, since every GTX4xx series card will have PhysX, CUDA and other mostly useless bloat.

So all the GT200/G92 and Tegra parts vanished off the face of the Earth? When did that happen? As far as 'useless bloat' goes - it appears that 'useless bloat' is handing nV the HPC market gift-wrapped, and it is also getting some developer support on the gaming side now too. That 'useless bloat' is only useless if they don't manage to gain market presence with it. Trying to avoid expanding your market is considered bad in most businesses. When you know for a fact that two other companies are trying to push you out of your core business, not pushing back in their direction as much as you reasonably can is also bad. That 'useless bloat' is what Intel is dropping billions of dollars on to compete with nV.

NV's strategy is to add most high-end features to every price level right away. This makes no sense, since you can't recover those costs on the lower and mid-range cards.

You may want to call nVidia up and tell them to start giving back the billions of dollars in profits they generated over the years doing precisely that. Clearly you know of a better way to do it; perhaps you were with the brilliant businessmen at 3Dfx who knew better when they thought hardware T&L was useless bloat. Those guys were pure genius; I heard Warren Buffett is begging them to help him out with his business strategy :D

In reality, no one has ever failed in the video card market by having too many features long term. It has not happened. Conversely, all of those who dragged their feet about implementing new features for too long have either gone out of business or are relegated to fractions of a percentage point of market share at best. That is the video card market.

nVidia had 32-bit color support too early, they had full-speed trilinear too early, they had anisotropic filtering too early, they had hardware T&L too early, they supported programmable hardware too early, and now they support GPGPU too early. They end up with bigger chips than their competitors almost all of the time; they almost always run hotter and are more expensive too. To date, their strategy works.

A lot of people are trying to write the 480 off as a failure - too big, too hot, too fast, too much shader power, too loud, etc. Time will tell, but given nV's track record, writing them off for doing exactly what they have been doing all along seems a bit foolish. No one is writing ATi off for being slower, having fewer features and going after the bargain shopper. That strategy has worked for them too, perhaps not nearly as well, but it has worked. Until we see a shift in the market, writing off either strategy - particularly the one that has proven to be the better long-term solution to date - seems foolish. Both of them have their place; they are two different approaches, and so far we have seen that both can be profitable.
 

Dark4ng3l

Diamond Member
Sep 17, 2000
5,061
1
0
You have to wonder why Ben spends all his time defending nvidia. I mean, you do sort of have a point, and both of you are as wrong as you are right about the accounting stuff. At the end of the day, just because nvidia subcontracts the production of its chips does not mean they don't end up paying the associated fab costs; they just pay them indirectly. And you guys need to analyse the costs of projects as the costs incurred by doing the project vs. what not doing the project would cost. Nothing exists in a vacuum.

Anyways, I hope Ben is an aspiring Focus Group member, because he spends too much time being nvidia's green knight.

Please refrain from baiting other members into an altercation. Discuss the subject matter and not what you think of a member for having an opinion.
Anandtech Moderator - Keysplayr
 
Last edited by a moderator:

Ben90

Platinum Member
Jun 14, 2009
2,866
3
0
Maybe it's just a Ben thing, but I feel this forum has a slight red tint to it. Whenever a conversation like this comes up, everyone is suddenly a marketing/engineering/programming master, Nvidia is the stupidest company in the world, and anyone who thinks differently is a fanboi paid by Nvidia.

I've never seen BenSkywalker use a Wreckage argument. He always provides facts showing that the market does not agree with our AnandTech "marketing experts" who predict Nvidia is going to fall flat on its face.

Reread his last post with an open mind and point out one thing that is not 100% true or is misinterpreted.
 

golem

Senior member
Oct 6, 2000
838
3
76
You have to wonder why Ben spends all his time defending nvidia. I mean, you do sort of have a point, and both of you are as wrong as you are right about the accounting stuff. At the end of the day, just because nvidia subcontracts the production of its chips does not mean they don't end up paying the associated fab costs; they just pay them indirectly. And you guys need to analyse the costs of projects as the costs incurred by doing the project vs. what not doing the project would cost. Nothing exists in a vacuum.

Anyways, I hope Ben is an aspiring Focus Group member, because he spends too much time being nvidia's green knight.

By the same thinking, one has to wonder why you're always attacking everyone and anyone who has anything even remotely positive to say about nvidia??
 

badb0y

Diamond Member
Feb 22, 2010
4,015
30
91
Who cares who makes more money right now? We as consumers should only be interested in who gives us the products we need at the price we want. Period.

As for the early-adoption argument presented by Ben, I agree with him: more features doesn't mean the company will fail. I mean, ATi has Eyefinity, packed in DirectX 10.1 early, added hardware tessellation long ago, etc. Early adoption doesn't mean bad business at all; in fact, I would argue that we need implementations of features like this to push the envelope in the graphics market. I admire what nVidia tried to do with Fermi, but that doesn't mean I have to support it.

At the end of the day this forum has a "red tint" because right now ATi is providing us products that have better value than nVidia's. That is pretty much the overall feeling not only on these forums but all over the web. They priced their products nicely and nVidia hasn't answered. I'm pretty sure if you go back to the G80 days this forum would have had a "green tint." People will support companies that provide better value; it's really not that hard a concept to grasp. Who knows, maybe 6 months down the line nVidia is the people's champion in value.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Who cares who makes more money right now? We as consumers should only be interested in who gives us the products we need at the price we want. Period.

In the abstract sense I agree with you, but there are certain factors that make it actually important to us that these companies do make money. Back when 3Dfx, Matrox, S3, ATi and nVidia were all slugging it out, we had a plethora of choices and variety and were able to select a part that was the ideal match for us as consumers. Back in those days, Matrox had the stellar 2D for photo editing and the like, 3Dfx owned the Glide market, nVidia was the only way to go for OpenGL south of the $1K parts, ATi was giving the best video/HTPC performance by a long shot, and S3... well, S3 parts were cheap. Instead of paying a significant price premium for a high-end part that you were only going to use a small portion of, you were able to buy something that suited your needs rather cheaply.

Now those companies are either dead or marginalized outside of the big two. If one of them has a business model that makes them incapable of making money, eventually they will go under. The last thing we want is no competition at all. Look at the situation over the last six months - we have seen price/performance get significantly worse than it was a year ago at this time. That is what happens when we have no competition on the high end. Unlike a lot of hypocrites who post on this board, I think any business is only being smart by maximizing its profits while it can, no matter if it is red or green. ATi maximizing profits at every opportunity allows them to remain viable.

The reason I posted in this thread is that Charlie has yet again made absurd claims about the business end of operations. He is trying to convince people there is no way nVidia can make money on these parts and that they have to be losing tons of money. If I were honestly interested in trying to make nVidia look good - which certainly isn't my intent - I would keep my mouth shut. The reality is that nVidia is going to rake in large profits at the $500 price point, so much so that you can say it is a good example of them taking advantage of their market position. My pointing out that nV is still going to be quite profitable with these parts should, from a consumer's perspective, be considered a swipe at nV - but it does help ensure that they are going to be around, so it isn't a bad thing long term IMO.

I mean, ATi has Eyefinity, packed in DirectX 10.1 early, added hardware tessellation long ago, etc.

For the record, I don't care for Eyefinity or 3D Vision at all - either one of them; I find them both gimmicky and grossly overrated. Hell, we had "Eyefinity" a decade ago from Matrox and I didn't like it then either. That said, you won't find me ever bashing ATi for adding it (nor nV for adding 3D Vision), because adding features is not something I ever see as a bad thing. Hell, maybe it is a feature only one person on the face of the Earth will use; I'm sure that one person is glad someone took the time to add it. The nice thing about the features from either company right now is that it takes almost no effort to find people who are using the features on a regular basis, even if it isn't those who are criticizing them. Adding new features is a good thing; it doesn't matter who does it.

They priced their products nicely and nVidia hasn't answered.

No, they really didn't. ATi currently has their products priced smartly. They are taking in huge margins at the moment. As a consumer, that isn't a good thing for me personally: if the 5850 were going for $200 I would have picked one up and given my wife my old card; at over $300 when I was buying, I didn't think it was worth it. As a consumer looking long term, it is very nice to see ATi taking advantage of the situation while they can, and it will help them stay in business. If ATi didn't exist, Fermi wouldn't be shipping anytime soon; nV would push it back, and when it did come out it would be considerably more expensive than it is. No matter whether you support team red, team green, or are just a tech enthusiast, both companies remaining viable is of huge benefit to us all.
 

TemjinGold

Diamond Member
Dec 16, 2006
3,050
65
91
Which would mean you'd have to fake the Card ID... something which has absolutely never been done before... except for using PhysX with an ATI main card, or using the nvidia-only AA codepath... in the last three months ;)

Err... why be so complicated? Just don't build the SLI connection on the card...

You can't SLI if you don't have anything to physically plug the SLI bridge into. Problem solved.

Also, why does everyone keep saying $250 is the total cost? The article never said that. $250 is the GPU chip. When you add in all the other junk that goes into a card, it said the total was more like $385. And that's LOW-BALLING.
 

Lonyo

Lifer
Aug 10, 2002
21,938
6
81
Err... why be so complicated? Just don't build the SLI connection on the card...

You can't SLI if you don't have anything to physically plug the SLI bridge into. Problem solved.

You don't actually need a connector for SLI or Crossfire, especially at the low end.
At the high end it's usually more necessary, but at the low end it certainly isn't, and the low end is what's in question.

Not sure if NV have a similar graph, but you can see it clearly on this one from ATI:
[Image: ATI Crossfire compatibility chart (CF_combo_chart.jpg)]

Once you get below the HD4770 cards, a Crossfire connector stops being required.

With NV at least the 8600GT series didn't absolutely require one.

So just eliminating the physical connector doesn't remove the ability to do SLI/Crossfire on low-end cards, unless you also remove the capability at the software level.

Also there's that lovely thing called competition.
If NV remove the SLI ability from their low-end or mid-range cards, and ATI don't, who benefits? In a competitive marketplace, removing features isn't really the greatest idea from a business standpoint, even if it reduces costs.