Far Cry 2 results: GF100 vs. GTX285


T2k

Golden Member
Feb 24, 2004
1,665
5
81
Professional what? ;)

An engineer involved in one or more of these (hardware/graphics/3D/etc) fields, regardless of which particular one I'm in.

We'll have to excuse you anyway. Forums aren't really your strength in demonstrating congeniality.

I think I'm not following you: what exactly should I have been demonstrating here...?

Double-edged sword you're wielding there, though. On one hand, in this forum, it has been said to the point of exhaustion that 99.999999% of customers buying a graphics card have no use for it other than gaming or the occasional video encode. They couldn't care less about all the other stuff it could do. So why would they care about the company's die size or its profits?

Short answer. They really wouldn't.

Look dude. If a graphics card company delivers competitive performers for competitive prices, what exactly about your "professionalism" persuades you to care how big the die is or how much profit, or lack thereof, a graphics card company makes?

Frankly, I'm stymied. :)
or
Stymie, I'm Franklied. :eek:

Look, "dude", don't confuse your own life & scope with the interests of (professional) others - BTW professional != professionalism -
it's your problem if you, despite hanging out here all the time and spreading the "word" you're tasked with, still have no real interest in this subject...

...or perhaps it's the initiative you got from the mothership? ;)
I happen to be a very lucky person because, as the technical head of a 3D studio - which shall remain anonymous ;) - my job is somehow my hobby as well. Yeah, I'm lucky and you're not. Life is a bitch, I know.
 

T2k

Golden Member
Feb 24, 2004
1,665
5
81
T2k, I didn't even compare with the ATI numbers, because one game can't prove anything. But I compared the 84fps number supposedly for the GTX360 vs. 51fps for the GTX285, 64% difference. That's a nice jump in performance.

Fair enough. I see a lot of BS numbers flying around here, but if we look closer it's more and more obvious that this supposed GTX360 will not be able to beat the 5870 unambiguously, only here and there, and that does not bode well for NV or for us either: a full-blown GTX380 will be needed to beat the 5870, but NV will try to launch it priced well above it, which means my imagined cheap second 5870 won't happen...
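
For reference, the jump quoted above works out like this (a quick sanity check only; the leaked fps figures themselves are unverified):

    gtx360_fps = 84.0  # leaked Far Cry 2 figure attributed to the supposed "GTX360" (unverified)
    gtx285_fps = 51.0  # GTX285 figure from the same comparison
    gain = (gtx360_fps - gtx285_fps) / gtx285_fps
    print(f"{gain:.0%}")  # prints 65%, in line with the ~64% jump quoted above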
 

Lonbjerg

Diamond Member
Dec 6, 2009
4,419
0
0
agreed. price and power are just as important as performance, if not more.

For midrange, yes... for HIGH-end it's like saying you should buy a Lamborghini based on gas mileage ;)

Just doesn't make sense.
 

grimpr

Golden Member
Aug 21, 2007
1,095
7
81
There's no comparison. Whispers say that GF100 is in a class of its own in architecture and efficiency, at least until Northern Islands arrives from AMD in 2011. Take them for what they're worth.
 

at80eighty

Senior member
Jun 28, 2004
458
5
81
Then again, I wonder what kind of results you could get if you tossed a GTX380 together with a 5970 under a Lucid Hydra setup.
 
Feb 19, 2009
10,457
10
76
Here's what we do know: the number of transistors, the die size, and the power draw and clocks of the 448 SP variant, which is presumably the 360. The 512 SP variant is presumed to be the 380, with only speculation on the clocks and no real NV-released numbers.

There's plenty of analysis out there, but the ballpark figure is that it costs NV ~$200 per GF100 die, while ATI's die costs under $100. So a GF100 card is already at least $100 more expensive to make. Then factor in the 384-bit bus and the high power and thermals: the PCB and components end up more expensive as well. NV selling a 360 at $500 means they make a loss; at least $550 is required. Would you buy a 360 over a 5870 if it had similar performance but cost $150 more?
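
A rough back-of-the-envelope version of that math, using only the ballpark figures above (all of them rumored estimates, not official BOM numbers):

    gf100_die_cost   = 200.0  # rumored cost per GF100 die
    cypress_die_cost = 100.0  # rumored cost per ATI Cypress die (stated as "under $100")
    extra_board_cost = 50.0   # assumed extra for the 384-bit PCB, memory and power/cooling parts
    hd5870_price     = 400.0  # rough 5870 street price at the time

    extra_cost = (gf100_die_cost - cypress_die_cost) + extra_board_cost
    print(f"GF100 card costs about ${extra_cost:.0f} more to build")                    # ~$150
    print(f"Price needed to match a ${hd5870_price:.0f} 5870's margin: ${hd5870_price + extra_cost:.0f}")  # ~$550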

I can't imagine many people would. I think ultimately NV is going to sell the consumer cards at a loss just to compete with ATI and maintain market share and brand awareness. They are expecting to recoup $$ from the lucrative HPC market. As a strategy, it's not a complete loss.

But I don't see why they didn't design an architecture purely focused on HPC and sell that instead, then release a consumer card purely focused on gaming. Both fields require intrinsically different specialized hardware. If you make something "general", it's a jack of all trades, master of none.

As for ATI not releasing an HPC GPU, some of the fastest supercomputers in the world are powered by 4870X2s. From what I understand, not every HPC application depends heavily on double-precision floating point; many only need single precision. In that category, ATI has no challenger.
 

Kuzi

Senior member
Sep 16, 2007
572
0
0
There's plenty of analysis out there, but the ballpark figure is that it costs NV ~$200 per GF100 die, while ATI's die costs under $100. So a GF100 card is already at least $100 more expensive to make. Then factor in the 384-bit bus and the high power and thermals: the PCB and components end up more expensive as well. NV selling a 360 at $500 means they make a loss; at least $550 is required. Would you buy a 360 over a 5870 if it had similar performance but cost $150 more?

I can't imagine many people would. I think ultimately NV is going to sell the consumer cards at a loss just to compete with ATI and maintain market share and brand awareness. They are expecting to recoup $$ from the lucrative HPC market. As a strategy, it's not a complete loss.

I see what you mean, but I don't believe selling the GTX360 for $500 would mean a loss for Nvidia, just probably not much profit at the current yields.

From the architecture design of Fermi, it seems Nvidia kind of gave away the desktop market to ATI this round. I'm not saying they won't be able to compete, but just that the price/performance will be in ATI's favor again. It's like a repeat of the GT200 and RV770, but the difference this time is Nvidia will be very late to the party.

But as you said, Nvidia will make up for that by selling expensive Tesla cards for Supercomputing/Data Center Servers/Workstations etc.

But I don't see why they didn't design an architecture purely focused on HPC and sell that instead, then release a consumer card purely focused on gaming. Both fields require intrinsically different specialized hardware. If you make something "general", it's a jack of all trades, master of none.

Because a gaming GPU inherently handles HPC tasks very well; all that's needed are some minor architectural adjustments to "activate" more performance, and the right drivers of course.
 

Zstream

Diamond Member
Oct 24, 2005
3,395
277
136
I see what you mean, but I don't believe selling the GTX360 for $500 would mean a loss for Nvidia, just probably not much profit at the current yields.

This is where people are mistaken. Multiple factors go into the cost of the GPU, not just die size/PCB etc... A price point of $500 means they must sell a ton of HPC cards to recoup what was sunk into the GPU.

R&D is not just a sunk "soft" cost; it costs real dollars to keep people employed.
Marketing: Nvidia has to market the card. This costs the company money.
Shipping, packaging and accessories, combined with only a 20% yield, will drive the cost of the product up.
 
Feb 19, 2009
10,457
10
76
See, if they didn't stick so hard to the design of one giant monolithic GPU that is a generalist, and focused instead on making two smaller, very specific GPUs, they wouldn't be in this situation where they are selling to consumers at a loss and recouping $$ by selling to the HPC crowd.
They would be making $$ in both sectors.

I find it amusing that Jen-Hsun has recently been selling off so many NV shares; in the month of December alone, something silly like $38M US of his own shares. All while keeping a lid on Fermi and keeping the spin going to hype things up.

It will be funny if the HPC crowd decides to go with the 5K series as well, given how efficient it is for the price and TDP. More than 3x Fermi's single-precision floating point performance, and only about 20% less in double precision, doesn't seem that bad.
 

polenta

Junior Member
Jan 16, 2010
1
0
0
So when Fermi comes out in 08/09 2010 we'll also have ATI's new 6xxx versions:

HD 6850 vs. GTX 340 (?)
HD 6870 vs. GTX 360
? vs. GTX 380
HD 6970 vs. GTX 395 (?)

let's wait and see.

//
Any ATI HD 6xxx news?
 

Rezist

Senior member
Jun 20, 2009
726
0
71
I know ATI has really good SP numbers, but I didn't think anyone was using them, due to the lack of development software to utilize them.
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
56
91
See, if they didn't stick so hard to the design of one giant monolithic GPU that is a generalist, and focused instead on making two smaller, very specific GPUs, they wouldn't be in this situation where they are selling to consumers at a loss and recouping $$ by selling to the HPC crowd.
They would be making $$ in both sectors.

I find it amusing that Jen-Hsun has recently been selling off so many NV shares; in the month of December alone, something silly like $38M US of his own shares. All while keeping a lid on Fermi and keeping the spin going to hype things up.

It will be funny if the HPC crowd decides to go with the 5K series as well, given how efficient it is for the price and TDP. More than 3x Fermi's single-precision floating point performance, and only about 20% less in double precision, doesn't seem that bad.

Couldn't have anything to do with this, right?
http://www.nrc.org/press/PRshow.html?id=3676

Note that it states "The Jen Hsun Huang" School of Engineering Center.
Not "The Nvidia" School of Engineering Center.
He is using his own money to donate to his alma mater, Stanford.
He pledged at least 30 million dollars of his own money for the construction.
Due to be completed in the 1st half of 2010.
 

Kuzi

Senior member
Sep 16, 2007
572
0
0
So when Fermi comes out in 08/09 2010 we'll also have ATI's new 6xxx versions:

HD 6850 vs. GTX 340 (?)
HD 6870 vs. GTX 360
? vs. GTX 380
HD 6970 vs. GTX 395 (?)

let's wait and see.

//
Any ATI HD 6xxx news?

Fermi is rumored to release in the March/April time frame.

The 6xxx series from ATI will very likely be produced on a 28nm process. I think these will be made by AMD/GlobalFoundries, but the 28nm process may not be ready till Q3 or Q4 this year. And usually companies test a new process with low-end chips, then produce the high-end ones, which could push the high-end 6xxx series release to late Q4 or early 2011.
 

akugami

Diamond Member
Feb 14, 2005
6,210
2,552
136
Why exactly is this?

I mean.. In the world I'm used to it is always easier to make something smaller than bigger. What about the latest Nvidia cards makes it so hard to cut down?

I was under the impression that they didn't bother cutting down the GTX200s because of how strong the G92b parts already were in that market. Not really because it would have been technically challenging, just pointless to release a GTX250 that performed within 3% of a 9800GTX given the cost of more R&D.

This certainly paints a bleak picture of current size:performance ratios for Nvidia, but there is no reason we can't see something nice in that market eventually... though they may have to improve area:perf if it is to be cost effective for them.

While no one should question the performance of the 8800/9800 series, it is also not in question that the GT200 cores are great performers. If they could have cut the size down, they would have. I can't tell you exactly why they didn't; I'm not an engineer, but it's likely down to the design of the GPU cores themselves. AMD and nVidia were aiming for different goals.

Given how competitive the Radeon 4 series was, mainly from a bang-for-the-buck perspective, if nVidia could have put out a lower-end GT200 GPU they probably could have stomped on AMD last generation instead of letting AMD get any positive buzz.

I think we'll have something of a repeat this generation, with the GT200 cores relegated to the lower-end $200-and-under segment while the GT300 cores take $300 and up.
 

Zstream

Diamond Member
Oct 24, 2005
3,395
277
136
Given how competitive the Radeon 4 series was, mainly from a bang-for-the-buck perspective, if nVidia could have put out a lower-end GT200 GPU they probably could have stomped on AMD last generation instead of letting AMD get any positive buzz.

This is where you are misguided. It takes 4-5 years to make a new GPU; it only takes a year or so to modify a GPU design. Nvidia has not designed a new GPU from the ground up since the 6xxx series. They have made tweaks and modifications on each new process. This is similar to ATI, which hasn't had a truly new GPU design since R420. Some people say that the R600 was a complete redesign, but it has too many similarities to R420. Yes, additions like unified shaders, hardware tessellation and so forth constitute major enhancements, but I would not consider that a complete overhaul.

Fermi, to my knowledge, is by far the biggest departure of any GPU we have seen in a few years. That is why Nvidia could not have created a smaller, faster variant than ATI: the R&D went into Fermi.
 

T2k

Golden Member
Feb 24, 2004
1,665
5
81
Couldn't have anything to do with this, right?
http://www.nrc.org/press/PRshow.html?id=3676

Note that it states "The Jen Hsun Huang" School of Engineering Center.
Not "The Nvidia" School of Engineering Center.
He is using his own money to donate to his alma mater, Stanford.
He pledged at least 30 million dollars of his own money for the construction.
Due to be completed in the 1st half of 2010.

Jesus Christ, Huang is a bigger douche-bag than I thought... classic, truly classic - I wonder what kind of issues he's fighting inside...:twisted:
 

yh125d

Diamond Member
Dec 23, 2006
6,886
0
76
I think we'll have something of a repeat this generation, with the GT200 cores relegated to the lower-end $200-and-under segment while the GT300 cores take $300 and up.

I'm not sure about the GT200 making up the middle-low end. It would have to be 40nm to be competitive on price/power, and I don't know if that can be done, since it's already been on 65 and 55nm (apparently being able to be manufactured on 3 different processes is rare? I've heard this somewhere before)
 

T2k

Golden Member
Feb 24, 2004
1,665
5
81
I'm not sure about the GT200 making up the middle-low end. It would have to be 40nm to be competitive on price/power, and I don't know if that can be done, since it's already been on 65 and 55nm (apparently being able to be manufactured on 3 different processes is rare? I've heard this somewhere before)

Once they sorted out all the issues they switched a few teams over to preparing the cut-down Fermi derivatives... I bet they are readying an all-out assault for the spring (Apr-May), bringing out all the Fermi derivatives, down to the $100 level.
 

Daedalus685

Golden Member
Nov 12, 2009
1,386
1
0
While no one should question the performance of the 8800/9800 series, it is also not in question that the GT200 cores are great performers. If they could have cut the size down, they would have. I can't tell you exactly why they didn't; I'm not an engineer, but it's likely down to the design of the GPU cores themselves. AMD and nVidia were aiming for different goals.

Given how competitive the Radeon 4 series was, mainly from a bang-for-the-buck perspective, if nVidia could have put out a lower-end GT200 GPU they probably could have stomped on AMD last generation instead of letting AMD get any positive buzz.

I think we'll have something of a repeat this generation, with the GT200 cores relegated to the lower-end $200-and-under segment while the GT300 cores take $300 and up.

We did eventually see cut-down GT200s... but they had a much worse performance/die-area ratio. Thus I assumed that it was not a technical issue that delayed cards such as the 240, but the fact that they could not have been profitable on anything but 40nm, and even then it was pushing it.

Something I think some are forgetting is that Nvidia also makes a fair chunk less than the MSRP on each card they make. Unless they managed to get BFG/EVGA to sign a pledge to never make a profit. I'm not sure of the margins on these things, mind you.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
Fair enough. I see a lot of BS numbers flying around here, but if we look closer it's more and more obvious that this supposed GTX360 will not be able to beat the 5870 unambiguously

It's unfortunate that one of the best tech sites in the world has a forum littered with this kind of trash. YOU HAVE NO IDEA IF THIS IS TRUE. You are just guessing - more like hoping - and stating it as fact.

IN FACT, if the benchmarks put forth are indeed accurate, then it is more than likely the gtx360 will perform higher than the 5870. Your spins have gotten extremely old.
 

ronnn

Diamond Member
May 22, 2003
3,918
0
71
It's unfortunate that one of the best tech sites in the world has a forum littered with this kind of trash. YOU HAVE NO IDEA IF THIS IS TRUE. You are just guessing - more like hoping - and stating it as fact.

IN FACT, if the benchmarks put forth are indeed accurate, then it is more than likely the gtx360 will perform higher than the 5870. Your spins have gotten extremely old.

This whole thread is one big if. IF... nvidia could put some cards on shelves, life would be better. Anyway, the in-house benchmarks I have seen are mixed.
 

Daedalus685

Golden Member
Nov 12, 2009
1,386
1
0
It's unfortunate that one of the best tech sites in the world has a forum littered with this kind of trash. YOU HAVE NO IDEA IF THIS IS TRUE. You are just guessing - more like hoping - and stating it as fact.

IN FACT, if the benchmarks put forth are indeed accurate, then it is more than likely the gtx360 will perform higher than the 5870. Your spins have gotten extremely old.

While he was a bit on the extreme side, he is more or less correct in this case.

The benchmark floating around of the 360 (assuming that is what it is) was done on the small farm map, not at all the same kind of test most review sites show for FC2. The sites that list the test as the same map show numbers for the 5870 almost identical to what we saw in those leaks (PC Perspective shows 83.5 for the 5870 on that map... though keep in mind they also peg the 285 at 60+ fps on the same map). Though who knows if this was an overclocked card or a relatively slow engineering sample.

As you said, we can't know for sure for some time... try a bit less hypocrisy next time before you decry someone's spin with spin.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
This is where you are misguided. Nvidia has not designed a new GPU from the ground up since the 6xxx series. They have made tweaks and modifications on each new process. This is similar to ATI, which hasn't had a truly new GPU design since R420. Some people say that the R600 was a complete redesign, but it has too many similarities to R420. Yes, additions like unified shaders, hardware tessellation and so forth constitute major enhancements, but I would not consider that a complete overhaul.

Most people would agree that the 7 series was an evolution of the 6 series. G80 was a brand new architectural design. From this perspective, GT200 is to G80 what the 7 series was to the 6 series. To say that NV hasn't had a complete redesign of a GPU since GeForce 6 is incorrect. The unified shader architecture was a path towards complex shaders as opposed to fixed-pipeline properties. If you don't consider G80 a new GPU from the ground up, then I don't really know what you would consider a new generation. Similarly, on the ATI front, the major change occurred starting with the R500 (the Xbox 360 variant GPU), which is in many ways the precursor to the R600 desktop PC graphics series. The R600 was the foundation of the Radeon HD 2000/3000 series, which was a complete redesign from the R520 days. Again, to claim that R420/R520/R600/RV770 and the current chip are just "extensions" of R420 is incorrect.
 

Genx87

Lifer
Apr 8, 2002
41,091
513
126
It means they won't be selling the 360 at $400 or the 380 at $500. They will most likely restrict sales to consumers to very low volume if they are going to be taking a huge loss, and focus on the HPC market with Tesla.

It means that this generation ATI will have complete dominance, and no competition = bad for us all.

nV's refusal to split their architecture and have two different chips, one for HPC servers and one for consumers, means the GF100 core is bloated with features that neither market uses but both are paying for. Doesn't add up when your main competitor is going so lean and mean.

Edit: Better? If you equate that to being hotter and using more power for similar performance... funny indeed.

You think it would be wiser to manufacture two distinct chips for the market? One for HPC and one for the game market? Why then hasn't Nvidia done this? And why do Nvidia's margins destroy ATI's margins if this is such an unwise business decision? I'll tell you why: because it is cheaper to build a single GPU and target it via drivers and board specifications than to have two teams design two chips and then run those two chips through an already constrained company like TSMC.

Intel doesn't do what you recommend and neither does AMD on the CPU side. What is so special about GPUs that you think it should be done?