Fermi-based Tesla will be available in Q2 2010.


BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
It's been drawn to my attention that a fused multiply-add counts as 2 FLOPs per cycle.

So in fact, it's 16*16*2, and that would put the shader core clock at 1.2-1.3GHz.

Is this information valid?

Actually it is, don't know why that didn't click with me. That sounds more reasonable too; while 2GHz is a lot more realistic than 3GHz, something in the 1.2-1.3GHz range makes a lot more sense given the issues with 40nm atm.
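The back-of-the-envelope math above can be sketched out directly. This is a minimal sketch assuming the rumored configuration from the thread (16 SMs × 16 DP FMA units, each FMA counted as 2 FLOPs); the function name and the 520-630 GFLOPS targets are just the figures floated earlier in the thread, not confirmed specs.

```python
# Back out the implied shader clock from a DP throughput target,
# assuming 16 * 16 = 256 FMA units with each FMA counted as 2 FLOPs.
def implied_shader_clock_ghz(dp_gflops, fma_units=16 * 16, flops_per_fma=2):
    """Return the shader clock (GHz) needed to hit dp_gflops."""
    return dp_gflops / (fma_units * flops_per_fma)

# The 520-630 DP GFLOPS figures projected for Tesla Fermi:
for target in (520, 630):
    print(f"{target} GFLOPS -> {implied_shader_clock_ghz(target):.2f} GHz")
```

The two targets come out to roughly 1.0-1.23 GHz, which lands right around the 1.2-1.3GHz range mentioned above, rather than the 2-3GHz figures that fall out if an FMA is counted as a single FLOP.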
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
did, did you just make a veiled meatspin quip? :biggrin: lol!

(lord I apologize for that one right there now, that's not right, lord I apologize for that)
 

Painman

Diamond Member
Feb 27, 2000
3,728
29
86
did, did you just make a veiled meatspin quip? :biggrin: lol!

(lord I apologize for that one right there now, that's not right, lord I apologize for that)

What happens in L&R... Stays in L&R. ():)
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91

Why should GF100 be compared to Hemlock if the performance disparity is expected to be so large in a key metric of relevance?

Is GF100 expected to cost $600 like Hemlock does?

So far, we are comparing GF100 to Cypress, when in reality GF100 should be compared to Hemlock. 4.64 TFlops vs. 1.26 TFlops is not much of a comparison at all, CF limitations and ATI's less efficient shaders aside.

I understand nothing prevents one from making the comparison, I just don't get the preconditioning statement "should" in that sentence.

One could compare a really large number to a really small number any time of the day, but why should one do it?
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,002
126
Why should GF100 be compared to Hemlock if the performance disparity is expected to be so large in a key metric of relevance?

Is GF100 expected to cost $600 like Hemlock does?



I understand nothing prevents one from making the comparison, I just don't get the preconditioning statement "should" in that sentence.

One could compare a really large number to a really small number any time of the day, but why should one do it?

It seems like they were trying to get an idea of the Fermi GeForce part's MSRP based on the Fermi Tesla part's MSRP. I'm not too sure how close that'll turn out to be... but based on the $3999 Fermi-based Tesla part, it appears they feel the GeForce part won't be cheap.

Then comes the price. The previous-gen C1060 released at $1699, falling to $1199. Compare this with its fellow GeForce model, the GTX 280, which released at $649, fell quickly to $500, and finally to $300. The price of a next-gen C2070 is a whopping $3999, nearly double the price of the previous-generation C1060. Clearly, these are expensive products to make, so how much can Nvidia sell a GeForce version of Fermi for? Even the cheapest Tesla 20 variant, the C2050, costs $2499, nearly 50% more than the GT200-based C1060 flagship. Can Nvidia sell the $3999 Tesla product at $399 as a GeForce product?
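One rough way to play with the quoted numbers: take the GT200-era Tesla-to-GeForce launch-price ratio and apply it to the announced Fermi Tesla prices. This is pure guesswork — the assumption that the ratio carries over to Fermi is mine, not anything from the article — but the prices themselves come from the quote above.

```python
# GT200-generation launch prices from the quoted article.
c1060_launch = 1699   # Tesla C1060
gtx280_launch = 649   # GeForce GTX 280

# Tesla-to-GeForce launch ratio, assumed (speculatively) to carry over.
ratio = gtx280_launch / c1060_launch   # ~0.38

for name, tesla_price in (("C2050", 2499), ("C2070", 3999)):
    print(f"{name} at ${tesla_price} -> ~${tesla_price * ratio:.0f} GeForce?")
```

That crude scaling would put a GeForce Fermi somewhere in the mid-$900s to ~$1500 — almost certainly too high for a consumer part, which only underlines the article's point that Nvidia can't simply price the GeForce off the Tesla line.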
 

lopri

Elite Member
Jul 27, 2002
13,310
687
126
Actually it is, don't know why that didn't click with me. That sounds more reasonable too; while 2GHz is a lot more realistic than 3GHz, something in the 1.2-1.3GHz range makes a lot more sense given the issues with 40nm atm.

Well so the math that came up with 3GHz wasn't too different from yours, after all. ^_^
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,697
397
126
Why should GF100 be compared to Hemlock if the performance disparity is expected to be so large in a key metric of relevance?

Is GF100 expected to cost $600 like Hemlock does?



I understand nothing prevents one from making the comparison, I just don't get the preconditioning statement "should" in that sentence.

One could compare a really large number to a really small number any time of the day, but why should one do it?

I think the interest here is to guess the shader core speed.

Additionally we have last gen shader count and speed.

Sure, Fermi is a new architecture and blah di blah, so we will have to wait and see how it does in game.

But if last gen was 800 vs 256 and now it is 1600 vs 512 and the clocks are similar, that can possibly (all guesswork that can be completely wrong) indicate that Fermi isn't going to outperform the 5870 by that much.

Then we have xtor counts and estimated sizes. Those seem to point that Cypress is cheaper to make than Fermi.

Yep, loads of guesswork that in the end can amount to nothing.

On the other hand if Fermi shader clock was 3GHz, assuming Fermi would spank Cypress in current games wouldn't be farfetched.

Guess we can compare this to the guesswork about Cypress performance a few months ago, when we started seeing specs that pretty much doubled the 4870.
 

Janooo

Golden Member
Aug 22, 2005
1,067
13
81
I think the interest here is to guess the shader core speed.

Additionally we have last gen shader count and speed.

Sure, Fermi is a new architecture and blah di blah, so we will have to wait and see how it does in game.

But if last gen was 800 vs 256 and now it is 1600 vs 512 and the clocks are similar, that can possibly (all guesswork that can be completely wrong) indicate that Fermi isn't going to outperform the 5870 by that much.

Then we have xtor counts and estimated sizes. Those seem to point that Cypress is cheaper to make than Fermi.

Yep, loads of guesswork that in the end can amount to nothing.

On the other hand if Fermi shader clock was 3GHz, assuming Fermi would spank Cypress in current games wouldn't be farfetched.

Guess we can compare this to the guesswork about Cypress performance a few months ago, when we started seeing specs that pretty much doubled the 4870.

It was 800 vs 240 so 1600 vs 512 is a little better ratio for NV.
Shader clock is very important for Fermi to stay competitive. Only time will tell if 1.2GHz will be enough.
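The corrected shader-count ratios above are easy to check. A quick sketch, using the counts from the two posts (800 RV770 ALUs vs. 240 GT200 SPs last gen; 1600 Cypress ALUs vs. a rumored 512 Fermi SPs this gen) — nothing here accounts for clocks or per-shader efficiency, which is exactly the guesswork the posts flag:

```python
# ALU/SP count ratios, AMD : NV, ignoring clocks and architecture.
last_gen = 800 / 240    # RV770 vs GT200, ~3.3x in AMD's favor
this_gen = 1600 / 512   # Cypress vs rumored Fermi, ~3.1x

print(f"last gen ratio {last_gen:.2f}, this gen ratio {this_gen:.2f}")
```

So the raw-count gap narrows only slightly in NV's favor (roughly 3.3x down to 3.1x), which is why the shader clock — and how much work each Fermi SP does per clock — matters so much for the comparison.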
 

ronnn

Diamond Member
May 22, 2003
3,918
0
71
Charlie was certainly wrong!! Still waiting for those who care to recant their opinion of his sources.
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,002
126
It was 800 vs 240 so 1600 vs 512 is a little better ratio for NV.
Shader clock is very important for Fermi to stay competitive. Only time will tell if 1.2GHz will be enough.

Yea, AMD took the 4850/70/90 SP count and doubled it. Fermi will use 2.13x as many SPs as the GTX 280. But a big part of Fermi's performance will also depend on what kind of clocks Nvidia can get out of the rest of the core. And who knows, I don't think there's any rule insisting that Nvidia use the same SP-to-core clock ratio.
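For reference on that SP-to-core ratio point, a small sketch using the GTX 280's shipping clocks (602 MHz core, 1296 MHz shader). The idea of applying the same ratio to the ~1.23 GHz Fermi shader-clock estimate discussed earlier is purely illustrative — as the post says, nothing forces Nvidia to keep it:

```python
# GTX 280 shipping clocks (MHz).
core_mhz, shader_mhz = 602, 1296
ratio = shader_mhz / core_mhz   # ~2.15

print(f"GTX 280 shader:core ratio ~{ratio:.2f}")

# Hypothetical: if Fermi kept the same ratio at a ~1230 MHz shader clock.
print(f"implied Fermi core clock ~{1230 / ratio:.0f} MHz")
</str>```

Under that (unsupported) assumption, a ~1.23 GHz shader clock would imply a core clock in the high-500s/low-600s MHz — but again, the ratio itself is a knob Nvidia is free to change.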
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,002
126
He was wrong on many things. Which one do you have in mind?

I'm not saying he hasn't been; in fact I'm sure he has. But I don't think I can think of anything he was blatantly, absolutely incorrect about. He has a reputation for a reason, so I assume he's earned it by not getting things correct. He goes out of his way to show Nvidia in the worst light possible. But at least on the subject of this thread, I think he was the first, or at least one of the first, to state that Fermi was going to be delayed all the way until February/March-ish.
 

lopri

Elite Member
Jul 27, 2002
13,310
687
126
It just dawned on me that these projected performance figures (520~630 GFlops) were for Tesla, which would use ECC memory. What would be the performance penalty incurred from that? Conversely, is it reasonable to think the non-ECC versions (i.e. GeForce) may have higher performance, theoretically? Theoretically, since NV will not enable ECC on gaming graphics cards for both performance and economic reasons.

Under this hypothesis, Fermi's DP performance could indeed be 750 GFlops if the performance loss from ECC memory is something like 10~20%.
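The ECC hypothesis above is simple to sketch: if the 520-630 GFLOPS Tesla figures already include an ECC penalty, undoing it gives the hypothetical non-ECC (GeForce) number. The penalty percentages here are the post's guesses, not measured values:

```python
# Undo an assumed fractional ECC performance penalty.
def without_ecc(ecc_gflops, penalty):
    """Return the hypothetical non-ECC throughput."""
    return ecc_gflops / (1 - penalty)

for penalty in (0.10, 0.20):
    print(f"630 GFLOPS with {penalty:.0%} ECC penalty -> "
          f"{without_ecc(630, penalty):.0f} GFLOPS without")

# A ~16% penalty would land exactly on the 750 GFLOPS figure:
print(f"750-GFLOPS check: {without_ecc(630, 0.16):.0f}")
```

So the 750 GFLOPS figure is consistent with the top-end 630 GFLOPS Tesla number plus a penalty of about 16% — squarely inside the 10~20% range hypothesized above.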