No Fermi benchmarks / Price & TDP revealed

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,002
126
This TDP obsession is really retarded.
My GTX285 OC has a TDP of ~250 watts.
If Fermi has a TDP of 295 watts...but ~2x the performance...why should I care?

Unless I am in AMD's camp, worried about performance, so I start focusing on arbitrary yardsticks...

Like how hot an F1 engine gets...compared to your Prius :rolleyes:

I game with these: linky
If I can hear my PC it's my own fault...sounds like a lot of wannabe gamers don't like being mainstream, but think a 400 watt PSU is HIGH END *ROFL*

What is next?
An Athlon X2 is a HIGH END CPU ? *ROFL*


You can tell Fermi is almost here and that it's not that impressive. The Nvidia guys are in full apologist mode.

While I think too much attention gets put on TDP/power use in general, I don't think it's completely insignificant. The problem with the TDP is that AMD's part launched in September of last year, while Nvidia is looking at April of the following year, so they've had a lot of extra time to really get this thing refined. We constantly hear about Nvidia's great reserves of money and Nvidia's huge R&D budget. Fermi is competing with AMD's part that has a 188 watt TDP. Yet with all of these things in Nvidia's favor, we're looking at a part that uses a great deal more power, puts out a lot more heat, is a lot more expensive according to the current rumors (if you believe $499-$599), and is only 5-10% faster on average according to the current news. Where is the progress? This thing is practically launching halfway into a new generation of parts, and it's barely faster while doing worse at all the other things that do indeed matter to many people.
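
For what it's worth, here is a rough back-of-the-envelope sketch (in Python) of what those numbers would mean for efficiency and value if the rumors pan out. Every figure in it is either a rumor or a hypothetical quoted in this thread (the HD 5870's 188 W TDP and $380 launch price, a hypothetical 295 W TDP for Fermi, the rumored $499 price, and a rumored ~10% average performance lead); none are confirmed specs.

```python
# Back-of-the-envelope comparison using only the rumored/hypothetical numbers
# quoted in this thread. None of these are confirmed specs.

hd5870 = {"perf": 1.00, "tdp_w": 188, "price_usd": 380}  # HD 5870 figures cited above
fermi  = {"perf": 1.10, "tdp_w": 295, "price_usd": 499}  # rumored ~10% lead, hypothetical 295 W, rumored $499

def perf_per_watt(card):
    return card["perf"] / card["tdp_w"]

def perf_per_dollar(card):
    return card["perf"] / card["price_usd"]

for name, card in (("HD 5870", hd5870), ("Fermi (rumored)", fermi)):
    print(f"{name}: {perf_per_watt(card):.5f} perf/W, {perf_per_dollar(card):.5f} perf/$")

# With these inputs Fermi would deliver roughly 30% less performance per watt
# and roughly 16% less performance per dollar, which is the substance of the
# complaint above. Different (real) numbers would change the conclusion.
```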

Congrats to Nvidia on almost having the GeForce FermiX 5800 out the door.

And one more thing: the only thing retarded here is the analogies you've used. In NASCAR and with fighter jets they very much care about efficiency... how often a pit stop has to be made, how much range a jet has, and how it performs with more fuel weight are all considered. Even those examples aren't about all-out performance so much as performance with those other things factored in... so your examples have really just done more to prove the opposite of the point you were trying to make. But then again, many enthusiasts also factor power use/heat into their purchasing decisions, so even though your examples made the opposite of the point you intended, the point they actually made was the correct one.
 

scooterlibby

Senior member
Feb 28, 2009
752
0
0
Yeah, I agree it's several factors that might make Fermi a huge letdown: 6 months late + performance only 10% better than AMD's flagship single GPU + TDP + price.

Really though, performance could have been (and who knows, may be) the saving grace. If Fermi were $500 and 30% faster than the 5870, people would not be nitpicking as much about the TDP and the delay. Unfortunately, all signs do not point to this outcome.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
Yeah, I agree it's several factors that might make Fermi a huge letdown: 6 months late + performance only 10% better than AMD's flagship single GPU + TDP + price.

Really though, performance could have been (and who knows, may be) the saving grace. If Fermi were $500 and 30% faster than the 5870, people would not be nitpicking as much about the TDP and the delay. Unfortunately, all signs do not point to this outcome.

And once again, there are no official reviews or benchmarks. However, if Fermi is only equal to or slightly better in DX10 games, but is 23-30% faster in DX11 games, is it still a letdown?
 

OCGuy

Lifer
Jul 12, 2000
27,224
37
91
And once again, there are no official reviews or benchmarks. However, if Fermi is only equal to or slightly better in DX10 games, but is 23-30% faster in DX11 games, is it still a letdown?

Fermi would be a "letdown" to about 80% of this particular board, even if it shit golden eggs on command.

There are no official benchmarks, and they are already quoting percentages.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Really though, performance could have been (and who knows, may be) the saving grace.

Based on the performance hits Cypress takes with Tessellation enabled, we might very well see Fermi achieve much better minimum frame rates.
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,002
126
Fermi would be a "letdown" to about 80% of this particular board, even if it shit golden eggs on command.

There are no official benchmarks, and they are already quoting percentages.

And I think Fermi will be a must-have product for plenty of people on this board even if all the rumors are correct.

Seeing as Nvidia has been so quiet, rumors are about all we have to go on. You would think that they'd have at least some good things to say at this point to try and stop people from buying AMD parts, since those are readily available and most of the current Fermi-related rumors are pretty negative. Some of these same rumor sites told us back in September that Fermi wouldn't be here until February at the earliest, with April more realistic. Seems that they may have some information that is worth listening to.
 

scooterlibby

Senior member
Feb 28, 2009
752
0
0
And once again, there are no official reviews or benchmarks. However, if Fermi is only equal to or slightly better in DX10 games, but is 23-30% faster in DX11 games, is it still a letdown?

That's a hard question to answer. Given the lack of cardinal utility, some might not be disappointed by that. I think I would be, though.
 

*kjm

Platinum Member
Oct 11, 1999
2,222
6
81
I’m very unbiased because I own both Nvidia and ATI/AMD cards. Working in electronics also helps me out in this area. If the power usage and stats are right, ATI would win this round, but they had better watch out. Nvidia will get their process down and will have a new chip to grow on. That is when we will see the tide shift, unless ATI has been developing a new GPU architecture of their own. All the heated debates never bother me because I will end up getting the best for my needs at the right time, or I would still be running 3DFX. I think most of us “old timers” think this way. I just wish we could get back to a four-player game like the old days… 3DFX/Nvidia/ATI/Matrox. But those days are gone :)

Again, get what you need or feel good about, but the complaints about drivers from both parties are getting old. If you/we are buying the latest and greatest from either company, you will have growing pains. Consumers are the ones who have demanded the market that high tech lives in.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
I wonder how much benefit Fermi would derive from a triple slot cooler (exhausting heat out the back)?

With this arrangement the fan could be thicker and the heatsinks much larger. Wouldn't the U-shaped slot cut into the PCB work to help feed more air to a thicker fan?
 

Lonyo

Lifer
Aug 10, 2002
21,938
6
81
Fermi would be a "letdown" to about 80% of this particular board, even if it shit golden eggs on command.

There are no official benchmarks, and they are already quoting percentages.

Are you trying to claim 80% of this board is ATI fanboys? Because the most vocal people here seem to be those with a predisposition towards NV, then there are a few ATI fanboys, and then there seems to be a majority who want good performance, a competitive marketplace, and decent prices.

So yes, 80% probably will feel Fermi to be a letdown, because, based on the information this thread is about, it doesn't bring the competitive marketplace they were hoping for and it doesn't give us decent prices (relative to 6 months ago).
 

*kjm

Platinum Member
Oct 11, 1999
2,222
6
81
I wonder how much benefit Fermi would derive from a triple slot cooler (exhausting heat out the back)?

With this arrangement the fan could be thicker and the heatsinks much larger. Wouldn't the U-shaped slot cut into the PCB work to help feed more air to a thicker fan?

CB, you can bet it has been done in the R&D lab, but that is just too much real estate to give up on the motherboard for most.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
I would switch to consoles if the games were not so expensive. Video cards right now offer a very poor performance/price ratio, and I too certainly don't want to pay 350 bucks for just a graphics upgrade that only 3-4 games even need. Heck, even the GTX 260 cards are going for well more than I paid 16 months ago.

Having more competition in the performance mainstream segment would really help value, but when is this going to happen with Nvidia always going large die?

P.S. I am still confused about why Nvidia doesn't make dual-GPU HPC cards. Does anyone know the answer to this? Does the SLI bridge affect HPC calculations?
 

Lonyo

Lifer
Aug 10, 2002
21,938
6
81
Having more competition in the performance mainstream segment would really help value, but when is this going to happen with Nvidia always going large die?

P.S. I am still confused about why Nvidia doesn't make dual-GPU HPC cards. Does anyone know the answer to this? Does the SLI bridge affect HPC calculations?

It's not a multiple-GPU issue, since their HPC lineup consists of boxes of cards. They sell, as a discrete product, a rack with 4 Tesla cards in it, so they are already making use of multiple GPUs for the HPC market. Why they don't sell dual-GPU cards is probably just that they sell multiple cards instead.

http://www.nvidia.com/object/tesla_computing_solutions.html
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
It's not a multiple-GPU issue, since their HPC lineup consists of boxes of cards. They sell, as a discrete product, a rack with 4 Tesla cards in it, so they are already making use of multiple GPUs for the HPC market. Why they don't sell dual-GPU cards is probably just that they sell multiple cards instead.

http://www.nvidia.com/object/tesla_computing_solutions.html

I know the HPC cards are sold in clusters (4 dual-slot video cards per 1U rack), but those cards don't use an SLI bridge.

Connecting dual GPUs on a single PCB would be different, right?
 

Daedalus685

Golden Member
Nov 12, 2009
1,386
1
0
Having more competition in the performance mainstream segment would really help value, but when is this going to happen with Nvidia always going large die?

P.S. I am still confused about why Nvidia doesn't make dual-GPU HPC cards. Does anyone know the answer to this? Does the SLI bridge affect HPC calculations?

At work all of our performance machines are built with the idea of as little downtime as possible for as little money as possible. It is far more appealing to replace a single "simple" card than to have to take out more than that. Ideally we want the power as decentralized as possible. Now, this would be a moot point if the GPUs on the cards were made in some sort of removable socket... but I would never expect that to be the case.

I'd imagine it is similar in other places... they just decided that not enough of the market would want two GPUs on a single board when getting two cards is no more difficult. Space is rarely a limitation (at least so far as taking up an extra PCI slot); cooling and complexity certainly are. I'm not sure what would be at the top of an HPC builder's list, but I'd have to guess the reliability level promised by a single vs. a dual at this stage in the game would be different enough to scare many away from a dual.

So I suppose the real answer is that no one really cares, but those that do would prefer single-GPU cards.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
At work all of our performance machines are built with the idea of as little downtime as possible for as little money as possible. It is far more appealing to replace a single "simple" card than to have to take out more than that. Ideally we want the power as decentralized as possible. Now, this would be a moot point if the GPUs on the cards were made in some sort of removable socket... but I would never expect that to be the case.

I'd imagine it is similar in other places... they just decided that not enough of the market would want two GPUs on a single board when getting two cards is no more difficult. Space is rarely a limitation (at least so far as taking up an extra PCI slot); cooling and complexity certainly are. I'm not sure what would be at the top of an HPC builder's list, but I'd have to guess the reliability level promised by a single vs. a dual at this stage in the game would be different enough to scare many away from a dual.

So I suppose the real answer is that no one really cares, but those that do would prefer single-GPU cards.

I'm talking about two smaller GPUs on a single PCB vs. one large GPU on a single PCB.

Wouldn't the dual-GPU HPC card be cheaper to purchase and own? (Even if it meant the entire card needed to be thrown out if one of the GPUs failed.)

Or does the SLI bridge cancel out some of the advantages in manufacturing costs?

Nvidia must be making these large-die GPUs for a reason. I am just trying to figure out whether that reason pertains to satisfying gamers or making money in HPC. Or both?
 

nyker96

Diamond Member
Apr 19, 2005
5,630
2
81
They are pretty terrible.
$260 for the HD5850 at launch; 6 months later NV releases something 5~10% faster, 30+% more expensive, and consuming more power.
$380 for the HD5870 at launch; 6 months later NV releases something 5~10% faster, 30+% more expensive, and consuming more power.

Now OK, the prices for ATI have gone up, but that's because they have no competition. This is the NV competition, and we're ending up with higher prices for tiny performance improvements. That's terrible.

Well, it will be more expensive because it's on a much bigger die than what ATI is offering. At this moment, of course, we don't know what the performance will be. Regardless, I doubt they can afford to charge a much lower intro price.
 

nitromullet

Diamond Member
Jan 7, 2004
9,031
36
91
Fermi would be a "letdown" to about 80% of this particular board, even if it shit golden eggs on command.

There are no official benchmarks, and they are already quoting percentages.

Well, you can't ignore the fact that NVIDIA had announced a Friday release at PAX East, but then changed the NDA for benchmarks to the following Monday (after the conference is over).

If they had an awesome product and a booth at one of the biggest conferences that allowed direct contact with the press and customers, they would be pimping Fermi to hell and back all PAX East long. However, it looks more like they want to be as far away from the public eye as possible when the benchmarks go public. The question is "why?"

Of course it's still speculation at best, but IMO it doesn't really bode too well for Fermi performance. My guess is that Fermi is going to be similar to the X1800XT in that it won't perform badly per se, but it looks like it will be a little short of the mark considering the competition.
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,002
126
Well, you can't ignore the fact that NVIDIA had announced a Friday release at PAX East, but then changed the NDA for benchmarks to the following Monday (after the conference is over).

If they had an awesome product and a booth at one of the biggest conferences that allowed direct contact with the press and customers, they would be pimping Fermi to hell and back all PAX East long. However, it looks more like they want to be as far away from the public eye as possible when the benchmarks go public. The question is "why?"

Of course it's still speculation at best, but IMO it doesn't really bode too well for Fermi performance. My guess is that Fermi is going to be similar to the X1800XT in that it won't perform badly per se, but it looks like it will be a little short of the mark considering the competition.

Agreed. I think that, seeing as Nvidia has been fairly quiet (and appears to be keeping the launch fairly low-key), it is pretty telling. From a performance perspective I don't think Fermi will be a bad part, I just don't think it's going to be a great part either.
 

OCGuy

Lifer
Jul 12, 2000
27,224
37
91
Well, you can't ignore the fact that NVIDIA had announced a Friday release at PAX East, but then changed the NDA for benchmarks to the following Monday (after the conference is over).

If they had an awesome product and a booth at one of the biggest conferences that allowed direct contact with the press and customers, they would be pimping Fermi to hell and back all PAX East long. However, it looks more like they want to be as far away from the public eye as possible when the benchmarks go public. The question is "why?"

Of course it's still speculation at best, but IMO it doesn't really bode too well for Fermi performance. My guess is that Fermi is going to be similar to the X1800XT in that it won't perform badly per se, but it looks like it will be a little short of the mark considering the competition.

Short of whose mark? It is going to be the most powerful GPU to date, so...
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,002
126
Short of whose mark? It is going to be the most powerful GPU to date, so...


Short of the mark set by most enthusiasts, the people this product is aimed at. It's coming six months later than its closest competitor, uses significantly more power (if the rumors are to be believed), and has a lot more silicon and hundreds of millions more transistors, yet we're looking at 5-10% more performance? Seems pretty underwhelming.
 

nitromullet

Diamond Member
Jan 7, 2004
9,031
36
91
Fermi would be a "letdown" to about 80% of this particular board, even if it shit golden eggs on command.

There are no official benchmarks, and they are already quoting percentages.

Short of whose mark? It is going to be the most powerful GPU to date, so...

Apparently, it's too early to guess at percentages, but not too early to claim a victory.