If Fermi flops, what happens to Nvidia?


cbn

Lifer
Mar 27, 2009
12,968
221
106
CUDA isn't unfriendly; CUDA just runs C code and Fortran code on an Nvidia GPU instead of a CPU... nothing unfriendly about that. AMD should catch up and offer the same.
Anyway, we don't want either to flop; ideally they would be equal, resulting in the best prices and performance, and the most aggressive development and deployment of new tech.

AMD has Brook+. They also support OpenCL. However, I haven't found a way to compile for AMD in Linux and test without running on (or even having a) GPU like I can with CUDA. :(
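For anyone wondering what "just runs C code" looks like in practice, here's a minimal sketch of a CUDA program (file name and values made up for illustration, not anyone's real code). The kernel is ordinary C with a launch syntax bolted on, and if I remember right, older toolkits will even build it to run on the CPU with nvcc -deviceemu, which is the test-without-a-GPU trick I mean:

// minimal.cu - plain C, but vec_add() executes on the GPU
#include <stdio.h>
#include <cuda_runtime.h>

// __global__ marks a function that runs on the card; each thread adds one element
__global__ void vec_add(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 1024;
    const size_t bytes = n * sizeof(float);
    float a[1024], b[1024], c[1024];
    for (int i = 0; i < n; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

    // copy inputs to the card, launch 4 blocks of 256 threads, copy results back
    float *da, *db, *dc;
    cudaMalloc((void **)&da, bytes);
    cudaMalloc((void **)&db, bytes);
    cudaMalloc((void **)&dc, bytes);
    cudaMemcpy(da, a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, b, bytes, cudaMemcpyHostToDevice);
    vec_add<<<(n + 255) / 256, 256>>>(da, db, dc, n);
    cudaMemcpy(c, dc, bytes, cudaMemcpyDeviceToHost);

    printf("c[10] = %.1f (expect 30.0)\n", c[10]);
    cudaFree(da); cudaFree(db); cudaFree(dc);
    return 0;
}

// build: nvcc minimal.cu -o minimal
// no GPU handy (older toolkits): nvcc -deviceemu minimal.cu -o minimal

Nothing tuned about it, but that's the whole "C on the GPU" story in one file.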

At the moment it seems like ATI definitely has a good advantage in performance per watt from a gaming standpoint.

How long till we see them coming out with dual GPU video cards like HD5970 for HPC?

It would seem to me running two smaller die GPUs on a stick could yield better performance per watt. But apparently this isn't so (Nvidia still prefers big die single GPUs for HPC).
 

slayernine

Senior member
Jul 23, 2007
894
0
71
slayernine.com
At the moment it seems like ATI definitely has a good advantage in performance per watt from a gaming standpoint.

How long till we see them coming out with dual GPU video cards like HD5970 for HPC?

It would seem to me running two smaller die GPUs on a stick could yield better performance per watt. But apparently this isn't so (Nvidia still prefers big die single GPUs for HPC).

I agree, I would love to see a dual GPU single slot card to put in my mini gaming/home theater pc.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
I agree, I would love to see a dual GPU single slot card to put in my mini gaming/home theater pc.

I was talking about HPC (High Performance Computing), not HTPC (Home Theatre PC) :D

P.S. One year ago I wouldn't have known what either acronym meant.
 

Nebor

Lifer
Jun 24, 2003
29,582
12
76
What bad experience have you had with ATi that would make you disregard them even if Fermi flops? (It's unlikely Nvidia won't have anything good, but let's imagine it happened anyway.)

I'm only curious because your statement seems to mean you'll buy whatever is nVidia's best, even if ATi's best is cheaper and performs a little better.

I bought a GTX 295 a couple months after the 5870s came out. The only ATI I ever had (9800 pro) died an early, random death (the fan died.)
 

tommo123

Platinum Member
Sep 25, 2005
2,617
48
91
I bought a GTX 295 a couple months after the 5870s came out. The only ATI I ever had (9800 pro) died an early, random death (the fan died.)

ATi makes the chips AFAIK; the rest is down to whatever reseller built the card.
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
55
91
ATi makes the chips AFAIK; the rest is down to whatever reseller built the card.

TSMC makes the chips. And as far as I can remember, at the time of the Radeon 9800s, the most common ATI cards were either Sapphire or BBATI (built by ATI). ATI used Sapphire to build all of its BBATI cards, so I guess you could say they were all built by Sapphire. Not sure if PowerColor, HIS, etc. actually just bought cards from Sapphire and slapped stickers on them.
 

slayernine

Senior member
Jul 23, 2007
894
0
71
slayernine.com
The thread should be titled something like "What if Fermi flops as a gaming card," because all indications are that it will do well in the 3D/rendering market. Considering that at this point Fermi is just a high-end card, even if it flops it probably won't hurt Nvidia's bottom line too badly, provided they cover the $100-250 price range. A few people will buy GTX 470s and 480s for the novelty and pay a premium, even if performance isn't that hot.

What Nvidia will have to do is make sure that both the Fermi refresh and the cut-down versions of Fermi are well designed and well situated from a price/performance standpoint. Nvidia's top-end cards have almost always cost more than ATI's top (the 5870 debuted at $399; Nvidia's 7800 GTX 512MB paper-launched at $599 and quickly shot up to $699+ at retail, for example). Nvidia has basically squandered its chance to skim the market in a large way unless Fermi outperforms the 5870 so hugely that it's near 5970 performance. But both ATI and Nvidia make missteps (ATI with the 2900XT, Nvidia with the FX 5800). What's important is that they don't screw up twice in a row.

Nvidia may be forced to become leaner and meaner and come up with a more ATI-like design philosophy for Fermi's sequel.

It's not going to do well in the 3D/rendering market, because it uses way too much power and produces too much heat for an effective server model. If you are building some sort of render farm, you would not want to use Fermi cards, because you will be looking at performance per watt if you have any sense.
 

slayernine

Senior member
Jul 23, 2007
894
0
71
slayernine.com
I bought a GTX 295 a couple months after the 5870s came out. The only ATI I ever had (9800 pro) died an early, random death (the fan died.)

I've had ATI and nVidia cards, and I've had quite the opposite experience: all my nVidia cards eventually kicked the bucket in one way or another, while I've only ever replaced ATI cards due to aging.
 

Mr. Pedantic

Diamond Member
Feb 14, 2010
5,027
0
76
It's not going to do well in the 3D/rendering market, because it uses way too much power and produces too much heat for an effective server model. If you are building some sort of render farm, you would not want to use Fermi cards, because you will be looking at performance per watt if you have any sense.
According to Nvidia, Fermi will deliver about 8 times the double-precision computing capacity of GT200. GT200 Tesla GPUs already wipe the floor with CPUs in applications that are suited to GPU computation. So I don't think you're quite on the mark there.
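Quick sanity check on that 8x figure, using the commonly cited GT200 numbers (my arithmetic, so treat it as a ballpark): a Tesla C1060 does roughly 78 GFLOPS double precision (30 DP units x 2 FLOPs per clock x ~1.3 GHz), and 8 x 78 is about 620 GFLOPS, which lines up with the 520-630 GFLOPS range being thrown around for Tesla Fermi.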
 

NoQuarter

Golden Member
Jan 1, 2001
1,006
0
76
I bought a GTX 295 a couple months after the 5870s came out. The only ATI I ever had (9800 pro) died an early, random death (the fan died.)

Aw, that's not a death sentence. My buddy had a GeForce 5600 Ultra we put in a guest computer whose fan died, so I strapped a giant 80mm fan to the card with rubber bands. It worked, probably better than the original fan :D
 

ZimZum

Golden Member
Aug 2, 2001
1,281
0
76
Aw, that's not a death sentence. My buddy had a GeForce 5600 Ultra we put in a guest computer whose fan died, so I strapped a giant 80mm fan to the card with rubber bands. It worked, probably better than the original fan :D

What, ran out of twine and twisty ties? j/k :D
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
According to Nvidia, Fermi will deliver about 8 times the double-precision computing capacity of GT200. GT200 Tesla GPUs already wipe the floor with CPUs in applications that are suited to GPU computation. So I don't think you're quite on the mark there.

What does "double precision" do? Is that the only type of calculation performance that matters for HPC?

Where is ATI when it comes to double precision?
 

happy medium

Lifer
Jun 8, 2003
14,387
480
126
Stolen from Schmide in another thread (copy/paste):

ATI 5870

- Processing power (single precision): 2.72 TeraFLOPS
- Processing power (double precision): 544 GigaFLOPS

Fermi (no single-precision numbers yet)

- Double-precision performance in the range of 520-630 GigaFLOPS

So other than the event/error handling and ECC memory, the cards are roughly 1:1.
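For what it's worth, the 5870 numbers fall straight out of the specs: 1600 stream processors x 2 FLOPs per clock (multiply-add) x 850 MHz = 2720 GFLOPS single precision, and Cypress runs double precision at 1/5 the single-precision rate, so 2720 / 5 = 544 GFLOPS. The quoted Fermi range also works out if you assume the rumored Tesla configuration (448 cores, fused multiply-add, DP at half the SP rate): DP GFLOPS = 448 x 2 x 1/2 x shader clock, so ~1.16 GHz gives ~520 and ~1.40 GHz gives ~630. The Fermi half of that is rumor-math until real cards ship.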
 

NoQuarter

Golden Member
Jan 1, 2001
1,006
0
76
What does "double precision" do? Is that the only type of calculation performance that matters for HPC?

Where is ATI when it comes to double precision?

Double precision is for HPC. Games use single precision, since it's not critical that the calculations are exactly right; game engine code is optimized for approximations anyway, since they're faster than computing the exact values.

Double precision means 64-bit floating point numbers instead of 32-bit floating point numbers.
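Here's a quick toy demo of what the extra 32 bits buy you (plain C, made up purely for illustration; any compiler will build it):

#include <stdio.h>

int main(void)
{
    // add 0.1 ten million times; the exact answer is 1,000,000
    float  sf = 0.0f;   // 32-bit float: ~7 significant decimal digits
    double sd = 0.0;    // 64-bit double: ~15-16 significant decimal digits
    for (int i = 0; i < 10000000; i++) {
        sf += 0.1f;
        sd += 0.1;
    }
    printf("float  sum: %f\n", sf);   // visibly off from 1000000
    printf("double sum: %f\n", sd);   // off by far less
    return 0;
}

A game can shrug off that kind of drift; a long-running scientific simulation can't, which is why the HPC crowd cares so much about DP throughput.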
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
At the moment it seems like ATI definitely has a good advantage in performance per watt from a gaming standpoint.

How long till we see them coming out with dual GPU video cards like HD5970 for HPC?

It would seem to me running two smaller die GPUs on a stick could yield better performance per watt. But apparently this isn't so (Nvidia still prefers big die single GPUs for HPC).

What does this statement have to do with quoting me about CUDA?
Both are true. AMD possesses superior hardware, with better performance per watt and better overall performance.
Nvidia possesses an advantage in GPU compute, with CUDA allowing C and Fortran code, their excellent SDK, etc...

Nvidia needs to catch up in gaming performance, and AMD needs to catch up in computational performance; it's that simple.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Nvidia needs to catch up in gaming performance, and AMD needs to catch up in computational performance; it's that simple.

Thanks.

Can someone explain the trade-offs a company must make in total computational power when building more double precision into a product?

Would it be possible for the entire architecture to be double precision?
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Can someone explain the trade-offs a company must make in total computational power when building more double precision into a product?
That is somewhat beyond me, but there are some guys here who can.

Would it be possible for the entire architecture to be double precision?
I thought that was the whole point of Fermi: to be an entirely double-precision architecture, mainly because Nvidia is pushing it hard to various companies for scientific computation, etc. (there is some worry about a future lockout from the GPU market by AMD and Intel).
The Fermi architecture involves some gaming sacrifices to get a better business product. Whether those sacrifices are critical enough to make the product a failure in the gaming market depends on the exact final performance, price, and power consumption (yet to be seen, but rumors indicate that Nvidia is in trouble... and rumors could be wrong).
 

DominionSeraph

Diamond Member
Jul 22, 2009
8,386
32
91
When something is not sufficient enough, doesn't that make it "insufficient"?

Just have to point out that "not sufficient enough" is self-contradictory.

Sufficient is enough, by definition.

"Sufficient enough" is redundant. "Sufficient" is sufficient.
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
55
91
Just have to point out that, "Not sufficient enough," is self-contradictory.

Sufficient is enough, by definition.

"Sufficient enough," is redundant. "Sufficient," is sufficient.

As long as you understood his meaning, I think we can all deal with the way he said it. Just remove the word "enough" and we're all warm and fuzzy.
 

jpeyton

Moderator in SFF, Notebooks, Pre-Built/Barebones
Moderator
Aug 23, 2003
25,375
142
116
Fermi 1 WILL flop at launch. But I'm sure over the next 4 years, Nvidia can rebrand the GPU enough times to make it turn a profit.
 

sxr7171

Diamond Member
Jun 21, 2002
5,079
40
91
If Fermi fails, we as consumers get fucked.

Exactly my thoughts on this situation. It's bad enough that ATI has my orifices bleeding with their "price gouging".

I hate how everything is down to two competitors these days. Anti-competitive law needs to be made a little stronger, because I don't think our current market model is working well for the consumer. It may not be long before there are only two cell phone carriers, too. The things these big companies get away with disgust me. The going rate for an unlocked iPhone is $970, all because of artificial price fixing due to exclusive carrier contracts. Palm is selling a LOCKED Pre to developers for $439. I want to puke. The system is not set up for the consumer; it is set up for Wall Street fat cats. The rich get richer.

Then again, I can't really blame ATI for price gouging in this situation. It would have been nice to have a Matrox around while Nvidia works on its "issues". Let's keep merging small, creative development companies into huge "own our asses" entities. That's what's good for us, apparently. At this point I want Intel to enter the high-end graphics market, but those clowns are responsible for keeping Nvidia out of the chipset market. They are all money-grubbing, greedy bastards.

Sorry I went all P&N on you guys.
 

sxr7171

Diamond Member
Jun 21, 2002
5,079
40
91
That is somewhat beyond me, but there are some guys here who can.


I thought that was the whole point of Fermi: to be an entirely double-precision architecture, mainly because Nvidia is pushing it hard to various companies for scientific computation, etc. (there is some worry about a future lockout from the GPU market by AMD and Intel).
The Fermi architecture involves some gaming sacrifices to get a better business product. Whether those sacrifices are critical enough to make the product a failure in the gaming market depends on the exact final performance, price, and power consumption (yet to be seen, but rumors indicate that Nvidia is in trouble... and rumors could be wrong).

If AMD and Intel lock out third-party competition in the GPU market, I will puke. What's our Chief Technology Officer doing? It's about time we saw some intelligent regulation of technology markets. Then again, seeing how well they handle financial markets, I should be scared shitless either way.

Maybe Nvidia needs to make an x86 processor. But I bet the cost of entry into that business makes it impossible for anyone now. How about a Warren Buffett/Bill Gates effort to save the future of information technology from money-grubbing, anti-competitive hos?