
Giving nvidia serious consideration

Bradtech519

Senior member
Now that my 5850 is done for, I've been looking into 6900s/7900s and now some Nvidia cards. They seem to be strong for SETI and other projects; SETI is showing the 660 Ti as the top card right now. I've always done Milkyway and SIMAP, but SIMAP is CPU-only. Einstein looks like it supports Nvidia too. Anyone have any preferences on Nvidia or ATI cards?
 
I refuse to do ATI, and have for a very long time now, simply due to their drivers. The drivers can be such a pain, it isn't worth it to me to mess with them. YMMV
 
I've heard a lot about the driver issues in the past. I haven't really had any problems with Windows and ATI/AMD drivers myself. My buddy, who is a big Linux guy, won't do ATI/Linux. I remember back in the early-to-mid 2000s they had bad Windows drivers too. Right now I'm mainly weighing which projects I want to throw more support behind with a new GPU purchase.


I refuse to do ATI, and have for a very long time now, simply due to their drivers. The drivers can be such a pain, it isn't worth it to me to mess with them. YMMV
 
I refuse to do either: I'm a Linux user, ATI/AMD drivers are, well, terrible, and nVidia deliberately reduced our computational power on a lot of the projects. I still have a couple of cards, but they're really slow: a 9300M GS and a GTX 525M.
 
If you don't mind rebates, Microcenter has the Zotac GTX 570 for $169.00. I just picked up two of them, and so far so good; nice little GPUs for the price.
 
Generically:

NVIDIA gives you more options.
ATI has fewer, but is substantially better at some of those.

I was never bitten by the bad-driver bug when I had a 4850, but I've always held off on updates and let others get bitten first.
The same holds true on the green side as well (the 295-series driver sleep bug comes to mind).
 
I'm going to Nvidia next time I upgrade. You have to give up one CPU core on your SMP client to keep an AMD card working; Nvidia doesn't have that issue. Maybe AMD will get that fixed sometime in the future, but it's a known issue.
 
The potential for problems exists on both sides. Distributed computing often has problems with particular models, so it's important to understand which programs you intend to use and look at the speed differences and potential problems.

I would never go so far as to say one company produces a better, more reliable product than the other; I simply have too much counter-evidence for that assertion. Instead, it's about choosing your trade-offs and avoiding the bugs that exist on both platforms.
 
I'm considering ATI next time just so I can develop for it. But I haven't really planned when next time will be. And last time I developed for ATI (without actually having a card) I found the drivers to be a problem. (Their OpenCL compiler appeared to have a bug.)
 
When you look at it from a different perspective, you find that the top "compute" cards from both AMD and Nvidia cost $3,000. Then you look at the cores, then at the consumer end, and here is what you find: AMD gives away the farm (there are more cores in its consumer card than in its top $3,000 card).
Now what about Nvidia and CUDA cores? The consumer card has 1/3, if not closer to 1/4, of the cores of Nvidia's pro card.
Brute numbers favor AMD. As for the best hardware: a 3770K and two 7950s pull 200,000 to 250,000 PPD.
 
I considered purchasing an AMD card recently, but I went with Nvidia again because my two favorite GPU projects, science-wise, are GPUGrid and F@H. AMD cards are not an option for GPUGrid and have horrible performance on F@H. As for drivers on Linux, I've had issues with both over the years.
 
Well, I have Nvidia cards because they're what come in the laptops I buy, so I'm stuck with them, though I hate them. Their terrible FP64 performance means I don't get maximum performance on PrimeGrid's Generalized Fermat Nvidia workunits, because those use the FP64 portion of the card rather than single precision (binary32). FP64 runs at 1/32nd of the card's single-precision rate, so for the 525M's ~140 GFLOPS of single precision, that works out to about 4.3 GFLOPS of FP64 compute. 🙁 And now you see why I need to write an OpenCL program for AMD cards on PrimeGrid. :\
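The arithmetic above can be sketched as a quick check. This is a minimal sketch: the `fp64_gflops` helper is hypothetical, and the 140 GFLOPS figure and 1/32 ratio for the GTX 525M are taken from the post, not from a spec sheet.

```python
def fp64_gflops(sp_gflops: float, fp64_ratio: float) -> float:
    """Effective double-precision throughput given a card's
    single-precision rating and its FP64:FP32 ratio."""
    return sp_gflops * fp64_ratio

# GTX 525M per the post: ~140 GFLOPS single precision, FP64 at 1/32 rate
print(fp64_gflops(140.0, 1 / 32))  # 4.375, i.e. roughly the 4.3 GFLOPS quoted
```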
 