
GeForce GTX Titan....sucks?

TennesseeTony

Elite Member
According to the wiki(s), the original Titan's double precision is at least 1300 GFLOPS, whereas my fleet of R9 280Xs produces 800-1000. So I picked one up off eBay, and am VERY disappointed thus far.

Milkyway goes from 24 seconds on the 280s to well over 300 seconds on the Titan, single task.

POEM likewise is no better: from about 70 minutes on the 280s to at LEAST 80 minutes on the Titan (double tasks running on both cards).

Even worse, the CPU usage has increased substantially for the Titan on both projects, nearly half the GPU time!

So my hope is that one of you has run a Titan before and can confirm that the card either doesn't live up to the specs, or that perhaps I'm doing something wrong.
 
I think you have to turn on the extra double-precision mode with Titan. There's a "CUDA - Double precision" switch in the NVIDIA Control Panel; GeForce cards run FP64 at a reduced rate unless it's enabled.
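Some context on why that switch matters, plus a way to check it: GK110 GeForce cards run FP64 at 1/24 of the FP32 rate by default, and the switch raises that to 1/3 (at reduced clocks). Back-of-envelope: 2688 cores × 2 FLOPs per FMA × 0.837 GHz ÷ 3 ≈ 1.5 TFLOPS with the switch on, but only ~190 GFLOPS at 1/24, which would land below a 280X (2048 shaders × 2 × 1.0 GHz ÷ 4 ≈ 1.0 TFLOPS). To verify the switch actually took effect, you could time a kernel of dependent FP64 FMAs. This is just a rough sketch (grid size and iteration count are arbitrary), not a rigorous benchmark:

```
// fp64check.cu -- rough FP64 throughput sanity check (a sketch, not a benchmark).
// Build: nvcc -arch=sm_35 -o fp64check fp64check.cu
#include <cstdio>
#include <cuda_runtime.h>

__global__ void fp64_fma(double *out, int iters) {
    double a = 1.0 + threadIdx.x * 1e-9;
    for (int i = 0; i < iters; ++i)
        a = fma(a, 0.999999, 1e-9);                 // one FMA = 2 FLOPs
    out[blockIdx.x * blockDim.x + threadIdx.x] = a; // keep the compiler honest
}

int main() {
    const int blocks = 2048, threads = 256, iters = 100000;
    double *d_out;
    cudaMalloc(&d_out, (size_t)blocks * threads * sizeof(double));

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    fp64_fma<<<blocks, threads>>>(d_out, iters);    // warm-up
    cudaEventRecord(start);
    fp64_fma<<<blocks, threads>>>(d_out, iters);    // timed run
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    double flops = 2.0 * iters * blocks * threads;  // 2 FLOPs per FMA
    printf("~%.0f GFLOPS FP64\n", flops / (ms * 1e-3) / 1e9);

    cudaFree(d_out);
    return 0;
}
```

A result near the 1/3-rate figure means the switch is active; a result near the 1/24 figure means it isn't (or the app isn't using the CUDA path at all).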


Thank you for that tip. No improvement, though.

I'm still trying to research and find my own answer, but so far what I gather is that the specified double-precision figure applies only to CUDA tasks, not OpenCL tasks.
 
Looking at assimilator's MW@Home times, it seems the Titan can run 8 WUs at once? Maybe check their forums for any config tricks.
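For what it's worth, the usual BOINC mechanism for running several WUs per GPU is an app_config.xml dropped in the project's data directory. A minimal sketch; note the <name> value has to match the project's short app name, and "milkyway" is my assumption here:

```
<app_config>
  <app>
    <name>milkyway</name>            <!-- must match the project's app name -->
    <gpu_versions>
      <gpu_usage>0.125</gpu_usage>   <!-- 1/8 of a GPU per task = 8 concurrent -->
      <cpu_usage>0.2</cpu_usage>     <!-- CPU share reserved per GPU task -->
    </gpu_versions>
  </app>
</app_config>
```

IIRC the manager picks it up via Options → Read config files, or on a client restart.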

Yeah, I saw that and tried as many as 12 tasks on MW, marveling at having 6GB of RAM on hand. 🙂 But no better.

Searched their forum and found nothing of use. :\
 
Thank you for that tip. No improvement, though.

I'm still trying to research and find my own answer, but so far what I gather is that the specified double-precision figure applies only to CUDA tasks, not OpenCL tasks.

From what I've heard, NVidia, kinda, sorta, "supports" OpenCL. As you can see.
 
From what I've heard, NVidia, kinda, sorta, "supports" OpenCL. As you can see.
This.

The FLOPS measurements aren't a useful comparison if Nvidia gets its numbers running CUDA benchmarks and the R9s get theirs running OpenCL.

What does the application use? My quick look at Wikipedia indicates that it can use CUDA:

https://en.wikipedia.org/wiki/MilkyWay@Home

That data throughput massively outpaced new user acquisition is mostly due to the deployment of client software that uses commonly available medium and high performance graphics processing units (GPUs) for numerical operations in Windows and Linux environments. MilkyWay@home CUDA code for a broad range of Nvidia GPUs was first released on the project's code release directory on June 11, 2009 following experimental releases in the MilkyWay@home(GPU) fork of the project. An application for ATI GPUs is also available and currently outperforming the CPU application.[citation needed] For example, a task that requires 10 minutes using a Radeon HD 3850 GPU or 5 minutes using an Radeon HD 4850 GPU, requires 6 hours using one core of an AMD Phenom II processor at 2.8 GHz.[citation needed]
But that doesn't really give any indication of how efficiently it uses resources.

Have you tried benchmarking it in other projects? For example, here's how it does in Folding@Home double precision:

[chart: Folding@Home double-precision benchmark results]


There the enhanced double precision lets it dust the 780, 580, and 680. Compare that with single precision, where it's pretty close:

[chart: Folding@Home single-precision benchmark results]
 
Nice reply! Thank you for all that.

Milkyway appears to have consolidated on OpenCL.

Primegrid still has CUDA, but since those are the WR units, it will take some time to see if the card is useful there. Days.

I think I'll keep it long enough to get my GPUGrid score up, then sell it.
 
I read that the Titan Z (the dual-GPU model) takes a big double-precision hit with newer drivers using CUDA 7.5, so I tried the Titan in an older system still running Win7 and found a suitable archive driver that didn't include the CUDA 7.5 "update". MW and POEM still run slow. No improvement at all.
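For anyone repeating that experiment: the CUDA runtime can report which CUDA version the installed driver actually supports, which makes it easy to confirm an archive driver really predates 7.5. A small sketch:

```
// vercheck.cu -- print the CUDA version the installed driver supports,
// plus the runtime version this binary was built against.
// Build: nvcc -o vercheck vercheck.cu
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int driverVer = 0, runtimeVer = 0;
    cudaDriverGetVersion(&driverVer);    // e.g. 7050 means CUDA 7.5
    cudaRuntimeGetVersion(&runtimeVer);
    printf("driver supports CUDA %d.%d, runtime is CUDA %d.%d\n",
           driverVer / 1000, (driverVer % 100) / 10,
           runtimeVer / 1000, (runtimeVer % 100) / 10);
    return 0;
}
```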

Ah well, I think I'm done fiddling with this thing.

Thanks for all the input!
 
Primegrid still has CUDA, but since those are the WR units, it will take some time to see if the card is useful there. Days.

Wow. Even CUDA-specific WUs perform worse than the R9 280X: 53% done after 37 hours, which is about the same number of hours in which the 280Xs would have finished a WR PG WU. Looks to have another 35-ish hours to go.
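(That estimate is just the arithmetic: 37 h ÷ 0.53 ≈ 70 h total, minus the 37 h already done, leaves roughly 33 h.)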
 
I did not know you could use video cards for DC. Cool.
Yep, IIRC (big if 😉), Folding@Home was the first project to use them; they started with X1900s (and the PS3) late in 2006. I remember crunching with my X1950 Pro in 2007 🙂.

Tony
Did you not get improved output with MW running multiple WUs on the Titan??
 
The GPU projects that I'm truly interested in are GPUGrid and Folding@home, which is why I have only Nvidia cards. Both use molecular-dynamics-based algorithms and are single precision. For those projects, the best bang for the buck, with a bonus of low power consumption, is the GTX 970. If you have a little extra money, go for the GTX 980.
 
Not even close. He had something like 25 seconds? Mine merely went up in ~3-minute increments depending on the number of concurrent WUs running: 3m for one, 9m for three, 36m for 12. Per task.
 
Very odd... oh wait, did you remember that his per-WU time was derived from running 8 WUs at once and dividing by 8? It wasn't a time from running 1 WU.
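If his 25 s came from 8 concurrent WUs, his batch wall time was about 8 × 25 s ≈ 3m20s. The Titan's equivalent figure is 36m ÷ 12 = 3m per WU (likewise 9m ÷ 3 and 3m ÷ 1), so it's roughly a 7× throughput gap no matter how you slice it.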
 