Article Folding@Home RTX 2060 performance


UsandThem

Elite Member
May 4, 2000
16,068
7,380
146
This looks like the best price/performance card for Folding ($349 Founders Edition):

(BTW, not a big fan of the auto-loading videos on every single page now like at Tom's :rolleyes:)

https://www.anandtech.com/show/13762/nvidia-geforce-rtx-2060-founders-edition-6gb-review/13

[Image: 105059.png]
 
  • Like
Reactions: biodoc and Ken g6

crashtech

Lifer
Jan 4, 2013
10,524
2,111
146
Spot check in Mint puts my 2060 at 941K PPD; the initial incorrect points estimate seems to have resolved itself, or I'm simply getting the right WUs from the right server now.
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,555
14,511
136
And I have a 2080 Ti that WAS reporting correctly; now it says 200K, even though its real PPD is 2.2 million.

The servers are almost random; I have no idea how long this will continue. Use the points calculator any time you are curious, because you can't trust the client screen to give you a correct PPD estimate.
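For reference, the client's PPD estimate (and the points calculator) come from Folding@Home's quick-return bonus formula. Here's a rough sketch of it in Python; the work-unit numbers at the bottom are purely hypothetical, and my understanding of the formula is from the public F@H points documentation, so double-check against the calculator itself:

```python
import math

def wu_points(base_points: float, k: float, deadline_days: float, elapsed_days: float) -> float:
    """Quick-return bonus: credit grows the further ahead of the deadline you finish."""
    bonus = math.sqrt(k * deadline_days / elapsed_days)
    return base_points * max(1.0, bonus)

def ppd(base_points: float, k: float, deadline_days: float, elapsed_days: float) -> float:
    """Points per day: per-WU credit times the number of WUs that fit in a day."""
    return wu_points(base_points, k, deadline_days, elapsed_days) / elapsed_days

# Hypothetical WU, just to show the shape of the curve:
print(ppd(base_points=20_000, k=0.75, deadline_days=3.0, elapsed_days=0.25))
```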
 

VirtualLarry

No Lifer
Aug 25, 2001
56,339
10,044
126
No love for the GTX 1660 ti in this thread, the "almost RTX 2060" card? I bought a couple of those, both identical models, MSI Ventus OC GTX 1660 ti.

I have yet to try them in F@H, although I used them in the BOINC / PrimeGrid race last week. They're a little better (and a bit lower-powered) at mining than my AMD Polaris GPUs, but other than that, they're just OK. However... being Nvidia, and knowing that Nvidia is easier to set up under Linux than AMD, I might just set them up under Linux for December's F@H race. That would be interesting, for me. I'm more a Windows user than a Linux user, though I've toyed with Linux off and on for a number of years now. I still haven't quite found it adequate (for me) to be my daily driver. It's not gaming, either; it's just that Linux seems to have far fewer GUI-toy utilities than Windows, and I like my GUI toys.
 

crashtech

Lifer
Jan 4, 2013
10,524
2,111
146
I don't know too much about the 1660 Ti, but I would be interested. In the past I used this chart to help make purchasing decisions with a simple formula: ppd/($+E), where $ is the purchase price and E is the total estimated energy cost of running DC 24/7 with the card over a set period (I use two years). But the chart is not being updated anymore; data for the 2060 is scarce, and missing entirely for the 1660 Ti.
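In case it helps anyone, here's that formula as a small Python sketch. The card numbers at the bottom are placeholders for illustration, not measured F@H results:

```python
def value_metric(ppd: float, price: float, watts: float,
                 cents_per_kwh: float = 10.0, years: float = 2.0) -> float:
    """ppd/($+E): PPD divided by purchase price plus the electricity cost of 24/7 folding."""
    hours = years * 365 * 24
    energy_cost = watts / 1000 * hours * cents_per_kwh / 100  # dollars over the period
    return ppd / (price + energy_cost)

# Placeholder numbers purely for illustration:
print(value_metric(ppd=900_000, price=349, watts=160))  # hypothetical RTX 2060
print(value_metric(ppd=700_000, price=280, watts=120))  # hypothetical GTX 1660 Ti
```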
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,555
14,511
136
The 2060 has about 25% more CUDA cores: 1920 vs. 1536. Not positive, but I think that's the big thing in F@H.
 
  • Haha
Reactions: Assimilator1

crashtech

Lifer
Jan 4, 2013
10,524
2,111
146
@Markfw & @VirtualLarry, if CUDA core count scales linearly with PPD, the price differential between the two parts means they should be very close in PPD per dollar before factoring in electricity. Maybe the lack of tensor cores makes the 1660 Ti more efficient, but I would expect the 2060 to be the better value once power is considered, just by virtue of packing more cores onto the same board. I've seen the trend of more cores per board giving better ppd/($+E) pretty consistently, up to a point: flagships, and sometimes the model below them, tend to be priced too high to overcome their power efficiency. So most of the time, cards like the 1070/1070 Ti and 2070/2070 Super lead their respective generations in ppd/($+E), but that's in my area, where electricity hovers around 10 cents per kWh. Higher electricity prices skew the metric toward more expensive cards, lower electricity prices toward cheaper cards.
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,555
14,511
136
I did a little spreadsheet. Below is the result:
Card         CUDA cores   Price ($)   $/CUDA core   Power
2070 Super   2560         500         0.195313      8+6
2070         2304         480         0.208333      8+6
2060 Super   2176         420         0.193015      8
2060         1920         345         0.179688      8

It looks like the 2060s all have 8-pin power, and the 2070s 8+6. The $/CUDA core is lowest on the 2060, so the 2060 still looks like the one to get.
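The same spreadsheet math as a quick Python check, using the core counts and prices from the table above:

```python
# (CUDA cores, price in dollars, power connectors) -- values from the table above
cards = {
    "2070 Super": (2560, 500, "8+6"),
    "2070":       (2304, 480, "8+6"),
    "2060 Super": (2176, 420, "8"),
    "2060":       (1920, 345, "8"),
}

for name, (cores, price, power) in cards.items():
    print(f"{name:10s}  {cores} cores  ${price}  {price / cores:.6f} $/core  {power} pin")
```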
 

VirtualLarry

No Lifer
Aug 25, 2001
56,339
10,044
126
I was originally considering a 2070 Super, actually, mostly for greater mining profits after electricity is factored in, but due to my lack of restraint for purchasing something "new and shiny", followed by an opportunity to buy yet another of my same card at a reasonable discount, I ended up with two GTX 1660 ti cards.

I'm going to be putting them on F@H soon enough, but I was curious how I should configure them, with the base CPU being a Ryzen R5 1600 (6C/12T). Should I leave two threads ("CPUs" in F@H config parlance) off the CPU worker slot? That leaves (or should leave) a full CPU core free for hand-holding the NV GPU.

I'm honestly not sure if I want to run the CPU on F@H, just because I run the GPU on F@H. Couldn't I disable the CPU worker slot, and put the CPU on WCG or something? I'm not sure that the effort/reward is there for CPU folding anymore, unless you have like a 28-core boxen, and even then...
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,555
14,511
136
I'd give at most one or two threads to the CPU slot, and put the rest on WCG. F@H on the CPU is very inefficient.
 

crashtech

Lifer
Jan 4, 2013
10,524
2,111
146
I for one will be very interested in what you can do with those 1660 Tis; it will be valuable data! As far as running the CPU on F@H, it's really not worth it, though if the TeAm is in a close race I personally will throw everything at it, including CPUs; if you have enough cores, it will definitely plump up your daily output!

Edit: But yes, you should leave a couple of logical cores free for F@H and system housekeeping. I once heard an old wives' tale that F@H does not like odd numbers of CPU cores in its slots; whether there is anything to it or not, I tend to leave one out for good luck and make it an even number, lol.
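Here's that rule of thumb as a tiny Python sketch; it's my interpretation of the advice in this thread (reserve a thread per GPU plus one for housekeeping, then round down to an even count), not anything from the F@H documentation:

```python
def cpu_slot_threads(total_threads: int, gpus: int, reserve_per_gpu: int = 1) -> int:
    """Threads to give the F@H CPU slot, per the rule of thumb in this thread."""
    remaining = total_threads - gpus * reserve_per_gpu - 1  # minus one for housekeeping
    return max(0, remaining - (remaining % 2))              # keep it even, per the old wives' tale

# Ryzen 5 1600 (12 threads) feeding one GPU:
print(cpu_slot_threads(total_threads=12, gpus=1))  # -> 10
```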
 

VirtualLarry

No Lifer
Aug 25, 2001
56,339
10,044
126
Well, that's disappointing. The GTX 1660 Ti won't fold. It just says "Ready", and if I click the GPU slot and click "Fold", it just sits at "Ready" again.

I remember this PC complaining when I put the GTX 1660 Ti in: Windows 10 auto-installed the drivers, then I downloaded the newest NV drivers and tried to install them, and for some reason it wouldn't accept the "Standard" drivers; it wanted the "DCH" drivers.

Does F@H not work with the "DCH" drivers? I don't know how to switch this boxen to prefer "Standard" drivers. It's not a branded OEM box, either; it's a custom DIY rig, so I don't know why Windows is insisting on the "DCH" drivers.

Quoting Ital (posted about 7 months earlier):

DCH is the default Nvidia driver type installed via Windows Update during installation of the latest Windows 10 version 1809 if your PC is connected to the Internet. Once that's installed, your PC can only accept DCH Nvidia drivers. It's not just for Windows 10 S. Follow MoKiChU's instructions in the previous post if you want to get the latest Nvidia Control Panel for DCH drivers. If you want to go back to the Standard drivers, you can use DDU to clean out the DCH drivers while not being connected to the Internet. You can get DDU from: https://forums.geforce.com/default/...or-new-cpu-core-analyzer-updated-02-14-2019-/ Afterwards just install the Standard drivers as usual (while still not being connected to the Internet), and from then on it will only accept Standard drivers.

Frikken Microsoft. Always making things harder than they need to be.

Edit: So why isn't Folding@Home folding on my GTX 1660 Ti? Do I need the "Standard" drivers rather than DCH? Or do I need OpenCL support?



---

Bottom line, I don't think that the F@H client even supports the GTX 1650/1660/1660ti yet. The GPU chip type gets detected, in the web client and Advanced client, but it just won't "Go". If it was missing OpenCL support, it wouldn't detect at all, would it?
 
Last edited:

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,555
14,511
136
Larry, first, go to the log file. I had the same problem a week ago; I rebooted, and it cleared.

If you see DATABASE LOCKED in the log file, a reboot will most likely fix it.

If not, what's the error in the log file?
 

crashtech

Lifer
Jan 4, 2013
10,524
2,111
146
Well, you don't really want the CPU slot right now unless you are feeling chilly. I don't have a solution for your GPU problem atm. I've been dealing with Linux issues lately that usually make Windows seem easy.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,339
10,044
126
Something about not being able to detect the OpenCL index for the GPU slot. So I manually adjusted the GPU slot to OpenCL Index = 0, CUDA Index = 1.

Then I shut down F@H and re-started it. Seemed to work, now it's folding. Estimated 570K PPD, for just the card? Seems a little low, but then again, this is Windows 10.

Edit: Maybe I do need to search the drive for "OpenCL" under the Nvidia driver directory, and copy the .DLL files to the F@H ProgramFiles directory.

Edit: Well, it looks like telling it to manually use the GPU "As CUDA" worked, sort of; I'm not sure if I'm seeing full performance. PPD is around 570K on both boxes for the GTX 1660 Ti card(s), and around 30-40K PPD for 10 threads of a Ryzen R5 1600 CPU. Sigh. Not nearly the PPD I was hoping for; it seems the 2060 is way out ahead for some reason.

Then again, these cards only take like 120W (GPU portion only)? At least, mining they do.
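If anyone else hits the OpenCL index problem, this is a quick way to see how the OpenCL platforms and devices enumerate on a box (assumes the pyopencl package is installed; whether F@H's OpenCL index maps one-to-one onto this ordering is my assumption, so treat it as a starting point):

```python
import pyopencl as cl  # pip install pyopencl

# Print every OpenCL platform and the devices under it, with their indices.
for p_idx, platform in enumerate(cl.get_platforms()):
    for d_idx, device in enumerate(platform.get_devices()):
        print(f"platform {p_idx} ({platform.name}): device {d_idx} -> {device.name}")
```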
 
Last edited:

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,555
14,511
136
570K for a 2060 in Win 10 is correct; 800K-1,000K is correct for Linux.

Oh wait, he said 1660 Ti. That could be right, and Linux may be around 700K; it's way more efficient.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,339
10,044
126
I guess maybe I shouldn't complain; PPD/W seems decent. My RX 580 4GB Nitro+ (6+8 pin) is estimating around 360K PPD for a 14231 (108,0,6) FahCore 0x21.

RX 470 4GB mining edition, around 320K PPD, CPU around 66K PPD (9 threads of a Ryzen R5 3600). Total around 760K PPD for this box.

So 760K + 600K + 600K, that's about what I've got for firepower for the F@H race in Dec., unless I upgrade, or hook up a fourth system. (I've got some in reserve.)

Considering that last year I was at around 900K-1.3M PPD with a pile of RX 570 cards, this seems like both a PPD and a PPD/W improvement.
 

biodoc

Diamond Member
Dec 29, 2005
6,262
2,238
136
I have some F@H data for my rig with two RTX cards (2070 & 2070 Super) on Linux Mint 19.2. The default power cap is 180 watts on the 2070 and 190 watts on the 2070 Super. I monitored TPF for both cards and found I could reduce the power cap on both to 160 watts without affecting TPF significantly. The rig is averaging ~3 million PPD. My dual 1080 Ti rig is getting ~2.2M PPD. The good news is I didn't expect the output to be so high, so that was a nice surprise. The "bad" news is the 2070 Super card is no better than the 2070 for projects 14228 and 14229. The Nvidia driver version is 430.50 on both machines.

[Image: fah_ppd.png]
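For anyone who wants to try the same power-cap experiment on Linux, the usual tool is nvidia-smi. A minimal sketch (wrapped in Python to match the other snippets in this thread; it needs root, and the driver clamps the value to the range the card's vBIOS allows):

```python
import subprocess

def set_power_cap(gpu_index: int, watts: int) -> None:
    """Set the board power limit via nvidia-smi (requires root)."""
    # Enable persistence mode so the setting survives application restarts.
    subprocess.run(["nvidia-smi", "-i", str(gpu_index), "-pm", "1"], check=True)
    # Apply the power limit in watts.
    subprocess.run(["nvidia-smi", "-i", str(gpu_index), "-pl", str(watts)], check=True)

# Example matching biodoc's setup: cap both RTX cards at 160 W.
for idx in (0, 1):
    set_power_cap(idx, 160)
```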