15th Annual Folding@Home Holiday Season Race: Mark's Marauders win!


VirtualLarry

No Lifer
Aug 25, 2001
56,324
10,034
126
What I don't get is: my GTX 1650 4GB D5 got 400K+ PPD, and my RX 5600XT (Navi10 with 192-bit memory) supposedly gets 800K+ PPD, but together they were only doing 500K PPD, with an unloaded 6C/12T Ryzen CPU. :(
 

TennesseeTony

Elite Member
Aug 2, 2003
4,208
3,634
136
www.google.com
You're not as pretty as I am, perhaps? Although your avatar suggests otherwise....I think you are a VERY attractive woman, Larry...


(mixed drivers causing a bit of conflict?)
 

IEC

Elite Member
Super Moderator
Jun 10, 2004
14,328
4,913
136
Assuming my 5800X comes tomorrow or Wednesday I should be able to bring another rig online for a modest end-of-the-race contribution ;)
 

StefanR5R

Elite Member
Dec 10, 2016
5,497
7,786
136
Point of interest: last time I ran 1070s, I got 650k PPD on Windows and 750-800k PPD on Linux. Now they are at 1.1M on both Windows and Linux. Wow! The new CUDA app is MOST impressive! And I think last year the 1080Ti was about 1.2M on Windows; now it is 2.2M. o_O
On Linux, GTX 1080Ti @ 250 W, PCIe v3 ×8, is making 2.7+ M PPD.

nvidia-smi dmon -d 10 -s pucmt shows averages of 3300 MB/s reception and 450 MB/s transmission. That is, the reception direction is utilized at >40 % of the per-direction peak bandwidth of PCIe v3 ×8, averaged over time.
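For reference, a back-of-the-envelope check of that ">40 %" figure (a sketch; the peak follows from PCIe v3's 8 GT/s per lane, 8 lanes, and 128b/130b encoding, which is textbook PCIe arithmetic, not measured data):

```python
# Rough check of PCIe utilization from the nvidia-smi dmon averages above.
# Assumes PCIe v3 x8: 8 GT/s per lane, 128b/130b encoding, per direction.
lanes = 8
gtps = 8e9                                     # transfers/s per lane
peak_bytes = lanes * gtps * (128 / 130) / 8    # payload bytes/s per direction
peak_mbps = peak_bytes / 1e6                   # ~7877 MB/s

rx_mbps = 3300                                 # RX average reported by dmon
utilization = rx_mbps / peak_mbps
print(f"peak ≈ {peak_mbps:.0f} MB/s, RX utilization ≈ {utilization:.0%}")
# prints: peak ≈ 7877 MB/s, RX utilization ≈ 42%
```

So the reported 3300 MB/s is indeed a bit over 40 % of what a v3 ×8 link can deliver in one direction.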

That is, FahCore_22's CUDA implementation has in common with the previously used OpenCL implementation that a lot of data copying happens all the time. Which in turn means that the Linux driver stack is still better suited than the Windows driver stack. (Windows performs even more data copying on the side; this enables driver version updates and driver crash recovery without a system reboot, even without terminating the user session.)

What I don't get is: my GTX 1650 4GB D5 got 400K+ PPD, and my RX 5600XT (Navi10 with 192-bit memory) supposedly gets 800K+ PPD, but together they were only doing 500K PPD, with an unloaded 6C/12T Ryzen CPU. :(
How are they attached? (How many PCIe links; directly at the CPU or via the PCIe switch of the southbridge; if the latter, PCIe v3 or v2?)

Are they working at default or modified power targets? (At default board power target, this would make for 75 W + 150 W for the GPUs, and perhaps 50 W for the Ryzen if only running the two GPU FahCore instances, so ≈280 W to vent away from these three heat sources.)

Do they have stock BIOS or custom BIOS?

(I don't remember if you posted such data already. If so, apologies that I neglect to revisit this or other threads.)
 

VirtualLarry

No Lifer
Aug 25, 2001
56,324
10,034
126
How are they attached? (How many PCIe links; directly at the CPU or via the PCIe switch of the southbridge; if the latter, PCIe v3 or v2?)

Are they working at default or modified power targets? (At default board power target, this would make for 75 W + 150 W for the GPUs, and perhaps 50 W for the Ryzen if only running the two GPU FahCore instances, so ≈280 W to vent away from these three heat sources.)

Do they have stock BIOS or custom BIOS?

(I don't remember if you posted such data already. If so, apologies that I neglect to revisit this or other threads.)
The RX 5600XT cards are each in the primary PCI-E x16 slots on Ryzen 6C/12T boards, one B450, one X370.

The GTX 1650 cards are in the chipset-supported secondary PCI-E slots.

The GTX 1650 cards run stock power and clocks; the RX 5600XT cards are undervolted, underclocked, and power-limited, for mining ETH and keeping my overall AC usage lower.
 

StefanR5R

Elite Member
Dec 10, 2016
5,497
7,786
136
OK, thanks. Then the 5600XTs have plenty of bus bandwidth, but their core clocks in shader-intensive workloads such as F@H are reduced by the power limit.

PPD in F@H is not linearly proportional to clocks, though. On one hand, shader throughput presumably scales less than linearly with core clock. On the other hand, F@H PPD scales more than linearly with shader throughput, due to the quick return bonus.
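The superlinear part comes from the quick return bonus formula, credit = base × sqrt(k · deadline / elapsed). A small sketch of the effect (the base points, k, deadline, and frame times below are made-up placeholders, not real project values):

```python
import math

def ppd(base_points, k, deadline_days, tpf_minutes, steps=100):
    """Estimated points per day under the F@H quick return bonus.

    bonus = sqrt(k * deadline / elapsed), floored at 1.
    All inputs are illustrative placeholders.
    """
    elapsed_days = tpf_minutes * steps / (60 * 24)  # time to finish one WU
    bonus = max(1.0, math.sqrt(k * deadline_days / elapsed_days))
    wus_per_day = 1 / elapsed_days
    return base_points * bonus * wus_per_day

# Halving the time per frame does better than double PPD:
slow = ppd(base_points=10000, k=0.75, deadline_days=3, tpf_minutes=4)
fast = ppd(base_points=10000, k=0.75, deadline_days=3, tpf_minutes=2)
print(fast / slow)  # ≈ 2.83, i.e. 2 * sqrt(2)
```

Conversely, a power limit that shaves, say, 20 % off effective throughput costs noticeably more than 20 % of PPD.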

On the positive side, the power limit on the 5600XT obviously reduces the overall cooling demand of these dual-GPU computers.

--------

In contrast, the GTX 1650 may be limited by bus bandwidth to some degree. I reported the average PCIe transfers of a GTX 1080Ti above; smaller cards transfer less, so any impact from PCIe bandwidth constraints (and maybe PCIe switching latency) on a GTX 1650 should be less pronounced. Still, there may be an impact. Depending on the layout, population, and BIOS settings of the mainboards, the GPU attachment may look like this:

Ryzen -> PCIe v3 ×4 -> X370 -> PCIe v2 ×4 or ×1, shared bandwidth with SATA/USB/PCIe attached devices
(There are actually 8 PCIe v2 lanes supplied by X370 in total. But some lanes go to Ethernet, sound, perhaps WLAN, maybe even other devices. Any PCIe slot which is driven by X370 will therefore be either ×4 or ×1 electrically.)

Ryzen -> PCIe v3 ×4 -> B450 -> PCIe v2 ×4 or ×1, shared bandwidth with SATA/USB/PCIe attached devices
(There are 6 PCIe v2 lanes supplied by B450, but see above.)

--------

I don't know whether the factors I noted can fully explain the low combined PPD from your cards, or whether there are additional reasons for it.
 

Pokey

Platinum Member
Oct 20, 1999
2,766
457
126
Looks like the Marauders are putting the hammer down here at the end.

On the bright side, yesterday our band of 20 holiday folders produced over 50% of the points of the entire 294-member Team AnandTech. Sweet! And I'm sure there are other positives to come out of this.

In the meantime, keep going to the last....................... (insert huffing and puffing emoji here)
 

Endgame124

Senior member
Feb 11, 2008
955
669
136
I had the opportunity to buy an MSI SUPRIM 3090 today at Microcenter... and passed on it. I just couldn't shell out $1,800 for a video card with sketchy waterblock support. Heading back to Microcenter tomorrow; maybe it will be my lucky day?
 

VirtualLarry

No Lifer
Aug 25, 2001
56,324
10,034
126
Well, some of my rigs are back to folding. On one of my rigs, I think the one with 2x RX 5700XT "Raw II" XFX cards, the second card listed in the F@H client kept showing "Running", "Ready", looping 10x, then "Failed".

And I'm having trouble with AnyDesk on that PC: launching the AMD Adrenalin 2020 control panel just gives me a transparent glass window. The CPU is not pinned.

Other PC running similar config but with Asus "Dual" cards, is fine.

Had other trouble with the XFX Raw II cards initially too, which I put down to the PSU that was in that PC, a Raidmax that had sat in storage for over a year. I kept getting restarts/shutdowns, followed by some "Code 43" errors that went away with a full power-down and power-up.

Also, once I got it deployed mining, with a brand-new 650W 80Plus Gold Rosewill PSU, I still had issues where the secondary GPU (the cooler one!), would simply "drop off" the system, and disappear... from both the mining app and the Device Manager (!)

Not sure what's up with this rig: whether it degraded or corroded sitting in storage for a year, there's dust in the slots, or the two cards were flaky when I got them (new!). I don't know. I'm concerned that I can't coax one of the cards to fold. Might have to RMA both of them, which is a PITA in today's GPU-shortage climate. They might put me on a waiting list, and RMA me two 6800 cards, maybe... I could only hope!
 

ao_ika_red

Golden Member
Aug 11, 2016
1,679
715
136
I hadn't folded since the pandemic happened, and wow, COVID WUs take a very long time to complete. But I'm not complaining; my PPD is nearing 400k, which is at least 100k more than I used to get in 2019.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,324
10,034
126
Try reseating the cards and blowing out the slots with compressed air, might work?
Yeah, that's on my to-do list, before RMA'ing the card(s). I'm just lazy, and if I can "tweak" them to mine, I'd rather make money than mess with it. Then again, every so many days, one card goes offline, and I have to physically power-down and power-up the box to fix it. Maybe I should tend to it after all...
 

blckgrffn

Diamond Member
May 1, 2003
9,122
3,052
136
www.teamjuchems.com
Yeah, that's on my to-do list, before RMA'ing the card(s). I'm just lazy, and if I can "tweak" them to mine, I'd rather make money than mess with it. Then again, every so many days, one card goes offline, and I have to physically power-down and power-up the box to fix it. Maybe I should tend to it after all...

Back when I was mining, it was annoying all the time anyway. DDOS attacks against your pool, etc. Seemed like so much babysitting.

In retrospect I should have gone into it much, much more aggressively. Oh well. :)

What I am saying is if they mine a few days at a time without issue, that seems like par for the course to me. I'd expect an RMA to take a couple weeks, minimum, and odds are really high you'll get the same card you sent in, and maybe a repaired version of someone else's failure. I hate RMA'ing things so much!
 

VirtualLarry

No Lifer
Aug 25, 2001
56,324
10,034
126
What I am saying is if they mine a few days at a time without issue, that seems like par for the course to me. I'd expect an RMA to take a couple weeks, minimum, and odds are really high you'll get the same card you sent in, and maybe a repaired version of someone else's failure. I hate RMA'ing things so much!
That's my feeling too. If it's just a "little cantankerous" and "needs a kick in the rear" (power-cycle) once a week, that's tolerable to me. Among other things, just dealing with Windows / driver / Firefox / mining software updates can be just as much of a pain and require just as many reboots.
 

Endgame124

Senior member
Feb 11, 2008
955
669
136
With Rosetta out of work, I added a few more CPUs to F@H. It's probably only 10k PPD, but I'm all in.
 

Assimilator1

Elite Member
Nov 4, 1999
24,120
507
126
What CPUs are you adding? My Ryzen 3600 is doing ~90-120k, mostly 100-110k PPD. My i7 4930K @ 4.1 GHz was only adding 50-60k though! lol