Been reading the Folding optimization threads and they mention the need to dedicate a CPU core per GPU. I don't see how that's done using the folding client. Any help would be much appreciated.
All it means is to leave a core free.
Say you have 6 cores and you fold with 2 GPUs: you should keep 2 cores free.
So if you run BOINC, you want to do only 4 tasks. One core for each task plus one core per GPU makes 6 cores used.
That's all it is.
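If it helps, the arithmetic can be sketched as a quick shell calculation (the 6-core/2-GPU numbers are just this thread's example; on a real machine you would take the core count from `nproc`):

```shell
# Example numbers from this thread: 6 cores, 2 folding GPUs.
TOTAL_CORES=6          # on a real machine: TOTAL_CORES=$(nproc)
GPUS=2
BOINC_TASKS=$((TOTAL_CORES - GPUS))       # cores left for BOINC CPU tasks
PCT=$((100 * BOINC_TASKS / TOTAL_CORES))  # as a percentage of all cores
echo "Run at most ${BOINC_TASKS} BOINC tasks (~${PCT}% of CPUs)"
```

In BOINC this maps to the "Use at most N% of the CPUs" computing preference.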
Thanks, I've done that and it did improve performance. That said, I'm not sure it's really "dedicated", because it looks like work is still being distributed across all cores, just less of it. The GPU-Z load remains high, in the 90s and above, but drops intermittently. I freed up additional CPU cores/threads until it reached a reasonable balance, but that took more than one free core. I was hoping it would be possible to make some sort of assignment between a core and a GPU to better optimize the load.
The Windows process scheduler moves processes from core to core all the time (which is idiotic, because of the need to flush processor registers and caches). This can be prevented either by employing a tool like Process Lasso, or by switching to Linux, whose kernel has a saner process scheduler.
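On the Linux side, pinning a process to a core can also be done by hand with taskset from util-linux; a minimal sketch (the PID and core number are just for illustration):

```shell
# Start a throwaway background process and pin it to CPU core 0.
sleep 2 &
PID=$!
taskset -cp 0 "$PID"   # restrict the process to core 0
taskset -p "$PID"      # read back the affinity mask (prints 1 for core 0)
wait "$PID"
```

Process Lasso does the equivalent on Windows through its GUI.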
Folding@Home performance on Linux is better than on Windows anyway, due to differences in the graphics driver architecture. On Linux, the GPU is a lot more constantly utilized by the Folding@Home application. Depending on the size of the GPU, this can give a small or a large performance advantage, coupled with higher power requirement but also somewhat improved performance-per-Watt.
Some more aspects of CPU workloads running concurrently with F@H's GPU feeder processes:
Some Intel CPUs clock down the entire processor if one of the cores runs an AVX2/FMA workload. (Other Intel processors either do not clock down, or clock down only the actual core which runs that workload. In the latter case, Linux' scheduler is again beneficial over Windows' nonsensical scheduler.)
Many Intel CPUs (and some AMD CPUs) run at different turbo clocks depending on how many cores are used at a time. (This too is better handled by Linux than by Windows.) If the difference between few-core and many-core turbo is very big, you should avoid any other CPU load besides F@H's GPU feeders altogether if F@H PPD is paramount to you.
If the CPU workload requires a lot of RAM bandwidth and/or a lot of L3 cache, there is an extra performance hit to the F@H GPU load. Reduce such heavyweight CPU load even further, or avoid it altogether while you run F@H on GPUs.
If your processor has Hyperthreading (2-way SMT) enabled, the general advice is that you should deduct at least 2 logical CPUs (2 processor threads) from the CPU load for each GPU that you want to use in F@H.
F@H normally lets its GPU feeder processes run at lowest process scheduling priority. It may be marginally beneficial to increase this priority, but whether or not this really has any effect also depends on the OS, the CPU type, and the kind of concurrent CPU workload. F@H's process priority can be influenced through config.xml entries which are not very well documented.
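For what it's worth, a config.xml fragment along these lines is what I mean; treat the option name and values as a guess to verify against your client version, since these entries are sparsely documented:

```xml
<config>
  <!-- Illustrative only: raise the feeder process priority one step
       above the default. Verify the option name and accepted values
       against your F@H client version before relying on this. -->
  <priority v='low'/>
</config>
```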
Edit: my criticism of the Windows process scheduler is mostly about Windows 7, which I use myself. I have not personally used Windows 8/8.1/10 for computing and GPGPU loads yet. But from what I read, scheduler policies have not changed much from Windows 7 to 10.
Linux Mint increased my FAH production 20-25% over Win 10. So now I only use Linux for dedicated Folding Rigs. No experience with BOINC projects. But will probably run SETI later this year for the WOW event.
I've done this on Ubuntu, but since Mint is based on Ubuntu (and ultimately Debian), this should also work:
If you want headless:
sudo apt update && sudo apt upgrade
sudo apt install boinc-client
You may have to install these to get GPU compatibility:
sudo apt install boinc-client-opencl
sudo apt install boinc-client-nvidia-cuda
To manage BOINC over the network and view things in a GUI on another PC:
Edit the cc_config.xml file. Here is a sample of mine: I have it set to report immediately on task completion, and I have increased the number of simultaneous transfers from 2 to 8 and the maximum transfers from 4 to 20. Edit this as needed, or you can use mine.
Use sudo nano /var/lib/boinc-client/cc_config.xml
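Since the sample itself didn't survive the forum formatting, here is an illustrative cc_config.xml along the lines described. The option names are real documented BOINC client options, but the exact values are reconstructed from the description above:

```xml
<cc_config>
  <options>
    <!-- Report finished tasks immediately instead of batching them -->
    <report_results_immediately>1</report_results_immediately>
    <!-- Simultaneous file transfers per project (raised to 8) -->
    <max_file_xfers_per_project>8</max_file_xfers_per_project>
    <!-- Maximum simultaneous file transfers overall (raised to 20) -->
    <max_file_xfers>20</max_file_xfers>
    <!-- Required for managing this client from another machine -->
    <allow_remote_gui_rpc>1</allow_remote_gui_rpc>
  </options>
</cc_config>
```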
After this you will want to:
sudo less /var/lib/boinc-client/gui_rpc_auth.cfg
This is the password to the RPC port of BOINC.
Now, on the machine you want to administer BOINC from, get the IP of the machine with BOINC installed and enter it along with that password. You should now be able to use BOINC from another computer. If you need images to guide you, I can do that after I get some sleep; you'll hear from me in about 12 hours, around 5 pm Australian Eastern Daylight Saving Time.
If you want a head (i.e. a local GUI):
sudo apt install boinc
Some BOINC GPU applications already achieve very high GPU utilization on Windows. Others have low utilization similar to Folding@Home. Most of the time this can be worked around by running two or even more jobs on the same GPU at a time.
Right now I am running GPUGrid on a Windows machine with a 1080 Ti and see ≈50 % utilization with 1 job/GPU and ≈75 % utilization with 2 jobs/GPU. I want to test whether GPUGrid works better on Linux, but need to wait until they fix a little problem with a built-in self-destruct in the GPUGrid application.
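The "two or more jobs per GPU" trick is done per project with an app_config.xml placed in that project's directory. A sketch is below; the application name is an assumption, so check the project's actual app names (e.g. in the client's job log) before using it:

```xml
<app_config>
  <app>
    <name>acemdlong</name>  <!-- assumed GPUGrid app name; verify -->
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>  <!-- 0.5 GPU per task = 2 tasks per GPU -->
      <cpu_usage>1.0</cpu_usage>  <!-- one CPU core feeding each task -->
    </gpu_versions>
  </app>
</app_config>
```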