Which projects create the most heat? And which are PCIe bandwidth limited?

Fardringle

Diamond Member
Oct 23, 2000
We are entering the hot season for the northern hemisphere, so I thought it might be interesting to compile a list of the projects that create the most (and least) heat.

I have only tested a handful so far, but here are my beginning contributions to the list. These were all tested with a Ryzen 9 3900X (Scythe Fuma 2 cooler) and a GTX 1060 3GB, with the CPU and GPU running at 100% utilization and ambient temperature around 70-75F (21-24C). Most are done in Windows, with the few that don't have Windows apps (like Gaia@home) in a Linux VM on the same machine. GPU projects were all run with 1 task at a time, CPU projects with 22 tasks at a time leaving 2 threads free for the GPU.

The GPU tasks were tested with vdwnumbers running on the CPU since I know it's a low heat producer, and the CPU tasks were tested with Einstein running on the GPU since it is also a low heat producer, to help prevent inaccurate measurements due to extra heat from the other components.

The temperatures are an average over at least an hour of runtime. I have both the CPU and GPU configured to throttle themselves at 85C currently, just to try to avoid too much heat in the house, so anything marked 85C would probably go even higher if I let it.
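For anyone who wants to reproduce the averaging step, here's a minimal sketch in Python (the log file name and column names are hypothetical; most monitoring tools can export a CSV along these lines):

```python
# Minimal sketch: average a temperature log over the last hour of runtime.
# Assumes a hypothetical CSV export with "timestamp" (seconds) and
# "cpu_temp_c" columns; adjust names to whatever your monitoring tool writes.
import csv

def average_temp(log_path, window_seconds=3600, column="cpu_temp_c"):
    with open(log_path, newline="") as f:
        rows = list(csv.DictReader(f))
    if not rows:
        raise ValueError("log is empty")
    end = float(rows[-1]["timestamp"])
    samples = [float(r[column]) for r in rows
               if end - float(r["timestamp"]) <= window_seconds]
    return sum(samples) / len(samples)

if __name__ == "__main__":
    print(f"Average over the last hour: {average_temp('temp_log.csv'):.1f} C")
```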

This is just a starting list so I'll add more when I'm in the mood to test more and have additional numbers.

If you have any numbers to add, or any projects you want me to test with my "baseline" system for comparison, feel free to add your input to the thread. Temperatures for the same projects but on different platforms (Intel CPU and AMD GPU for example) would also be interesting.

GPU:
Amicable Numbers - 70C
Collatz - 76C
Einstein@Home - 62C
MilkyWay@Home - 82C
Minecraft@Home - 66C
MLC@Home - 53C :astonished:
Moo! Wrapper - 70C
PrimeGrid (AP27) - 82C
PrimeGrid (GFN 17 Low) - 69C
PrimeGrid (PPS Sieve) - 78C
Private GFN Server - 82C
SRBase - 75C

CPU:
BEEF@Home - 85C (capped)
Gaia@home - 72C
iThena.Computations (CPU intensive) - 66C
latinsquares (ODLK1) - 72C
MilkyWay@Home - 70C
Minecraft@Home - 73C
NanoHub (with only 12 tasks running due to RAM requirements) - 83C
ODLK - 75C
PrimeGrid (GFN 17 Low) - 70C
Rosetta@home - 76C
SiDock@Home - 76C
The Ramanujan Machine - 65C
Universe@Home - 69C
vdwnumbers.org - 63C
Yoyo@home - 74C (running a random mix of all applications)

------------------------------------------------------------------------------------------------------------------------------------------------------------------
As I mentioned in Post #15 I am going to add info about which projects are PCIe bandwidth limited in slower PCIe slots since I have a quad GPU system (4x Quadro K2200) that IS limited in two of the slots.

For this list, I'll note whether there appears to be any limiting at all, and list average run times to show how much of an effect the slower slots have on the tasks. Cards 1 and 2 run at full PCIe 3.0 x16 speed, card 3 is PCIe 3.0 x8, and card 4 is PCIe 2.0 x4.
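For reference, here's a rough calculation of the theoretical one-way bandwidth of those three slot configurations, using the per-lane rates and encoding overhead from the PCIe 2.0 and 3.0 specs (real-world usable bandwidth is lower):

```python
# Rough theoretical one-way PCIe bandwidth per slot configuration.
# Per-lane raw rates: PCIe 2.0 = 5 GT/s with 8b/10b encoding,
# PCIe 3.0 = 8 GT/s with 128b/130b encoding.
def pcie_bandwidth_gb_s(gen, lanes):
    usable_gbit_per_lane = {2: 5.0 * 8 / 10, 3: 8.0 * 128 / 130}
    return usable_gbit_per_lane[gen] * lanes / 8  # Gbit/s -> GB/s

slots = {
    "Cards 1 & 2 (PCIe 3.0 x16)": (3, 16),
    "Card 3 (PCIe 3.0 x8)":       (3, 8),
    "Card 4 (PCIe 2.0 x4)":       (2, 4),
}
for name, (gen, lanes) in slots.items():
    print(f"{name}: ~{pcie_bandwidth_gb_s(gen, lanes):.1f} GB/s")
# Prints roughly 15.8, 7.9, and 2.0 GB/s respectively.
```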

Amicable Numbers - Not limited. But it does need 8GB of RAM for each GPU, in addition to RAM for the OS and anything else that is running on the computer.

Collatz - Slightly limited. I'm not 100% sure since Collatz tasks don't include the GPU device number in the results/logs like other projects do, so I'm basing this only on watching the run times in the BOINC client instead of using averages over several days. Cards 1, 2, and 3 all have virtually identical processing times, averaging around 1.25 hours per task. Card 4 is a bit slower, averaging around 1.5 hours per task. So possibly a small amount of bandwidth limiting on the PCIe 2.0 x4 slot.

Einstein@Home (Gamma Ray Pulsar) - Not limited.

MilkyWay@Home - Not limited.

Minecraft@home (Trailer Thumbnail 2 app) - Limited. Cards 1&2: 3.2 hours. Card 3: 3.5 hours. Card 4: 4.2 hours. Not as much of a spread as with MLC@Home, but still a noticeable difference on the slower PCIe slots.

MLC@Home - Definitely limited. Cards 1&2: 1.6 hours. Card 3: 2.1 hours. Card 4: 3.5 hours.
-----MLC@Home with a CPU project running on 50% of the CPU cores - All four cards average about 3.7 hours per task. So not only is the project limited by slow PCIe slots, it is also limited by having anything else running on the CPU at the same time. (See the sketch after this list for the relative slowdowns.)

Moo! Wrapper - Not limited.

PrimeGrid - AP27, not limited.

Private GFN Server - Not limited.
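
To put the MLC@Home numbers above in relative terms, here's a quick sketch of the slowdown factor for each slot versus the full-speed cards, using the average run times listed above:

```python
# Slowdown of each slot relative to the full-speed cards, from the
# average MLC@Home run times listed above (hours per task).
runtimes = {
    "Cards 1 & 2 (PCIe 3.0 x16)": 1.6,
    "Card 3 (PCIe 3.0 x8)":       2.1,
    "Card 4 (PCIe 2.0 x4)":       3.5,
}
baseline = runtimes["Cards 1 & 2 (PCIe 3.0 x16)"]
for slot, hours in runtimes.items():
    print(f"{slot}: {hours / baseline:.2f}x the full-speed run time")
# Roughly 1.0x, 1.3x, and 2.2x respectively.
```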
 

crashtech

Lifer
Jan 4, 2013
I noticed that vdwnumbers (which I'd never run until recently) does not seem to stress the CPU much. It seems like it would be a good companion to running a GPU app, or maybe even to one of those resource intensive CPU projects that have big RAM requirements and therefore can't use all cores.
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
While the apps that produce hotter CPU/GPU temps do draw more power than the others, it's worth noting that the power each CPU draws at its current config gives a better idea of the total heat production. For example, my 2 GHz 32-core EPYC boxes run at far lower heat production than my 2990WXs do. Both are the same tech with the same number of cores, but the 3 GHz speed of the 2990WX takes about 3 times the power and 3 times the heat at only a fraction more temp. That is why I chose the EPYCs, as they come factory "underclocked/undervolted", so to speak.
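
As a back-of-the-envelope illustration of why a modest clock bump costs so much power: dynamic power scales roughly with frequency times voltage squared, and voltage generally has to rise with clock, so power grows much faster than linearly. A toy model (assumed scaling for illustration only, not measurements of these chips):

```python
# Toy model: dynamic power ~ f * V^2, and if voltage scales roughly with
# frequency in the boost range, power grows roughly with f^3.
# Assumed scaling for illustration only, not measured values.
def relative_power(f_new_ghz, f_old_ghz):
    return (f_new_ghz / f_old_ghz) ** 3

print(f"3.0 GHz vs 2.0 GHz: ~{relative_power(3.0, 2.0):.1f}x the power")  # ~3.4x
```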
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
EPYCs are really great for what they do, and I would love to have some!

Unfortunately, there is one thing that they do not do, and that is fit in my budget.. :p
OK, here is a 16 core. I offered like 300 or 350 and he accepted, but I also offered on the 24 core and he accepted.

So 16 cores for $350? Even at half speed, that's like an 8-core 5 GHz chip! (maybe 4.8) And a motherboard is $360.

Edit: if you do want to consider EPYC, you have to scan eBay all the time and learn how low lowball offers can go. The one I linked was $450, and I am not sure if he accepted $300 or $350, but you get the idea.
 

Fardringle

Diamond Member
Oct 23, 2000
To be fair, I should be clear and say that ALL new computer upgrades are out of my budget at the moment. I'd like to eventually put together an EPYC system, though. :)
 

StefanR5R

Elite Member
Dec 10, 2016
All of my CPUs have uninteresting temperatures too. Among them are Intel 14nm desktop CPUs. They have uninteresting temperatures because I switched TurboBoost off in the BIOS.

My GPUs have uninteresting temperatures too. This is because very early when I started to use GPUs in Distributed Computing, I got fed up with radiators which are crammed onto the graphics card. These just don't work for me. So I replaced them with radiators which are not only larger but push air directly out of the computer case, or are placed outside of the computer case in the first place.

Until recently, VRM temperatures and RAM temperatures of some of my computers (ones with server mainboard) still managed to pique my interest. I got them mostly in check now by means of increased air flow across the mainboard, plus several smaller fans placed directly on top of VRMs and some DIMMs.

There is just one pair of temperatures remaining which I still monitor frequently — room temperature, and external temperature. :-)
__________

However, there is a different but related measure which I do tend to monitor for each computer: Power draw. Particularly, power draw at the wall — not just because of its significance (as a driver of room temperature, and because of its effect of thinning the wallet, and to avoid overuse of a circuit), but also out of convenience, as it is displayed continuously by cheap but functional power meters, that is, without software assistance.
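As a quick illustration of why the wall number is the interesting one (all figures below are made-up examples, not readings from Stefan's machines):

```python
# Illustration only: what a steady wall draw means for the power bill
# and for circuit headroom. All figures here are assumed examples.
watts = 350                # steady draw of one DC box at the wall
price_per_kwh = 0.30       # assumed electricity price

kwh_per_month = watts / 1000 * 24 * 30
print(f"{watts} W continuous ≈ {kwh_per_month:.0f} kWh/month "
      f"≈ {kwh_per_month * price_per_kwh:.0f} per month in electricity")

circuit_watts = 16 * 230   # e.g. a 16 A / 230 V circuit ≈ 3.7 kW
print(f"Boxes like this per such circuit: {circuit_watts // watts}")
```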

Some time ago I had fewer power meters than computers, which was a nuisance. Eventually I took care to have more power meters available than computers. Now all computers which are dedicated to DC are always plugged into a meter, even if I move a computer to a different circuit or room. All power strips which have several computers attached have their own power meter too.
__________

As for heat output per project, it differs quite a bit for my Intel CPUs indeed, since I run these CPUs at fixed clock rate. The AMD CPUs and the Nvidia GPUs run at variable clocks, with lesser (but still existing) differences between projects regarding their heat output.
 

Fardringle

Diamond Member
Oct 23, 2000
I know that my list doesn't necessarily apply to other systems and setups. I just thought it might be useful to make a "baseline" list using the same equipment for all tests to see which projects might cause more stress for the cooling in our homes (specifically, in my home). :)

In particular, I'm looking for which projects I do not want to run on the hottest days...

I do have my 3900X limited to the base 105 Watt power limit. It still 'boosts' itself to around 4.0-4.1GHz from the base 3.8GHz if it isn't too warm, but it isn't allowed to go crazy like it would without the power limit. I also have the GTX 1060 undervolted to a maximum voltage of 975mV instead of the 1070mV it would use at stock. Again, it still boosts itself a bit to 1860MHz core instead of the stock 1835MHz boost even with the voltage limit. But both are power limited AND temperature limited so they can't go crazy on power or heat, and should hopefully give realistic results for my tests since the other factors can't really change much.
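
For anyone who wants to verify what their card is actually doing under load, here's a small sketch using the NVIDIA management library's Python bindings (the nvidia-ml-py / pynvml package; which readings are available depends on the card and driver):

```python
# Sketch: read current GPU core clock, power draw, and temperature via NVML.
# Requires the nvidia-ml-py package (imported as pynvml) and an NVIDIA driver.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)            # first GPU
clock = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_GRAPHICS)
power_w = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000  # milliwatts -> watts
temp_c = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
print(f"Core clock: {clock} MHz, power: {power_w:.0f} W, temp: {temp_c} C")
pynvml.nvmlShutdown()
```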

I think it's interesting to see a pretty wide range of temperatures in different projects with exactly the same hardware, same settings, and same ambient temperature.
 

Assimilator1

Elite Member
Nov 4, 1999
When I get a chance I'll see if I can post numbers (I know I did record some).

But I found that Asteroids@home was the hottest running CPU project that I've run, and IIRC LHC is the 2nd hottest.
 

Fardringle

Diamond Member
Oct 23, 2000
NanoHub ran out of tasks again, so I'm back to doing other stuff on my main machine. I just tested Moo! so it's on the list in the first post now. GPUGrid and WCG don't have any GPU work available. Am I missing any other GPU BOINC projects?

There are lots of CPU projects to be tested. I'll add them to the list when I get around to testing more of them.
 

VirtualLarry

No Lifer
Aug 25, 2001
I chose to disable CPB (Core Performance Boost, basically "Turbo"), and PBO (basically, "extra turbo") on my Ryzen Zen2 CPUs, and I think even my 1600 Zen1 CPU, because they caused the temps and power usage to creep higher than my coolers could handle. (Well, my main rig is on 240mm AIO LC, but still it got really warm.)

Now they stay reasonable, and under reasonable power consumption (65W TDP, essentially).

Edit: Running something like PrimeGrid, which is a bit of a torture test for Ryzen CPUs, since they don't specifically throttle AVX/AVX2 like Intel does.
 

StefanR5R

Elite Member
Dec 10, 2016
GPUGrid and WCG don't have any GPU work available.
But for different reasons, right? I haven't actually looked, but I suppose GPUGrid just doesn't emit GPU work, while WCG continues to emit only very little GPU work (which is vacuumed up by the large number of users who are ready to process GPU work, and by the presumably low number of users who aren't just ready but who actively maintain a high rate of work requests specifically for GPU work — the TN-Grid syndrome).
 

Fardringle

Diamond Member
Oct 23, 2000
But for different reasons, right? I haven't actually looked, but I suppose GPUGrid just doesn't emit GPU work, while WCG continues to emit only very little GPU work (which is vacuumed up by the large number of users who are ready to process GPU work, and by the presumably low number of users who aren't just ready but who actively maintain a high rate of work requests specifically for GPU work — the TN-Grid syndrome).
Right. I just thought I'd mention why I don't have any numbers for those projects.
 

Fardringle

Diamond Member
Oct 23, 2000
In another thread where I was trying to troubleshoot some strange behavior in a multiple GPU system running MLC@Home, OrangeKid suggested that since I have a "new" machine with multiple GPUs and that I know for a fact that two of the four GPUs are bandwidth limited due to their less than full speed PCIe slots, I should test all of the GPU projects to see which ones are slot limited, and by how much.

Specifically, this computer has four Quadro K2200 graphics cards. Two of them are in full speed PCIe 3.0 x16 slots. Number 3 is in a "PCIe x16 slot (PCIe 3.0 wired as x8)" and takes about 1.3x longer to complete the MLC tasks than the two full speed cards, and number 4 is in a "PCIe x16 slot (PCIe 2.0 wired as x4)" and takes about 2.25 times longer to complete the MLC tasks than the full speed cards. Since even these fairly slow GPUs are limited, faster GPUs should also be limited in slower PCIe slots.

I'll go through and test the various projects when I have time and will add that info to the first post when it is available.
 

StefanR5R

Elite Member
Dec 10, 2016
Note that the K2200 cards a) use only PCIe 2.0, b) due to their relatively low computational throughput, will only use moderate PCIe bus resources in applications which aren't pathological cases like MLC@Home's application seems to be.
 

Fardringle

Diamond Member
Oct 23, 2000
Note that the K2200 cards a) use only PCIe 2.0, b) due to their relatively low computational throughput, will only use moderate PCIe bus resources in applications which aren't pathological cases like MLC@Home's application seems to be.
It's not going to be a comprehensive test, just like the temperature tests. Just a test of what I have available, and sharing any of what I think are interesting results. And if any other projects do show signs of being bandwidth limited with these cards, then I think it's pretty safe to assume that faster cards will also have problems running those projects if they aren't in full speed PCIe slots.
 

Icecold

Golden Member
Nov 15, 2004
If you do these tests @Fardringle it will definitely be helpful. I plan on doing similar tests with RTX 3070s (I have a build that will have them at x4 and x8), so it will be useful data to be able to see which projects are bandwidth limited on a lower 'computational throughput' card vs a higher 'computational throughput' card. I definitely look forward to seeing your results!
 