Heatsink Reviews: Notes and Comments

BonzaiDuck

I've been asked to assume responsibility for the Heatsink Review Sticky. And there will probably come a time when other matters in my own life will prompt me to pass on the duty.

I wonder if those of us who visit this forum aren't absolutely nuts, because we're constantly looking for improved cooling solutions even for viable system-builds we've already deemed adequate. The benefits and costs of water-cooling are pretty clear, and yet a sizeable number of us stick with air-cooled technology. It worked well for some very fast Porsche sports cars, and we may not have seen the limit of air-cooling solutions for computers. But water-cooling is likely to remain the most effective method, even as air-cooling through heatsinks and heatpipes continues to improve and narrows the gap between the two.

The companion thread to this "Notes and Comments" provides a catalog of links on Heatsink Reviews. Unless you squander every dollar you have on each and every new product that appears on the market so that you can test it yourself, online reviews that include benchtests and comparisons with other coolers are your best source of information. Short of testing everything yourself, they are also the only practical basis for making informed choices, barring opinions obtained from colleagues here at the forums.

I intend to post links to as many reviews as possible that provide useful information. I mean that in two ways. A review may draw lackluster conclusions about a heatsink, or give it a stellar rating. But a review may also be fraught with hype; it may damn a cooler that doesn't deserve it, or heap on undeserved kudos.

I've been advised to avoid making overt evaluations of reviews, but it has also been suggested that we only post "good" reviews. Let me say this.

Some reviews may fall short of objectivity; they may be biased in their choice of comparison products, and there are any number of reasons why a review's conclusions alone are not to be trusted. Yet those same reviews may include useful information.

Certainly, forum readers will be the ultimate judges of these things. And further, I may not have time to read each and every review link that we acquire.

I'll depend on forum members to pass on links of newly published reviews. If you find something worthy of our interest here, send me a PM with the review link, and I'll add it to our list. If you want to post comments, don't post them in the "catalog" thread: post them here.

COMMON FACTORS USED TO JUDGE HEATSINKS

The purpose of a heatsink or heatpipe cooler is to reduce CPU and VGA temperatures. There are very clear ways to measure that performance, although test conditions may skew the results slightly one way or another.

But users also have other criteria. The cooler has to fit the existing or contemplated configuration of motherboard, VGA card and fan deployment. Some sizes and shapes of coolers may fit some computer cases better than others. And some coolers just "look nicer." Some cooler and fan combinations are quiet, and others are noisier.

To measure performance, reviewers only occasionally resort to measuring a cooler's thermal resistance. Perhaps this is because of the possible confusion about what thermal resistance "means," and the fact that lower thermal resistance means higher performance. Reviewers may not wish to confuse readers.

A cooler's thermal resistance is defined as the difference in temperatures measured at the fins where heat is transferred to air -- and at the heatsink base or thermal interface -- per thermal watt of energy dissipated.

If, for instance, I know a bench-configuration's processor "TDP" or thermal design power at load conditions, and if I know the thermal resistance, I should also have an idea about processor equilibrium temperature under load -- adjusting for room ambient. So thermal resistance encapsulates a lot of information in a single index.
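The arithmetic in that paragraph can be sketched in a few lines of Python. The numbers below are hypothetical, not figures taken from any review: a 125W processor, a cooler whose measured thermal resistance is 0.15 C/W, and a 22C room ambient.

```python
def load_temperature(tdp_watts, tr_c_per_watt, ambient_c):
    """Estimate equilibrium CPU temperature under load:
    room ambient plus (thermal wattage x thermal resistance)."""
    return ambient_c + tdp_watts * tr_c_per_watt

# Hypothetical example: 125 W processor, 0.15 C/W cooler, 22 C room.
print(load_temperature(125, 0.15, 22.0))  # 22 + 18.75 = 40.75 C
```

This is only as good as the reviewer's TR figure, and it assumes the figure was measured from the air entering the fins to the thermal interface, as described above.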

Thermal resistance contains the same information found in columns of idle and load temperature results for known processor over-clock settings (and thermal wattage). For practical purposes, it is also independent of ambient temperature, which must otherwise be stated in reviews that report only idle and load temperatures and other test conditions. You also don't need to know the test-condition thermal wattage of the processor -- if you can trust the thermal resistance measured by the reviewer -- because the index encapsulates that information as well.

Room ambients are important, because room temperature changes will affect all the temperatures measured for a system. If the room ambient rises 1C (or 1.8F) degrees, the idle and load temperatures measured for a heatsink should rise by an equal amount.

A heatsink that has a lower thermal resistance relative to another cooler is the more effective cooler. For instance, if one were choosing between a heatsink that showed a thermal resistance of 0.12 C/W and a heatpipe cooler with a thermal resistance of 0.10 C/W, one might prefer the latter as more effective. One might be assured that the second cooler would show lower processor temperatures under full load. And the differences in the second and third decimal places are not insignificant. Consider the simple formula:

TR = (T[c2] - T[c1]) / W

where W is the thermal wattage and T is the temperature in degrees centigrade measured at the two positions "1" and "2". If the thermal wattage is 120, 150 or 200W, multiplying it by the thermal resistance shows a significant impact on temperatures. A difference in TR values, even in the second decimal place, may mean a narrowing or widening of temperature differences by several degrees.
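To see how much a second-decimal difference matters, here is a quick sketch of the formula. The wattages and the 0.10/0.12 C/W pair are the hypothetical values from the paragraphs above; the 18C rise over ambient is likewise an invented example.

```python
def thermal_resistance(t2_c, t1_c, watts):
    """TR = (T[c2] - T[c1]) / W, in degrees centigrade per watt."""
    return (t2_c - t1_c) / watts

# A cooler that holds an 18 C rise (58 C vs. 40 C) at 150 W:
print(round(thermal_resistance(58.0, 40.0, 150), 3))  # 0.12 C/W

# Temperature gap between a 0.12 C/W and a 0.10 C/W cooler:
for watts in (120, 150, 200):
    gap_c = (0.12 - 0.10) * watts
    print(watts, "W ->", round(gap_c, 1), "C")  # 2.4, 3.0, 4.0 C
```

A mere 0.02 C/W spread widens to several degrees at realistic load wattages, which is why those decimal places are not insignificant.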

Most comparison reviews of coolers do not provide thermal resistance measures, but they do provide the equivalent: a set of temperature results under controlled conditions, so that readers have an adequate profile of performance. Ideally, two or more coolers would be compared using the same processor and motherboard, in the same case, with the same room-ambient temperature, the same processor clock settings, and the same stress-test software. Be suspicious of reviews that only report idle/load temperatures and fail to state the test conditions: thermal wattage (or over-clock settings), processor type, and room ambient.

Of course, thermal resistance will vary with the throughput of air through a cooler's fins in CFM -- cubic feet per minute -- which, in turn, rises with fan speed. Increases in fan speed may mean more noise. So some reviewers measure both the decibel level and performance at a "high" and "low" fan-speed setting. This would seem to be the case whether the reviewer reports thermal resistance, or the equivalent temperature and control-condition data.

In comparing two or more heatsinks/coolers, it shouldn't much matter whether thermal paste A or B was used, as long as the same paste was used for each cooler comparison test in the set. It shouldn't matter if the coolers are tested with a particular processor at only "stock" speed settings, as long as all the coolers tested are subjected to the same conditions, but a comparison of several coolers using a battery of different chosen clock-settings applied to each and every test would be useful to show the extent of over-clocking possible on one cooler versus another. And it shouldn't matter which stress-test software is deployed, as long as the same stress-test software is used for each and every heatsink or cooler in the comparison project.

There are other "items" which can be subjected to "thermal resistance scrutiny," like thermal interface pads, thermal grease and thermal paste. Water-cooling kits can be evaluated in terms of overall thermal resistance. And some critics may argue that testing cooler A in a package which includes a tube of TIM "a" should use that paste in a comparison with cooler B which includes a tube of TIM "b." That's a matter of personal preference, and DIY'ers who buy their own choice of TIM would also prefer a simplified comparison test between coolers using the same thermal interface material.

In addition to effectiveness, size, appearance, and noise level, ease of installation varies: some heatsinks go in easily, and others are more troublesome. Heatsink reviews usually address this issue.

Cooling performance may be the most important factor to me, but it is not the only consideration. Buyers may want the cooler to fit in their case a certain way, to weigh less than some limit in grams. They may want to avoid coolers that don't come prepackaged with fans, or which experience has shown to need lapping in order to bring the cooler to its highest level of performance. They may want a cooler that has a "bling-bling" appearance. They may want a cooler that costs less than some number of dollars. If the difference in performance between two coolers is narrow, or if the buyer isn't more than casually interested in how far he can over-clock his system, then he will make rational trade-offs of performance for these other factors.

And that's his business -- not mine.


HOW TO USE BENCHTEST REVIEWS TO CHOOSE A HEATSINK

I can make this short and sweet.

"Do not depend on a single review for your decision to purchase or use a heatsink."

This is probably a wise idea for buying decisions in general.

If a review provides no basis of comparison to any other products, the only value to be had from it is information that augments your understanding when you read other reviews.

Sometimes, we may suspect that a publisher only includes comparison products in tests when the manufacturer has purchased advertising. Suppose the focus of review X is "heatsink A." The review offers a comparison of products, and you notice that those products -- call them "heatsink B" -- had been previously reviewed by the same publisher, while heatsink A's advertisement appears in the current publication. But other products -- call them "heatsink C" -- which you know to be current, popular and offering stellar performance, aren't mentioned. Another source -- "review Y" -- that does not cover heatsink A may be more objective, and may compare one or more of the products also mentioned in the first review. So, for comparison purposes, "Y" may rank both "B" and "C."

The ranking of these competing products becomes a point of reference for comparing heatsink A with heatsink C.

If you can only find a single review with benchtests of some particular cooler, then your need to find a second or third review covering the same cooler decreases as the number of different coolers tested in the single review increases.

If you can't find information about the size, shape or other features of a heatsink or cooler, follow a link to the manufacturer's product page -- it should be there. Many heatsink manufacturers make it a policy not to publish their own thermal resistance or performance results, leaving that task to independent reviewers. So an objective measure of performance is less likely to be found at the manufacturer's own web-site. And if it were there, would you trust that information, without confirmation from other sources, in making your purchasing decision?