RussianSensation
Elite Member
Should water cooling be required? Are you saying that, if you want to buy an enthusiast GPU, you have to accept water cooling? It does not work in every computer case, and not everyone is ready to accept it. Yes, quite a few gamers WILL accept it, and it's a neat solution... but it shouldn't be a requirement to enter the enthusiast market. I find that to simply be taking things too far.
1. If a Hybrid WC is actually superior in noise levels and temperatures (which ensures better overclocking) and exhausts heat out of the case, then I have no problem with it being a "requirement" for a reference solution. It sure as hell beats the loud reference 7970/290X coolers or the thermal throttling mess that is the Titan Z.

2. There is no indication that AIBs won't ever make an air-cooled solution. It's possible we will see triple-slot MSI Lightning, Asus Matrix, etc. versions.
If their hardware is not mature enough to reach that performance level without requiring a custom AIO solution simply to stay within reasonable temp levels, they need to slow down until they can shrink it or reach higher efficiency, or they seriously risk letting Nvidia gain further market share.
You seem to think the AIO CLC is actually a negative, but for many it means getting a far superior cooling solution for not much extra cost. The whole point of flagship cards is to push performance to the absolute limit. In 2015 it will be a 300W R9 380X, and in 2016/early 2017 a $350 180W card will be as fast or faster. Anyone who cares about electricity costs and power usage can get a slower 980/370X or wait for Pascal, etc. What you suggest is for AMD to purposely gimp the 380X to 225-250W because 300W is too much for you.
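Just to put that claim in perspective, here's the implied efficiency jump (a trivial sketch; both wattages are the speculative figures above, not announced specs):

```python
# Implied perf/W gain if a $350, 180W card in 2016/early 2017 matches
# a 300W R9 380X from 2015 (both figures are speculation, not specs).
watts_380x, watts_future = 300, 180
print(f"Implied perf/W improvement: {watts_380x / watts_future:.2f}x")  # ~1.67x
```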
GM200 and the 380X target the high-end market, where people should have a case and PSU to support such products. I certainly don't remember the AT forum being this vocal about the 780Ti, a card that peaked at 269W in reference form, and at 286W on, say, the awesome EVGA Classified.

Also, don't compare AMD's TDP to NV's. They are not rated the same way. NV's TDP is more or less underrated marketing BS; the 780Ti and 480 easily exceeded their TDPs.
970 is rated at 145W TDP but reaches 190W+ at load. :sneaky:

Now, if the R9 380X is 30% faster than a 980 and a 980 is about 16% faster than a 970, the 380X works out to roughly 51% faster than a 970 (1.30 x 1.16 ≈ 1.51).
192W x 1.51 = 290W. Sounds reasonable to me, especially since in the context of overall system power, the 380X should be way faster than a 970 while total system power usage goes up by less than the performance increase.
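A quick back-of-the-envelope version of that estimate (the 192W load figure is the measured draw mentioned above; the 150W rest-of-system allowance and linear perf-to-power scaling are illustrative assumptions):

```python
# Rough perf/power scaling estimate using the figures above.
# Simplifying assumption: power draw scales roughly linearly with
# performance on the same process node.

r9_380x_vs_980 = 1.30     # claimed: 380X ~30% faster than a GTX 980
gtx_980_vs_970 = 1.16     # claimed: 980 ~16% faster than a GTX 970
gtx_970_load_w = 192      # measured 970 load draw (well above its 145W TDP)

perf_vs_970 = r9_380x_vs_980 * gtx_980_vs_970        # ~1.51x
est_380x_w = gtx_970_load_w * perf_vs_970            # ~290 W
print(f"380X vs 970: {perf_vs_970:.2f}x, est. draw {est_380x_w:.0f} W")

# System-level view: with a hypothetical 150W for the CPU, board, and
# drives, total system power rises by less than the performance gain.
rest_of_system_w = 150
increase = (est_380x_w + rest_of_system_w) / (gtx_970_load_w + rest_of_system_w)
print(f"System power: +{increase - 1:.0%} for +{perf_vs_970 - 1:.0%} performance")
```

With those numbers, system power goes up about 29% for a 51% performance gain, which is the point being made.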
But if they only went up 7W yet need an AIO now, that tells you the base was too high in the first place. And it seems the base measurements were taken at a 95°C reference, if I have found the right sources, which is frankly ridiculous beyond argument IMO.
I am sure AMD could have made an MSI Lightning 290X-style cooler and gotten similar 75°C load temps, but still chose the Hybrid WC for a few reasons:
1) It might have cost less to source a 120mm unit from Asetek;
2) You still exhaust the heat out of the case;
3) This solution is better for dual or even triple CF. How would you fit 3x triple-slot MSI Lightnings in one case?
The reason the R9 290X runs at 94-95°C is that the reference cooler is inefficient. You already know that after-market R9 290X cards like the MSI Lightning or Sapphire Tri-X / Vapor-X run at 70-75°C.
I WANT AMD to make a damn good card, and this is in the right direction if current leaks are accurate. However, if the solution still requires an AIO cooler, we're treading in the wrong direction. No single-GPU card should ever REQUIRE an AIO solution.
I am sure the same was said by hardcore Porsche purists before the 911 abandoned air cooling for water-cooled engines. Now look at where the 991-generation 911 is today in sports car performance compared to 911s from 2-3 generations ago.
Look at how the AIO CLC market for CPUs took off, despite the Noctua NH-D15 and Phanteks PH-TC14PE beating a lot of the popular AIO CLC solutions in noise levels vs. performance. It's been a clear trend over the last 5 years that CLCs have taken over most of the high-end CPU heatsink market. So why can't we have 300W CLC flagship GPUs, when a CLC allows flagship cards to be made far above the usual 250W TDP mark? I wouldn't even mind 350-400W flagship cards with CLC. Give the market more choices/options. :thumbsup:
Dual-GPU? Sure, why not, that can be the cost of two GPUs on a single PCB sometimes. But I can't think of a reason why ANY flagship *standard market* computer part should require *extreme* cooling measures. Make no mistake: no water cooling system, of any variety, is anything but *extreme* in the consumer market.
I disagree. The Kraken G10 GPU bracket and AIO CLCs are not extreme in the DIY market. Full custom loops, LN2, and vapor phase-change cooling are your top 3-5% of the market.
Frankly, I can't even work with them in my current case, not without being forced to go with an AIO cooler on my CPU too. This is especially true if I want to go SLI/Crossfire (which may be required for multi-monitor or 4K at ultra settings for a while).
Surely anyone who is considering $1000-1300 of dual flagship GPUs can buy a new $200 case that will last 5+ years? There are always other options: waiting for 14/16nm Pascal, getting a 250W air-cooled GPU, waiting for AIB after-market solutions, or getting 980 SLI / 370X CF instead.
You also need to take into account that if the R9 380X and GM200 are really 35% or so faster than a 980 at stock, they won't be too far off 970 SLI. Sure, they'll probably be 15-20% slower, but a lot of people will take a single card at 80-85% of 970 SLI performance to avoid dealing with CF/SLI profiles.
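Roughly sketched (the 80-90% SLI scaling range is a hypothetical assumption, not a measurement):

```python
# How close does a single ~1.35x-of-980 flagship get to GTX 970 SLI?
# Assumption: SLI scaling of 80-90% is a hypothetical illustrative range.

flagship_vs_980 = 1.35
gtx_980_vs_970 = 1.16
flagship_vs_970 = flagship_vs_980 * gtx_980_vs_970   # ~1.57x a single 970

for sli_scaling in (0.80, 0.90):
    sli_970_vs_970 = 1.0 + sli_scaling               # 970 SLI vs one 970
    print(f"{sli_scaling:.0%} SLI scaling: single card is "
          f"{flagship_vs_970 / sli_970_vs_970:.0%} of 970 SLI")
```

That lands at roughly 82-87% of 970 SLI, consistent with the 80-85% ballpark above.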
I like the idea, but not the requirement. Get the AIB partners to release multiple solutions, a reference AIO cooler and whatever cooler they want, or, heck, two different "reference" coolers, leave it to the consumer to decide which they want. Do it like EVGA: they have some of their cards available in both ACX and the blower style.
I am not sure why you ruled out the possibility of Vapor-X, MSI Lightning, Asus Matrix, etc. versions, considering cards like the 780Ti and 290X had them.
I did. That's where the 70°C+ figure came from. And yes, if that is an accurate measurement, the coolers as they are should be able to handle that.
But those are not good temps for those cards to run at, plain and simple. They draw more power at that temp, which means they produce more waste heat.
I don't claim to know better than an AMD, Intel, or NV engineer. My laptop's 3635QM has been pushed at 99-100% load nearly 24/7 in distributed computing since February 2013, with maximum load temperatures of 93-94°C and average load temperatures of 87-89°C. The chip is rated at 105°C by Intel, and I expect it not to fail up to that level. Whether this chip runs at 15°C or 88°C makes no difference to me unless the heat actually impacts the laptop's keyboard, which on mine it doesn't to any degree that matters.
Maximum GPU temperatures:
GTX 280 = 105°C
GTX 580 = 97°C
GTX 680 = 98°C
780Ti = 95°C
GTX 980 = 98°C
The extra power usage at hot temperatures is probably a small factor overall, much less important than the impact on Boost clocks. However, if the GPU or CPU is rated at 95-105°C, I have no problem whatsoever running it 24/7 at 100% load for 2-3 years at 85-90°C. I've never had any GPU fail on me due to high temps, and I've been running distributed computing for years on overclocked and overvolted NV/AMD cards. If the components are well made, they will handle 85-90°C loads and not fail. VRMs can take even more.
Say a GPU or CPU were made of exotic materials that hypothetically allowed it to run at 200°C and function well. I bet most PC gamers would think that running such an ASIC at 125-150°C was ludicrous, but in reality that's only psychological. If, from an engineering point of view, a chip is rated to 95°C as a perfectly acceptable operating temperature, then it's fine. Tonga chips in the iMac Retina often hit 100-103°C.
In fact, the R9 M295X doesn't start thermal throttling in the iMac Retina until about 106°C.
http://forums.macrumors.com/showpost.php?p=20590035&postcount=554
This idea on our forum among experienced enthusiasts that it's somehow bad to run GPUs at 80°C or 90°C is just a cautious opinion, far detached from the engineering point of view. High temperatures do matter when they impact the Boost clock, but if a chip's clocks aren't impacted until 100°C, they don't matter.
Also, you made a point that blowers are often preferred, but while that might be true for mini-ITX, it's not true for mid-size to large cases. Even 980 SLI reference blowers start throttling without a custom fan curve, and applying one raises their noise levels.
"We found that with the default settings on GeForce GTX 980 SLI the lowest clock rate it hit while in-game was 1126MHz. That clock speed is actually below the boost clock of 1216MHz for GTX 980. This is the first time we've seen the real-time in-game clock speed clock throttle below the boost clock in SLI in games. It seems GTX 980 SLI is clock throttling in SLI on reference video cards." ~ HardOCP
I know many on AT forums to this day won't admit the inferiority of blowers for high-end gaming systems, but as far as single flagship GPUs go in scientific benchmarks, blowers get destroyed in noise levels and performance by an after-market cooler like the Windforce 3X on a 250W TDP flagship card:
"In the automatic regulation mode, when the fans accelerated steadily from a silent 1000 RPM to a comfortable 2040 RPM, the peak GPU temperature was 78°C. It is about 20°C better than with the reference cooler and much quieter, too! Thats just an excellent performance for a cooler of the worlds fastest graphics card." ~ Xbitlabs
If a 120mm rad solves temperatures, noise levels, and exhausting hot air out of the case all at once, I think it should be the future of reference cooling for flagship 250-300W cards. As I said, for those who want mini-ITX/micro-ATX systems, there will always be mid-range 160-180W cards.
------
To reiterate on GM200 vs. 380X: I think NV will have the edge at 1080p (lower CPU overhead / better CPU multi-threading); cases where 4GB of VRAM is exceeded should be investigated at 4K in SLI/CF; and overclocking headroom may give NV a big edge on cards like the Classified. If GM200 operates at 250W and can overclock 15-20% on stock voltage or with a minor voltage bump, I suspect it will be much harder for the 380X to compete if it's already a 300W chip under water. We know the 295X2 wasn't a stellar overclocker because AMD pushed it near the max. That's where Maxwell's efficiency could become the trump card, as we've seen the 750Ti and the 960/970/980 are all great overclockers.
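As a rough illustration of that headroom argument (all figures are the speculative ones from this post, and ~1:1 clock-to-performance scaling is a simplification):

```python
# Overclocking headroom illustration with this post's speculative numbers.
# Simplifying assumption: performance scales ~1:1 with core clock.

stock_vs_980 = 1.35          # both flagships assumed ~35% above a stock 980

for oc in (1.15, 1.20):      # GM200 overclocked 15-20% on near-stock voltage
    print(f"GM200 +{oc - 1:.0%}: {stock_vs_980 * oc:.2f}x a stock 980")

# A 300W 380X already pushed near its limit (like the 295X2) has little
# headroom left, so it would stay near 1.35x while GM200 pulls ahead.
```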