Would have loved to have seen OC'd 390Xs in CF as a worst-case scenario, but with a single card well short of 500W, I can't imagine a CF setup being over 800W.
Very interesting article.
Ya, you are going to need heavily overclocked and overvolted GPUs plus an overvolted, overclocked 6-10 core CPU. A lot of PC gamers make the mistake of taking maximum TDP or power consumption figures and manually adding them up. It usually goes something like this: "My CPU uses 150W, and each of my cards uses 250-300W, so at peak my system will use 150W + 300W x 2." No, because there is no 'mainstream' real-world application that uses 100% of the CPU and 100% of each GPU simultaneously. The only programs that can recreate such scenarios are distributed-computing workloads. The guys who run distributed computing on their CPUs + GPUs (say, Folding@home, SETI@home, PrimeGrid, MilkyWay@home, etc.) are well researched and committed enough to the cause to know well in advance the extreme power supply requirements necessary for system stability under such stress.
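To make the fallacy concrete, here is the naive spec-sheet math spelled out (a throwaway Python sketch; the wattages are the hypothetical figures from the paragraph above, not measurements):

```python
# Naive "add up the spec sheets" estimate from the example above.
cpu_tdp = 150      # W, hypothetical CPU figure from the post
gpu_tdp = 300      # W, hypothetical per-card figure from the post
num_gpus = 2

naive_peak = cpu_tdp + num_gpus * gpu_tdp
print(f"Naive worst-case sum: {naive_peak} W")  # 750 W

# Real games never pin the CPU and every GPU at 100% simultaneously,
# so the wall meter reads well below this sum; only distributed-computing
# loads (Folding@home etc.) come close.
```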
Hardware Canucks tested R9 295X2 with this system:
Processor: Intel i7 4930K @ 4.7GHz
Memory: G.Skill Trident 16GB @ 2133MHz 10-10-12-29-1T
Motherboard: ASUS P9X79-E WS
I suppose if someone is doing video rendering/encoding on a 6-10 core CPU while playing a game, then it's possible to hit 99-100% CPU usage, but you aren't going to get 100% scaling across 2-3 cards in SLI/Tri-SLI in games either way. Also, there is a complex relationship between CPU/GPU bottlenecking and driver efficiency that itself affects how power usage splits between the CPU and the GPU in a build. PC users just need to be aware of what exactly they are doing with their PC.
:thumbsup:
Something else I want to bring up: how, IMO, the entire PC reviewing community is not measuring performance/watt from an end user's/gamer's point of view. Instead, they use what I call the "Engineer's performance/watt" perspective. What does that mean? An engineer designs a particular component with certain perf/watt characteristics and goals. But an end user/gamer does not use just one component (tool, for simplicity's sake) to achieve the end result. In practice, that means that when we use our PC for some application, whether it's rendering, video games, or watching movies, perf/watt to us end users is:
End User's (or Gamer's) Performance per Watt = Measured performance or work result ÷ Total system power usage
This is a very important concept. Here is why.
Any individual component may have superior perf/watt to another, but that alone doesn't give the end user the full picture, since we cannot run games on just the CPU, or just the GPU, or just the motherboard, or just the HDD/SSD. We need all of these components together to generate a visual frame.
i7 4790K system (relative 1080p performance ÷ total system power = End User's perf/watt):
GTX 950 = 65% / 199W = 0.326
GTX 960 = 75% / 216W = 0.347
R9 280 (assuming ~270X performance) = 75% / 231W = 0.325
R9 380 = 88% / 260W = 0.338
R9 380X = 96% / 300.8W = 0.319
vs.
R9 390 (assuming ~290X performance) = 133% / 341W = 0.390
http://www.techpowerup.com/reviews/ASUS/R9_380X_Strix/23.html (using 1080P as just an example to illustrate the point).
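For anyone who wants to check the arithmetic, here is a minimal Python sketch that recomputes these ratios from the relative-performance and total-system-power figures listed above (the data is from the quoted review; the structure and names are mine):

```python
# End User's perf/watt = relative 1080p performance (%) / total system power (W),
# using the figures quoted above.
systems = [
    ("GTX 950",  65, 199),
    ("GTX 960",  75, 216),
    ("R9 280",   75, 231),    # assuming ~270X performance
    ("R9 380",   88, 260),
    ("R9 380X",  96, 300.8),
    ("R9 390",  133, 341),    # assuming ~290X performance
]
for card, perf, watts in systems:
    # Rounds to 3 decimals, so the 950 shows 0.327 vs. the truncated 0.326 above.
    print(f"{card}: {perf / watts:.3f}")
```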
The whole narrative about NV "blowing AMD out of the water in perf/watt" is often Engineer's Marketing, because in practice you cannot just take a stand-alone graphics card, put it on your desk, and launch a game on it, can you? The difference in perf/watt among Core i7 4790K rigs paired with a 950, 960, 280, 380, or 380X is practically non-existent. Surprised?
As you can see, the perf/watt the end user or gamer experiences when using the system is NOT the perf/watt of any individual component. So why do 100% of professional reviewers keep feeding us the Engineer's perf/watt metrics? They're mostly irrelevant for choosing a proper PSU or for figuring out your total system efficiency from an end user's point of view -- i.e., assessing how much performance we are getting in an app for a given amount of total system power.
What reviewers need to focus on instead is total system power usage (which helps me calculate my exact cost of running an application) and the End User's Perf/Watt metric, which assesses the efficiency of the entire system in a particular application, because a PC is a collection of parts, not any one part, that must work together to produce some measured end result. They need to do this for whatever application they are testing. If they are trying to assess the most efficient rig for rendering or encoding, they need to test the entire system's power usage, not just the CPU's. If they are trying to assess the most efficient gaming rig, they need to test the entire system's power usage and calculate the End User's Perf/Watt ratings accordingly.
It's time we stand up and call them out on this marketing BS and start asking them to include the End User's perf/watt in benchmarks. If they choose to keep the Engineer's perf/watt, then fine, include it for the NV/AMD/Intel engineers and marketers among the readers, but give the rest of us the useful results that matter!
Most importantly, using my suggested approach, a gamer would no longer have to guess or estimate how much total system power his i3/i5/i7 or AMD rig will use with any modern GPU, since the data will be there. Think about it: so many gamers are freaking out over whether their i3/FX-8300 will work with a 400W PSU when adding an R9 280X/290/980 Ti. Well, with total system power usage, you never have to guess, and with the End User's Perf/Watt benchmark, you always know the most efficient gaming rig.
If you just look at videocard perf/watt, it does not conclusively tell you what the most efficient gaming rig is, while if you just look at stand-alone videocard power usage, it doesn't tell you the actual total system power usage either, which makes it much harder to estimate the bare minimum PSU required.
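As a rough illustration of how a total-system number would translate into a PSU floor, here is a back-of-the-envelope sketch (the ~88% efficiency and 30% headroom figures are my own assumptions, not anything from a review):

```python
# Turn a measured wall draw into a rough minimum PSU rating.
def min_psu_watts(wall_draw_w, efficiency=0.88, headroom=0.30):
    # Wall draw includes the PSU's own conversion losses, so the actual
    # DC load on the PSU is lower than the wall reading.
    dc_load = wall_draw_w * efficiency
    return dc_load * (1 + headroom)   # leave margin for spikes and aging

print(round(min_psu_watts(341)))  # the R9 390 system above -> ~390 W
```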
For years I've been trying to explain this, and not one review site is doing it as far as I am aware.
There are many crap PSUs rated at 1000W that can't consistently supply 600W from the 12V rail, so yeah, we need to see the video card wattage up there.
Right, but isn't this a function of older generations of PSUs? IIRC, even the Bronze PSUs being sold today as mainstream units are fairly good on this.
And that obscures the fact that you can get high-quality PSUs like the EVGA G2 series for much less today than even five years ago, so a lot more mainstream buyers are getting awesome PSUs than before. They're no longer unreachable for the mass market.
But maybe I'm wrong? I'd love for you to expand on this.
The problem with this concept is you need to test lots and lots of games to get a conclusive result.
I cheaped out on my last PSU, a Lepa G1600, and it failed after 2.5 years of use. Got myself an EVGA SuperNOVA 1600 T2 now and it's rock solid.
http://forums.anandtech.com/showthread.php?t=2455424
Great article, thanks for sharing. I do think people go way overboard with stating power requirements and I hope this clears it up.
The Lepa G1600 isn't a cheap PSU at all. It's pretty much the exact same unit as the Enermax MaxRevo 1500 and was one of the best 2HP supplies available a few years ago. There are better Platinum and Titanium units available now, but they're still pretty rock-solid units IMO. Anecdotal, I know, but I've had a couple running on 240V, outputting ~1500W 24/7 since 2012, and they're still working.
I confess that I am that guy who only buys the cheapest power supply possible to satisfy the minimum requirements. I don't believe I have ever spent over $60 on a PSU. In over 20 years, not one failed PSU, fried component, or fire.
I am not an LN2 or quad-SLI/CF type of person either, so my use cases aren't extreme. My most extreme system was probably my OC'd Q6600 with GTX 470 SLI. Since then, I just do a simple air OC and a single OC'd GPU.
It has a 3/5 rating on Newegg, and with the experience I've had over the past 3 years I would give it a 2. This is my second RMA. It has been shutting down for a while now, even under light load, and it's a whiny unit.
Didn't like it much except for its small form factor.
The EVGA SuperNOVA 1600 is freaking awesome and solid.
I did measure my total system power consumption (minus the monitor, obviously) the other day with a watt meter and was surprised by its relatively moderate power draw.
My system:
Core i7 3770 "Ivy Bridge" w/o OC
16GB RAM
MSI R9 290 4G w/o OC
Bunch of HDDs and other crap
Idle desktop usage was around 65-70W.
Unigine Valley (Ultra settings, 1080p, 8x MSAA, 60 fps cap via RTSS):
~320-330W @ 100% GPU usage @ stock clocks (977MHz)
~250W @ 850MHz downclock, and it still just barely manages the same framerate.
Fallout 4 and Unreal Tournament (UE4) test with 60 fps cap and High-to-Ultra settings:
~250W @ stock clocks (977MHz)
~220W @ 850MHz downclock
All at buttery-smooth 60 fps. I'm happy with that.
So I'm still lost, then. Why do people recommend at LEAST 800W+ for CrossFire with the 290?
Shouldn't I be fine with a ~700W PSU? I want to try; I just don't want to get unlucky and have my whole system die.
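For what it's worth, here is a rough extrapolation from your own wall readings (the assumption that a second 290 adds about the same increment as the first is mine, not a measurement):

```python
# Back-of-the-envelope CF estimate from the wall readings posted above.
idle_w = 70              # measured desktop idle, at the wall
valley_stock_w = 330     # measured Unigine Valley load, stock clocks

one_gpu_increment = valley_stock_w - idle_w     # ~260 W attributable to the 290
cf_estimate = valley_stock_w + one_gpu_increment
print(f"Estimated CF wall draw: ~{cf_estimate} W")  # ~590 W

# ~590 W at the wall is roughly 520 W of DC load on an ~88%-efficient
# unit (an assumed figure), so a quality 700 W PSU leaves some headroom;
# the 800 W+ advice mostly bakes in margin for cheap units and heavy OC.
```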
To be fair, even the AX1500i has 4/5 eggs, and that's pretty much considered the best 1500W+ supply available. I won't disagree with you that the new Super Flower units are awesome, just disagreeing that the G1600 is a cheap PSU. It really isn't, and it was a better unit (IMO) than many of the other cheaper big PSUs at the time, like the Rosewill Lightning 1300 and Thermaltake Toughpower. Sucks that you've had so much trouble with it, though.