
NV 12VHPWR issues revisited

I keep my 4090 under 320W to be on the safe side. It will be interesting to see what they do with the 5090. If they do nothing, we're insane if we let that pass, and the news and review sites will also be guilty if they don't point out that the 12-pin connector is failed tech.
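The safety-margin intuition behind a lower cap can be put in rough numbers: the 12VHPWR connector carries all the current through six +12V pins, so average per-pin current scales directly with the power limit. A back-of-the-envelope sketch (the ~9.5A per-pin rating is the commonly cited Molex Micro-Fit+ figure and is an assumption here, as is the even current split):

```python
# Rough per-pin current estimate for a 12VHPWR connector.
# Assumptions: 12V rail, current split evenly across the six +12V pins,
# and a ~9.5A per-pin rating (commonly cited for Molex Micro-Fit+).
RAIL_VOLTS = 12.0
POWER_PINS = 6
PIN_RATING_AMPS = 9.5

def per_pin_amps(watts: float) -> float:
    """Average current per +12V pin at a given board power draw."""
    return watts / RAIL_VOLTS / POWER_PINS

for limit in (320, 450, 500, 600):
    amps = per_pin_amps(limit)
    margin = 100 * (1 - amps / PIN_RATING_AMPS)
    print(f"{limit}W -> {amps:.2f}A per pin ({margin:.0f}% margin)")
```

Even at the full 600W the average per-pin current stays under the assumed rating, which is why the reported failures are usually attributed to poor seating or strain concentrating current on fewer pins rather than to the nominal load itself.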
Mine is set to use all 600W. Although I doubt it gets near that in most titles.
 
I found that turning it all the way up to 600W seems to make it use more power without any increase in clocks or framerates. I keep it at 500W long term, most games are 300-400W and only a few RT games that can maintain high framerates at native 4K hit 500W, like Metro Exodus.
 
I've seen 500W power usage playing Cyberpunk 2077 at 1440p with RT turned on. Most games run at about 350W or less though. For my second AM5 system, I have a 4070 Ti Super in it, but that card doesn't pull enough wattage to ever be of meltdown concern for the connector.

Edit: One thing worth noting is that both systems' power supplies have native 12VHPWR connectors, so there are no 8-pin adapters being used. It's all straight pin-to-pin connections with the power supplies.
 
My 4080 Super is fine so far. Granted, it's capped at a 320W stock TDP, so it's much less likely to run into the issue.

I am waiting on a cablemod 12VHPWR cable set for my eVGA P2 PSU to arrive to swap it over to my main rig for the summer. I'm currently playing less demanding titles which generally hit FPS cap for my monitor (240Hz) and the monolithic GPU design is superior for efficiency in those <50% load scenarios. With outdoor temps exceeding 100°F and the HVAC working overtime I'd considered just swapping to my laptop for a bit, but I like my 32" monitors... and the laptop GPU probably won't cut it for driving those.
 
Have you checked how much the fps improvement is in this game going from 450W to 500W?

I didn't check, but the boost clocks are the same, 2950-ish, so it shouldn't matter. These cards are limited by the chip lottery; extra power or cooling doesn't make much difference. The card just uses more power at the higher setting while the clocks stay the same.

One odd place where I get 500W (or 600W if I set it that high) is in certain underwater areas of Trine 5. This is a lightweight, non-RT game but somehow the water stresses the card. The rest of the game uses 250-300W.
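The "more power, same clocks" behavior above tracks with how dynamic power scales, roughly with V²·f: at the top of the voltage/frequency curve, the last few percent of clock cost a disproportionate voltage bump. A toy illustration (the voltage and frequency points below are invented for illustration, not measured 4090 values):

```python
# Toy dynamic-power model: P ~ C * V^2 * f (constant C folded away).
# The specific voltage/frequency points are invented for illustration.
def rel_power(v: float, f_mhz: float, v0: float = 1.00, f0: float = 2800) -> float:
    """Power relative to the (v0, f0) operating point."""
    return (v / v0) ** 2 * (f_mhz / f0)

# A ~5% clock bump that needs ~7% more voltage costs ~21% more power:
print(f"{rel_power(1.07, 2950):.2f}x power for {2950 / 2800 - 1:.0%} more clock")
```

Once the silicon can't hold higher clocks at any sane voltage, raising the power limit just lets the card burn more watts at the same operating point, which matches the observation above.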
 
Could be a bug. Report it and you could win a bug bounty!
 
I am waiting on a cablemod 12VHPWR cable set for my eVGA P2 PSU to arrive to swap it over to my main rig for the summer.
Didn't they recall all of those due to the impossibility of supporting the connector? Or is there a new version of the cable?
 
Update: The clearance between the side glass panel of my Lian Li O-11 Air and the 12VHPWR connector was not sufficient. While it did not dislodge enough to fry, it appears that, due to strain on the connector (despite cable combs and avoiding bends near the connection itself), it was intermittently tripping some type of built-in protection, causing all my fans to spin to 100% and hard-locking the system. I have since removed the side panel on the case to allow zero strain on the cable, and it seems okay so far.

I'm going to fault nVidia and AIBs on this one. There are MULTIPLE ways this could have been avoided.
1) slightly recessed connector/cutout on PCB
2) angled connector on PCB
3) connector facing down on PCB, or on the end, or other non-traditional location

Any of these would reduce strain near the connector endpoint, which IMO is a setup for failure. I knew going in to be careful, and it just isn't possible in my current setup to relieve the cable strain enough to guarantee a solid connection.
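The strain problem described above boils down to a simple clearance check: the plug needs its own height above the card plus a straight run of cable before any bend. A rough sketch (the ~35mm minimum straight-run figure is widely cited cable-handling guidance; the plug height is a placeholder you'd measure on your own card and case):

```python
# Sanity check: does the case leave room for a 12VHPWR cable to exit
# straight before bending? The ~35mm straight-run figure is widely cited
# guidance; the plug height is a placeholder -- measure your own hardware.
MIN_STRAIGHT_RUN_MM = 35   # recommended unbent cable length past the plug
PLUG_HEIGHT_MM = 10        # placeholder: connector body above the card edge

def clearance_ok(card_top_to_panel_mm: float) -> bool:
    """True if the side panel leaves room for the plug plus straight run."""
    return card_top_to_panel_mm >= PLUG_HEIGHT_MM + MIN_STRAIGHT_RUN_MM

print(clearance_ok(30))  # False: panel too close, cable gets strained
print(clearance_ok(50))  # True
```

Any of the PCB-side fixes listed above (recessed, angled, or relocated connector) effectively shrinks the left-hand side of that inequality, which is why they would help.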
 
It's just bad design; it doesn't take into account that bigger cards need bigger cases too.

Also the 4080 Super is wider than the 4070 Super so that would mean more clearance is needed. It was manageable with the 4070 Super in my Thermaltake H18 but that’s only because it was a 4070S Inno 3D model, which is a lovely small card.

Any of the 4070 Ti cards and up would be a terrible pain to install and maintain in my V18. They are so wide and big.
 
5090 TDP raised to 600W.

Who wants to bet that some OEMs will ship 450W cables in some of their boxes? Or that even some 600W cables will get burned when running FurMark on these GPUs? Can Nvidia manage to release the most expensive consumer GPU with zero drama?
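On the "450W cables" point: 12VHPWR cables advertise their rating to the GPU through two sideband sense pins, and the card is supposed to cap its draw to what the cable reports. A simplified sketch of that handshake (per public descriptions of PCIe CEM 5.0 / ATX 3.0; only the two unambiguous states are encoded here, since the two mixed states select 450W and 300W and their exact pin assignment is omitted rather than guessed):

```python
# 12VHPWR sideband sense pins advertise the cable's power capability
# (PCIe CEM 5.0 / ATX 3.0). Both pins grounded signals the full 600W;
# both floating signals the 150W minimum. The two mixed states select
# 450W and 300W -- their pin assignment is deliberately omitted here;
# check the actual spec before relying on this table.
SENSE_TO_WATTS = {
    "both_grounded": 600,
    "both_open": 150,
}

def initial_power_cap(sense_state: str) -> int:
    """Power the card may draw based on the advertised sense state
    (simplified); unknown states fall back to the safest 150W level."""
    return SENSE_TO_WATTS.get(sense_state, 150)

print(initial_power_cap("both_grounded"))  # 600
print(initial_power_cap("both_open"))      # 150
```

So a 450W cable in a 600W card's box shouldn't burn by itself: if the sense pins are wired honestly, the card should simply refuse to draw the full 600W through it.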
 
If that's a concern, then buy your cables from a third party. My power supply company sold replacement cables, so I bought one. I had no issues with the cables that came with my PSU, but this calmed my concerns.
 
Have we officially heard the TDP is 600W? All I've seen is leaks saying the cooler design is for 600W. We heard the same 600W cooler leaks ahead of the 4090's launch.
 
Nothing's official till Nvidia announces it. But multiple leaks are pointing at 600W.
Yeah, but how are these leaks different from the 4090 600W leaks? I suspect it might be AIBs and suppliers being told to design a cooling solution to handle 600W, just like what they were told for the 4090.
 