....And then it shows a graphic presenting the 3070 as the most efficient solution out there, with its simple board and generic RAM... ok lol
How much does PWR_SRC power draw say? I assume that's the memory power.
It's not uncommon at all for the smaller (or more cut-down) dies to be more efficient. It's also not the most efficient solution; that's actually the RX 6800.
The point I was making, though, is that the performance-per-watt testing only shows a slight advantage for the 3070. If the board/memory power consumption were such a huge problem, you would expect the 3070 to have a huge lead in performance per watt, but we don't see that: it has a very small lead, which, again, is not uncommon for the lower-tier card. So if the 3090 is burning all this power at the board and memory level, above and beyond what is normal for a modern graphics card, why is it so close to the 3070 in efficiency? How much power does the 8 GB of GDDR6 on the 3070 consume? What are its board-level losses like?
I haven't owned an NV card since Pascal, but PWR_SRC I believe relates to the power regulating circuitry on the board, not the memory. If someone knows for sure they can chime in.
GPU-Z says PWR_SRC is the supply to the laptop! (I'm assuming that it means the power-regulating circuitry or suchlike, as you say.)
The important ones are the ones I highlighted on my second screengrab.
Note, I'm just showing the results I found on my board with my quick test. I'm not an expert, and I assume the numbers GPU-Z shows are reasonably correct.
But these tests barely even fill a quarter/third of the GDDR6X capacity of the 3090, so they are not a good indicator of the power draw of 24 GB running at full tilt.
I'm thinking they are not, or at least the interpretation is not. If you add up all of the power consumption numbers, it goes way above the reported board power consumption. I'm guessing that the way Ampere sensors report is different from previous gens and GPU-Z hasn't been properly adjusted yet (partially evidenced by the PWR_SRC description).
I make my own assets and visualizations/games in UE / C4D / Blender etc. Unoptimized assets can kill anything - I can bring my 3090 down to 1 FPS! LOL - and you can have 3 games on 3 screens for the win!

What are you testing with that loads the 3090 VRAM so much more than modern 4K games?
Also, are you overclocking your RAM?
I have studied the GPU-Z results and I think they make sense when you understand them. I don't think you can add them all up, though - because, as you say, it goes well above 350 W. Some are subtotals, I believe, like PWR_SRC.
Maybe, but it quite clearly states total power, GPU power and RAM power, and that's all I was really interested in.

The problem is understanding them. If you can't add them all up, then it's hard to say what is actually using power where, because if you add just the GPU chip power and what is supposedly memory power, you come well short of the board power. But if you then add the PWR_SRC reading, it's well above board power. So is there overlap between sensors? Should some amount of some sensors be added to others to be equal with earlier-generation readings? It'd be nice to see the HWinfo numbers if @JoeRambo is correct that they seem to add up better.
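For anyone who wants to test the overlap theory on their own card: GPU-Z can log its sensors to a text file, and a few lines of Python are enough to sum the individual rails per sample and compare them against the reported board power. A minimal sketch below, assuming the default log name and typical column labels - both are assumptions, so check them against the header row of your own log, since they vary by GPU-Z version and card:

```python
import csv

# Assumed filename and column labels -- adjust to match your own GPU-Z log.
LOG_FILE = "GPU-Z Sensor Log.txt"
RAIL_COLUMNS = ["GPU Chip Power Draw [W]",   # core only
                "MVDDC Power Draw [W]",      # memory rail (if reported)
                "PWR_SRC Power Draw [W]"]    # the disputed sensor
BOARD_COLUMN = "Board Power Draw [W]"

with open(LOG_FILE, newline="", encoding="utf-8", errors="replace") as f:
    rows = list(csv.reader(f))

# GPU-Z pads its header fields with spaces, so strip them before indexing.
header = [name.strip() for name in rows[0]]
idx = {name: i for i, name in enumerate(header)}

for row in rows[1:]:
    try:
        rails = [float(row[idx[c]]) for c in RAIL_COLUMNS]
        board = float(row[idx[BOARD_COLUMN]])
    except (KeyError, IndexError, ValueError):
        continue  # missing column or blank sample
    # If the rails were truly independent, their sum should not exceed the
    # reported board power; a large positive gap hints at double counting.
    print(f"sum(rails) = {sum(rails):6.1f} W   board = {board:6.1f} W   "
          f"difference = {sum(rails) - board:+6.1f} W")
```

If the difference column is consistently large and positive at full load, that would support the idea that PWR_SRC is a subtotal rather than a separate rail.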
I don't think the workload matters so much - the card is rated at 350W max, and I created a workload to push everything to 100%. I reached a max power draw of 350W and saw the ratio between GPU and RAM power draw when operating at max, and clearly the RAM has the capacity to draw considerably more juice than the GPU when both are flat-out (at stock).

With such an extreme and unusual workload it's hard to say what the real consequences are without similar tests on other cards, at least to the point they are able to with their smaller VRAM. If you are running Blender and more 3D-rendering-type loads, behavior may be very different from what happens during gaming for each card.
I can't imagine there would be a scenario where I would get higher power draw on either the GPU or the RAM by reducing the power draw of the other component(s), so I think my test is reasonably reliable.
Yes, but is it really what you think it is? That's the question. Are you able to check HWinfo to see if it shows the same results?
Edit: Your GPU-Z screenshots also show only 6 GB of VRAM being used, well within 4K gaming usage, so it's not a VRAM usage issue here. There's something else going on, and I'm guessing it has far more to do with the sensors being misinterpreted than with actual power usage.
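As a cross-check that doesn't depend on how GPU-Z labels anything, the driver's own power counter (the same one nvidia-smi reads) can be polled while the stress workload runs. A rough sketch using the NVML Python bindings is below; the caveat is that NVML only exposes total board power on GeForce cards, so it can't split GPU versus VRAM, but it will at least show whether GPU-Z's board power figure agrees with the driver:

```python
# Poll driver-reported power, clocks and utilization once per second
# while the stress workload runs in another process.
# Requires: pip install nvidia-ml-py (imported as pynvml)
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)   # first GPU in the system

for _ in range(30):                             # ~30 seconds of samples
    power_w = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0   # mW -> W
    core_mhz = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_GRAPHICS)
    mem_mhz = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_MEM)
    util = pynvml.nvmlDeviceGetUtilizationRates(handle)
    print(f"{power_w:6.1f} W   core {core_mhz} MHz   mem {mem_mhz} MHz   "
          f"gpu {util.gpu}%   mem-ctrl {util.memory}%")
    time.sleep(1)

pynvml.nvmlShutdown()
```

If this total tracks GPU-Z's board power but not the sum of its individual rails, that again points at the per-rail interpretation rather than the card actually misbehaving.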
If the board power and VRAM power were so bad (compared to other modern graphics cards), you would expect the 3070, with GDDR6 (non-X) and a much simpler PCB and components, to be much more efficient than a 3080 or 3090, but that doesn't seem to be the case?
I DL'd this just now - I'm not familiar with it, but I think it's showing the same info.
I'm not really sure what you're trying to prove. The 3080 only has 10 GB of GDDR6X and isn't using twice as many chips, and the 3070 has its clocks pushed a little further than the 3080, so you shouldn't expect to see much of a difference between the two.
You'd want to include a 3090 in there, since its clocks are the lowest and it has twice as many RAM chips as a 3080.
The efficiency on Ampere is only bad because Nvidia did what AMD had been doing for several generations and pushed the cards to the limits of the silicon, which makes them guzzle power.
Based on testing from numerous websites and forum users the power draw can be cut dramatically with a very small decrease in clock speed and an accompanying under-volt.
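For anyone wanting to reproduce that kind of result quickly: capping the driver power limit isn't the same as a proper undervolt (that still needs something like Afterburner's voltage/frequency curve editor), but it's scriptable and shows the same efficiency trend. A hedged sketch using nvidia-smi's power-limit switch - setting the limit needs admin rights, and the wattage steps below are just illustrative for a 350 W card:

```python
# Sweep the driver power limit downward and sample power draw and core clock.
# Run your benchmark of choice between set_power_limit() and the sample.
import subprocess

def set_power_limit(watts: int) -> None:
    # Requires admin/root; the card must support the requested limit range.
    subprocess.run(["nvidia-smi", "-pl", str(watts)], check=True)

def read_power_and_clock() -> str:
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=power.draw,clocks.gr",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True)
    return out.stdout.strip()

for limit in (350, 320, 290, 260):      # illustrative steps
    set_power_limit(limit)
    # ...run the benchmark here, then sample while it is still loaded:
    print(f"limit {limit} W -> {read_power_and_clock()}")
```

Comparing the benchmark score against the sampled power at each step gives a quick picture of how little performance is lost for each chunk of power saved.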
A die shrink makes sense just because adding more CUDA cores doesn't make a lot of sense, but I think they'll want to find ways to better utilize all of those cores and overhaul the RT portions of the architecture to aim for at least doubling the performance again.
Kopite's tweeting about Lovelace again - this time it's going to show up in a next-gen Tegra chip? Wonder if Lovelace is especially low-power or if it's just what Nvidia has on their roadmap for next gen. Doesn't seem like an Ampere+ at this rate.