
GTX 580 taken to 1500 MHz core

But can it play Farmville?

Is there a short answer to why GPUs seem to be stuck in the mud as far as the MHz race goes? Power consumption?
 
1519 MHz core, you say?

172,000+ 3DMark03 score, you say? I'm pretty sure it'll play any game from 2003 well enough with a score like that 🙂

Too bad it wasn't on air or water cooling; most people can't afford to use LN2.
Still.... just... insane.
 
But can it play Farmville?

Is there a short answer to why GPUs seem to be stuck in the mud as far as the MHz race goes? Power consumption?

Yes. Why would AMD/nVidia waste their thermal capacity on clock speed when added cores give them far more performance per watt?
 
I think this kind of test is stupid. I'd rather have seen them back this thing down to 1400 MHz and run a meaningful benchmark. I suppose getting it to boot into Windows is neat for seeing the clock speed potential... BUT we have no idea how it scales with clock speed when using a CPU-limited benchmark such as 3DMark03.
 
Is there a short answer to why GPUs seem to be stuck in the mud as far as the MHz race goes? Power consumption?

Yeah, power consumption became pretty key a couple generations ago.

Look at the TDP of a 7900 GTX... a super top-end card in its day, and it was 86 watts. What card of today has a TDP of 86 watts? A 5750. There's barely any discussion of the 5750 here, it's such a "low end" card, but it eats power like the top-end card of ~5 years ago. We talk about the 5750 like it's a nice, efficient, low-power card; a big part of that is good idle power reduction. I imagine today's cards idle at the same or lower power than the 7900 GTX of yesteryear.

The lowly GTS 450 is over 100 Watts TDP.

Now look at the high end today. The 6970 is "up to" 250 W (the OC limit on the power monitoring), as is the GTX 580; some manufacturers remove the power limit and are pushing closer to 300 W through these things... Times have changed: the stock card TDPs are close to the power consumption of THREE of Intel's new 2600K processors at stock speeds. It's no wonder GPU processing is powerful; it's sucking down juice like a 12-core Sandy Bridge would.

When you have people running 250-300 W cards in Tri-SLI, then add in an appropriate CPU to drive it, you start approaching the limit of power that can be supplied by a household electrical circuit... You can start measuring the power requirement of the GPUs in horsepower instead of watts (1 HP is about 746 W)... They've done pretty much all they can do by just scaling up power input --> graphical power output.
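For anyone who wants to check the arithmetic, here's a quick back-of-envelope sketch in Python using the TDP figures quoted above (these are approximate vendor TDPs, not measured wall draw, and the Tri-SLI number uses the ~300 W per-card figure):

```python
# Back-of-envelope power comparisons using the TDP figures quoted above.
# These are approximate vendor TDPs, not measured wall draw.

WATTS_PER_HP = 746  # 1 mechanical horsepower ~= 746 W

tdp_watts = {
    "7900 GTX": 86,       # top-end card of ~5 years ago
    "HD 5750": 86,        # today's "low end"
    "GTS 450": 106,       # over 100 W, as noted above
    "HD 6970 / GTX 580": 250,
    "Core i7-2600K": 95,  # stock quad-core Sandy Bridge
}

# One high-end GPU vs. three 2600Ks at stock
print(3 * tdp_watts["Core i7-2600K"], "W for three 2600Ks")       # 285 W
print(tdp_watts["HD 6970 / GTX 580"], "W+ for one high-end GPU")

# Tri-SLI of ~300 W cards, expressed in horsepower
tri_sli_w = 3 * 300
print(f"{tri_sli_w} W is about {tri_sli_w / WATTS_PER_HP:.1f} HP")  # ~1.2 HP
```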
 
Yeah, power consumption became pretty key a couple generations ago. [...]

Thanks for the response. I know hardware pretty well; I just wish I were more versed on CPU vs. GPU. Why can I add 200 MHz to my CPU with a 0.125 V Vcore bump, but adding 50 MHz to my GPU core can cause the world to end?
 
Thanks for the response. I know hardware pretty well; I just wish I were more versed on CPU vs. GPU. Why can I add 200 MHz to my CPU with a 0.125 V Vcore bump, but adding 50 MHz to my GPU core can cause the world to end?

Transistor count could be a clue: about 1 billion for a 2600K versus about 3 billion for a GF110/GF100.
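One way to see what those extra transistors cost: dynamic power scales roughly as P ∝ N·C·V²·f. A minimal sketch under that rough model; the baseline clocks and voltages below are illustrative stand-ins (roughly 2600K-like and GF110-like), not measured values:

```python
# Rough dynamic-power scaling: P ~ N * C * V^2 * f
# (N = switching transistors, V = core voltage, f = clock).
# Baselines below are illustrative, not measured values.

def relative_power(n_transistors, voltage, freq_ghz):
    """Power in arbitrary units; only ratios within one device are meaningful."""
    return n_transistors * voltage**2 * freq_ghz

cpu_stock = relative_power(1.0e9, 1.200, 3.4)    # ~2600K-ish baseline
cpu_oc    = relative_power(1.0e9, 1.325, 3.6)    # +0.125 V, +200 MHz

gpu_stock = relative_power(3.0e9, 1.000, 0.772)  # ~GF110-ish baseline
gpu_oc    = relative_power(3.0e9, 1.125, 0.822)  # same voltage bump, +50 MHz

print(f"CPU power increase: {cpu_oc / cpu_stock - 1:.0%}")  # ~29%
print(f"GPU power increase: {gpu_oc / gpu_stock - 1:.0%}")  # ~35%
# Similar percentages, but on the GPU those extra watts land on a die
# with ~3x the switching logic, already near the limit of its cooler.
```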
 
Yeah, power consumption became pretty key a couple generations ago. [...]

Nice post! Thanks.

I recently upgraded my PSU to a 1500 W unit. It raised my eyebrows when the documentation said you may need to upgrade the wiring/circuit in your home to feed the PSU under full load.
 
It comes down to the design of the chip. Take a look at a CPU: it has a few cores and a lot of extra hardware that isn't all executing at any given point. If there's no branch, the branch predictor doesn't need to do anything. Not every pipeline is constantly full. Sometimes there's a cache miss and things stall for a few cycles. Sometimes a whole core gets shut off. Certain parts of the chip will get really hot, but the logic that isn't running all of the time helps dissipate the heat and spread it out.

The GPU, however, is packed with hundreds to thousands of execution units. If those aren't working in a massively parallel way, the GPU isn't going to get good performance. When all of that work is being done constantly, it produces a lot of heat as current surges through the tightly packed circuits that make up all of those CUDA cores, stream processors, etc. There's no room for the heat to spread out, so the cooling system has to remove it. This means the clock rate can't be pushed as high, because there's no way to remove that much heat quickly enough.
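As a crude illustration of width-versus-clock (stock figures for a GTX 580's 512 CUDA cores at the 1544 MHz shader clock versus a quad-core 3.4 GHz CPU; this ignores per-lane work differences, CPU SIMD, and everything else):

```python
# Width vs. clock: where GPU throughput comes from.
# "lanes * clock" is a crude upper bound on parallel ops per second,
# not a real performance comparison.

gpu_lanes, gpu_clock_ghz = 512, 1.544   # GTX 580 CUDA cores, shader clock
cpu_cores, cpu_clock_ghz = 4, 3.4       # quad-core 3.4 GHz CPU

gpu_throughput = gpu_lanes * gpu_clock_ghz   # ~790 "lane-GHz"
cpu_throughput = cpu_cores * cpu_clock_ghz   # ~13.6 "core-GHz"

print(f"GPU: {gpu_throughput:.0f} lane-GHz at under half the CPU's clock")
print(f"CPU: {cpu_throughput:.1f} core-GHz")
# The GPU wins on width, not frequency -- and all of those lanes switching
# at once is exactly the heat-density problem described above.
```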
 
Why can I add 200 MHz to my CPU with a 0.125 V Vcore bump, but adding 50 MHz to my GPU core can cause the world to end?


1) Do you also bump your voltage on your GPU by 0.125 volts?
2) Do you have a gigantic heatpipe tower cooler with a 120mm fan on your GPU?
3) Is your CPU cooler also cooling your multi-phase power conversion FETs and memory?
4) Intel is comfortably ahead of its competition on the CPU side; it doesn't need speed increases to compete against AMD and isn't pushing its CPUs to the limit. Typically the "trailing" competitor will have less overclocking margin. GPU competition is much, much closer, so you see similar margins on the highest-end cards.
 
Thanks for the response. I know hardware pretty well; I just wish I were more versed on CPU vs. GPU. Why can I add 200 MHz to my CPU with a 0.125 V Vcore bump, but adding 50 MHz to my GPU core can cause the world to end?

Think in percentages. Also, are you running your CPU with an aftermarket cooler and your GPU with a stock cooler?
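To put numbers on the percentages point (the base clocks below are assumptions: roughly a 3.4 GHz CPU and the GTX 580's stock 772 MHz core; substitute your own):

```python
# "Think in percentages": the same-sounding MHz bump is a different
# relative jump depending on the base clock. Assumed base clocks:
# ~3.4 GHz CPU (2600K-ish) and 772 MHz GPU core (stock GTX 580).

cpu_base_mhz, cpu_bump_mhz = 3400, 200
gpu_base_mhz, gpu_bump_mhz = 772, 50

print(f"CPU: +{cpu_bump_mhz} MHz = +{cpu_bump_mhz / cpu_base_mhz:.1%}")  # ~5.9%
print(f"GPU: +{gpu_bump_mhz} MHz = +{gpu_bump_mhz / gpu_base_mhz:.1%}")  # ~6.5%
# That "small" 50 MHz GPU bump is actually a slightly bigger relative
# overclock than 200 MHz on the CPU.
```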
 
I recently upgraded my PSU to a 1500 W unit. It raised my eyebrows when the documentation said you may need to upgrade the wiring/circuit in your home to feed the PSU under full load.

2 horsepower PSU... Your power supply is capable of outputting more power than my lawnmower or chainsaw.

A typical residential circuit is "15 amps," but generally shouldn't be loaded continuously beyond 80%, or 12 amps. 1500 W is 12.5 amps at 120 V. Keep in mind that the draw from the wall will be higher due to efficiency losses: a 90%-efficient 1500 W PSU can pull ~14 A from a 15 A circuit. This is why the manual suggests potentially upgrading to a 20 A circuit.
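The same math as a small sketch (the 90% efficiency is an assumed round number, as above; real efficiency varies with load):

```python
# Wall draw vs. a 15 A residential circuit, per the math above.
# Efficiency is an assumed 90%; real PSUs vary with load.

psu_output_w = 1500
efficiency = 0.90
line_voltage = 120.0

wall_draw_w = psu_output_w / efficiency   # ~1667 W from the outlet
wall_draw_a = wall_draw_w / line_voltage  # ~13.9 A

breaker_a = 15
continuous_limit_a = 0.80 * breaker_a     # 12 A continuous rule of thumb

print(f"Wall draw at full load: {wall_draw_a:.1f} A")
print(f"Continuous limit on a {breaker_a} A circuit: {continuous_limit_a:.0f} A")
# 13.9 A > 12 A: full load exceeds the 80% continuous guideline,
# hence the manual's suggestion of a 20 A circuit.
```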

You also probably don't want your monitor(s) or anything else on the same circuit.
 
You could plug into a 240 V outlet. I don't even know if they make consumer-grade equipment that could plug into one, but it would be one way to get around the power issue. Quad-SLI with overclocked 580s would have been damned sexy.
 