Will AMD/Nvidia implement turbo core like technology in the future?

busydude

Diamond Member
Feb 5, 2010
8,793
5
76
I was reading this interesting blog by John(JF-AMD): Bulldozer goes to 11

While reading this, I was wondering if it is possible for either AMD or Nvidia to implement similar technology to improve the performance of their GPUs based on TDP.

GPUs, like CPUs, are rarely used to their full potential.

[Image: Chart-for-John.jpg]



When GPUs are being used for 3D applications, depending on the load, the core clock could be jacked up until the card reaches its TDP. Does it take a significant amount of die space to implement this?

I can see AMD doing this in the future, as it is already being implemented, albeit crudely, in the recent 69XX cards, and I think that is the next logical step for Nvidia too.

Am I missing something important that makes implementing this type of tech complicated on GPUs?

Opinions?
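
Something like the loop below is what I have in mind. This is only a toy sketch in Python: the TDP, clock range, step size, and power model are all invented for illustration, and a real implementation would live in the GPU's firmware/driver with many more inputs (temperature, VRM limits, per-rail current sensing, and so on).

```python
# Hypothetical sketch of a TDP-driven boost loop. All numbers and the power
# model are made up for illustration only.

TDP_WATTS = 250          # board power limit
CLOCK_MIN = 700          # MHz, base 3D clock
CLOCK_MAX = 900          # MHz, maximum boost clock
CLOCK_STEP = 13          # MHz per adjustment tick

def estimate_power(clock_mhz: float, utilization: float) -> float:
    """Toy power model: power scales with clock and how busy the shaders are."""
    idle = 40.0                                   # watts of fixed/leakage power
    dynamic = 210.0 * (clock_mhz / CLOCK_MAX) * utilization
    return idle + dynamic

def next_clock(clock_mhz: float, utilization: float) -> float:
    """Step the core clock up while under TDP, back down when over it."""
    if estimate_power(clock_mhz, utilization) < TDP_WATTS:
        return min(clock_mhz + CLOCK_STEP, CLOCK_MAX)
    return max(clock_mhz - CLOCK_STEP, CLOCK_MIN)

# Example: a lightly loaded frame leaves headroom, so the clock ratchets up.
clock = CLOCK_MIN
for _ in range(20):
    clock = next_clock(clock, utilization=0.6)
print(clock)   # reaches CLOCK_MAX because 60% load never hits the 250 W limit
```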
 

BD231

Lifer
Feb 26, 2001
10,568
138
106
JF is the same guy who will tell you integer calculations are 90% of what people use a CPU for :rolleyes:. Not your best source for info, to say the least; he's a straight-up PR machine.

It's simply getting the most out of your chip, like overclockers have been doing forever. The only difference is that we overclockers don't have any TDP concerns; we just throw copper at the problem and call it a day. Don't for a second let anyone convince you that, had AMD been able to turbo up all cores on their Thuban while staying within TDP specs, they wouldn't have.

From a server standpoint it's a godsend; from an individual user's standpoint you could do better yourself.

Turn around a general usage calculation error and label it Turbo ... OYE!!! Problem solving should not be labeled anything other than a problem solved. If you want to call something Turbo, give us a feature like HT, not fixes for problems you didn't plan for.


Please be aware of the fact that JFAMD is a member here at AnandTech Forums and as such he deserves, and is afforded, every bit of respect and civility that we extend to all members of the community...and that includes refraining from catty member callouts and needless personal attacks.

Please be respectful of your fellow forum colleagues. There is no need to make ad hominem personal attacks.

If you have data or facts that are relevant to the topic under discussion, then please contribute them, for everyone's benefit.

Moderator Idontcare
 

Skurge

Diamond Member
Aug 17, 2009
5,195
1
71
I think when the GPU is not under enough load to reach its TDP, it can be argued that you don't need all its power.
 

Dribble

Platinum Member
Aug 9, 2005
2,076
611
136
GPUs do clock up/down depending on load - not just when switching between 2D and 3D; I am pretty sure Nvidia ones do it on the fly while playing 3D games too.

What "turbo" seems to mean for cpu's is some of the cores can clock higher, but they can't all do it at once. This is mostly an advantage for single threaded apps - which ideally wants one very fast core.

That doesn't happen on GPUs - there is no GPU equivalent of the single-threaded app. Everything will always run fastest by using all the cores at once.

As for jacking all the cores up under load - well, surely that's just what the max GPU clock speed is. If AMD has a clock speed of X but can "jack it up" a further 10%, then its actual clock speed is 1.1X.
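
To make that distinction concrete, here is a tiny illustrative sketch in Python, with invented clock numbers rather than any real chip's turbo table: a CPU-style turbo table gives higher per-core clocks when fewer cores are active, while a GPU just has one shared 3D clock for all of its shader cores.

```python
# Rough sketch of the CPU-vs-GPU distinction, with invented numbers.
# CPU-style turbo: the fewer cores that are active, the higher each can clock
# while the package stays inside the same TDP budget.

CPU_TURBO_TABLE = {   # active cores -> per-core clock in GHz (hypothetical)
    1: 3.7,
    2: 3.6,
    3: 3.4,
    4: 3.2,           # all cores busy: effectively the "base" clock
}

def cpu_core_clock(active_cores: int) -> float:
    return CPU_TURBO_TABLE[active_cores]

# GPU-style operation: there is no "one busy shader" case worth optimizing for,
# so every shader runs at the same clock, and the highest sustainable value of
# that shared clock is, by definition, just the card's maximum 3D clock.
GPU_3D_CLOCK_MHZ = 850

print(cpu_core_clock(1))   # 3.7 -> single-threaded apps benefit from turbo
print(cpu_core_clock(4))   # 3.2 -> fully loaded, no turbo headroom
print(GPU_3D_CLOCK_MHZ)    # one shared clock for all shader cores
```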
 

catnapper

Member
Jul 19, 2010
45
1
66
I've seen ATITOOL report my HD5770 GPU clock varying between 600 MHz and 850 MHz while in a 3D game (not continuously, but in those steps). So unless something else is going on that I don't know about, ATI/AMD does it too.

That's just the thermal features at work. Cards these days have 2D clocks/voltages for internet and video, and 3D clocks/voltages for games or any kind of rendering. It lowers temps and idle power consumption.

This variation between 600 MHz and 850 MHz takes place while I am in the game, not when switching from 2D to 3D. ATITOOL is also reporting temps, and they stay right around 55C to 56C while in the 3D game (at 90% fan - manual). I'm just interested in what's really going on; perhaps you are right and it is not the card responding to 3D resource requirements. It doesn't look like it's responding to temps tho'.
 

BD231

Lifer
Feb 26, 2001
10,568
138
106
I've seen ATITOOL report my HD5770 GPU clock varying between 600 MHz and 850 MHz while in a 3D game (not continuously, but in those steps). So unless something else is going on that I don't know about, ATI/AMD does it too.

That's just the thermal features at work. Cards these days have 2D clocks/voltages for internet and video, and 3D clocks/voltages for games or any kind of rendering. It lowers temps and idle power consumption.
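
Loosely, the 2D/3D split described above amounts to a small table of clock/voltage states that the driver picks from based on the workload. A toy sketch in Python follows, with invented state names, clocks, and thresholds; real driver tables are more elaborate.

```python
# A loose sketch of the 2D/3D power-state idea. State names, clocks, voltages,
# and the load threshold are placeholders, not real driver tables.

POWER_STATES = {
    "idle_2d":  {"core_mhz": 157, "mem_mhz": 300,  "vcore": 0.95},
    "video":    {"core_mhz": 400, "mem_mhz": 900,  "vcore": 1.00},
    "full_3d":  {"core_mhz": 850, "mem_mhz": 1200, "vcore": 1.15},
}

def select_state(gpu_load: float, video_decode_active: bool) -> str:
    """Pick a clock/voltage state from the workload type, as a driver might."""
    if gpu_load > 0.30:
        return "full_3d"
    if video_decode_active:
        return "video"
    return "idle_2d"

print(select_state(0.05, False))   # browsing the web -> idle_2d
print(select_state(0.02, True))    # watching a video -> video
print(select_state(0.90, False))   # gaming or rendering -> full_3d
```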
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91

Am I reading this graph incorrectly?

It appears to me they are basically saying there is NO overclocking headroom left in Bulldozer because they've ensured they have tapped it all with Turbo Core.

What am I missing?
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
Am I reading this graph incorrectly?

It appears to me they are basically saying there is NO overclocking headroom left in Bulldozer because they've ensured they have tapped it all with Turbo Core.

What am I missing?

I think you are misinterpreting what headroom means in this case. That graph's definition of headroom is relative to the chip's TDP; in other words, it's the headroom left to increase clock speed in a "turbo mode" while still staying within the chip's TDP spec.
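
A quick made-up example of "headroom to TDP" in that sense (the wattages are invented, and it assumes dynamic power scales roughly linearly with clock, which is a big simplification):

```python
# Toy illustration of "headroom to TDP"; figures are invented and the linear
# power-vs-clock scaling is a simplification, so this is an upper bound only.

TDP = 125.0                 # watts the chip is specified for
measured_power = 95.0       # watts the workload actually draws at base clock
base_clock_ghz = 3.1

headroom = TDP - measured_power                     # 30 W of unused budget
# If dynamic power scaled roughly linearly with clock, turbo could raise the
# clock until the draw reaches the TDP line on the graph:
turbo_clock_ghz = base_clock_ghz * (TDP / measured_power)

print(headroom)                    # 30.0
print(round(turbo_clock_ghz, 2))   # ~4.08 -- an upper bound, not a real spec
```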
 

busydude

Diamond Member
Feb 5, 2010
8,793
5
76
What am I missing?

If you are overclocking, you are going over the specified TDP restrictions of the chip. In this case Turbo Core is employed to gain maximum performance without going over the TDP barrier; the Y-axis in that graph refers to TDP.

I guess JF-AMD's blog mainly focused on Interlagos, where overclocking is not a consideration.

Does that make any sense? This is generally not my cup of tea.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
I was reading this interesting blog by John(JF-AMD): Bulldozer goes to 11

While reading this, I was wondering if it is possible for either AMD or Nvidia to implement similar technology to improve the performance of their GPUs based on TDP.

GPUs, like CPUs, are rarely used to their full potential.

[Image: Chart-for-John.jpg]



When GPUs are being used for 3D applications, depending on the load, the core clock could be jacked up until the card reaches its TDP. Does it take a significant amount of die space to implement this?

I can see AMD doing this in the future, as it is already being implemented, albeit crudely, in the recent 69XX cards, and I think that is the next logical step for Nvidia too.

Am I missing something important that makes implementing this type of tech complicated on GPUs?

Opinions?

I think for programs where you'd want to be fully utilizing the GPU (folding), it's probably already damn near maximizing the number of cores at work. Otherwise, when you're playing a game and getting 60 fps but only using half the GPU, why would you want the GPU to overclock itself? The game is already running silky smooth.
 

busydude

Diamond Member
Feb 5, 2010
8,793
5
76
Otherwise, when you're playing a game and getting 60 fps but only using half the GPU, why would you want the GPU to overclock itself? The game is already running silky smooth.

On second thought, am I making a flawed argument here? I mean, do games (demanding ones) use the GPU to its maximum potential?

We could design a tradeoff between FPS and turbo: if FPS > the monitor's refresh rate, deactivate turbo; if not, try to raise the clocks until the card hits its TDP.
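
A minimal sketch of that policy, assuming hypothetical thresholds and clock bounds; a real driver would also weigh thermals, frame-time variance, and per-game profiles:

```python
# Minimal sketch of an FPS-gated turbo policy. The refresh rate, clock bounds,
# step size, and TDP are hypothetical values for illustration.

REFRESH_RATE_HZ = 60
CLOCK_BASE = 700      # MHz
CLOCK_MAX = 900       # MHz
CLOCK_STEP = 13       # MHz per tick
TDP_WATTS = 250

def adjust_clock(clock_mhz: int, fps: float, board_power_w: float) -> int:
    """Raise clocks only when below the monitor's refresh rate and below TDP;
    otherwise drop back toward the base clock."""
    if fps >= REFRESH_RATE_HZ:
        # Already rendering faster than the display can show: no point boosting.
        return max(clock_mhz - CLOCK_STEP, CLOCK_BASE)
    if board_power_w < TDP_WATTS:
        return min(clock_mhz + CLOCK_STEP, CLOCK_MAX)
    return max(clock_mhz - CLOCK_STEP, CLOCK_BASE)

print(adjust_clock(700, fps=45, board_power_w=180))   # 713 -> boost, GPU-bound
print(adjust_clock(850, fps=75, board_power_w=200))   # 837 -> back off, above refresh
print(adjust_clock(890, fps=50, board_power_w=255))   # 877 -> back off, at the TDP wall
```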
 

JFAMD

Senior member
May 16, 2009
565
0
0
The chart represents headroom to TDP as TVICEMAN pointed out.

In the server world we don't overclock, so that is not part of my expertise.

As a matter of fact, neither are graphics; I don't really know much about that stuff. I'm still back on Radeon 4000 stuff in my systems.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
The chart represents headroom to TDP as TVICEMAN pointed out.

In the server world we don't overclock, so that is not part of my expertise.

As a matter of fact, neither are graphics; I don't really know much about that stuff. I'm still back on Radeon 4000 stuff in my systems.

John, thanks for joining the conversation and confirming tviceman's post. Makes sense to me now.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
On second thought, am I making a flawed argument here? I mean, do games (demanding ones) use the GPU to its maximum potential?

We could design a tradeoff between FPS and turbo: if FPS > the monitor's refresh rate, deactivate turbo; if not, try to raise the clocks until the card hits its TDP.

This is basically what I use Vsync for. I don't care to have my GPU working balls-out at 99% utilization just to generate 130-140 fps when my monitor is a 60Hz screen.

I like the fact that my 460 runs at about 80% utilization and 60 fps thanks to Vsync, which means it runs cooler and thus quieter given the fan profile.

I understand Vsync has its drawbacks; those drawbacks just don't affect me in a meaningful way. I actually can't stand the screen tearing that occurs without Vsync - that's what drove me to it in the first place.

The point is just that unlike the case with CPUs, with GPUs there is no "hurry up and get to idle" scenario unless you are doing non-gaming stuff like CUDA or DC.
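
For what it's worth, here is a back-of-the-envelope sketch of why capping frames at the refresh rate drops utilization, using made-up frame times rather than anything measured from a real GTX 460:

```python
# Sketch of how a frame cap lowers average GPU utilization. Frame times are
# invented; time.sleep stands in for actual GPU work and for vsync's wait.

import time

REFRESH_HZ = 60
FRAME_BUDGET_S = 1.0 / REFRESH_HZ          # ~16.7 ms per displayed frame

def render_frame() -> float:
    """Stand-in for the real render call; pretend the GPU needs ~13 ms."""
    gpu_time_s = 0.013
    time.sleep(gpu_time_s)
    return gpu_time_s

def run_capped_frames(n: int) -> float:
    """Render n frames, sleeping out the rest of each 16.7 ms budget,
    and return the resulting average GPU utilization."""
    busy = 0.0
    for _ in range(n):
        start = time.perf_counter()
        busy += render_frame()
        elapsed = time.perf_counter() - start
        if elapsed < FRAME_BUDGET_S:
            time.sleep(FRAME_BUDGET_S - elapsed)   # the idle time vsync buys you
    return busy / (n * FRAME_BUDGET_S)

print(round(run_capped_frames(60), 2))   # ~0.78 -- roughly the 80% figure above
```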