Interview with Eric Demers

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Eric: Irrespective of what nVidia says, the truth is that CUDA is their proprietary solution, which means that if we were to use it we'd be stuck being second place and following their lead; that's not really a position we want to be in. Another characteristic of CUDA is that it's very G80 centric, so first we'd have to build a G80 in order to have a reason to use CUDA. So, no, we're not going to support/use CUDA in any foreseeable future.

Eric: Micro-stuttering can be caused by multiple things. For example, for our previous product, the ATI Radeon HD 3870, one of the causes of micro-stuttering was that the graphics clock was being raised and lowered too frequently during games. The ATI Radeon HD 3870 was one of the first AMD parts to introduce a programmable micro-controller to monitor and control the chip's power through clocks and voltage. The ATI Radeon HD 3870 was able to detect times when the application was not using it, and reduce its clock speed to conserve power. What we found is that within a single frame, when the CPU load was high, there were times where there was enough "starvation" to cause the ATI Radeon HD 3870 to reduce its clock, even though it was running a game. When the next part of the frame came up, the graphics clock had already been reduced, so rendering was slowed down until the chip detected a heavy load and resumed high clocks. This up/down on the clock saved power, but it reduced overall performance and caused micro-stuttering. This was fixed in February with a driver that taught the chip how to behave in this kind of situation (don't drop the clock in the middle of a 3D app). However, for the ATI Radeon HD 4870, we changed how the micro-controller worked from the beginning, making it monitor multiple "windows" in the chip (both frequent changes per frame and long-term changes over multiple frames) and take appropriate action. This allowed the ATI Radeon HD 4870 to launch with all the power gating fully enabled and no stuttering due to clock changes.
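The "windows" idea Eric describes can be pictured with a toy governor model. This is purely an illustrative sketch, not AMD's actual firmware logic: the clocks, thresholds, and window sizes are made up. The point is only that a governor watching a short activity window alone will drop clocks during a brief mid-frame starvation, while one that also checks a longer, multi-frame window keeps the clock high for the whole 3D workload.

```python
from collections import deque

HIGH_CLOCK, LOW_CLOCK = 750, 300  # MHz, purely illustrative values

def governor(samples, short_n=3, long_n=24, threshold=0.2):
    """Yield (naive_clock, two_window_clock) for each GPU-busy sample in 0.0..1.0."""
    short_win, long_win = deque(maxlen=short_n), deque(maxlen=long_n)
    for busy in samples:
        short_win.append(busy)
        long_win.append(busy)
        short_avg = sum(short_win) / len(short_win)
        long_avg = sum(long_win) / len(long_win)
        # Naive governor: drops the clock whenever the short window looks idle,
        # which is what bites mid-frame when the GPU briefly starves.
        naive = LOW_CLOCK if short_avg < threshold else HIGH_CLOCK
        # Two-window governor: only drops when the long window is idle too,
        # i.e. the app really has gone quiet for many frames.
        fixed = LOW_CLOCK if short_avg < threshold and long_avg < threshold else HIGH_CLOCK
        yield naive, fixed

# One frame's worth of GPU activity with a brief CPU-bound gap in the middle:
frame = [0.9, 0.9, 0.0, 0.0, 0.0, 0.9, 0.9, 0.9]
for naive, fixed in governor(frame * 3):
    print(naive, fixed)  # the naive governor dips to 300 MHz mid-frame; the other stays at 750
```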

There are other potential sources of micro-stuttering, some of which have to do with moving memory around, which can cause blackouts for either the CPU or the GPU. Others exist when the CPU and GPU are more unbalanced (fast GPU, slow CPU), for example where the CPU will not generate any frames for a while, then generate 2 frames. It could be that in that case, we get a time for frame 1 which is the idle time plus render, while frame 2 is only render. That could lead to 16 ms and 1 ms frame times, which would appear as stuttering (assuming 15 ms idle, 1 ms render times). Multi-GPU makes the problem worse, as the GPU consumption rate is even higher. We are investigating these and others, though it's a tall order to fix all of them while also achieving peak performance.
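The 16 ms / 1 ms example is easy to reproduce on paper. Below is only a back-of-the-envelope sketch using the interview's own figures (15 ms of CPU idle, 1 ms of GPU render); nothing in it reflects actual driver behaviour:

```python
# The CPU stalls for 15 ms, then submits two frames back to back, and a fast GPU
# renders each in 1 ms, so the frame-to-frame intervals alternate between ~16 ms
# and ~1 ms even though the *average* frame rate looks fine.

cpu_idle_ms, gpu_render_ms = 15.0, 1.0

t = 0.0
present_times = []
for burst in range(4):                # four bursts of two frames each
    t += cpu_idle_ms                  # CPU busy elsewhere, no frames submitted
    for _ in range(2):                # then two frames arrive back to back
        t += gpu_render_ms            # fast GPU finishes each almost immediately
        present_times.append(t)

intervals = [b - a for a, b in zip(present_times, present_times[1:])]
avg = sum(intervals) / len(intervals)
print("frame-to-frame intervals (ms):", intervals)   # alternates 1 and 16
print("average interval (ms): %.1f -> ~%.0f fps" % (avg, 1000 / avg))
```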

so they ain't gonna fix it? :p
- it's still there in 4870x-3
 

BFG10K

Lifer
Aug 14, 2000
22,709
2,971
126
I would disagree that a slow CPU + fast GPU causes micro-stutter; a slow CPU will generate frame times equal to the CPU render time, which will generally be consistent between frames. It's the case of a fast CPU + slow GPU where the frame times swing between the CPU render time and the GPU render time.
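For what it's worth, both cases can be plugged into a toy AFR model to see what the presented frame intervals look like. This is only a sketch with made-up numbers; it ignores render-ahead queue limits, driver back-pressure, and everything else a real driver does:

```python
# Toy alternate-frame-rendering (AFR) model: the CPU hands off a frame every
# cpu_ms, frames alternate between two GPUs, each GPU takes gpu_ms per frame.

def afr_intervals(cpu_ms, gpu_ms, n_gpus=2, n_frames=12):
    """Return frame-to-frame present intervals for a simple AFR pipeline."""
    gpu_free = [0.0] * n_gpus
    present = []
    for i in range(n_frames):
        submit = i * cpu_ms                  # CPU submits a frame every cpu_ms
        gpu = i % n_gpus                     # frames alternate between GPUs
        start = max(submit, gpu_free[gpu])   # GPU starts when both are ready
        gpu_free[gpu] = start + gpu_ms
        present.append(gpu_free[gpu])
    return [round(b - a, 1) for a, b in zip(present, present[1:])]

print("slow CPU, fast GPUs:", afr_intervals(cpu_ms=20, gpu_ms=5))
print("fast CPU, slow GPUs:", afr_intervals(cpu_ms=5, gpu_ms=20))
```

With those numbers the slow-CPU case comes out as a steady 20 ms per frame, while the fast-CPU case alternates between 5 ms and 15 ms gaps, which is the kind of swing being argued about here.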

Also I liked this question:

R3D: Are you considering using a more extensive/granular SuperAA implementation, for example, 2X AA being done by having the 2 GPUs render the frame with no AA and jittering the frames in the compositing chip, 4X AA by having the 2 GPUs render the scene with 2X AA, etc. (an accumulation-buffer-like approach, if you will)?
I was advocating this about a year ago. ;)

This approach has multiple advantages over traditional AFR or SFR and would truly be an advantage for multi-GPU in just about every situation.
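The jittered-compositing idea in that question can be sketched in a few lines. Everything here (the toy one-dimensional "scene", the quarter-pixel offsets) is invented purely to show the principle: two no-AA images sampled at different sub-pixel offsets, averaged by a compositor, behave like one 2x AA image.

```python
WIDTH = 8

def edge(x):
    """Toy 'scene': 1.0 left of a vertical edge at x = 3.4, else 0.0."""
    return 1.0 if x < 3.4 else 0.0

def render(offset):
    """One GPU's pass: a single sample per pixel at centre + sub-pixel offset."""
    return [edge(px + 0.5 + offset) for px in range(WIDTH)]

gpu_a = render(-0.25)           # GPU 0 jittered a quarter pixel left
gpu_b = render(+0.25)           # GPU 1 jittered a quarter pixel right
composited = [(a + b) / 2 for a, b in zip(gpu_a, gpu_b)]

print("single sample :", render(0.0))
print("composited 2x :", composited)   # the edge pixel lands on 0.5 instead of 0 or 1
```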
 

myocardia

Diamond Member
Jun 21, 2003
9,291
30
91
Eric: Irrespective of what nVidia says, the truth is that CUDA is their proprietary solution, which means that if we were to use it we'd be stuck being second place and following their lead; that's not really a position we want to be in. Another characteristic of CUDA is that it's very G80 centric, so first we'd have to build a G80 in order to have a reason to use CUDA. So, no, we're not going to support/use CUDA in any foreseeable future.

I think this decision may not bode so well for them in the future, possibly even the near future. We'll see, though.