
ATI Catalyst 3.7 Triple Buffering

Fant

Senior member
There is now a new control on the opengl compatibility settings to enable triple buffering. This comes disabled by default. Why would I not want it enabled and where will it help?
 
Triple buffering will lower performance, but can improve image quality.

In nVidia's view, you'll never need triple buffering, but ATi do offer it (as you have found).

You can try it on and if performance is still OK, and you notice image quality improvement, keep it on, but if performance drops, take it off.

Basically it should just improve image quality.
 
I don't quite understand it myself, but I've been told that triple buffering can significantly increase your fps when running with vsync on, if your frame rate drops below your monitor's refresh rate.
 
Triple buffering will lower performance, but can improve image quality.
That's totally wrong. Triple buffering increases performance but can cause compatibility problems. It has no effect on image quality. OpenGL doesn't support triple buffering but apparently, ATI figured out a way to force it.

The "buffers" being referred to are the final destination bitmaps for 2d/3d rendered images. At least one such bitmap is needed because the RAMDAC uses it to send the pixels to the physical monitor in realtime. While this bitmap is being displayed, a second bitmap exists which is being constructed by the card. If the card can render frame 2 before frame 1 completes display, then bitmap 2 can be swapped for bitmap 1 during vertical blank. But if the card can't quite finish rendering bitmap 2 before bitmap 1 is done, then bitmap 1 must be re-used by the RAMDAC because it must display something on the screen. After the card finishes bitmap 2, it will just sit on its ass doing nothing, because there are no more buffers to render to (bitmap 1 is in use by the RAMDAC and bitmap 2 is already done). This is what triple buffering is for: it provides one more buffer to improve on situations like this. Without triple buffering, OpenGL can only attain frame rates that are integral fractions of your monitor refresh rate. (All of this only holds if you use vertical sync. If you don't use vertical sync, then triple buffering doesn't do anything useful.)
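The frame-rate quantization described above can be sketched with some simple arithmetic. This is a rough illustration, not driver code; the 60 Hz refresh and 20 ms render time are made-up example numbers:

```python
# Sketch: how vsync quantizes frame rate with double buffering.
# Assume a 60 Hz monitor (16.67 ms per refresh) and a GPU that
# needs 20 ms to render each frame (hypothetical numbers).
refresh_hz = 60
scanout = 1000.0 / refresh_hz   # ms per refresh interval
render = 20.0                   # ms the GPU needs per frame

# Double buffering + vsync: a swap can only happen on a vblank, so a
# 20 ms frame must wait for the *second* vblank -- every frame costs
# two full refresh intervals.
intervals_needed = -(-render // scanout)      # ceiling division
double_buffered_fps = refresh_hz / intervals_needed
print(double_buffered_fps)                    # 30.0 (an integral fraction of 60)

# Triple buffering: the GPU starts the next frame in the spare buffer
# instead of sitting idle, so on average it delivers a frame every
# 20 ms regardless of the vblank grid.
triple_buffered_fps = 1000.0 / render
print(triple_buffered_fps)                    # 50.0
```

That 30 vs 50 fps gap is exactly the "integral fractions of your monitor refresh rate" limit the post describes: 60, 30, 20, 15... are the only rates double buffering with vsync can hit.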

DirectX has always supported any level of buffering (double, triple, even 100x), but it's up to your game to select the level of buffering to use. Come to think of it, it seems like some games are poorly written and don't use triple buffering on DirectX.

By the way, triple buffering is something that's existed since the 2d days. It's not an exclusively 3d thing.
 
Wouldn't it make sense then to always use triple buffering, since it would only get used when there are extra cycles for the video card? How can it slow down performance? From what I can tell, extra buffers are always good since they keep the card moving...
 
With TB enabled/vsync off, it results in a 0.1 fps increase in my Enemy Territory benches. I'll take it....
 
Triple buffering can smooth out the frame rate. Take for example: you look at a section of the level, and your card can render it very fast. Then something happens and your card has more trouble with it. The buffer in the back is already rendered, so you don't take such an immediate or recognizable performance hit.

A reason you wouldn't want to use triple buffering is that it takes up that much more video memory. Take your resolution and color depth and multiply them together: 1024*768*4 (4 bytes = 32 bits per pixel) = 3,145,728 bytes, or 3 MB. Multiply that by how many buffers you have, and now you have that much less video memory for textures. Triple buffering would be eating up 9 megabytes just to display the image. Quad buffering at 2048x1536x32(bits) is a crazy 12 MB per buffer times 4 total buffers, equals ~48 megabytes! Yowza! So, if you want to conserve video memory because you have an older video card, don't turn on triple buffering.

Also, when you turn on triple buffering, you may notice a little more lag in your input and movement. This is because the frame you are seeing is 3 frames old. The one you are currently moving in is in the third buffer, which won't be displayed for another 2 frames. If you want clarification on this, just ask.
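The memory math in the post above is easy to verify. A quick sketch (the helper function name is just for illustration):

```python
# Sketch of the back-of-the-envelope framebuffer memory math.
def framebuffer_bytes(width, height, bytes_per_pixel=4, buffers=1):
    """Video memory consumed by `buffers` full-screen color buffers."""
    return width * height * bytes_per_pixel * buffers

# 1024x768 at 32-bit color: one buffer is 3,145,728 bytes (~3 MB)...
print(framebuffer_bytes(1024, 768))              # 3145728

# ...so triple buffering (3 buffers total) costs ~9 MB:
print(framebuffer_bytes(1024, 768, buffers=3))   # 9437184

# Quad buffering at 2048x1536x32 is 12 MB per buffer, 4 buffers:
print(framebuffer_bytes(2048, 1536, buffers=4) / 2**20)   # 48.0 MB
```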
 
Originally posted by: jerm007
Triple buffering can smooth out the frame rate. [...]

Thanks for the explanation. So how many frame buffers are there usually? Two or one?
 
Triple buffering will lower performance, but can improve image quality
Uh, no. Triple buffering has no bearing on image quality and can improve performance if you also use vsync, though it does increase input lag.

Why would I not want it enabled and where will it help?
If you use vsync then you could turn it on for better performance.
 
What should the force Z-buffer depth be set at? I would assume 24?
Correct.

What are its benefits over 16?
You'll experience fewer Z/depth errors. Examples of this are objects that overlap each other in the wrong order and/or sawblade effects along join lines.
 
What are its benefits over 16?
You'll experience fewer Z/depth errors. Examples of this are objects that overlap each other in the wrong order and/or sawblade effects along join lines.

Disadvantage is that there's more video memory used, possibly resulting in lower FPS.
 
Disadvantage is that there's more video memory used, possibly resulting in lower FPS.
That's really not an issue these days. In fact, most cards now run faster in 32-bit mode than they do in 16-bit mode, and that's with the full colour, not just the depth.
 
ClickNext,
There are usually 2 frame buffers. Those would be the current viewable frame buffer, which is what you are seeing on the screen, and the other is the back buffer that the video card draws to. When adding more buffers via triple buffering, there are more back buffers. The number goes from 1 back buffer to however many you add, with triple buffering having 2 back buffers and the visible frame buffer.

Most new video cards do just as well in 16-bit mode as they do in 32-bit mode, performance-wise. The 24-bit z-buffer will result in better image "quality." Quality here meaning there is no, or less, flickering between objects that are close together. If you would like more explanation of what exactly the z-buffer is and does, I would be happy to explain it.
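The flickering mentioned above (z-fighting) comes down to how finely the Z-buffer can distinguish depths. A rough sketch of the 16-bit vs 24-bit difference (real Z-buffers distribute precision non-linearly, so this only shows the raw step counts):

```python
# Sketch: distinct depth values available at each Z-buffer precision.
# Two surfaces closer together than one depth step can land in the
# same bucket and "z-fight" (flicker between draw orders).
for bits in (16, 24):
    steps = 2 ** bits
    print(f"{bits}-bit Z-buffer: {steps:>10} depth steps")

# 24-bit gives 2**24 / 2**16 = 256x finer depth resolution than 16-bit,
# which is why the overlap and sawblade artifacts mostly disappear.
```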
 