Hooray for patience, I typed all this up last night but I guess it never sent. Let's try it all over again. Unfortunately, it might be slightly more informal since this is the second time I have typed this all out.
(81.59 FPS)^-1 = 0.01226 seconds per frame = 12.26 ms per frame, which is roughly 3 ms shorter than the average processing time of a 580.
This makes sense, as the framerate of 2x 6850 CF is higher than that of a single 580, meaning the average processing time of the 6850s is shorter than the 580's.
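If you want to double-check that conversion yourself, here is a quick back-of-envelope snippet. It only uses the framerate quoted above; nothing else is official:

```python
# Sanity check: convert an average framerate into an average frame interval.
def frame_time_ms(fps: float) -> float:
    """Average time between frames, in milliseconds, for a given framerate."""
    return 1000.0 / fps

cf_6850_fps = 81.59  # 2x 6850 CF average framerate quoted above
print(f"{frame_time_ms(cf_6850_fps):.2f} ms per frame")  # ~12.26 ms
```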
Side note:
The time needed to generate a frame and the framerate have nothing to do with how fast the monitor can display it. The time from when the signal arrives at the monitor to when the pixels are populated on the screen is called response time. A high-end monitor has a 2 ms response time, which is independent of the time required to generate a frame.
Remember that there is a difference between average processing time and average framerate. The entire point of my above post is that while the average framerate of the 6850s is higher than a 580's, the average processing time of a 580 is a decent amount shorter than that of the 6850s.
This is why SLI setups never make the pro gaming leagues. They are at a constant disadvantage because, with x cards, they are almost always x-1 frames behind at 100% scaling. I can create insanely specific situations where an Xfire/SLI 6850 solution surpasses a 580 in input delay, but they never happen in real-world decent engines such as anything ID Tech. The crappy Gamebryo (Fallout 3) engine MIGHT be able to cut its input delay by more than half with two cards, but since the engine itself takes so long to render an image, it is still well behind an ID Tech engine regarding input delay on a single card.
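To make that framerate-versus-latency point concrete, here is a toy sketch. The 12 ms and 18 ms render times are invented for illustration, not benchmarks of any real card:

```python
# Toy model: one fast card vs. two slower cards in alternate-frame rendering (AFR).
# All render times below are made-up numbers for illustration only.
single_card_ms = 12.0  # hypothetical per-frame render time of the single card
afr_card_ms = 18.0     # hypothetical per-frame render time of each AFR card
num_cards = 2

# Throughput: with perfect AFR scaling the cards alternate frames, so a new
# frame is finished every afr_card_ms / num_cards milliseconds.
single_fps = 1000.0 / single_card_ms              # ~83 FPS
afr_fps = 1000.0 / (afr_card_ms / num_cards)      # ~111 FPS -- higher framerate

# Latency: any individual frame still takes the full per-card render time,
# so the AFR setup delivers each frame later than the single card does.
print(f"single card: {single_fps:.0f} FPS, {single_card_ms:.0f} ms to finish a frame")
print(f"AFR x2:      {afr_fps:.0f} FPS, {afr_card_ms:.0f} ms to finish a frame")
```

Higher average framerate, yet every individual frame shows up later. That is the whole framerate-versus-processing-time distinction in two print statements.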
Your side note is absolutely 100% true: pixel response time and monitor input delay are completely separate from rendering times. My responses on the matter are below:
Madcatatlas said:
Seriously? No matter how you try to hide it. You just said 1 card "gets the frames to your monitor" ca 9.0ms FASTER than two cards.
That is insane if true.
People are looking for LOW ms monitors and then they get shafted like what you are suggesting here, when going from one card to two?
bogus or true?
It is indeed true, and I would much prefer a single 580 to two 6850s, but below I'll explain why that extra 9ms isn't QUITE as bad as it sounds.
Monitor response time is the time it takes for a single individual pixel to completely change color states to a new value. It is generally measured using grey-to-grey transitions because those transitions take much less time than, say, red-to-brown changes. When people look for "low ms" monitors, as you put it, they are looking for monitors with low pixel response times. In addition to creating more input delay, pixel response time creates a much more serious problem:
THIS VIDEO (Anandtech is down right now, so no video) shows how the pixels take time to change. The example monitor refreshes in much the same way as old CRTs did, line by line. It is quite obvious in the video that once the Line of Changing™ tells the pixels to change values, nothing happens instantaneously. We still have to wait out the pixel response time before the correct color value appears. The longer a group of pixels takes to change values, the less sharp a moving image appears. Back in the day when LCD technology was new, it was common for a pixel to take multiple whole frames to complete a color change. When the image moved, it created an effect called ghosting.
It normally isn't that big of a deal when the pixel response time is shorter than the vertical refresh interval. But since we are used to over-correcting failures, most gamers use pixel response time as a metric when comparing two different monitors. In much the same way that the Pentium 4 reinforced the idea of Instructions Per Clock in the CPU forum, early LCD panels made gamers look for the lowest response time available. There is another delay criterion that I value far more than the difference between a 1ms and a 5ms pixel response time.
Monitor input delay is the sum of all image processing and pixel response times. Every frame, a monitor needs to correctly determine gamma, contrast, brightness, and overdrive values for every single pixel on screen. Some monitors, such as the BenQ XL2410T (the fastest monitor that I can think of), do this extremely fast. Others take quite a bit of time regardless of their pixel response time. Since manufacturers don't want to have a bullet point containing "Our monitors only suck THIS much", it is very advisable to read third-party reviews regarding input delay testing. That BenQ monitor I mentioned needs about 8ms to do its image processing. As far as I know, this is the fastest monitor in the world...
In addition to monitor processing time, there is GPU processing, CPU processing, transmission delay, and I/O delay. That extra 9ms from Xfire isn't so extreme considering everything else.
In the professional gaming world, any delay is the devil. 9ms on its own might not help that much, but 9ms from GPU processing time, 3ms from CPU time, 7ms from mouse time, 10ms of monitor time, 5ms from lowered video settings, etc., etc., all add up. If you play to win, a single video card is the only way to go. If you want to admire the virtual scenery, going with multiple cheaper cards isn't a bad option.
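Just to show how those little delays pile up, here is a trivial tally of the example figures above. They are illustrative numbers, not measurements of any specific setup:

```python
# Adding up the example delays listed above (illustrative numbers only).
delays_ms = {
    "GPU processing": 9,
    "CPU time": 3,
    "mouse": 7,
    "monitor processing": 10,
    "lowered video settings": 5,
}
total = sum(delays_ms.values())
print(f"total extra input delay: {total} ms")  # 34 ms, about two 60 Hz frames
```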
Once again, make your own conclusions. I'll fix this post tomorrow when I can think a little more clearly. There is a lot omitted from what I typed yesterday. Plus the formatting sucked; I tried to go really fast before the thread got forgotten. MUAH, and any questions you have, I'll address later. Ask away.
Make sure to re-read this post in 24 hours. I'll make it a lot better I promise.