In a two-card setup, GPU1 renders the first frame. How is it determined when GPU2 will render the next frame?
If the cards were rendering a movie, where the frames are all pre-determined and the frame rate is fixed, then the usual explanation that GPU1 does the even frames and GPU2 does the odd frames would cover it. But in PC games the frame rate is usually limited only by the speed of the system, and especially of the GPU(s). So GPU1 renders the first frame, but GPU2 could start rendering the next frame of the scene at any arbitrary moment. If it starts immediately after GPU1 starts, or only after GPU1 finishes, then we aren't getting much benefit. For the best benefit, GPU2's frame should be ready roughly halfway between GPU1's frames, but how is this determined when frame render times within a game vary?
thanks.
The rendering does not occur back to back; the two GPUs work simultaneously. Rendered frames are put into VRAM (buffering) as finished goods, and a separate process then displays the finished goods from the buffer. Let's skip the parts after the buffer and assume the buffers are always empty. Then the two factors that control the FPS are a) the processing power of the GPUs, and b) the speed at which data comes in. Let's assume b), in terms of time, is shorter than the time it takes to render one frame; then the only factor left is a), which is what you want to know.
Think of b) as a deck of cards. GPU1 draws the first card and starts doing its thing. GPU2 does not need to wait for GPU1 before drawing the second card off the deck. Drawing a card takes time, so GPU2 does have to wait initially until the first card has been drawn. But if we assume the time to render each frame is identical, then GPU1 will finish its frame X ms before GPU2, where X is the time it takes to draw a card, because GPU1 started rendering while GPU2 was still drawing its card. In this case there is no waiting once the pipeline gets going; a sketch of the timing is below.
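To make that timing concrete, here is a rough Python simulation of the idea (not how any driver actually schedules things; the draw and render times are made-up numbers): frame i goes to GPU i % 2, and each GPU starts as soon as its card has been drawn and it is free.

```python
# A made-up simulation of Alternate Frame Rendering, not any vendor's scheduler.
# Frame i goes to GPU (i % 2). A frame can start only after its input data has
# been "drawn from the deck" (DRAW_MS, serialized) and its GPU is free.

DRAW_MS = 2.0     # assumed time to prepare/submit one frame's data (the card draw)
RENDER_MS = 16.0  # assumed time for either GPU to render one frame
NUM_FRAMES = 8

gpu_free_at = [0.0, 0.0]  # when each GPU finishes its current frame
draw_done_at = 0.0        # cards are drawn one after another
finish_times = []

for frame in range(NUM_FRAMES):
    draw_done_at += DRAW_MS                       # this frame's card is drawn
    gpu = frame % 2                               # AFR: alternate between the two GPUs
    start = max(draw_done_at, gpu_free_at[gpu])   # wait for the card and for the GPU
    finish = start + RENDER_MS
    gpu_free_at[gpu] = finish
    finish_times.append(finish)
    print(f"frame {frame}: GPU{gpu + 1} renders {start:5.1f} -> {finish:5.1f} ms")

gaps = [b - a for a, b in zip(finish_times, finish_times[1:])]
print("gaps between finished frames (ms):", gaps)  # alternates 2.0 and 14.0 here
```

Both GPUs stay fully busy (each starts a new frame every 16 ms), so throughput roughly doubles. But notice the finished frames come out in pairs 2 ms apart followed by a 14 ms gap rather than a steady 8 ms; that uneven spacing is exactly the micro lag described next.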
Theoretically speaking, it is perfect. In practice, none of the above assumptions stay constant. There is an optional pre-rendered frames setting, which is another buffer used to store the deck of cards (the input). On the other end, the user can choose triple buffering or double buffering (the output), and vsync on or off. Note that the display buffer sits on the first video card, meaning the finished goods from GPU2 must be sent over to video card 1, which is what the bridge is for. It is very fast, but not instant, so the frames handled by GPU2 always arrive a bit later than GPU1's. This is what causes a tiny lag between every two frames. Although the lag is tiny, it can be seen easily. The workaround is to even out that lag via a triple buffer (a deck that can stack up to three cards). The buffer by itself doesn't do anything; it all depends on the speed of drawing the cards. If the average FPS is 70, then drawing cards at a steady 60 FPS pace will keep the buffer in use and therefore eliminate the micro lag. But using the buffer means frames are not displayed the instant they are ready.
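Here is an equally rough sketch (again, not the real driver logic) of how buffering lets the output be paced evenly: take the uneven finish times from the simulation above and show each frame on a fixed cadence instead of the instant it is ready. The cadence value is just the assumed average frame time.

```python
# Illustration only: pace the uneven AFR finish times onto a fixed presentation
# cadence, the way an extra buffered frame (triple buffering) lets the output
# be smoothed. PRESENT_INTERVAL is an assumed average frame time.

finish_times = [18.0, 20.0, 34.0, 36.0, 50.0, 52.0, 66.0, 68.0]  # from the sketch above
PRESENT_INTERVAL = 8.0                                           # average gap in ms

presented = []
present_time = finish_times[0]  # the first frame is shown the moment it is ready
for finish in finish_times:
    present_time = max(present_time, finish)  # cannot show a frame before it exists
    presented.append(present_time)
    present_time += PRESENT_INTERVAL          # hold the next frame to the cadence

gaps = [b - a for a, b in zip(presented, presented[1:])]
extra_wait = [p - f for p, f in zip(presented, finish_times)]
print("presented at (ms):", presented)  # 18, 26, 34, 42, ... -- steady 8 ms apart
print("extra wait (ms):", extra_wait)   # 0, 6, 0, 6, ... -- the cost of smoothing
```

The gaps become a steady 8 ms, but every second frame now sits in the buffer for 6 ms before it is shown, which is the "frames will not be displayed immediately" trade-off.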
BTW, nenforcer is right: that is only one of several methods of distributing the load, namely Alternate Frame Rendering (AFR). The other commonly used method is Split Frame Rendering (SFR), which divides each frame into two equally weighted portions before rendering and sends them to the two video cards. The problem with SFR is how to predict the weights, plus the time the prediction itself takes.
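For completeness, here is a toy sketch of the SFR idea (illustrative only; real drivers are far more involved): pick the horizontal split line so the estimated cost above and below it is about equal, using a per-scanline cost estimate that in practice has to be predicted before the frame is rendered.

```python
# Toy illustration of the Split Frame Rendering idea, not any driver's method:
# choose a split row so the estimated work above and below it is roughly equal.
# per_row_cost is a made-up, per-scanline cost estimate.

def choose_split(per_row_cost):
    """Return the row index where the cumulative cost reaches half the total."""
    total = sum(per_row_cost)
    running = 0.0
    for row, cost in enumerate(per_row_cost):
        running += cost
        if running >= total / 2:
            return row
    return len(per_row_cost) - 1

# Assume the bottom rows (ground, characters) cost more than the top rows (sky),
# so a fair split lands below the middle of the screen.
height = 1080
per_row_cost = [1.0 if row < height // 2 else 3.0 for row in range(height)]

split = choose_split(per_row_cost)
print(f"GPU1 renders rows 0..{split}, GPU2 renders rows {split + 1}..{height - 1}")
```

With these made-up costs the fair split lands at row 719 rather than the middle of a 1080-row frame, and if the prediction is wrong, one GPU simply sits idle waiting for the other to finish its half.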