blanketyblank
Golden Member
- Jan 23, 2007
- 1,149
- 0
- 0
After reading the article, it looks like the first generation simply renders every Xth frame on the feeble GPU. While guaranteed to increase benchmark scores, this will stutter and/or have input lag like nothing seen before or since.
Think of it this way: GPU A can render a frame in 16 milliseconds (60 fps). GPU B takes 60 ms per frame (about 17 fps). If the load is balanced 4:1, you'd have to be willing to put up with roughly 120 ms of input lag (pre-rendering all 5 frames and then displaying them at a smoothed ~70 fps) to avoid awful stuttering.
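The arithmetic above can be sketched with a toy model. This is purely illustrative back-of-the-envelope math, not how any real driver schedules frames; it assumes both GPUs render their share of a batch in parallel, the whole batch is buffered, and one batch scans out while the next renders (so a frame's input can be up to two batch-times old):

```python
# Toy model of splitting rendering between a fast and a slow GPU.
# All numbers are hypothetical, matching the example in the post above.

def frame_split(fast_ms, slow_ms, fast_frames, slow_frames):
    """Return (smoothed fps, worst-case input lag in ms) for one batch.

    fast_ms / slow_ms: per-frame render time of each GPU.
    fast_frames / slow_frames: how many frames of the batch each renders.
    """
    # Both GPUs work in parallel, so the batch takes as long as the
    # slower of the two workloads.
    batch_ms = max(fast_ms * fast_frames, slow_ms * slow_frames)
    frames = fast_frames + slow_frames
    smoothed_fps = 1000.0 * frames / batch_ms
    # A whole batch is rendered while the previous batch displays, so the
    # last frame of a batch can trail its input by about two batch-times.
    worst_lag_ms = 2 * batch_ms
    return smoothed_fps, worst_lag_ms

# The 4:1 split from the example: 16 ms and 60 ms per frame.
fps, lag = frame_split(fast_ms=16, slow_ms=60, fast_frames=4, slow_frames=1)
print(f"{fps:.0f} fps smoothed, up to {lag} ms input lag")
```

Under these assumptions the 4:1 case comes out to roughly 78 fps with worst-case lag around 128 ms, in the same ballpark as the ~70 fps / ~120 ms figures quoted above.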
The grand vision of load sharing, if ever implemented, still holds lots of promise.
True, but I don't really think they'd ever do something like 1:4. More likely, in those circumstances, you'd just offload some other function to the weaker GPU, like some of the tessellation or physics calculations. Otherwise, I'd imagine you'd have two cards much closer in performance, say a 5770 and a 5850 (a bit less than twice as fast), where we'd see a 1:2 split, and prerendering 3 frames wouldn't be that bad.
