So, if you've ever looked at a pair of stereoscopic images, you'll have noticed that they usually look almost the same, with only a very slight difference in viewing angle.
As far as I know, all stereoscopic rendering on current 3D video cards must render TWICE as many frames for a stereoscopic scene as for a regular scene (please correct me if I'm wrong).
But I'm wondering if that is really necessary. Since so much of the data in the second image is the same as in the first, is there some possible method of extrapolating most of the second image from the first and rendering significantly less for the second view, allowing the video card to save a lot of its rendering juice?
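For concreteness, here's a minimal sketch of the brute-force approach I mean: two full scene passes per frame, with the camera shifted half the eye separation each way. `renderScene` and all the numbers are made-up stand-ins, not any real engine's API:

```cpp
#include <cstdio>

struct Vec3 { float x, y, z; };

// Stand-in for a real engine call; a real renderer would draw the
// entire scene from eyePos.
void renderScene(const Vec3& eyePos) {
    std::printf("rendering full scene from eye at x = %+.4f\n", eyePos.x);
}

int main() {
    const float ipd = 0.064f;             // typical eye separation, metres
    const Vec3 center{0.0f, 1.7f, 0.0f};  // mono camera position
    const Vec3 right{1.0f, 0.0f, 0.0f};   // camera's right vector

    // Two full passes per frame -- this is the doubled cost I'm asking about.
    const float sides[] = {-0.5f, 0.5f};
    for (float side : sides) {
        const Vec3 eye{center.x + side * ipd * right.x,
                       center.y + side * ipd * right.y,
                       center.z + side * ipd * right.z};
        renderScene(eye);
    }
}
```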
A couple of examples:
1) Stereoscopy fails to produce depth data once the distance from the viewer is a certain factor greater than the distance between his eyes. The only reason you can tell one mountain is closer than another is that one of them is hazier than the other. Is there a way to make the video card not bother re-rendering data that is beyond a certain distance from the in-game camera? (The first sketch below tries to put a number on that distance.)
2) Most objects that are closer to the camera will have almost the same shape and textures in both views, just shifted in one direction. Could the texture on such an object simply be offset to one side, and then only the little sliver that appears on the other side be rendered, instead of the whole thing? (The second sketch below shows the kind of shift-and-fill I mean.)
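To put a rough number on example 1: here's a back-of-the-envelope calculation, using a simple pinhole camera model, of the depth beyond which the left/right disparity drops below one pixel. The resolution, FOV, and eye separation are just plausible guesses on my part:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const double width   = 1920.0;  // horizontal resolution, pixels
    const double hfovDeg = 90.0;    // horizontal field of view, degrees
    const double ipd     = 0.064;   // eye separation, metres
    const double pi      = 3.14159265358979323846;

    // Pinhole model: focal length expressed in pixels.
    const double fPx = (width / 2.0) / std::tan(hfovDeg * pi / 360.0);

    // Screen-space disparity is fPx * ipd / depth, so it falls below
    // one pixel once depth exceeds:
    const double zCutoff = fPx * ipd;
    std::printf("disparity < 1 px beyond %.1f m\n", zCutoff);  // ~61 m here
}
```

With those guesses, everything past roughly 60 m lands on the same pixels in both eyes, so in principle it would only need to be rendered once.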
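And here's a toy, CPU-side version of the idea in example 2, on a single scanline: take the left eye's colour and depth, shift each pixel by its disparity to synthesize the right eye, and mark the uncovered sliver that would still need real rendering. The constants are made up, and a real implementation would presumably run as a GPU pass with a proper depth test when shifted pixels collide:

```cpp
#include <array>
#include <cstdio>

int main() {
    constexpr int W = 16;
    std::array<char,  W> colorL{};  // left-eye colour scanline
    std::array<float, W> depthL{};  // left-eye depth scanline
    std::array<char,  W> colorR{};  // synthesized right-eye scanline

    // Fake scene: a near object 'N' in front of a far background 'B'.
    for (int x = 0; x < W; ++x) { colorL[x] = 'B'; depthL[x] = 50.0f; }
    for (int x = 6; x < 10; ++x) { colorL[x] = 'N'; depthL[x] = 2.0f; }

    const float fPx = 8.0f, ipd = 0.5f;  // made-up camera constants
    colorR.fill('?');                    // '?' = hole that must be re-rendered

    // Warp: shift each left-eye pixel by its disparity (near pixels move
    // further). Here the near object happens to win collisions because it
    // is written last; a real warp would keep the nearest sample.
    for (int x = 0; x < W; ++x) {
        const int xr = x - static_cast<int>(fPx * ipd / depthL[x] + 0.5f);
        if (xr >= 0 && xr < W) colorR[xr] = colorL[x];
    }

    std::printf("left : %.*s\n", W, colorL.data());
    std::printf("right: %.*s\n", W, colorR.data());
    // The '?' cells are the disoccluded sliver: the only pixels the video
    // card would actually have to render for the second eye.
}
```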
Is there any research into this?