3d stereoscopy data redundancy

Discussion in 'Video Cards and Graphics' started by serpretetsky, Nov 11, 2012.

  1. serpretetsky

    serpretetsky Senior member

    Joined:
    Jan 7, 2012
    Messages:
    549
    Likes Received:
    0
    So, if you've ever looked at a pair of stereoscopic images, you'll notice they usually look almost the same (a very slight difference in angle).

    As far as I know, all stereoscopic rendering on current 3D video cards must render TWICE as many frames for a stereoscopic scene vs. a regular scene (please correct me if I'm wrong).

    But I'm wondering if that is really necessary. Since so much of the data in the second image is the same as in the first, is there some possible method of extrapolating most of the second image from the first and rendering significantly less for the second scene, allowing the video card to save a lot of its rendering juice?

    A couple of examples:
    1) Stereoscopy fails to produce depth information once the distance from the viewer is a certain factor greater than the distance between his eyes (rough numbers in the sketch below). The only reason you can tell one mountain is closer than another is that one of them is hazier than the other. Is there a way to make the video card not bother re-rendering geometry beyond a certain distance from the in-game camera?

    2) Most objects that are closer to the camera will have almost the same shape and textures in both views, just shifted slightly in one direction. Could the texture on that object simply be offset to one side, and then only the little sliver that appears on the other edge be rendered, instead of the whole thing?
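
    Rough numbers behind example 1, as a sketch: assuming a simple parallel-camera setup, with made-up eye separation and focal length values (not from any real driver), the left/right pixel shift shrinks with depth and quickly drops below a pixel.

    ```python
    # Sketch for example 1: how big is the left/right pixel shift (disparity)
    # for a point at a given depth?  All values are illustrative assumptions,
    # not from any actual stereo 3D driver.

    def disparity_pixels(depth_m, eye_sep_m=0.065, focal_px=1200.0):
        """Horizontal screen-space shift between the two eye views for a point at depth_m."""
        return eye_sep_m * focal_px / depth_m

    def negligible_depth(eye_sep_m=0.065, focal_px=1200.0, threshold_px=0.5):
        """Depth beyond which the shift falls under half a pixel, i.e. the two views
        are effectively identical and re-rendering the far geometry buys nothing."""
        return eye_sep_m * focal_px / threshold_px

    if __name__ == "__main__":
        for d in (1, 10, 50, 150, 500):
            print(f"{d:>4} m -> {disparity_pixels(d):.2f} px shift")
        print(f"shift negligible beyond ~{negligible_depth():.0f} m")
    ```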

    Is there any research into this?
     
  2. omeds

    omeds Senior member

    Joined:
    Dec 14, 2011
    Messages:
    637
    Likes Received:
    0
    TriDef has a mode called "virtual 3D" that extrapolates a 3D image from a single 2D frame; however, it doesn't look anywhere near as good, and objects are often not at the correct depth.
    I'm guessing it's difficult to do, and doing it correctly may take some serious processing, in which case you might as well just render the two correct frames in the first place.

    I don't know of any "mixed" modes that combine real 3D for certain objects and extrapolate the rest like in your suggestion, apart from Crysis 2, and there the 3D effect is rather poor.
     
  3. serpretetsky

    serpretetsky Senior member

    Joined:
    Jan 7, 2012
    Messages:
    549
    Likes Received:
    0
    Well, I'm not referring to attempting to extrapolate 3D data just from a single frame and nothing else. I believe that's what TVs do, and quite honestly, I have no idea how they even manage it, since I can't understand how you get depth data from a single frame or even a stream of frames. I've heard they can try to use edges, motion blur, focus blur, etc., but in the end I still can't fathom how they do it. Maybe its unfathomability is part of the reason most people say it looks like crap, because it really doesn't work. I don't know.


    Here's what I'm saying:
    The video card has two things to work with:
    1) the single frame that has already been rendered, just from the other angle
    2) ALL of the depth data for the models being rendered.
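
    To make that concrete, here is a minimal sketch of the kind of thing I mean (my own made-up function names, not any real driver's code): take the frame that is already rendered plus its depth buffer, shift every pixel sideways by its disparity, and only the "holes" left behind -- the disoccluded slivers -- would still need real rendering.

    ```python
    import numpy as np

    def reproject_to_other_eye(color, depth, eye_sep_m=0.065, focal_px=1200.0):
        """Warp the already-rendered frame toward the other eye using its depth buffer.
        Minimal illustration only: nearest-pixel splatting, no filtering."""
        h, w, _ = color.shape
        out = np.zeros_like(color)
        filled = np.zeros((h, w), dtype=bool)
        zbuf = np.full((h, w), np.inf)

        disparity = eye_sep_m * focal_px / depth          # per-pixel shift, in pixels
        xs = np.arange(w)
        for y in range(h):
            tx = np.round(xs - disparity[y]).astype(int)  # shift toward the other eye
            ok = (tx >= 0) & (tx < w)
            for sx, dx in zip(xs[ok], tx[ok]):
                if depth[y, sx] < zbuf[y, dx]:            # keep the nearest surface
                    zbuf[y, dx] = depth[y, sx]
                    out[y, dx] = color[y, sx]
                    filled[y, dx] = True
        holes = ~filled    # only these disoccluded pixels would need real re-rendering
        return out, holes
    ```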
     
  4. omeds

    omeds Senior member

    Joined:
    Dec 14, 2011
    Messages:
    637
    Likes Received:
    0
    Yeah, I believe Crysis 2's implementation is something along those lines; however, the results are not very good.
     
  5. KingFatty

    KingFatty Diamond Member

    Joined:
    Dec 29, 2010
    Messages:
    3,025
    Likes Received:
    1
    I think the more you approximate, the faker it would look (i.e., instead of a smooth 3D depth to everything, it would look like a series of cardboard cutouts corresponding to various depth "steps").

    As far as the rendered frame vs. the depth data goes, maybe you can think of it as having a movable 3D camera viewpoint that you can reposition in the 3D world. You position the camera to get the first frame, then move it a tiny bit to the side and angle it slightly to get the second frame. So the 3D depth data is just part of the model/world, and you are just changing the camera angle/position for the different frames, or alternating between two cameras.
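
    Something like this, conceptually (illustrative numpy helpers, not a real graphics API): the world/model data stays the same, and you just build two view matrices offset half the eye separation to either side.

    ```python
    import numpy as np

    def look_at(eye, target, up=np.array([0.0, 1.0, 0.0])):
        """Build a right-handed view matrix from a camera position and a target point."""
        f = target - eye; f = f / np.linalg.norm(f)       # forward
        s = np.cross(f, up); s = s / np.linalg.norm(s)    # right
        u = np.cross(s, f)                                # true up
        m = np.eye(4)
        m[0, :3], m[1, :3], m[2, :3] = s, u, -f
        m[:3, 3] = -m[:3, :3] @ eye
        return m

    def stereo_views(center_eye, target, eye_sep=0.065):
        """Two cameras for the same scene, offset half the eye separation each way."""
        f = target - center_eye; f = f / np.linalg.norm(f)
        right = np.cross(f, np.array([0.0, 1.0, 0.0]))
        right = right / np.linalg.norm(right)
        half = 0.5 * eye_sep * right
        return look_at(center_eye - half, target), look_at(center_eye + half, target)
    ```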
     
  6. serpretetsky

    serpretetsky Senior member

    Joined:
    Jan 7, 2012
    Messages:
    549
    Likes Received:
    0
    Crysis 2 in 3D! (Updated!)
    Published on Wednesday, 20 April 2011 10:59
    Written by Neil Schneider
    http://www.mtbs3d.com/index.php?option=com_content&view=article&id=12429&Itemid=76

    Thank you for that, omeds! I wonder what approximations they use.

    Yes, as far as I know that's how games (and the drivers) produce the stereoscopic image.
     
  7. SirPauly

    SirPauly Diamond Member

    Joined:
    Apr 28, 2009
    Messages:
    5,187
    Likes Received:
    0
    Indeed -- 2D+depth! It didn't offer the depth quality of traditional stereo 3D methods, but the performance hit was low and compatibility with game features was very good.