- Apr 24, 2004
- 19
- 0
- 0
[Example image]
In recent years I've been blown away by the amount of realism and detail being introduced into real-time 3D. Take a look at the example image. The features on the face are remarkably detailed, with beautiful lighting (not to mention a stone-cold stare). But the profile is all sharp lines and jagged edges. The jaggies can be smoothed out using antialiasing, but why can't the sharp lines be smoothed out too? We already have the normals at each vert. Why not fit a third-order curve between those vertices? The only real issues are finding those critical edges and filling in the void with something meaningful.
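To make that a little more concrete, here's roughly what I have in mind, just a sketch in C++ (the names and structure are mine, not from any API): a cubic Hermite curve between two silhouette vertices, with endpoint tangents derived from the projected vertex normals, so the profile curves smoothly instead of running straight between the verts.

```cpp
#include <cmath>

struct Vec2 { float x, y; };

static Vec2 add(Vec2 a, Vec2 b)    { return { a.x + b.x, a.y + b.y }; }
static Vec2 scale(Vec2 v, float s) { return { v.x * s, v.y * s }; }

// In 2D screen space the silhouette tangent is perpendicular to the
// projected normal (up to sign and scale, which would need real care).
static Vec2 tangentFromNormal(Vec2 n) { return { -n.y, n.x }; }

// Evaluate a cubic Hermite curve between silhouette vertices p0 and p1,
// with endpoint tangents m0 and m1. t runs from 0 to 1 along the edge.
Vec2 hermite(Vec2 p0, Vec2 m0, Vec2 p1, Vec2 m1, float t)
{
    float t2 = t * t, t3 = t2 * t;
    float h00 =  2*t3 - 3*t2 + 1;  // basis weight for p0
    float h10 =    t3 - 2*t2 + t;  // basis weight for m0
    float h01 = -2*t3 + 3*t2;      // basis weight for p1
    float h11 =    t3 -   t2;      // basis weight for m1
    Vec2 r = scale(p0, h00);
    r = add(r, scale(m0, h10));
    r = add(r, scale(p1, h01));
    r = add(r, scale(m1, h11));
    return r;
}
```

The tangent direction and length are the fuzzy part (the normals only pin them down up to sign and magnitude), but that's the rough shape of the idea.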
I'm somewhat of a beginner in computer graphics, so maybe there's an obvious reason why this would be difficult that I don't see. First we need to find the critical edges. What I consider critical are the edges where the difference between the pixel depth on the edge and the pixel depth just off the edge exceeds a specified threshold; for instance, in the image, the edge of the head against the back wall. I think we would need the projected vertex locations and normals, maybe also a flag for whether or not each vertex should be considered in the smoothing process, and a depth limit for which edges to include (smoothing out every critical edge in the scene would probably be very computationally intensive). Then we can stretch the already-textured polys. I'm thinking this would all be a post-process. One problem I'd worry about is what happened to the pixels on the back wall that were covered by a pointed edge. At what point were they discarded? Can I detect the critical edge and save some of the runner-up pixels along each edge?
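For the edge-detection part, this is the kind of thing I mean, again just a hedged sketch (the buffer layout and the threshold are assumptions of mine, not anything standard): walk the depth buffer and flag a pixel as a critical edge wherever the depth delta to a neighbour crosses the threshold.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Flag "critical" edge pixels: anywhere the depth difference between a
// pixel and its right or bottom neighbour exceeds the threshold.
// 'depth' is a row-major w*h buffer of linear depth values.
std::vector<bool> findCriticalEdges(const std::vector<float>& depth,
                                    int w, int h, float threshold)
{
    std::vector<bool> edge(depth.size(), false);
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            std::size_t i = std::size_t(y) * w + x;
            if (x + 1 < w && std::fabs(depth[i] - depth[i + 1]) > threshold)
                edge[i] = true;
            if (y + 1 < h && std::fabs(depth[i] - depth[i + w]) > threshold)
                edge[i] = true;
        }
    }
    return edge;
}
```

In a real renderer this would presumably run on the GPU over the depth buffer, but the comparison itself is that simple.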
Is this even possible with the vertex shader and pixel shader separate? Would this only be possible on DirectX 10-type hardware? Am I way off and looking at this completely wrong?
I remember a technology used briefly to interpolate a higher-poly model from a lower-poly one (ATI's TruForm, I think). That seems excessive now. Keep the lower poly count but smooth out the rough edges; that's my thought.
Maybe this is wildly out of place, but it was just something that has been nagging at me for a while and I didn't know of a better place to post this.
I appreciate any input (good or bad).
Thanks for reading