This got linked off of another forum I read ...
http://www.youtube.com/watch?v=QlMCToxlt1c
This is a demo video from a company called Unlimited Detail, showing off what they call their "unlimited point cloud" technology, or something along those lines. They have a bunch of other videos on the subject, with various demos, on YouTube.
Essentially what they are saying is that they have done away with polygon meshes as the structure for organizing graphical object data, and instead use a massive "point cloud" that represents each visible point of an object (at some resolution), combined with a really fast way of searching that cloud to get back the pixels to display on screen.
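They don't explain how the search actually works, but the usual guess for this kind of claim is some hierarchical structure, e.g. a sparse octree that you only descend until a cell covers roughly one pixel. Here's a minimal sketch of that idea in Python; every name and detail in it is my assumption, not anything from their videos:

```python
class OctreeNode:
    def __init__(self, color=None):
        self.children = [None] * 8   # sparse: most slots stay None
        self.color = color           # average color of everything below this cell

def locate(node, pos, center, half, pixel_size):
    """Descend toward world-space position `pos` until the cell is no bigger
    than the on-screen pixel footprint, then return that cell's color."""
    while half * 2.0 > pixel_size:
        # pick the child octant that contains pos
        idx = (pos[0] > center[0]) | ((pos[1] > center[1]) << 1) | ((pos[2] > center[2]) << 2)
        child = node.children[idx]
        if child is None:
            return node.color        # no finer detail stored; use this level's average
        half *= 0.5
        center = tuple(c + (half if p > c else -half) for p, c in zip(pos, center))
        node = child
    return node.color
```

The appeal of something like this is that per-pixel cost scales with screen resolution and tree depth rather than with how many raw points are in the scene, which is presumably what lets them say "unlimited." Whether their search looks anything like this is anyone's guess.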
A number of comments on the videos have highlighted what seem to be the obvious questions. For example...
- What do you apply physical rules to in order to derive transformations? Nothing in the demo videos appears to be moving. You can't transform a set of points in a point cloud without knowing what sort of object they are attached to and what its properties are. In the current model you deform vertices according to structural force propagation formulas, cleave objects, that sort of thing. In a point cloud you'd presumably have an API to get back the set of visible points for an object, but without some framework underlying those points, how would you do deformations?
- There must be a pixel pipeline that processes the raw input scene, because they are showing some (pretty poor) lighting effects. You'd also need a way to combine multiple points into one pixel using a weighted average of their color/light, because scalable detail depends on it: when a feature shrinks below the size of a pixel, it has to blend with the points around it to produce a display value (a quick sketch of what I mean follows after this list). Currently most of this stuff is handled by the GPU in its internal processing pipeline, so I assume these guys must anticipate a hardware solution, and perhaps there is already a hardware component. I didn't read that far.
- Then of course there is storage size. Current models store vertex lists, polygons, and textures (at various sizes). Even taking into account high-resolution textures and a large number of mipmaps, that has to be more efficient than storing the ideal color of every visible point in your model at some resolution (some rough numbers below the list).
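On the second point, the averaging I mean is nothing exotic, just a weighted combine of whatever points land in a pixel. A toy version (purely my own illustration):

```python
def pixel_color(samples):
    """Combine the points falling into one pixel into a single display value.
    `samples` is a list of (r, g, b, weight) tuples, where weight is how much
    of the pixel that point covers."""
    total = sum(w for _, _, _, w in samples)
    if total == 0:
        return (0.0, 0.0, 0.0)       # nothing landed in this pixel
    return tuple(sum(s[i] * s[3] for s in samples) / total for i in range(3))
```

Trivial by itself, but doing it for every pixel of every frame is exactly the kind of work that mipmapping and texture filtering currently get for free from the GPU.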
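And on the third point, some back-of-envelope arithmetic makes the scale concrete. All the assumptions here are mine (surface points only, 1 mm spacing, a raw position plus color per point):

```python
# Rough storage estimate for a single bare 10 x 10 x 3 m room, surfaces only.
surface_area_m2 = 10 * 10 * 2 + 10 * 3 * 4       # floor + ceiling + four walls
points = surface_area_m2 * 1_000_000             # 1 mm grid -> 1e6 points per m^2
bytes_per_point = 3 * 4 + 3                      # three 32-bit coords + RGB
print(points * bytes_per_point / 2**30, "GiB")   # roughly 4.5 GiB, uncompressed
```

You'd presumably compress, instance repeated geometry, and store positions implicitly in a hierarchy instead of as raw coordinates, but it gives a feel for why the storage question keeps coming up.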
Anyway, just thought this was interesting. As I watched a couple of the videos this morning I was kind of leaning toward a verdict of "marketing BS" or at least "highly aspirational." Would be interested to know what you guys think.
