
"Point Clouds" and Unlimited Detail

Markbnj

Elite Member, Moderator Emeritus
Moderator
Sep 16, 2005
15,682
14
81
www.markbetz.net
This got linked off of another forum I read ...

http://www.youtube.com/watch?v=QlMCToxlt1c

This is a demo video from a company called Unlimited Detail, displaying what they call their "unlimited point cloud" technology, or something like that. They have a bunch of videos on the subject with various demos on Youtube.

Essentially what they are saying is that they have done away with polygon wireframes as a structure for organizing graphical object data, and instead have a massive "point cloud" that represents each visible point of an object (at some resolution), combined with a really fast way of searching the point cloud to get back the pixels to display on screen.
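To make the claim concrete, here's a minimal runnable sketch of the idea as the video describes it: one point chosen per screen pixel by searching the point set, rather than rasterizing triangles. Everything here (the orthographic camera, the brute-force scan, all names) is my own guess at the shape of the thing; Unlimited Detail has not published their actual algorithm.

```python
def render_points(points, width, height):
    """points: iterable of (px, py, depth, color) already in pixel coords.
    Keep the nearest (smallest-depth) point that lands on each pixel, i.e.
    a point-based z-buffer. Their claimed innovation is replacing this
    linear scan with a very fast ordered search over a huge point database."""
    depth_buf = [[float("inf")] * width for _ in range(height)]
    color_buf = [[None] * width for _ in range(height)]
    for px, py, depth, color in points:
        if 0 <= px < width and 0 <= py < height and depth < depth_buf[py][px]:
            depth_buf[py][px] = depth
            color_buf[py][px] = color
    return color_buf

pts = [(1, 1, 5.0, "red"), (1, 1, 2.0, "blue"), (0, 0, 1.0, "green")]
frame = render_points(pts, 2, 2)
print(frame[1][1])  # the closer "blue" point wins pixel (1, 1)
```

Note the key property: cost scales with the number of pixels you resolve, not the number of points in the scene, which is presumably why they call the detail "unlimited."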

A number of comments on the videos have highlighted what seem to be the obvious questions. For example...

- What do you apply physical rules to in order to derive transformations? Nothing in the demo videos appears to be moving. You can't transform a set of points in a point cloud without knowing what sort of object they are attached to, and what its properties are. In the current model you deform vertices according to structural force-propagation formulas, cleave objects, that sort of thing. In a point cloud you'd presumably have an API to get back the set of visible points for an object, but without some framework underlying those points how would you do deformations?

- There must be a pixel pipeline that processes the raw input scene, because they are showing some (pretty poor) lighting effects. You'd also need to be able to combine pixels according to the weighted average color/light of some set of input pixels, because scalable detail depends on it. When a feature of a certain size shrinks below the size of a pixel it has to combine with the pixels around it to create a display value. Currently most of this stuff is handled by the GPU in the internal processing pipeline. So I assume these guys must anticipate a hardware solution, and perhaps there is already a hardware component. I didn't read that far.

- Then of course there is storage size. Current models store vertex lists, polygons, and textures (at various sizes). Even taking hi-resolution textures and a large number of mipmaps into account, that has to be more efficient than storing the ideal color of every visible point in your model at some resolution.
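To put rough numbers on that last point (my own back-of-envelope figures, not anything from the videos):

```python
# A 1 m cube sampled at 1 mm resolution: 6 faces of 1000 x 1000 points.
points = 6 * 1000 * 1000
bytes_per_point = 12 + 3          # three 4-byte coords plus 24-bit color
cloud_mb = points * bytes_per_point / 1e6
print(cloud_mb)                   # 90 MB for one modest object

# The same cube as polygons: 12 triangles plus a 1024x1024 RGB texture
# with a full mipmap chain (~4/3 overhead) is a few MB at most.
texture_mb = 1024 * 1024 * 3 * (4 / 3) / 1e6
print(round(texture_mb, 1))       # ~4.2 MB
```

So naive point storage is an order of magnitude or two worse; any real system would presumably lean hard on instancing and compression, which may explain why their demos repeat the same objects so much.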

Anyway, just thought this was interesting. As I watched a couple of the videos this morning I was kind of leaning toward a verdict of "marketing BS" or at least "highly aspirational." Would be interested to know what you guys think.
 

Snapster

Diamond Member
Oct 14, 2001
3,916
0
0
Looks like a clever new algorithm with some potential, but they'll need to do a lot better in artwork and deformations than they are currently showing. They did say it was programmer art, so...

I would assume given it's supposedly all geometry that deformations and interactions must be possible.
 

Markbnj

Elite Member, Moderator Emeritus
Moderator
Sep 16, 2005
15,682
14
81
www.markbetz.net
Yeah, I think it's safe to assume that they have some sort of higher-level structure that ties points into objects and scenes. But the thing about vertices and polygons is that you have a relatively limited number of points on which to apply forces, and the state of everything else is derived from the effects on the wireframe. They would need something similar, I think: like a wireframe that ties the points together, and gives them surfaces to apply forces to.
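The kind of "wireframe underneath" structure described above could look something like this sketch: each surface point is bound to one of a few control vertices, physics acts only on the control vertices, and the millions of points just follow. This is entirely hypothetical on my part; nothing like it has been described by Unlimited Detail.

```python
class BoundPoint:
    def __init__(self, control_index, offset):
        self.control_index = control_index  # which control vertex it follows
        self.offset = offset                # fixed offset from that vertex

def deform(control_vertices, points):
    """Recompute world positions of every point from the few control
    vertices that physics actually acts on."""
    return [tuple(c + o for c, o in zip(control_vertices[p.control_index],
                                        p.offset))
            for p in points]

controls = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
cloud = [BoundPoint(0, (0.1, 0.0, 0.0)), BoundPoint(1, (0.0, 0.2, 0.0))]
print(deform(controls, cloud))   # rest positions, before any force
controls[1] = (1.0, 1.0, 0.0)    # physics moves one control vertex...
print(deform(controls, cloud))   # ...and its bound point moves with it
```

This is essentially vertex skinning from the polygon world applied to points, which is why it's hard to see how they avoid keeping *some* mesh-like structure around.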
 

Cogman

Lifer
Sep 19, 2000
10,286
145
106
I'm leaning towards marketing BS. In the video you posted, just count the number of times the narrator said "Unlimited Point cloud data".

My guess is that they're doing something along the lines of selectively rendering detail. In other words, they choose how complex a model to render based on how close the camera is to it. As the camera gets closer they include more points, but cull the geometry that falls outside the view, balancing things out. (It's an old trick; to see what I mean, look at the game Black & White, which did the same sort of thing.)
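The distance-based level-of-detail trick described above can be sketched like this (my own illustration of the general technique, not anything from the video):

```python
import math

def lod_level(camera_pos, object_pos, base_distance=10.0, max_level=5):
    """Roughly halve the detail budget each time the distance doubles:
    level 0 is the full-detail model, higher levels are coarser versions."""
    d = math.dist(camera_pos, object_pos)
    if d <= base_distance:
        return 0
    return min(max_level, int(math.log2(d / base_distance)) + 1)

print(lod_level((0, 0, 0), (5, 0, 0)))    # close: full detail -> 0
print(lod_level((0, 0, 0), (25, 0, 0)))   # farther: coarser -> 2
print(lod_level((0, 0, 0), (500, 0, 0)))  # very far: clamped -> 5
```

The question Crusty raises below is whether their demo is really doing something beyond this, or just hiding the level transitions well.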
 

Crusty

Lifer
Sep 30, 2001
12,684
2
81
Cogman said:
I'm leaning towards marketing BS. In the video you posted, just count the number of times the narrator said "Unlimited Point cloud data".

My guess is that they're doing something along the lines of selectively rendering detail. In other words, they choose how complex a model to render based on how close the camera is to it. As the camera gets closer they include more points, but cull the geometry that falls outside the view, balancing things out. (It's an old trick; to see what I mean, look at the game Black & White, which did the same sort of thing.)

I'm pretty sure he made an argument against this very thing in the video when he was talking about model replacement as you zoom in.
 

Markbnj

Elite Member, Moderator Emeritus
Moderator
Sep 16, 2005
15,682
14
81
www.markbetz.net
So, how do these differ from voxels?

If I heard right, they're not partitioning the space, but rather storing a coordinate and ideal color for every point on every visible surface. Points where nothing exists would be transparent, but presumably affected later on by volumetric effects, particle effects, etc.

Although I bet they are using some sort of ordered space partitioning tree for the search part.
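The ordered space-partitioning tree guessed at above would look something like a point octree. A minimal sketch, purely illustrative and in no way confirmed by Unlimited Detail:

```python
class Octree:
    """Octree over the unit cube [0,1)^3; points are stored at the leaves,
    so a query only ever descends into the octants it cares about."""
    def __init__(self, depth=0, max_depth=4):
        self.depth, self.max_depth = depth, max_depth
        self.children = {}   # octant index (0-7) -> child Octree
        self.points = []     # points stored at leaf nodes

    def _octant(self, p, center):
        # 3-bit index: one bit per axis, set if the point is past the center
        return sum((1 << i) for i in range(3) if p[i] >= center[i])

    def insert(self, p, center=(0.5, 0.5, 0.5), half=0.25):
        if self.depth == self.max_depth:
            self.points.append(p)
            return
        o = self._octant(p, center)
        child = self.children.setdefault(o, Octree(self.depth + 1,
                                                   self.max_depth))
        new_center = tuple(c + (half if p[i] >= c else -half)
                           for i, c in enumerate(center))
        child.insert(p, new_center, half / 2)

    def count(self):
        return len(self.points) + sum(c.count()
                                      for c in self.children.values())

tree = Octree()
for p in [(0.1, 0.1, 0.1), (0.9, 0.9, 0.9), (0.1, 0.9, 0.1)]:
    tree.insert(p)
print(tree.count())   # all 3 points stored, each in its own leaf octant
```

That would also answer the voxel question: a voxel grid stores every cell of the volume, while a tree like this only spends storage where surface points actually exist.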