Fast HW (and SW) multi-resolution mesh

Multi Resolution Mesh (MRM) should be easy to implement in hardware.
To display an MRM mesh based on vertex collapsing, the method is as follows:
Each vertex has a sub-vertex (collapse target) index. Vertices are stored in an array and are sorted by weight (importance), from the most important to the least. If your mesh has 100 vertices and you want to display only 50, then for each vertex referenced by a triangle corner you refine its index until you reach a vertex whose index is lower than or equal to 50. So you set up a small loop that stops when index <= 50.
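That refinement loop can be sketched in a few lines of Python. This is an illustrative software version only: the `collapse_to` array and the tiny 6-vertex mesh below are hypothetical, not from the post; each entry gives the sub-vertex a corner index collapses to.

```python
def refine(index, collapse_to, max_index):
    """Follow the collapse chain until the index falls inside the
    active vertex range [0, max_index]."""
    while index > max_index:
        index = collapse_to[index]
    return index

# Hypothetical 6-vertex mesh: collapse_to[v] is the sub-vertex of v.
collapse_to = [0, 0, 1, 1, 2, 3]

print(refine(5, collapse_to, 3))  # 5 -> 3
print(refine(5, collapse_to, 2))  # 5 -> 3 -> 1
```

Vertices already inside the active range pass through the loop unchanged.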

With collapsing, some triangles become null (two or three of their indices become equal after index refinement). To avoid this error, you only need to sort triangles by their vertex-count threshold. Then, as the vertex count decreases, you simply stop the triangle display loop when the current triangle becomes null: if you store a threshold (an integer) for each triangle, you never have to test a triangle's validity, because you can compute the exact number of triangles that are valid for a given number of vertices.
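The sorting idea can be sketched as follows, assuming each triangle already has a precomputed threshold, i.e. the smallest active-vertex count at which it is still non-null (the triangle data and thresholds here are made up for illustration):

```python
from bisect import bisect_right

def sort_by_threshold(triangles, thresholds):
    """Order triangles so that those which stay valid longest come first;
    the valid triangles for any vertex count then form a prefix."""
    order = sorted(range(len(triangles)), key=lambda i: thresholds[i])
    return [triangles[i] for i in order], sorted(thresholds)

def valid_triangle_count(sorted_thresholds, vertex_count):
    # No per-triangle validity test at draw time: just count how many
    # thresholds are <= the current vertex count.
    return bisect_right(sorted_thresholds, vertex_count)

tris, thrs = sort_by_threshold([(2, 3, 4), (0, 1, 2), (1, 2, 3)], [5, 3, 4])
print(valid_triangle_count(thrs, 4))  # 2: the first two triangles are valid
```

The display loop then draws exactly that many triangles from the front of the sorted list and never touches a null one.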

This algorithm permits the implementation of every simplification method based on vertex collapsing, by providing a generic way to refine a mesh.

Currently, an MRM algorithm needs to send at least 3 vertex indices per triangle to the graphics board. This is a penalty in most cases, and it gives poor results on very fast GPUs. Such an extension could be driven by a single integer per vertex. When displaying a mesh from a display list or a display element, you would simply specify the number of triangles to display and the associated maximum vertex index.
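In software, the proposed draw call might look like the sketch below: the only per-vertex MRM data is one integer (`collapse_to`), and the caller passes just a triangle count and a maximum vertex index. The interface and all names are hypothetical; a real driver would do this work on the GPU.

```python
def refine(index, collapse_to, max_index):
    # Follow the collapse chain into the active vertex range.
    while index > max_index:
        index = collapse_to[index]
    return index

def draw_mrm(triangles, collapse_to, triangle_count, max_vertex_index):
    """Emit the first `triangle_count` triangles with every corner index
    refined down to `max_vertex_index` (software stand-in for the
    proposed extension)."""
    return [tuple(refine(i, collapse_to, max_vertex_index) for i in tri)
            for tri in triangles[:triangle_count]]

collapse_to = [0, 0, 1, 1, 2, 3]
triangles = [(0, 1, 2), (1, 2, 3), (2, 3, 4)]
print(draw_mrm(triangles, collapse_to, 2, 3))  # [(0, 1, 2), (1, 2, 3)]
```

The triangle count and maximum index are the pair the client computes per frame, as described above.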

In most cases, MRM lets you multiply the complexity of scenes by 3 or 4, and it is very efficient when you use instances.

What do you think of that?

Gabriel RABHI / Z-OXYDE / France

Looks like this should be some sort of client-side extension, a bit like a specialized vertex array, don’t you think? I’m not really intimate with MRMs, just vaguely familiar. I don’t quite get how you suggest managing null triangles?

Well, null triangles are managed by the client. OpenGL handles null-triangle errors internally, but it’s easy to draw exactly the number of triangles that corresponds to the desired number of vertices. To do this, you (the client, in your mesh database) sort triangles by validity, from the normal vertex count of the mesh down to 3 vertices, the minimum.
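Concretely, the client can precompute, once per mesh, a table giving the exact number of valid triangles for every vertex count, by brute force over the collapse chains. A sketch with made-up data (0-based indices, so v active vertices means a maximum index of v - 1):

```python
def refine(index, collapse_to, max_index):
    while index > max_index:
        index = collapse_to[index]
    return index

def triangle_count_table(triangles, collapse_to, max_verts):
    """For every active vertex count v in [3, max_verts], the number of
    triangles that are still non-null. Assumes validity is monotonic in
    the vertex count, as in vertex-collapse schemes."""
    thresholds = []
    for tri in triangles:
        threshold = None  # smallest v for which this triangle is valid
        for v in range(max_verts, 2, -1):
            corners = {refine(i, collapse_to, v - 1) for i in tri}
            if len(corners) < 3:  # two corners collapsed together: null
                break
            threshold = v
        thresholds.append(threshold)
    return {v: sum(1 for t in thresholds if t is not None and t <= v)
            for v in range(3, max_verts + 1)}

# Hypothetical 5-vertex mesh.
table = triangle_count_table([(0, 1, 2), (1, 2, 3), (2, 3, 4)],
                             [0, 0, 0, 1, 2], 5)
print(table)  # {3: 1, 4: 2, 5: 3}
```

At run time the draw loop just looks up the table; no validity test per triangle.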

So, on hardware like nVidia’s, I think it should be easy to code a small asm function that performs this simple vertex index refinement. For this, you have to specify a new array for vertex refinement. For example, one vertex might take these successive indices as the mesh is simplified: 100 (becomes) -> 48 -> 32 -> 14 -> 5 -> (1, 2 or 3). So, if we fix the vertex count for this display of the mesh at 20, the refinement loop will change that vertex’s index from 100 to 14. If a triangle also points to vertex 48, which refines to 14 as well, that triangle becomes null after refinement. In that case, the triangle display loop stops before that triangle, and this stopping point can be computed by the client.
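The quoted chain can be checked directly. The sparse dictionary below encodes just the chain from the example (100 -> 48 -> 32 -> 14 -> 5), which vertex 48 shares from its own entry onward; the third corner (vertex 7) is an illustrative extra:

```python
def refine(index, chain, max_index):
    while index > max_index:
        index = chain[index]
    return index

# The chain quoted in the post, stored sparsely for this example.
chain = {100: 48, 48: 32, 32: 14, 14: 5}

print(refine(100, chain, 20))  # 14, as in the example
print(refine(48, chain, 20))   # also 14 ...

# ... so a triangle using both corners 100 and 48 becomes null at 20 vertices:
tri = tuple(refine(i, chain, 20) for i in (100, 48, 7))
print(len(set(tri)) < 3)  # True: two corners collapsed together
```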

What the client has to do is compute the number of vertices or the number of triangles it wants to display, starting from the maximum count of the mesh. The simplest way to do this is to compute a ratio from the size of the triangles, the distance from the point of view, and the camera’s fov.
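One plausible version of such a ratio, made up here rather than taken from the post: project the average triangle size to screen space from the distance and the fov, then scale the full vertex count by how far that is from a target on-screen triangle size. Every parameter name and default is illustrative.

```python
import math

def lod_vertex_count(max_verts, avg_tri_size, distance, fov,
                     screen_height=768, target_pixels=2.0):
    """Heuristic sketch: choose how many vertices to display from the
    projected size of an average triangle (perspective projection)."""
    projected = avg_tri_size * screen_height / (2.0 * distance * math.tan(fov / 2.0))
    ratio = min(1.0, projected / target_pixels)
    return max(3, int(max_verts * ratio))

fov = math.radians(60)
print(lod_vertex_count(1000, 0.1, 1.0, fov))    # close: full detail
print(lod_vertex_count(1000, 0.1, 200.0, fov))  # far: far fewer vertices
```

The clamp to 3 keeps at least one triangle’s worth of vertices, and the clamp to 1.0 never asks for more detail than the mesh has.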

I don’t know if it’s easy to implement on nVidia hardware, but I think this would be the simplest and most generic way to support MRM on the GPU, because it leaves great freedom in the choice of computation methods.

For the exact method, contact me.

Gabriel RABHI / Z-OXYDE / France