GPU-based mesh deformation

Although we are using Cg instead of GLSL, I hope my topic fits in here anyway. We are using OpenGL 2.1 and do not plan to use OpenGL 3.0 for our projects unless absolutely necessary.

I’d like to ask you guys a couple of design questions.

We’ve been working on a mesh deformation tool and got it working on the CPU. Now we’d like to implement the algorithm on the GPU. Unfortunately, we’re not very experienced with Cg, shader programming in general, or the use of OpenGL extensions; we only know the very basics. So please excuse the lack of technical terms.

Obviously, for mesh deformation we need to change the vertex positions after an interaction. From what we read on forums, we figured we’d go for VBOs. Each vertex should be permanently transformed by adding a displacement vector, i.e. the updated vertex position should be written back to the VBO so that we can apply another displacement vector next frame.

Some directions on how to get that done optimally would help us out a lot, even though this part is probably quite easy to realize. But we would also like to introduce new geometry. That will likely restrict us to GeForce 8 hardware or newer, but that’s fine.

The scheme is as follows:

  1. For all vertices of a triangle, compute the displacement.

  2. If the displacement distorts / enlarges the triangle by more than a threshold, subdivide the original mesh.

  3. Then, for all vertices, including the newly created ones, compute and apply the displacement vector.

Note, the computation of the displacement vector only requires the local information delivered by each single vertex, so nothing fancy here.
Our algorithm requires the VBO (or whatever data structure fits best) to grow over time. Is that possible? And if not: how would having to copy the data into a resized VBO (or other buffer) each frame affect the real-time capability of the program?

The most important issue is performance. We absolutely need this thing to happen in real time. If there is a way to speed things up by allocating more memory, that’s fine; we have plenty of memory.

To sum things up: we need a plan of action that will enable us to implement our algorithm on the GPU at the highest possible frame rates.
Key questions for us are:
- What kind of buffers would we have to use?
- What sort of shaders would we have to use?
- Could you point us to tutorials and/or other documentation for the specific parts we will have to implement?

I’m really looking forward to hearing some recommendations / keywords on how to realize this.


If you want to write the modified vertices back to a VBO, you’ll need transform feedback (core in OpenGL 3.0, or available via the EXT_transform_feedback or NV_transform_feedback extensions).

You’ll need at least two VBOs: one from which you are rendering the current frame and one where you store the results.
Writing to the same VBO that is being used for rendering is not allowed, so ping-pong between the two buffers each frame.
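A minimal sketch of that ping-pong loop, using the NV_transform_feedback entry points (the buffer names `vboA`/`vboB` and the unindexed position-only layout are assumptions; a valid GL context and a bound vertex program are required):

```c
/* Sketch only: assumes two pre-allocated buffers vboA and vboB. */
GLuint src = vboA, dst = vboB;

/* Each frame: capture the transformed vertices into the other buffer. */
glBindBufferBaseNV(GL_TRANSFORM_FEEDBACK_BUFFER_NV, 0, dst);
glBeginTransformFeedbackNV(GL_TRIANGLES);

glBindBuffer(GL_ARRAY_BUFFER, src);
glVertexPointer(4, GL_FLOAT, 0, 0);
glEnableClientState(GL_VERTEX_ARRAY);
glDrawArrays(GL_TRIANGLES, 0, vertexCount);

glEndTransformFeedbackNV();

/* Next frame reads what we just wrote. */
GLuint tmp = src; src = dst; dst = tmp;
```

With GL_RASTERIZER_DISCARD_NV enabled during the capture pass, this can also be run as a pure update step without drawing anything.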

A VBO cannot change its size at runtime; growing it would mean allocating a new, larger buffer and copying the old contents over, which would cost significant performance if done every frame. So the VBOs must be allocated large enough up front.
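One way to size the buffers up front (our suggestion, not something the thread prescribes) is to budget for a maximum subdivision depth: each 1-to-4 triangle split multiplies the triangle count by four, so for unindexed triangles:

```c
/* Worst-case number of vertices for 'tris' unindexed input triangles
 * after at most 'levels' rounds of 1-to-4 subdivision. */
static unsigned long worst_case_vertices(unsigned long tris, unsigned int levels)
{
    unsigned int i;
    for (i = 0; i < levels; ++i)
        tris *= 4;          /* each triangle may split into four */
    return tris * 3;        /* three vertices per unindexed triangle */
}
```

Multiply the result by the per-vertex size in bytes to get the allocation passed to glBufferData. If the worst case is too large, cap the subdivision depth.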

For introducing new triangles, you’ll need geometry shaders (available via the EXT_geometry_shader4 or ARB_geometry_shader4 extensions).

As a summary you’ll need:

  • at least two VBOs with sufficient storage
  • transform feedback to send transformed geometry to a VBO
  • vertex and geometry shaders
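For the displacement pass, the vertex shader can be as simple as the following Cg sketch. The displace-along-the-normal rule and the `strength` uniform are illustrative assumptions; your own displacement computation goes there:

```cg
// Sketch: permanently displace each vertex along its normal.
// Outputs are captured by transform feedback rather than rasterized.
void displace_vs(float4 position : POSITION,
                 float3 normal   : NORMAL,
             uniform float strength,
                 out float4 oPos : POSITION,
                 out float3 oNrm : TEXCOORD0)
{
    oPos = float4(position.xyz + normal * strength, 1.0);
    oNrm = normal;  // passed through on a texcoord so it can be recorded
}
```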

An alternative to transform feedback (which is probably the optimal way) would be rendering the attributes alone into a floating-point rectangle texture.

Then either use this texture for the attributes directly in the vertex shader, or grab them into a PBO, which is then used as a VBO for drawing.

This won’t be able to do the subdivision, however, just the displacement.
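That read-back path (often called “render to vertex array”) looks roughly like this; the FBO, texture dimensions, and buffer names are assumptions:

```c
/* Sketch: after rendering the updated positions into a float texture
 * attached to an FBO, copy them into a PBO and reuse it as a VBO. */
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glReadBuffer(GL_COLOR_ATTACHMENT0_EXT);

glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glReadPixels(0, 0, texWidth, texHeight, GL_RGBA, GL_FLOAT, 0); /* goes into the PBO */
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

/* The same buffer object now serves as the vertex source. */
glBindBuffer(GL_ARRAY_BUFFER, pbo);
glVertexPointer(4, GL_FLOAT, 0, 0);
glEnableClientState(GL_VERTEX_ARRAY);
glDrawArrays(GL_TRIANGLES, 0, vertexCount);
```

The copy stays on the GPU; no data travels back to the CPU.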

Regarding the subdivision and the geometry shader / transform feedback combination, this should be interesting.

Hello everyone!

We have made the first steps to finally implementing the whole thing on the GPU.
Now I am encountering another problem that I would like you guys to help me out with:

Since we are dealing with a dynamic mesh, we not only need to create new vertices but also new normals. This means that I also need to record the normal output from our geometry shader. The way I record the vertex position data isn’t working, because the function

glTransformFeedbackAttribsNV( );

does not take any enum specifically for normals as an attribute.
On the web, the only thing I could find was these two functions



which don’t help me, since I don’t know how to use these commands in combination with the Cg shading language, if that is even possible.
Any help would be greatly appreciated.

NVIDIA aliases generic attribs with classic attribs such as normals…

In the GPU/vertex program… extensions, the pairs of matching generic attrib IDs and fixed-function attribs are listed.

vertex.normal = vertex.attrib[2]

If that doesn’t work, you can always output generic attribs directly yourself; after all, since you are not using the fixed-function pipeline, you are not bound to the classic fixed-function attribs.
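Concretely, under that aliasing you can feed normals through generic slot 2 on the application side (this mapping is NVIDIA-specific, from the NV_vertex_program aliasing table; `normalVbo` is a placeholder name):

```c
/* NVIDIA aliasing: generic attrib 2 overlaps the fixed-function normal. */
glBindBuffer(GL_ARRAY_BUFFER, normalVbo);
glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(2);
```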

glTransformFeedbackAttribsNV() specifies which output attributes of the assembly-level vertex shader to write. Your shader can write the value you wish to record to any available output attribute (a Cg varying), and you then specify that attribute to be written to the buffer. The fact that the value represents a normal does not prevent you from writing it to a texture coordinate attribute (e.g. TEXCOORD0) and recording that attribute (GL_TEXTURE_COORD_NV).
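A sketch of that trick in Cg: the geometry shader emits the normal through a texcoord varying, which transform feedback can then record. The pass-through shape here is an assumption; your subdivision logic would replace the plain loop:

```cg
// Cg geometry shader sketch (gp4gp profile): pass-through triangle
// that emits the normal on TEXCOORD0 so it can be recorded.
TRIANGLE void gs_passthrough(AttribArray<float4> position : POSITION,
                             AttribArray<float3> normal   : TEXCOORD0)
{
    for (int i = 0; i < 3; ++i) {
        emitVertex(position[i] : POSITION, normal[i] : TEXCOORD0);
    }
}
```

On the C side, the record list would then look something like `GLint attribs[] = { GL_POSITION, 4, 0, GL_TEXTURE_COORD_NV, 3, 0 };` followed by `glTransformFeedbackAttribsNV(2, attribs, GL_INTERLEAVED_ATTRIBS_NV);` (the `{attrib, components, index}` triple layout is from the NV_transform_feedback spec).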