Vertex animation logistics and culling problems

Hi, I’m new here but not new to OpenGL in general. I’m writing a game engine very slowly and very carefully, and am in the planning stages for adding in vertex animation (morphing).

I ran into a load of issues while thinking about how, step by step, the whole system would work:

  1. Passing large amounts of input (such as keyframe data) into a vertex shader is not easy. Is the best approach to pack it into a texture and manipulate it in a fragment shader, perhaps? The manipulation would be computing the current pose for each mesh from a set of keyframes.

  2. If you do use a texture, how do you get the results back to the CPU in case you need them? Maybe a VBO would work for this, but maybe there’s a better solution?

  3. Vertex shaders attach after frustum culling. This makes sense, but I need to attach an animation shader before culling. Otherwise, a mesh whose animation involves growing and shrinking would disappear once the original mesh leaves the frustum. I’ve heard complaints about this when people morph a flat plane of quads into a terrain map: the terrain disappears as soon as the original plane does. I have personally not witnessed this, so maybe I’m wrong. OpenGL does not allow disabling frustum culling, and I would be loath to disable such a common-sense feature just because I want to animate some things.

  4. What is the best way of identifying the vertex and mesh inside a vertex shader? Say I receive a vertex in the shader with coordinates (34.0, 2.1002, 3.455). How do I know what to map it to? I suppose I could hash the coordinates into some value and compare the hash against a list (stored in a texture) to find which mesh it belongs to and the id of the vertex within that mesh.

  5. Assuming the identification system in (4), what do I do with culling? In the event that I cannot avoid frustum culling in some way, I’ll be receiving generated vertices that came from my mesh’s edges being clipped against the frustum. These new vertices don’t occur in the original mesh, and therefore cannot be identified easily or animated.

Maybe I’m just an idiot and none of these issues are really issues, but darn it, I want my GPU-accelerated mesh animation…

Let’s assume that the two meshes you are morphing between have the same number of vertices and the same structure, i.e., vertex n of mesh 1 corresponds to vertex n of mesh 2 for every n. (If this is not the case, then you have a bigger problem to solve.)

  1. Probably the easiest solution is to have the vertices for mesh 1 be vertices, and the vertices for mesh 2 be vertex attributes of the corresponding vertices in mesh 1. In your vertex shader, the first thing you’d do is interpolate between the vertex and its attribute, and then apply your transformation matrices and so on, because at that point you’ve got a simple vertex to work with from then on.

  2. n/a. Actually, if you retain a CPU-side copy of all the data sent to the GPU, then you can simply apply the same transform on the CPU that you’ve programmed your shaders to do on the GPU.

  3. Frustum culling is not part of the OpenGL graphics pipeline; it is whatever you program it to be, wherever you program it, if at all. To implement frustum culling on morphed geometry, you need to perform it after morphing. On the GPU, you can do that in the Tessellation Control shader or in the geometry shader.

  4. Use gl_VertexID if it’s defined. But, refer back to response (1) for the better solution.

  5. You don’t have to do anything out of the ordinary to make culling work properly; refer back to response (3). Incidentally, culling is NOT clipping. Culling either discards an entire triangle or keeps it; it never clips.

Incidentally, you can also easily animate/morph meshes in the Tessellation Control shader. In fact, if you will be using displacement mapping, you pretty much need to use the Tessellation shaders to implement that. It’s incredibly easy to animate/morph displacement maps in the Tessellation Evaluation shader.

First of all, why would you want to implement animation with morphing? Skeletal animation (with vertex blending) is more flexible and faster in most situations, and it’s what practically all current game engines use.

To answer your questions:

  1. If you want to go with morphing, you have two sets of vertices that you want to interpolate between. You can either do it on the fly in your vertex shader or use transform feedback to capture the interpolated vertex data.
    If you want to go with skeletal animation, you only have to pass the bone transformations to your shaders and calculate the weighted vertex data on the fly, or use transform feedback if you want to reuse the data (which can sometimes speed things up if you have to render the animated mesh several times).

  2. Solved by using transform feedback if you want to go in this direction (it depends on the hardware you target, as transform feedback is supported only on GL 3.x-capable cards).

  3. No, vertex shaders are not executed after frustum culling; in fact, frustum culling happens well after that, after the geometry shader. Even the camera transform and other vertex transforms are implemented in vertex shaders (or later stages). So this should not actually be a problem.

  4. Your idea is definitely not how it should be done. If I understand you correctly, you need some way to identify which mesh you are currently rendering. The easiest way to do that is with program uniforms, but plenty of other possibilities exist (all much simpler than the one you mentioned).

  5. Again, the vertex shader runs before clipping against the view frustum. If you are worried about how to capture your interpolated mesh back into a buffer object, you shouldn’t be: even transform feedback happens before frustum culling.

I think you don’t really understand how OpenGL works, and probably also how animation works.
I suggest you read a bit more on the topic; otherwise you’ll have a hard time getting animation up and running.

Haha, answered at the same time with almost the same info :smiley:

Yeah, I know! We had almost identical responses to question 3, and then I revised my response before I even saw yours. The question was a little confusing because LadleOfJustice confused frustum culling with clipping a bit. Your response is correct for the frustum culling as clipping interpretation (as well as for back facing culling), but I think that wasn’t what Ladle really meant.

Actually, I’ve given my revised answer to question 3 some more thought. If you apply frustum culling after morphing, then really all you are doing is frustum culling of individual triangles. That probably costs more in overhead than it saves compared to just letting the GPU do it in hardware at the normal place in the graphics pipeline where it already occurs (if enabled).

To make frustum culling be cost-effective, you should do it on a group of geometry rather than on individual triangles. So, to do that, you need something simple, like the smallest sphere or axis-aligned bounding box that just encloses the entire object. That is very easy to morph, as well. Given mesh 1 (and its AABB or sphere), and mesh 2 (and its AABB or sphere), and the interpolation factor, interpolate the two AABBs or spheres, and then compare that to the viewing frustum. If it is entirely outside the frustum, don’t bother morphing or rendering the meshes, but if any of it may be inside the frustum then go ahead with the whole thing.

Yes, your post was a bit clearer regarding clipping and the other post-transform operations. Anyway, I also think that maybe we don’t understand what he wanted to ask with question 3.

LadleOfJustice: Can you clarify your questions a bit? Some pseudocode would also come in handy to show how you imagined your mesh animation system.

Thanks guys, that really helped. I didn’t even know that tessellation shaders exist, and I’d barely even heard of geometry shaders. I learned most of my shader knowledge in class and from the OpenGL red and blue books, and neither source really even mentions such things.

So, if I understood this correctly, you (david) said that culling is not defined by the OpenGL pipeline, meaning I have to talk to the GPU some other way to tell it when to cull. Since I assume culling happens by default somewhere, how would I even find out when it happens? Also, do culling and clipping occur completely separately, possibly at separate times? I wish there were a diagram that showed the full process, not just the OpenGL-controllable portion of the GPU pipeline.

In response to a few other things:

I heard there was some speed issue on certain video cards in changing uniforms (I guess they’re supposed to be uniform =D ), so I left that out of my post. Maybe I’m wrong. Heck, I’ve written BSP compilers, Boolean brush systems, and collision detection from scratch but I still can’t wrap my mind around the finer points of shaders.

Also, both skeletal and vertex animation are used in games. Some things are hard to do with skeletal animation (exploding barrels, rippling cloth/water), and skeletal animation is harder to implement right off the bat (in my opinion at least). I’d need a skeletal editor, or at least another file format loader. I’m using Lightwave as my modeler and am writing all of the file loaders myself straight from the specs. With vertex animation I can just export a bunch of models in different poses and import them as animations using the same file loaders.

Also, I agree that I’m not very knowledgeable about shaders in general (although honestly I have researched them somewhat and have written a couple of basic ones). Do you recommend any reading on the subject?

Thanks again for your help guys.


Oh, and @david: You mentioned loading the target into a vertex parameter. Are there just arbitrary parameters you can give to vertices, or do I effectively have to cheat here and knock out my vertex normal or color? I can afford to ditch the color, but what if I need multiple morph targets (when blending smoothly between two different animations, etc.)?


aqnuep, here’s a little clarification on point 3. I was searching online for what other people said about using vertex shaders to animate meshes, and I came upon this (the site I got it from does not appear to be up at the moment):

“Another issue to be very careful of is the interaction between the basic geometry, the vertex shader and any view frustum culling. Vertex shaders can, and often do, shift the basic vertex of the geometry. However, the view frustum culling has to work with the raw data from before the vertex shader has messed with it. For example, you start with a flat grid of points and use the vertex shader to generate a fractal landscape. The implicit bounding box used by the input geometry is almost a flat plane. While that implicit bounding box stays within the view frustum, you’ll see everything correctly. However, the moment that implicit bounds is entirely out of the view frustum, the whole lot will be removed from further processing, and the shader code will never get executed. That is, your fractal terrain suddenly just disappears for no apparent reason. To prevent this from happening, make sure that you set explicit bounds for your geometry that would represent the maximal extents that the vertex shader is likely to perturb the vertices to. In this way, nothing gets culled from view while any of that volume intersects with the view frustum.”
Maybe I misunderstood this somehow as default behavior. Maybe the quoted text refers to a custom culling operation (it does mention bounding boxes). I didn’t even know you could modify the culling stage, so I hadn’t considered that when I read it.

The text you quoted does not describe the behavior of OpenGL; it is about maintaining a bounding volume for your object on the application side. That bounding volume does have to take into account the possible modifications made by vertex shaders. But OpenGL always performs clipping after the vertex shader, so as long as you either don’t perform any per-object view frustum culling in your engine code (which is non-OpenGL territory) or you take the animation into account when you do, you should not have any problems, and you will definitely not get geometry culled on OpenGL’s side.

You can use conventional vertex attributes (color0, color1, normal, fog coord or whatever) or you can use generic vertex attributes (glVertexAttrib*).

That’s true. I didn’t mean to make you change your mind and go with skeletal animation; obviously morphing is also very useful in certain situations. Its main drawback is the amount of geometric data you have to fit in GPU memory (consider just 10 meshes with 50 keyframes of 10,000 vertices each: that’s 5,000,000 vertices, usually a minimum of about 80 MB of VRAM). It also has a bandwidth cost.

Actually, there are several related issues here. There’s back-face culling, which is defined by the OpenGL pipeline. Back-face culling has nothing to do with the viewing frustum; it depends on whether a triangle faces away from the viewpoint or towards it, which is determined by the winding order of the triangle’s edges (clockwise or counterclockwise). It occurs after the geometry shader, and either the entire triangle is kept or it is discarded.

There’s clipping, which is also part of the OpenGL pipeline. Clipping may remove some, all, or none of a triangle, and it does depend on the viewing frustum (because the viewing frustum is mapped to the clipping volume).

Frustum culling is an entirely different animal. It is not part of the OpenGL pipeline, which means you have to implement it explicitly. Traditionally it is done on the CPU: you determine whether an entire object is completely outside the viewing frustum or not. If it is entirely outside, there is obviously no need to process any part of it further; if any part is inside, the entire object is rendered. This is implemented efficiently by defining a box that encloses the object, so that only the corners of the box need to be tested against the viewing frustum. (This can also be done within the Tessellation Control shader for patch primitives.)

I share your wish for a nice diagram or flowchart of the entire OpenGL pipeline. I haven’t found one; the OpenGL Specification does describe the pipeline, but the information isn’t collected into one nice, coherent description.

Tessellation shaders first became part of OpenGL with version 4.0. To date, no book (other than the OpenGL Specification itself, which is available for free on this website) has been published about OpenGL 4 (it was released Spring, 2010). Geometry shaders have been around a bit longer, but they’re still fairly new.

Also, I agree that I’m not very knowledgeable about shaders in general (although honestly I have researched them somewhat and have written a couple of basic ones). Do you recommend any reading on the subject?
I wish I could recommend something accessible. Unfortunately, there isn’t such a document yet (for all five programmable shader stages now available). However, the OpenGL Specification is essential. It is not the easiest thing to read, but it is complete, and it is authoritative. The Spec doesn’t reveal its secrets easily; it’s much better as a reference than as a how-to guide. But with some work deciphering it, some googling for examples, some questions asked here, and some experimentation, you can get the information you need on the newest aspects of OpenGL.

I’d like to add that GL does not do frustum culling at all, unless you program it in. It does do frustum clipping.

Actually, this is an interesting idea. Is there a way to reject a batch of vertices in the vertex shader? The usual approach to hardware frustum culling I know of uses transform feedback.

Is there a way to reject a batch of vertices in the vertex shader?


Not in the vertex shader, but in the geometry shader you can do it.

Not in the vertex shader, but in the geometry shader you can do it.

Geometry shader doesn’t run on a batch (draw call). It runs on a primitive (e.g. quad, triangle, etc.) …but maybe you weren’t being that specific with the term.

Geometry shader doesn’t run on a batch (draw call). It runs on a primitive (e.g. quad, triangle, etc.) …but maybe you weren’t being that specific with the term.

You’re right, I meant primitives, not batch as in draw call, but I think ugluk wanted to refer to individual primitives.
Besides that, the geometry shader can in fact also be used to cull whole batches (draw calls) in a separate preprocessing step.

This topic was automatically closed 183 days after the last reply. New replies are no longer allowed.