Velocity vector transformation in shaders

Ah, yes, okay… I’ve discovered the problem.

For my simple test case (just testing whether the object was moving forward or backward or whatnot), multiplying the velocity through the ModelViewMatrix OR the NormalMatrix doesn’t make much sense, because both of those take the object’s orientation into account. An object can be moving in a single direction while facing any number of directions (consider a car driving forward toward us, backing up toward us, or even skidding sideways toward us), so this approach doesn’t work for what I was doing. The velocity vector shouldn’t necessarily follow the orientation of the vehicle in this case, particularly since I’m feeding it into the shader rather than asking the shader to figure it out. So that was the problem there.

If I take the camera’s view matrix and multiply the velocity vector through THAT, however, I get the right result.
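Something like this is what ended up working for me, in vertex-shader terms (a rough GLSL sketch; the uniform names are just illustrative, they're my own uniforms rather than anything OSG provides automatically):

```glsl
#version 120
// Transform a world-space velocity into eye space using only the camera's
// view matrix, so the object's own orientation (which lives in the model
// matrix) never touches the velocity direction.
uniform mat4 u_viewMatrix;     // camera view matrix, fed in from the application
uniform vec3 u_worldVelocity;  // per-object velocity in world space

varying vec3 eyeVelocity;      // handed on to the fragment shader

void main()
{
    // mat3() keeps only the rotational part; a velocity is a direction, not a point,
    // so the view matrix's translation must not be applied.
    eyeVelocity = mat3(u_viewMatrix) * u_worldVelocity;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
```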

Now, to figure out a better approach that works per vertex, handles rotation, isn’t limited by my number of uniforms, etc… A few good suggestions have been made there…

The only issue I foresee with the vertex attribs idea is that I’m not sure how I could apply it in OpenSceneGraph, since it loads / stores objects as Nodes from various file types (.obj, .flt, etc.). I’d have to look into how to actually apply velocities per vertex there… I was originally thinking of something more like passing in an axis of rotation and a rotational velocity along with the translational velocity and working something out per vertex from there (roughly along the lines of the sketch below).
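For what it's worth, here's roughly what I mean by that, as an untested vertex-shader sketch. All the u_* uniforms are names I've made up for illustration; the application would have to supply them per object:

```glsl
#version 120
// Reconstruct each vertex's velocity from one translational velocity,
// one rotation axis and one angular speed per object: v = v_trans + omega x r.
uniform vec3  u_translationalVel;  // object velocity, world space
uniform vec3  u_rotationAxis;      // unit axis of rotation, world space
uniform float u_angularSpeed;      // radians per second
uniform vec3  u_rotationCenter;    // point the object rotates about, world space
uniform mat4  u_modelMatrix;       // object-to-world transform

varying vec3 worldVelocity;

void main()
{
    vec3 worldPos = (u_modelMatrix * gl_Vertex).xyz;
    vec3 omega    = u_rotationAxis * u_angularSpeed;
    // Rigid-body velocity: translation plus the rotational contribution
    // of this vertex's offset from the rotation centre.
    worldVelocity = u_translationalVel + cross(omega, worldPos - u_rotationCenter);
    gl_Position   = gl_ModelViewProjectionMatrix * gl_Vertex;
}
```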

This is a follow up to my first post on 10/2/2010.

I had suggested that the best way to provide velocity vectors is not to calculate them on the CPU and pass them into your GLSL program (whether by means of uniforms or vertex attributes), but to calculate them on the GPU itself, in the vertex shader. This is because, in general, the velocity vector of each vertex changes every frame.

Calculating the velocity vector for each vertex on the GPU is very simple: just subtract the position of any given vertex in the previous frame from the position of that same vertex in the current frame, and multiply that vector by the inverse of the time between frames.
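In vertex-shader terms it's just a finite difference, something like the sketch below, assuming the previous frame's transformed position is available from somewhere (the attribute and uniform names here are made up for illustration):

```glsl
#version 120
// Per-vertex velocity as a finite difference between two frames.
attribute vec3 a_prevEyePosition;  // this vertex's eye-space position last frame (assumed input)
uniform float  u_deltaTime;        // seconds elapsed between the two frames

varying vec3 eyeVelocity;

void main()
{
    vec3 currEyePosition = (gl_ModelViewMatrix * gl_Vertex).xyz;
    // velocity = (current - previous) * (1 / dt)
    eyeVelocity = (currEyePosition - a_prevEyePosition) / u_deltaTime;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
```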

This automatically takes into account all motions and dynamic transformations, no matter how complex, of all objects.

Here is the update: I had mentioned that the easiest way to implement this would probably be through something like a VBO (but I wasn’t sure) to store vertices for use across frames. I now think you need to use transform feedback rather than a plain VBO (or does transform feedback write into a buffer object anyway? I don’t know). I’ve never studied transform feedback buffers and I don’t know how to use them. I just think they are the mechanism that lets you retain a copy of each transformed vertex from the previous frame, with which to calculate the velocity vector in the current frame.
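Roughly what I have in mind, again as an untested sketch (I haven't actually used transform feedback, so the names are illustrative and the host-side steps are only described in the comments):

```glsl
#version 120
// Idea: each frame the shader writes its transformed position to a varying,
// the application captures that varying into a buffer object via transform
// feedback (glTransformFeedbackVaryings + glBeginTransformFeedback), and on
// the next frame that same buffer is bound back in as the "previous position"
// attribute.
attribute vec3 a_prevEyePosition;  // captured last frame via transform feedback
uniform float  u_deltaTime;

varying vec3 tf_eyePosition;       // the varying the host would capture this frame
varying vec3 eyeVelocity;

void main()
{
    tf_eyePosition = (gl_ModelViewMatrix * gl_Vertex).xyz;
    eyeVelocity    = (tf_eyePosition - a_prevEyePosition) / u_deltaTime;
    gl_Position    = gl_ModelViewProjectionMatrix * gl_Vertex;
}
```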

I’m looking into the frame-by-frame calculation in the shader idea right now (scratching something out on paper as we speak… well… okay taking a break from that to type obviously :P)

I’d considered the possibility of handing the shader each object’s previous MVM each frame, but there’s the obvious issue of heavy uniform use. My GPU can handle 32k vertex uniform components, which would amount to ~2048 mat4s, and that assumes absolutely no other uniforms at all, which of course won’t be the case.
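For reference, that previous-MVM idea would look roughly like this in the vertex shader (u_prevModelViewMatrix and u_deltaTime being uniforms I’d have to feed in myself each frame; names are just illustrative):

```glsl
#version 120
// Two matrices per object instead of per-vertex data: transform the same
// vertex by this frame's and last frame's ModelView matrices and difference them.
uniform mat4  u_prevModelViewMatrix;  // last frame's ModelView matrix for this object
uniform float u_deltaTime;

varying vec3 eyeVelocity;

void main()
{
    vec3 currEyePos = (gl_ModelViewMatrix    * gl_Vertex).xyz;
    vec3 prevEyePos = (u_prevModelViewMatrix * gl_Vertex).xyz;
    eyeVelocity = (currEyePos - prevEyePos) / u_deltaTime;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
```

Of course this only covers rigid motion of the whole object; any per-vertex animation or deformation wouldn’t show up in the velocity, which is the advantage of the per-vertex approach described above.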

I’ve never heard of a TFB before. Again, I’m using OpenSceneGraph (a wrapper over OpenGL), not OpenGL directly, so it’s hard to say precisely what hoops I’d have to jump through to implement such a thing. Like I said, I’m not even sure how it actually goes about storing / rendering its Nodes (objects). Display lists or VBOs, I’m sure, but it’s all abstracted. I’d have to do a bit of digging.
