I’m trying to implement the motion blur technique described in GPU Gems 3.
The idea is simple:

Transform a pixel from viewport space back to world space

Transform the world-space position with the view-projection matrix of the previous frame back to viewport space

Compute the distance between the two positions and interpret it as the velocity.
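The three steps can be sketched outside the shader (a numpy sketch; the matrix setup and the names are mine, not from the post). When the same matrix is used for both frames — the sanity check described below — the velocity should come out as essentially zero:

```python
import numpy as np

def perspective(fov_y, aspect, near, far):
    # standard OpenGL-style projection matrix
    f = 1.0 / np.tan(fov_y / 2.0)
    return np.array([
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0]])

def reproject_velocity(ndc, world_to_ndc_prev, ndc_to_world):
    # step 1: NDC -> world space (undo the perspective divide via w)
    p = ndc_to_world @ np.append(ndc, 1.0)
    p = p / p[3]                       # divide the whole 4-vector
    # step 2: world space -> previous frame's NDC
    q = world_to_ndc_prev @ p
    q = q / q[3]
    # step 3: screen-space velocity is the xy difference
    return ndc[:2] - q[:2]

proj = perspective(np.radians(60.0), 16.0 / 9.0, 2.0, 32000.0)
view = np.eye(4)
view[:3, 3] = [0.0, 0.0, -10.0]        # camera moved 10 units back
world_to_ndc = proj @ view
ndc_to_world = np.linalg.inv(world_to_ndc)

# previous matrix == current matrix, so the velocity must vanish
vel = reproject_velocity(np.array([0.25, -0.5, 0.9]), world_to_ndc, ndc_to_world)
```

In double precision the round trip is exact to within rounding noise; the question in this thread is why it falls apart in 32-bit floats on the GPU.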

My first test implementation transforms the pixels as described, but uses
the current view-projection matrix instead of the previous one.
In theory, all velocities should be zero or very close to zero.
However, the opposite is the case. The attached screenshots show the velocity
as added red/green colors. The viewer was not moving in any of the screenshots.

01, 02, 03
(The coloring is highly view-direction dependent.)

At first the problem seemed to be a lack of precision in the inverted
transformation matrices, but inverting the projection matrix alone works fine.
Now the problem seems to be somewhere in my quaternion class or in the
quaternion-to-matrix conversion (both use floats).
However, fixing the “issues” there won’t make this technique any less fragile. Any ideas how this can be solved?
After spending ~4 hours on this problem, I have strong doubts that this
technique would work in real-world applications with large translations in the
view matrix.

Inverting (projection*view) directly is almost always the wrong thing to do. Invert them separately:

inv(proj*view) = inv(view)*inv(proj)

Computing inv(view) only involves a transpose and a multiplication of the negative translation by the
transposed rotation (which is a length preserving
transformation and therefore numerically stable):

y = R*x + t
x = inv(R) * (y - t)
  = R'*y - (R'*t)

where R' is transpose(R) (which is equal to inv(R), since R is a rotation)

So R’ is the upper-left 3x3 rotation block of the inverse
modelview matrix, and -R’*t is its 3x1 translation block.

Computing inv(view) only involves a transpose and a multiplication of the negative translation by the
transposed rotation (which is a length preserving
transformation and therefore numerically stable):
I tried that out, unfortunately with very little effect.

I made some additional screens:

(velocity * 0.2), vel.x is added to red, vel.y to green (new invert method):

Display of the length of the velocity vector, the white box marks (0,0,0) in worldspace.

Length of the velocity vector when facing a flat wall

Maybe I’m doing something wrong in the shader, but I’m pretty sure that
my math is correct:

void main()
{
    const vec2 rcp_viewsize = vec2(1.0 / VIEWPORT_WIDTH, 1.0 / VIEWPORT_HEIGHT);
    float depth = texture2D(u_fbdepth, gl_TexCoord[0].xy).x;
    // get ndc coords of this fragment
    vec4 pos_ndc = vec4(gl_FragCoord.xy, depth, 1.0);
    pos_ndc.xyz *= vec3(rcp_viewsize * 2.0, 2.0); // scale
    pos_ndc.xyz -= vec3(1.0, 1.0, 1.0);           // bias
    // transform to worldspace
    vec4 pos_wrld = u_ndc_to_world * pos_ndc;
    pos_wrld.xyz /= pos_wrld.w;
    // transform back to ndc with previous matrix
    vec4 last_pos_ndc = u_last_world_to_ndc * pos_wrld;
    last_pos_ndc.xyz /= last_pos_ndc.w;
    vec2 velocity = (pos_ndc.xy - last_pos_ndc.xy) * u_scale; // scale is 1 by default
    vec3 color = texture2D(u_fbcolor, gl_TexCoord[0].xy).xyz;
    // debug
    velocity *= 0.2;
    color += vec3(abs(velocity), 0.0);
    gl_FragColor = vec4(color, 1.0);
}

What are your near and far planes? I had a very similar problem at work recently, and the source of the problem was that the near plane was too close. Unfortunately, I couldn’t really move it. I solved it by reversing the depth range, though. Instead of using the [0…1] range, compute a projection matrix that maps depth to [1…0] and use that instead. You’ll need GREATER as your depth function. This comes at a small performance loss due to worse z-compression, but it’s maybe in the range of 5% overall.
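The reversed-Z trick being suggested can be sketched numerically (numpy; this assumes a D3D-style [0,1] depth range as the reply describes, and the view-space depth value is my example):

```python
import numpy as np

def proj_z_row(near, far, reversed_z=False):
    # the (a, b) coefficients of the projection's z row, [0,1] depth range;
    # reversed_z swaps the roles of near and far so depth runs 1 -> 0
    if reversed_z:
        a = near / (near - far)
        b = -far * near / (near - far)
    else:
        a = far / (far - near)
        b = -near * far / (far - near)
    return a, b

def ndc_depth(z_view, near, far, reversed_z=False):
    a, b = proj_z_row(near, far, reversed_z)
    clip_z = a * z_view + b
    return np.float32(clip_z / z_view)   # perspective divide in 32-bit

near, far = 2.0, 32000.0                 # the poster's values
# a point 1000 units in: the standard mapping lands very close to 1.0,
# the reversed mapping lands very close to 0.0
d_std = ndc_depth(1000.0, near, far)
d_rev = ndc_depth(1000.0, near, far, reversed_z=True)
```

With near=2 and far=32000, almost the entire scene ends up squeezed into the top few percent of the standard depth range, which is exactly where float32 is coarsest in [0,1].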

The near distance is 2.0, the far plane lies at 32000 units (64 units = 1 meter).
I could push it out a bit more; I’ll check that tomorrow.

Instead of using [0…1] range, compute an inverse projection matrix of [1…0] and use that instead.
How would that improve the precision?

If this “cheap” motion blur technique doesn’t work out, then I’ll skip it and
implement full motion blur based on a velocity buffer.
It would cost a lot more rendering time, but it would provide motion blur for everything, not just the viewer’s movement.

It improves it because of the way floating point numbers work. The regular [0…1] mapping makes most numbers after the perspective division very close to 1.0. By reversing it, you get most numbers close to zero instead, which is where floating point numbers have most of their precision. It’s much more precise to compute, say, 0.001 / 1000.0 than to compute 999.99 / 1000.0.
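This is easy to verify by looking at the spacing between adjacent float32 values (numpy; the sample depth values 0.999 and 0.001 are my choice):

```python
import numpy as np

# spacing between adjacent representable float32 values ("ulp")
ulp_near_one  = np.spacing(np.float32(0.999))   # standard depth crowds up here
ulp_near_zero = np.spacing(np.float32(0.001))   # reversed-Z depth lives down here
ratio = ulp_near_one / ulp_near_zero
```

Near 1.0 the float32 grid step is 2^-24; near 0.001 it is 2^-33, several hundred times finer, so the same scene stores far more distinct depth values after the range is reversed.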

He can divide, or he can skip the division entirely. But if he does divide, he must divide all 4 components by the same value, to ensure they still lie on the same 4D ray.
I hope that’s a correct description ))
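That rule can be checked numerically (a numpy sketch; the projection and view values are my own, roughly matching the near=2 / far=32000 setup). Dividing only .xyz while leaving w untouched, as the posted shader does, produces a large spurious velocity even with identical matrices:

```python
import numpy as np

def velocity_error(ndc, world_to_ndc, ndc_to_world, divide_all):
    p = ndc_to_world @ np.append(ndc, 1.0)
    if divide_all:
        p = p / p[3]          # the whole 4-vector stays on the same 4D ray
    else:
        p[:3] /= p[3]         # the shader's bug: w is left untouched
    q = world_to_ndc @ p
    q = q / q[3]
    return np.max(np.abs(ndc[:2] - q[:2]))

proj = np.array([[1.0, 0.0,  0.0,       0.0],
                 [0.0, 1.0,  0.0,       0.0],
                 [0.0, 0.0, -1.000125, -4.000250],   # near=2, far=32000
                 [0.0, 0.0, -1.0,       0.0]])
view = np.eye(4)
view[:3, 3] = [5.0, 3.0, -20.0]
m = proj @ view
m_inv = np.linalg.inv(m)

ndc = np.array([0.3, -0.4, 0.5])
v_good = velocity_error(ndc, m, m_inv, divide_all=True)   # ~0, as expected
v_bad  = velocity_error(ndc, m, m_inv, divide_all=False)  # large false velocity
```

After the inverse transform, w is nowhere near 1, so the stale w feeds wrongly-scaled translation terms into the second transform — which matches the symptom of large velocities with a stationary camera.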