# Quaternions and tangent space

I was just wondering if anyone has tried using quaternions to store the tangent space -> eye space transform for normal mapping. It seems to me like it could save some vertex data space or at the very least reduce the number of interpolators needed.

Any thoughts?

I could see something like this at the vertex level, if you’re really pressed for attributes/memory. But you would need to weigh this against the expense of a quat-to-basis conversion per vertex (since you need the actual basis in the end anyway). If you choose to interpolate the quat instead, then you’ll incur a per-fragment conversion, a hefty price to pay to spare 2 interpolants.

Many lighting techniques perform the necessary change of basis per-vertex, rather than interpolate the basis itself, so the interpolant issue isn’t a big deal in this case.

Did you have a particular example in mind?

Originally posted by Q:
But you would need to weigh this against the expense of a quat-to-basis conversion per vertex (since you need the actual basis in the end anyway).
Why do you need the basis in the end?

Did you have a particular example in mind?
What I had in mind was something like this:
- Each vertex stores a quat that takes a direction from tangent space to object space.
- In a vertex program, that quat is concat’d with a quat that takes a direction from object space to eye space (e.g. derived from modelview.invtrans by the CPU).
- This quat is then passed on to a fragment program, which uses the quat directly to transform a normal read from a normal map into eye space for use in lighting.

I do realize it’s moving a lot of work into the fragment program just to save a couple interpolants. I thought of it more as a novelty than anything else and was just wondering if there was any use for it.

Why do you need the basis in the end?
Personally, I feel naked without a basis. Besides, the alternative, q(v)q^-1, is about equal in computational cost, so I would prefer the version that looks better with my shoes.

Novel approaches generally offer the promise of improved visual quality, or an increase in efficiency (or both if we’re lucky). I’m having trouble seeing either here.

What I had in mind was something like this:
- Each vertex stores a quat that takes a direction from tangent space to object space.
- In a vertex program, that quat is concat’d with a quat that takes a direction from object space to eye space (e.g. derived from modelview.invtrans by the CPU).
- This quat is then passed on to a fragment program, which uses the quat directly to transform a normal read from a normal map into eye space for use in lighting.
I have actually implemented this. Linearly interpolating quaternions causes some problems with acceleration (the resulting rotation does not advance at a constant angular rate), and quaternion math is comparatively slow in the fragment shader. You don’t have to calculate basis vectors though, as a quaternion-vector multiplication can be performed directly.

sandwich transform QvQ~ is

vresult = v + 2 * cross( Q.xyz, cross( Q.xyz, v ) + v * Q.w );

or Q~vQ

vresult = v + 2 * cross( Q.xyz, cross( Q.xyz, v ) - v * Q.w );

on the CPU, this is 18 mul and 12 add, exactly 2 times the ops needed for a 3x3 matrix-vector multiply (9 mul and 6 add)

on the GPU, this is 6 instructions: 4 MAD + 2 MUL.

cheers