Quaternion functions for GLSL

I mean Blinn-style bumpmapping.

Possibly an obsolete thread, but it covers an area of current (quaternion) interest to me. My thinking is this. Tangent space normal maps basically store normals as offsets from the z axis (0, 0, 1). The underlying assumption is that if you know the rotation required to map the z axis onto the (interpolated) vertex normal, you can use that same rotation to rotate the offset normal from the normal map so that it sits relative to the vertex normal (just as, pre-rotation, it sits relative to the z axis). The point of the tbn matrix, which as I understand it is supposed to be orthogonal, is to act as a base frame allowing for this rotation.

If this is the case, then we know the z axis vector (0, 0, 1) and we know the vertex normal. The cross product gives an axis perpendicular to both vectors, and from it we can construct a quaternion describing the rotation about that axis that maps the z axis onto the vertex normal:

vec3 rotAxis = cross(zAxis, vertexNormal);   // axis of the rotation taking the z axis onto the vertex normal
vec4 q;
q.xyz = rotAxis;
q.w   = sqrt(dot(zAxis, zAxis) * dot(vertexNormal, vertexNormal))
        + dot(zAxis, vertexNormal);          // |a||b| + a.b (shortest-arc construction)
q = normalize(q);

If the quaternion is calculated in the vertex shader, and if GLSL were able to interpolate quaternions, the quaternion could be used to rotate the normal from the normal map:

rotNorm.xyz = (q * normQuat * inverseQ).xyz;   // '*' here means the quaternion product, not GLSL's component-wise multiply

where

normQuat.xyz=normFromMap.xyz;
normQuat.w=0;

and

inverseQ.xyz = -q.xyz;
inverseQ.w   = q.w;   // the conjugate; because q is unit length this is already the inverse
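
Since GLSL has no built-in quaternion product, the expressions above assume a small helper. A minimal sketch (the names qmul and qrotate are just illustrative):

vec4 qmul(vec4 a, vec4 b)
{
    // Hamilton product of two quaternions stored as (xyz = vector part, w = scalar part)
    return vec4(a.w * b.xyz + b.w * a.xyz + cross(a.xyz, b.xyz),
                a.w * b.w - dot(a.xyz, b.xyz));
}

vec3 qrotate(vec4 q, vec3 v)
{
    // q * (v, 0) * conjugate(q), assuming q is unit length
    return qmul(qmul(q, vec4(v, 0.0)), vec4(-q.xyz, q.w)).xyz;
}

With those helpers the rotation above could be written as rotNorm.xyz = qrotate(q, normFromMap.xyz);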

Effectively, tangent space normals are converted into world space normals on the fly, while keeping all the advantages of tangent space normal maps.

Zengar’s take of quitting normal mapping altogether is also interesting. The best rendering I get is with good textures modified with smooth/transparent (Strauss) materials. I was also thinking about Bioshock and how they seem to have something like a basic backdrop, not lit by lights, on which they place smaller lit detail (a shiny tile may glitter relative to a light while the floor in which it is embedded is unchanging).

It is the point of the tbn matrix, which as I understand it is supposed to be orthogonal

The tangent space basis matrix is not required to be orthogonal. It describes the orientation of the S and T texture coordinates relative to the vertex positions. Skewing is perfectly legitimate in texture mapping, and the tangent space basis must match this.

The normal will usually be orthogonal to the plane of the tangent and bitangent. But that’s about it.

I know that, as a matter of fact, tbn matrices end up not being orthogonal and get used as they are, but I have understood this as “doing the best with a broken tool”. The reason for this thinking is that the tbn matrix functions to provide a correctly oriented orthogonal base frame used to rotate vectors from one space to another; as far as I understand, base frames used for rotations must be orthogonal (otherwise you get skewing/scaling, etc.). Or am I totally misunderstanding the function of the tbn matrix, in which case can someone please explain what it does?

The problem I am grappling with is this: if you take the axis perpendicular to the zAxis vector and the vertexNormal vector, calculate the angle between the two vectors about that axis, and then apply the same rotation about the same axis, but this time to the normal in the normal map (rather than to the zAxis that the mapped normal lies relative to), will the mapped normal end up lying relative to the vertexNormal the way it currently lies relative to the zAxis in the tangent space normal map? Or will that rotation effectively spin the mapped normal around the vertexNormal as it is carried along with it?

Skewing is part of what is needed for a tangent basis. The purpose of the tangent basis is to transform vertex normals from model space into the space of the texture (or from the texture into model space). If the texture mapping has skewing, then the tangent basis must also have skewing. Otherwise, the mapping doesn’t work.

It isn’t “doing the best with a broken tool”; that’s how the tool works. The transform to the texture is only orthogonal if the texture mapping is orthogonal relative to the surface. If the texture mapping isn’t orthogonal, that’s fine; the math all still works.

So you cannot assume that it will be a pure rotation matrix, and therefore you cannot just use a quaternion.
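
For reference, the usual per-triangle tangent/bitangent derivation follows the texture mapping directly, which is why the resulting basis is only orthogonal when the mapping is. A sketch in GLSL-style syntax (the function and variable names are illustrative):

// p0, p1, p2: triangle positions; uv0, uv1, uv2: their texture coordinates
void triangleTangentBasis(vec3 p0, vec3 p1, vec3 p2,
                          vec2 uv0, vec2 uv1, vec2 uv2,
                          out vec3 tangent, out vec3 bitangent)
{
    vec3 e1 = p1 - p0;
    vec3 e2 = p2 - p0;
    vec2 d1 = uv1 - uv0;
    vec2 d2 = uv2 - uv0;
    float r = 1.0 / (d1.x * d2.y - d1.y * d2.x);   // reciprocal of the signed UV-space area
    tangent   = (e1 * d2.y - e2 * d1.y) * r;       // direction in which S increases
    bitangent = (e2 * d1.x - e1 * d2.x) * r;       // direction in which T increases
}

If the UV mapping is skewed, tangent and bitangent come out non-perpendicular, and that is exactly what makes the transform between texture space and model space consistent.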

Also there are many quaternions that map the z basis vector to a given vertex normal vector, e.g. consider a flat surface in the xy plane rotating around the z axis.

In that case your algorithm will always produce the identity quaternion, because the vertex normal is identical to the z axis vector, so the rotation of the plane is completely lost.

For using quaternions for normal mapping and such… if you want to enforce that the TBN vectors are orthogonal, i.e. re-orthogonalize for each pixel, quaternions are great: you have the vertex shader output a vec4 holding the quaternion that encodes the TBN, and the fragment shader does a normalize on the quat, which is very cheap. You essentially get NLERP on your TBNs, which is pretty good. However, one thing that is a bummer for quaternions is that they only hold positively oriented orientations, which essentially means that the 3x3 matrix of a unit quaternion always has determinant 1… which, unless you’ve been careful about the model’s texture space, often does not hold for all of the TBN vectors of the model. And often enough there are a couple of triangles on the model which have both positively and negatively oriented TBNs, which means that somewhere in the triangle the interpolated TBN values are linearly dependent.
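
A minimal sketch of that fragment-shader side, under the assumption that the vertex shader passes the TBN orientation as a vec4 named tbnQuat (all names here are illustrative):

#version 330 core

in vec4 tbnQuat;        // per-vertex TBN quaternion, interpolated by the rasterizer
in vec2 uv;
uniform sampler2D normalMap;
out vec4 fragColor;

// rotate v by the unit quaternion q
vec3 qrotate(vec4 q, vec3 v)
{
    vec3 t = 2.0 * cross(q.xyz, v);
    return v + q.w * t + cross(q.xyz, t);
}

void main()
{
    vec4 q  = normalize(tbnQuat);                      // cheap per-pixel renormalization
    vec3 n  = texture(normalMap, uv).xyz * 2.0 - 1.0;  // unpack the tangent-space normal
    vec3 wn = qrotate(q, n);                           // normal in the quaternion's target space
    fragColor = vec4(wn * 0.5 + 0.5, 1.0);             // visualize the result
}

Normalizing the interpolated vec4 is all that is needed to get a unit quaternion per pixel; that is the NLERP mentioned above.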

Alfonse Reinheart:

The skewing you are talking about is the skewing done in the modelling program when the u,v coords are pinned to the texture, is it not? But once you have accepted that you have done the best you can with the uv mapping, that skewing is irrelevant. Whatever texel you have assigned to whatever uv coord remains fixed, which means whatever normal is assigned to that texel also remains fixed (to the mesh face). The skewing I am talking about is the skewing caused by using a non-orthogonal base frame to rotate a vector from one space to another. I should think that any skewing caused by overstretching a texture across the mesh would be exacerbated rather than relieved by introducing further skewing through a non-orthogonal base frame as a tbn matrix.

mbentrup:

Isn’t this precisely the point? The surface you describe has a normal that points up the z axis, therefore the tbn matrix required is the identity matrix; no rotation of the normal is required. Apart from the different 0–255 encoding used in the tangent space blue channel, as opposed to the 0–128–255 configuration used in the world space blue channel, you can map all south facing walls (normals pointing up the z axis) with tangent space normal maps and treat them as if they were world space normal maps. The Photoshop normal mapper, for example, has no idea where the vertices and faces of a cube may lie, but you can still use it to create a normal map for the cube. Photoshop works from the assumption that all normals of the cube face up the z axis and it creates perturbations based on that assumption. The tbn matrix is a base frame used to rotate Photoshop’s normals so that the perturbations map onto the vertex normals of the model as they really are. So if the vertex normals really are pointing up the z axis, then no rotation (or the identity rotation) needs to be performed.

The skewing you are talking about is the skewing done in the modelling program when the u,v coords are pinned to the texture, is it not? But once you have accepted that you have done the best you can with the uv mapping, that skewing is irrelevant. Whatever texel you have assigned to whatever uv coord remains fixed, which means whatever normal is assigned to that texel also remains fixed (to the mesh face). The skewing I am talking about is the skewing caused by using a non-orthogonal base frame to rotate a vector from one space to another. I should think that any skewing caused by overstretching a texture across the mesh would be exacerbated rather than relieved by introducing further skewing through a non-orthogonal base frame as a tbn matrix.

If by “exacerbated” you mean “corrected”, then what you’ve said is true. The purpose of having a non-orthogonal basis is to correct the non-orthogonal texture mapping.

Texture space has orthogonal S and T coordinates. S is perpendicular to T. If the mapping that gets you from the mesh to the texture is not orthogonal, then the transform that gets your normals from model space to texture space also cannot be orthogonal.

If the texture space transform is orthogonal when the texture coordinate mapping isn’t, then you get skewed normals from what they should be.

Here’s a perfect example.

Let’s say your normal map gives all of its normals the value (0.707, 0.0, 0.707). And let’s say you’re drawing a square, which is vertical. Your texture coordinates are the four corners of the texture, with S=1.0 being +Y in model space, and T=1.0 going in the +X direction. The plane of our square will be facing the +Z direction; that’s the direction we’re looking at it from.

The tangent-space basis matrix for this mesh is perfectly orthogonal, correct? So when we do our tangent-space transform, the normal we get from the mesh, in model space, will be (0, 0.707, 0.707). Do we agree?

OK, now let’s do something interesting. We are going to apply a skewing transform to the model. This is a perfectly reasonable thing to do, yes? Skewing a model is a functional concept. We are going to skew the model by 45 degrees in the +X direction. Since the square is centered at (0, 0, 0) in model space, this will cause the top half of the square to shear to the right and the bottom half to shear to the left.

This will be done by standard model matrix manipulation, something you’ll find in any graphics book. The model space normal we get from the tangent-space math is the same; all that changes is the matrix we use to transform it from model space to camera space.
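
For concreteness, the positions of such a 45 degree skew along +X would be transformed by something like the following matrix, written here in GLSL's column-major mat3 notation purely as an illustration:

// shear: x' = x + y, y' = y, z' = z
mat3 skew = mat3(vec3(1.0, 0.0, 0.0),   // first column
                 vec3(1.0, 1.0, 0.0),   // second column: +Y contributes to +X
                 vec3(0.0, 0.0, 1.0));  // third column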

The model space normals were (0, 0.707, 0.707). The post-skewing normals must be this: (0.577, 0.577, 0.577), post-normalization (necessary due to the skewing transform).

Do you agree that this is what we should get? OK, good.

Let’s play one more trick.

We are going to bake the skew into the model itself. We are now going to build a model that is a skewed square. The texture coordinates remain exactly the same; all that changes are the positions. And the tangent-space basis, of course.

If our tangent-space basis remains orthogonal, what normals do we get out of the tangent-space transform? Well, that depends on how we compute the orthogonal basis for a non-orthogonal texture mapping. If we used the previous tangent-space basis, which lines up with the mapping of the T texture coordinate, we get the same normals as before: (0, 0.707, 0.707). If we rotate the basis by 45 degrees, so that the S texture coordinate lines up with the basis, we get: (0.5, 0.5, 0.707). If we go half-way between, then we get: (0.271, 0.653, 0.707).

All of these answers are wrong! We know that they’re wrong. They are wrong because we did not get the same answer we got from skewing the mesh after the tangent space access.

Do you understand now why a non-orthogonal basis is so important?

Yes.

No.

It is the identity matrix only if the tangent and bitangent/binormal vectors also point in the x and y direction.

If you rotate the surface around the z axis you’d use the identity transform for any angle, so the normals wouldn’t rotate with the surface.

As a side point…

Very few models out there deliberately have non-orthogonal TBN-vector sets, very few. Additionally, using a skew matrix for modelview is not really that common either…

Also as a side note, using quaternions for stuff besides TBN does happen, quite a bit at times. Indeed, if one knows a priori that the transformations one has are always orthogonal, then a (vec3, float, quaternion) tuple handles translation, scaling, mirroring and rotation… i.e. a translation plus M where M is orthogonal. The main benefit of such in C/C++ code is that the composition can normalize the quaternion product as it goes… this keeps things much more numerically stable and is faster than re-orthogonalizing a matrix.
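
A minimal sketch of that kind of tuple and its composition, written in GLSL syntax to match the rest of the thread (the struct and function names are just illustrative):

struct SRT
{
    vec3  t;   // translation
    float s;   // uniform scale (a negative value gives a mirroring)
    vec4  q;   // unit rotation quaternion
};

vec4 qmul(vec4 a, vec4 b)
{
    return vec4(a.w * b.xyz + b.w * a.xyz + cross(a.xyz, b.xyz),
                a.w * b.w - dot(a.xyz, b.xyz));
}

vec3 qrotate(vec4 q, vec3 v)
{
    vec3 u = 2.0 * cross(q.xyz, v);
    return v + q.w * u + cross(q.xyz, u);
}

// compose a after b: (a o b)(x) = a.t + a.s * R(a.q) * (b.t + b.s * R(b.q) * x)
SRT compose(SRT a, SRT b)
{
    SRT c;
    c.q = normalize(qmul(a.q, b.q));        // renormalizing here is the cheap stability step
    c.s = a.s * b.s;
    c.t = a.t + a.s * qrotate(a.q, b.t);
    return c;
}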

At any rate, making quaternions a built-in type in GLSL would be useful.

Can’t edit older posts… another nice thing to add: complex types and arithmetic.

Let me share some experience of using quaternions for everything, including TBN.

I provided a (quaternion, handedness) pair instead of (normal, tangent, bitangent) in the vertex attributes. Handedness is either +1 or -1. All vertex shaders linked with a GLSL quaternion library and used it to transform the light and camera vectors into tangent (or whatever) space. This setup supported everything you could do with an orthonormalized TBN and took less space and bandwidth, but it obviously required more code.
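
A sketch of what that vertex-shader side might look like; the attribute names, the qrotate helper and the choice of which axis the handedness flips are all illustrative assumptions, since the exact convention depends on the exporter:

#version 330 core

in vec3 position;
in vec4 tbnQuat;        // orientation of the tangent frame as a unit quaternion
in float handedness;    // +1 or -1, restores a mirrored bitangent

uniform mat4 modelViewProj;
uniform vec3 lightPosModel;     // light position in the same space as position

out vec3 lightDirTangent;

// rotate v by the unit quaternion q
vec3 qrotate(vec4 q, vec3 v)
{
    vec3 t = 2.0 * cross(q.xyz, v);
    return v + q.w * t + cross(q.xyz, t);
}

void main()
{
    vec3 l  = lightPosModel - position;                     // light vector in model space
    vec3 lt = qrotate(vec4(-tbnQuat.xyz, tbnQuat.w), l);    // conjugate rotates *into* the tangent frame
    lt.y *= handedness;                                     // flip the bitangent axis for mirrored UVs (convention-dependent)
    lightDirTangent = lt;
    gl_Position = modelViewProj * vec4(position, 1.0);
}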

The export procedure gets especially complex. First, you need to support the case when no UV is given by generating “fake” quaternions that would still provide you with good normals. Second, there has to be a smart pre-processing algorithm performed if you need those quaternions to be interpolation-friendly.

This was a wonderful experience and all; you can even find my article about quaternions in GPU Pro 3. However, for the next project (kri-web) I decided to use quaternions only for complex problems (like skeletal animation) on the GPU side, and to stick with a few matrices in most other scenarios.

In conclusion, I would appreciate official support for quaternion operations in GLSL, but it is no longer as relevant to me as it used to be.