Tangent or object space

The procedure is this:

And what about this process makes it better than tangent space?

Tangent-space normal maps work. Even if the procedure you outline does work, it will not work “better” by any objective measurement. It will not make the rendered graphics better or more accurate. There are no artifacts in tangent-space normal mapping that your process will remove. It will not make your renderer faster. It will not make your renderer easier to use. Any material characteristics of the surface will work exactly the same regardless of what space you do your lighting computations in.

And your procedure still misses the reason why tangent-space is used: because you need a way to define the orientation of the bump map relative to the points on the surface. Your procedure still doesn’t work if you change the texture mapping, because the offsets, or whatever it is you store in the texture, are not oriented relative to a neutral coordinate system. They are oriented relative to the specific mapping that was used to construct the texture.

I don’t believe tangent space is the right methodology for me.

But you haven’t given a real reason why it isn’t, beyond this nebulous notion of tangent space being “abstract” or some such.

The way I visualise curved four-dimensional space-time and time dilation is exactly the same way I visualise the time it takes to get from A to B, or how I could hit a tennis ball that came off a server’s racket.

Then I submit that you are not actually visualizing curved space-time. You’re visualizing the effects of curved space-time (i.e., gravity), not the actual curvature itself. That’s a Newtonian representation of a relativistic concept; it simply doesn’t allow for understanding the whole picture.

You don’t even need to store a TBN matrix per vertex, only the good ol’ gl_Normal, as you can easily recompute the TBN in either the fragment or geometry shader.

I’ve never heard of that being done before. How do you go about doing that?

http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=282834#Post282834
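For reference, the linked technique amounts to reconstructing the basis from screen-space derivatives. Here is a minimal GLSL sketch of that idea; the function and variable names are mine, not taken from the linked post:

    // Reconstruct a TBN in the fragment shader from derivatives.
    // vPos, vNormal, vUV are interpolated from the vertex shader:
    // object-space position, object-space normal, and texture coords.
    mat3 computeTBN(vec3 vPos, vec3 vNormal, vec2 vUV)
    {
        // Position and UV deltas across the 2x2 fragment quad.
        vec3 dp1  = dFdx(vPos);
        vec3 dp2  = dFdy(vPos);
        vec2 duv1 = dFdx(vUV);
        vec2 duv2 = dFdy(vUV);

        // Solve dp = T*du + B*dv for the directions of increasing
        // U and V. Assumes a non-degenerate UV mapping (det != 0).
        float det = duv1.x * duv2.y - duv2.x * duv1.y;
        vec3 T = normalize((dp1 * duv2.y - dp2 * duv1.y) / det);
        vec3 B = normalize((dp2 * duv1.x - dp1 * duv2.x) / det);
        vec3 N = normalize(vNormal);
        return mat3(T, B, N);
    }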

Thank you, Ilian, for the link. Getting hold of the TBN per vertex is one of the prime black marks against tangent space.

Alfonse, you are right that storing vector offsets in object space will not work, because all objects share the same coordinate frame, as you pointed out earlier: a normal map on a floor containing an offset of -1 along the x axis, if applied to an east wall, will simply push the wall’s normals further down the negative x axis. The problem lies, however, in storing the offset angle in an object-space-defined vector… the vector does not preserve its meaning once applied to a differently oriented normal; the angle, however, does. So the angle is what should be encoded into the normal map.

Getting hold of the TBN per vertex is one of the prime black marks against tangent space.

Ignoring Ilian’s automatic method, getting the tangent and binormal vectors for a vertex is a matter of simple math. It’s something that happens in the tool pipeline as part of processing your vertex data; it’s nothing anyone has to actually do by hand. If that, a simple matter that tools handle completely transparently, is what you consider a “prime black mark,” then perhaps you need some perspective on what is easy to do in rendering and what is not.
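That “simple math” is the standard per-triangle solve of the UV-to-position relationship. A hedged sketch follows, written in GLSL-style vector math for consistency with the other snippets in this thread, although in practice this step runs CPU-side in the conditioning tools; all names are illustrative:

    // Tangent/bitangent for one triangle, from its object-space
    // positions (p0..p2) and texture coordinates (uv0..uv2).
    void triangleTangents(vec3 p0, vec3 p1, vec3 p2,
                          vec2 uv0, vec2 uv1, vec2 uv2,
                          out vec3 tangent, out vec3 bitangent)
    {
        vec3 e1 = p1 - p0;   // triangle edges
        vec3 e2 = p2 - p0;
        vec2 d1 = uv1 - uv0; // matching UV deltas
        vec2 d2 = uv2 - uv0;

        // Invert the 2x2 UV delta matrix; assumes the mapping is
        // not degenerate (the determinant is nonzero).
        float r = 1.0 / (d1.x * d2.y - d2.x * d1.y);
        tangent   = normalize((e1 * d2.y - e2 * d1.y) * r);
        bitangent = normalize((e2 * d1.x - e1 * d2.x) * r);
    }

Per-vertex values are then the average of the results from the triangles sharing each vertex, re-orthogonalized against the vertex normal.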

the vector does not preserve its meaning once applied to a differently oriented normal; the angle, however, does.

No, it doesn’t. An angle doesn’t have a meaning unless an axis of rotation is specified. That axis can be explicitly defined or, as in your case, implicitly defined. Either way, the axis must be in a space. And since you’re working in object space, that axis is in object space.

Which means that there is no difference between an angular rotation and an offset in terms of this problem. A 10-degree rotation about the object-space X axis is no different in its results from the equivalent directional offset.
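To make that concrete, here is an illustrative GLSL fragment (names mine, not anyone’s actual code). The rotation axis is pinned to object space no matter what surface the map lands on, which is exactly the failure mode of a stored offset vector:

    // Rotate a normal by `angle` radians about the object-space X axis.
    // Applied to a floor normal (0,0,1) this tilts it as intended; but
    // applied to an east-wall normal (1,0,0) it does nothing at all,
    // because that normal lies on the rotation axis. The angle carries
    // no more surface-relative information than an offset does.
    vec3 rotateAboutObjectX(vec3 n, float angle)
    {
        float c = cos(angle);
        float s = sin(angle);
        mat3 rotX = mat3(1.0, 0.0, 0.0,   // column 0
                         0.0,   c,   s,   // column 1
                         0.0,  -s,   c);  // column 2
        return rotX * n;
    }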

It is the tooling pipeline that is the biggest hindrance to using a TBN matrix. Blender 2.5, for example (and as far as I can see from googling the matter) does not export tangents. Specifically, the API does not seem to support gathering per-vertex tangents from the database (if they are there), and exporters do not seem to export tangents. Prior to thinking about object-space normal maps, I was creating an export script to export tangents and bitangents. The problem was that I was getting odd results from shader programs using the exported values, and I have assumed the problem lies with the script. The script exports the models into jogl as it ought to, so utter uselessness on my part does not seem to be the problem.

Am I right in this: tangent space natively maps all z values down (0,0,-1)? The TBN matrix is the conversion required to move the vertex to (0,0,0) with its normal pointing (0,0,-1), rotated about the z axis so the UV u axis lines up with the object-space x axis and the UV v axis lines up with the object-space y axis (given that all objects share the same coordinate frame, we are really talking about world space)?

If this is the case, then there also has to be some mediation between the vertex position attributes and the UV coordinates, because UV coordinates do not exist in object space; only vertex positions do. But there is a one-to-one mapping between UV coordinates and vertex positions.

Thus, if the fragment shader assumes every fragment is at (0,0,0) and faces (0,0,-1), and we use the TBN matrix to bring the light and eye vectors into positions relative to the fragment, the normals stored in the map will be accurate, assuming those normals were all generated with the pixel at (0,0,0) and facing (0,0,-1).

Blender 2.5, for example (and as far as I can see from googling the matter) does not export tangents

Of course it doesn’t; that’s not its job. The purpose of Blender exporters is to dump the data that Blender has.

In general, after exporting a mesh, you run the exported data through your asset conditioning pipeline in order to generate the data that your application will actually use. Here, you do things like compress the vertex data from floating-point values to normalized integers (as needed). And this is generally where the tangents and binormals are generated. By not putting this in the exporter, your tool pipeline is not as tightly bound to a specific modelling application. It also means that you can re-condition the mesh at any time, if you want to try different techniques and so forth. And it makes debugging so much easier.
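As a small illustration of the compression step: a sketch of signed-normalized packing, in GLSL syntax only for consistency with the other snippets here (in practice this runs CPU-side in the conditioning tool):

    // Map a component in [-1, 1] to a signed byte for a normalized
    // GL_BYTE attribute; the GPU undoes this for free when the
    // attribute is declared normalized in glVertexAttribPointer.
    int packSnorm8(float x)
    {
        return int(round(clamp(x, -1.0, 1.0) * 127.0));
    }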

Am I right in this: tangent space natively maps all z values down (0,0,-1)? The TBN matrix is the conversion required to move the vertex to (0,0,0) with its normal pointing (0,0,-1), rotated about the z axis so the UV u axis lines up with the object-space x axis and the UV v axis lines up with the object-space y axis (given that all objects share the same coordinate frame, we are really talking about world space)?

The position is never affected by tangent-space computations. You can use tangent-space computations in two ways. You can transform direction vectors (the light direction) into tangent space and do the various lighting computations there. Or you can transform the normals from the tangent-space normal map into object space, and from there do lighting in whatever space you want.

But in neither case are you required to transform positions into tangent space. Indeed, since tangent space is often skewed relative to object or camera space, that would make attenuation distance and possibly the light direction come out wrong.

The tangent and binormal vectors are object-space vectors that point in the directions of increasing U and V for that particular texture mapping. Combined with the normal, they form the three basis vectors that define a transformation from object space to tangent space. When the vectors are transformed into camera space, they define a transformation from camera space to tangent space.

And generally, the “up” direction in tangent space is positive Z, not negative Z.
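Putting the two usages together, here is a hedged fragment-shader sketch with the usual positive-Z decode; all names are illustrative and assume an orthonormal, object-space TBN interpolated from the vertex shader:

    uniform sampler2D normalMap;

    // Stored texels are unsigned [0,1]; remap to [-1,1], with the
    // map's "up" coming out as positive Z.
    vec3 decodeNormal(vec2 uv)
    {
        return texture2D(normalMap, uv).xyz * 2.0 - 1.0;
    }

    float lambert(vec3 T, vec3 B, vec3 N, vec3 lightDirObj, vec2 uv)
    {
        mat3 TBN = mat3(normalize(T), normalize(B), normalize(N));
        vec3 nTangent = decodeNormal(uv);

        // Way 1: bring the light direction into tangent space (for an
        // orthonormal basis, the transpose is the inverse) and light there.
        vec3 lightTangent = transpose(TBN) * normalize(lightDirObj);
        float way1 = max(dot(nTangent, lightTangent), 0.0);

        // Way 2: bring the sampled normal into object space and light
        // in object space. Both ways give the same answer.
        vec3 nObject = TBN * nTangent;
        float way2 = max(dot(nObject, normalize(lightDirObj)), 0.0);

        return way1; // == way2
    }

Note that positions never cross between spaces; only direction vectors do.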