I am loading a plain and boring triangle model with no texture coordinates, and I am (in the process of) implementing Ward's BRDF in OpenGL. Part of that requires the tangent and binormal vectors, but most common examples of computing these require texture coordinates.
Since I know the point and the normal at the surface, I have the definition of the tangent plane. I can then choose any direction in that plane as my tangent vector, and take its cross product with the normal to get the binormal…
Normalization aside, does this sit well? I feel like I've missed something critical.
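For reference, here is a minimal sketch of the construction described in the question: pick a helper axis that is not parallel to the normal, take one cross product to get a tangent, and a second to get the binormal. The `Vec3` type and function names are illustrative, not from any particular library.

```cpp
#include <cassert>
#include <cmath>

// Minimal vector type; names here are illustrative placeholders.
struct Vec3 { float x, y, z; };

Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}
float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return { v.x/len, v.y/len, v.z/len };
}

// Build an arbitrary orthonormal tangent frame from the normal alone:
// pick a helper axis not parallel to n, cross it with n to get a
// tangent in the tangent plane, then complete the frame.
void tangentFrame(Vec3 n, Vec3& tangent, Vec3& binormal) {
    Vec3 helper = (std::fabs(n.x) < 0.9f) ? Vec3{1, 0, 0} : Vec3{0, 1, 0};
    tangent  = normalize(cross(helper, n));
    binormal = cross(n, tangent);  // unit length already: n is unit and n ⊥ tangent
}
```

This produces a valid orthonormal frame at any single point, but note it makes an *arbitrary* choice of in-plane orientation, which is exactly the degree of freedom the answers below discuss.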
The tangent and binormal vectors are geometric properties, and can thus be computed from vertex data alone. However, there are infinitely many valid orientations for the tangent and binormal directions (any two non-parallel vectors in the tangent plane will do). I believe some authors use texture coordinates to gain control over how these two directions are defined.
I am familiar with using tangent/binormal for normal mapping, so I will speak to that:
The tangents are computed from texture coordinates so that we can relate displacements in texture space to displacements in the real world. The tangent is the world-space direction in which a small displacement along the texture's u axis moves; the binormal corresponds to v. So adding a red component to a normal-map sample displaces the normal in that particular world direction, and similarly with green. Blue corresponds to the geometric normal.
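This derivation can be sketched per triangle: solve the 2×2 system relating the two position edges to the two UV edges, so the result is the world-space direction in which u increases. A hedged sketch, with hypothetical names, assuming non-degenerate UVs:

```cpp
#include <cmath>

struct Vec2 { float u, v; };
struct Vec3 { float x, y, z; };

Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

// Per-triangle tangent from texture coordinates: express the two
// position edges (e1, e2) in terms of the two UV-space edges and
// invert the 2x2 UV matrix to recover d(position)/du.
Vec3 triangleTangent(Vec3 p0, Vec3 p1, Vec3 p2,
                     Vec2 uv0, Vec2 uv1, Vec2 uv2) {
    Vec3 e1 = sub(p1, p0), e2 = sub(p2, p0);
    float du1 = uv1.u - uv0.u, dv1 = uv1.v - uv0.v;
    float du2 = uv2.u - uv0.u, dv2 = uv2.v - uv0.v;
    float r = 1.0f / (du1 * dv2 - du2 * dv1);  // assumes UVs are not degenerate
    return { r * (dv2 * e1.x - dv1 * e2.x),
             r * (dv2 * e1.y - dv1 * e2.y),
             r * (dv2 * e1.z - dv1 * e2.z) };
}
```

In practice the per-triangle tangents are then accumulated per vertex and orthogonalized against the vertex normal.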
Can’t speak to the BRDF model, but deriving the tangent frame from texture coordinates is likely neither necessary nor useful if textures aren’t involved. You do likely need some kind of smoothing property, though, so that the tangents/bitangents aren’t varying wildly from point to point. This is something a texture atlas gives you; arbitrary per-vertex choices will be troublesome in this regard. Cheers.
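One common way to get that smoothing property (a sketch, not tied to any particular engine): average the per-face tangents into each shared vertex, then Gram-Schmidt the accumulated tangent against the vertex normal so neighbouring vertices end up with smoothly varying frames.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return { v.x/len, v.y/len, v.z/len };
}

// Gram-Schmidt step: project the accumulated (averaged) face tangent
// onto the plane perpendicular to the vertex normal and renormalize.
// Adjacent vertices that accumulated similar face tangents will then
// carry similar, smoothly varying tangent frames.
Vec3 orthonormalizeTangent(Vec3 n, Vec3 tAccum) {
    float d = dot(tAccum, n);
    Vec3 t = { tAccum.x - d*n.x, tAccum.y - d*n.y, tAccum.z - d*n.z };
    return normalize(t);
}
```

The averaging is what carries smoothness across the surface; the projection just restores orthogonality to each vertex's own normal.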