Object-Space Transform

Hello,

I’m doing a per-pixel lighting demo using core OpenGL 1.3 (multitexturing, DOT3, cube mapping).
The problem is that whenever an object is transformed to eye space by the modelview matrix, I have to recalculate the tangent space for each triangle of the object, and it’s slow.
The nVidia demo, which I couldn’t fully grasp, transforms the viewer and the light position to object space by the inverse of the modelview matrix. But the viewer is located at (0,0,0), and if the modelview is only a rotation, then rotating this point results in the same (0,0,0) point.
What is the correct way to transform the viewer and light positions to object space?

Thanks.

It’s just a multiply by the ModelViewInverse. Like this:

float4 ObjSpaceLight = mul( ModelViewInv, IN.LightVector );
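
If you’re doing it on the CPU instead of in a shader, it’s the same matrix-vector multiply; a minimal C sketch, assuming a column-major float[16] inverse modelview and w = 1 for positions (the function name is just an example):

/* Transform a point by a column-major 4x4 matrix (w assumed to be 1). */
void MulMatPoint(const float m[16], const float p[3], float out[3])
{
    out[0] = m[0]*p[0] + m[4]*p[1] + m[8]*p[2]  + m[12];
    out[1] = m[1]*p[0] + m[5]*p[1] + m[9]*p[2]  + m[13];
    out[2] = m[2]*p[0] + m[6]*p[1] + m[10]*p[2] + m[14];
}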

-SirKnight

Since an object is rotated about the origin (the viewer), transforming the viewer to object space means rotating it about the same axis but by the negated angle. Therefore,
the viewer position (0,0,0) will not be moved by the rotation.

Why do you need to transform the viewer to object space?

-Mezz

Because I don’t want to transform the object’s coordinates and recalculate a tangent-space basis for every triangle each frame.

To calculate the inverse of the modelview matrix fast, do the following (assuming no scale in the modelview, i.e. a pure rotation plus translation):

/* m   = column-major modelview matrix (e.g. from glGetFloatv)
   out = output inverse matrix */
void InvertRigidModelview(const float m[16], float out[16])
{
    /* The inverse of a pure rotation is its transpose. */
    out[0]  = m[0];  out[1]  = m[4];  out[2]  = m[8];   out[3]  = 0.0f;
    out[4]  = m[1];  out[5]  = m[5];  out[6]  = m[9];   out[7]  = 0.0f;
    out[8]  = m[2];  out[9]  = m[6];  out[10] = m[10];  out[11] = 0.0f;

    /* Inverse translation = -(R^T * t), where t = (m[12], m[13], m[14]). */
    out[12] = -m[12]*m[0] - m[13]*m[1] - m[14]*m[2];
    out[13] = -m[12]*m[4] - m[13]*m[5] - m[14]*m[6];
    out[14] = -m[12]*m[8] - m[13]*m[9] - m[14]*m[10];
    out[15] = 1.0f;
}

This way, you don’t need to bother finding determinants or anything fancy.
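
To tie it together, a hedged usage sketch (InvertRigidModelview is the function above; MulMatPoint is the point-transform helper sketched earlier in the thread; eyeLightPos is assumed to be your light position in eye space):

float mv[16], inv[16];
float objLight[3], objViewer[3];
float eyeViewer[3] = { 0.0f, 0.0f, 0.0f };  /* the viewer sits at the eye-space origin */

glGetFloatv(GL_MODELVIEW_MATRIX, mv);
InvertRigidModelview(mv, inv);
MulMatPoint(inv, eyeLightPos, objLight);    /* object-space light position */
MulMatPoint(inv, eyeViewer, objViewer);     /* picks up the inverse's translation column */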

I think the only thing you need to transform into object space is your light vector.

The only reason I can think of for needing to transform the viewer is if you want to do perfectly true specular, but regular N.H specular doesn’t require the viewer position.

Is that correct?

-Mezz

Originally posted by Mezz:
[b]I think the only thing you need to transform into object space is your light vector.

The only reason I can think of for needing to transform the viewer is if you want to do perfectly true specular, but regular N.H specular doesn’t require the viewer position.

Is that correct?

-Mezz[/b]

nope, H is computed using the viewer position…

That’ll teach me to forget to verify what I say.

For some reason I thought H was just halfway between the normal and the light vector…

-Mezz

BTW: it’s not the Phong shading term that uses H; this is only an approximation to the real Phong shading model, which uses true reflection vectors to compute the specular part. Because it’s not really easy to compute the exact reflection vector, Blinn introduced this calculation method in 1977:

L = normalize(Lightpos - point)
V = normalize(Viewerpos - point)

H = normalize(L + V)

For objects that are very close to the viewer, you have to compute V for every pixel/vertex.
If the object is far away from the viewer, you can compute V once for the whole object and reuse it for every pixel/vertex;
you won’t notice the difference…
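
As a minimal per-vertex sketch of the above in C (all names are illustrative, and everything is assumed to be in the same space, e.g. object space):

#include <math.h>

static void Normalize3(float v[3])
{
    float len = sqrtf(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
    v[0] /= len;  v[1] /= len;  v[2] /= len;
}

/* Blinn half-vector for one vertex, given light and viewer positions. */
void HalfVector(const float lightPos[3], const float viewerPos[3],
                const float vertex[3], float H[3])
{
    float L[3], V[3];
    int i;
    for (i = 0; i < 3; ++i) {
        L[i] = lightPos[i]  - vertex[i];
        V[i] = viewerPos[i] - vertex[i];
    }
    Normalize3(L);
    Normalize3(V);
    for (i = 0; i < 3; ++i)
        H[i] = L[i] + V[i];
    Normalize3(H);
}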

Ah yes, I remember now. For some reason I thought the expensive version (i.e. true Phong specular) was the one that had to deal with the viewer position each frame, which is why I assumed the Blinn method involved only the light and normal vectors.
My mistake.

However, I am curious about implementation details. For instance, where can you put your specular vector? The light vector can be put into the colour at each vertex, but where can you put the specular H vector?

Do you have to use another pass (assuming un-extended OpenGL 1.3)?

-Mezz

OK, sorry for not being able to express the problem precisely; my question was misunderstood.
Assume I have the inverse of the modelview and want to transform (rotate) the viewer, which is at (0,0,0). This point will not be rotated, and hence I end up with the untransformed viewer position.
Thanks.

Originally posted by Mezz:

However, I am curious about implementation details. For instance, where can you put your specular vector? The light vector can be put into the colour at each vertex, but where can you put the specular H vector?

What about the secondary color?
With a GF3 or Radeon you could also use one of the (additional) texture units…

[b]
Do you have to use another pass (assuming un-extended OpenGL 1.3)?

-Mezz[/b]

To do correct per-pixel lighting you have to compute the diffuse part for every light source affecting your point and add them together in the framebuffer. Then multiply that by the base texture, and after this compute the specular part for every light source and add it to the final image.
This is the correct way to do exact specular lighting.
However, most people compute the specular part in the first pass and store the result in the alpha channel. For most applications this is close enough to the correct implementation, but this method provides only monochrome specular lighting (because the different colors of the light sources are not handled correctly…)
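
A rough sketch of that pass structure using standard OpenGL blending (the Draw* helpers and numLights are hypothetical placeholders for your own rendering code):

int i;

/* First light fills the framebuffer with its diffuse contribution. */
glDisable(GL_BLEND);
DrawDiffusePass(0);

/* Remaining lights are added on top. */
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
for (i = 1; i < numLights; ++i)
    DrawDiffusePass(i);

/* Modulate the accumulated diffuse by the base texture. */
glBlendFunc(GL_DST_COLOR, GL_ZERO);
DrawBaseTexturePass();

/* Add the specular contributions last. */
glBlendFunc(GL_ONE, GL_ONE);
for (i = 0; i < numLights; ++i)
    DrawSpecularPass(i);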

Thanks for the info Adrian,

OK, so I can put it in the secondary colour… secondary colour is still an EXT though, isn’t it?

I don’t understand how you could use an additional texture unit when all you need is a colour. Would you need to perform some special lookup?

I see how the specular application can be simplified to be monochrome. I suppose multiple codepaths could take care of how precisely you do this calculation.

Wis Mak:
I’m still not perfectly sure I understand your problem, but you keep talking about transforming the viewer; isn’t it the view vector you are meant to transform?

-Mezz

No, it’s not the view vector that needs to be transformed. It’s the viewer position, so that I can calculate the correct view vector to a vertex/pixel, which is required in the specular equation.

(0,0,0) is the correct value for your viewer position in object space in your case, since your modelview is a pure rotation. (If the modelview also contained a translation, the inverse transform would move the viewer away from the origin.)

After getting this, you have to compute a view vector for each vertex:

ViewVectorForVertex = ObjectViewVector - Vertex

I hope this is what you need. It’ll work for other viewer positions as well.

Edit:
Just rechecked the post and this formula had already been posted.

Another thing: the tangent basis can be precalculated before rendering, but the light and viewer vectors need to be recalculated every frame if they are dynamic.

After getting them, you only need to multiply them by the tangent basis.
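
A minimal sketch of that last step, assuming a per-vertex orthonormal (tangent, binormal, normal) basis in object space (names are illustrative):

/* Rotate an object-space vector into tangent space: each component is a
   dot product with one of the basis vectors. */
void ToTangentSpace(const float T[3], const float B[3], const float N[3],
                    const float v[3], float out[3])
{
    out[0] = T[0]*v[0] + T[1]*v[1] + T[2]*v[2];
    out[1] = B[0]*v[0] + B[1]*v[1] + B[2]*v[2];
    out[2] = N[0]*v[0] + N[1]*v[1] + N[2]*v[2];
}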

[This message has been edited by t0y (edited 07-20-2002).]

To answer your first question:

If you use a non-local viewer and an infinite (directional) light, then you can transform to object space by just applying the transpose of the upper-left 3x3 of the modelview transform. This is because these vectors are (positionless) unit vectors, and rotation is all that matters.
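
A hedged sketch of that transpose trick in C, for a column-major float[16] modelview (valid only when there is no scale; the function name is just an example):

/* Bring an eye-space direction into object space via R^T (upper-left 3x3). */
void EyeDirToObjectDir(const float m[16], const float d[3], float out[3])
{
    out[0] = m[0]*d[0] + m[1]*d[1] + m[2]*d[2];
    out[1] = m[4]*d[0] + m[5]*d[1] + m[6]*d[2];
    out[2] = m[8]*d[0] + m[9]*d[1] + m[10]*d[2];
}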

If you’re using point lights or local viewer, then you have to do more math, and normalize, per vertex. This is slow in software (and not that snappy in hardware, either).

If you want a deformable object (skinning or whatnot) then you need to do even more work, as the inverse transform, per vertex, needs to be skinned.

Another approach you might want to look into is doing normal maps in object space instead of tangent space. While this means that the normal maps aren’t usually shareable between different meshes, and you have to use a unique texturing of the mesh, it’s also faster to transform and render. Inverse transforming skinned characters works here, too.

The other benefit of object space normal maps is that you don’t need to send the normal and binormal to the card, saving vertex bandwidth, in addition to saving transform instructions in your vertex shader (assuming you do this in a VS).

Hope this helps.