eye linear texture coords

If I specify a reference plane for s, this reference plane will be multiplied with the inverse of the current modelview matrix, right?

So with a reference plane p = (1, 0, 0, 0) and an inverse modelview matrix of
|1 0 0 -5|
|0 1 0 0|
|0 0 1 5|
|0 0 0 1|

after multiplication I get a reference plane p’ = (1, 0, 0, -5), right?

If I now calculate the distance of plane p’ to a point in eye coordinates (15, 0, -15, 1)^T, I get a distance of 1*15 + 0*0 + 0*(-15) + (-5)*1 = 10 (!)
Shouldn’t the distance be 15?! What is wrong in my calculations?!

I’m a lot rusty at this, so forgive me, but when you apply the inverse modelview to the plane, aren’t you converting the plane from eye space to world space? Then you say the point you are comparing to is in eye coordinates. Either you made a mistake and meant the point was in world coordinates, or you are wrong to be calculating the distance between the world-space plane and the eye-space point.

Looking more at the numbers you use to calc the distance:

The plane normal is (1,0,0) with a distance of 5 from the origin. Since the normal is (1,0,0) you only care about the x coord of the point, which is 15. So your point has x=15, and the equation of your plane is essentially x=5. 15-5 = 10, sounds right to me.
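In plain Python, that check is just the dot product (nothing GL-specific, purely the arithmetic from the posts above):

```python
# The transformed reference plane p' dotted with the eye-space point:
p_prime = [1, 0, 0, -5]     # the plane after the inverse-modelview multiply
point = [15, 0, -15, 1]     # the point in eye coordinates

s = sum(a * b for a, b in zip(p_prime, point))
print(s)  # -> 10
```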

But the distance on the x-axis from the eye-point e(-5,0,5,1)^T in world coords to the point p(10,0,10,1)^T in world coords is 15. This should be the result GL_EYE_LINEAR returns for the s texture coordinate. But if I use the formula in the docs about glTexGen() and GL_EYE_LINEAR I don’t get the distance of 15!

First of all, what are these ^T things that you keep using? It’s not a notation that looks familiar to me, which might be part of my misunderstanding.

About the points:
Are you saying (10,0,10,1) is world space coords and (15, 0, -15, 1) is eye space? The z component doesn’t match with your modelview transformation, but I’ll ignore that (since it won’t matter with the plane we are dealing with).

Actually, I’m a little confused about the matrix calculations, and which way you are multiplying things. When you convert the plane from eye to world, it looks like you are using a row vector on the left of the matrix, but when you convert the point from eye space to world space it looks like you are multiplying by a column vector on the right of the matrix. Just my observations from trying to reverse engineer your calculations. I’m trying to multiply everything out myself, but I’m doing something wrong because I’m coming up with something totally screwed up. I don’t think my mind is in a linear algebra mood today. I’ll think more about this though.

[This message has been edited by LordKronos (edited 05-30-2002).]

^T means transpose

err… I mean point p(10,0,-10,1)^T… sorry

A plane is represented by a row vector, so the matrix has to be on the right side. A point is represented by a column vector, so the matrix has to be on its left side. It has to be this way…

transpose, duh. It didn’t even occur to me. Neither did the fact that planes and points go on different sides of the matrix (it’s been a long time since I transformed planes), which explains why all my calculations were crapping out on me.

Anyway, here is what I come up with:

PLANE: world=(1,0,0,-5), eye=(1,0,0,0)
POINT: world=(10,0,-10,1), eye=(15,0,-15,1)

when you dotted (1,0,0,-5) with (15, 0, -15, 1), you had the plane in world space, but the point in eye space.

but something about that still troubles me. The eye plane (1,0,0,0) in eye space goes through the eye, whereas the world plane (1,0,0,-5) doesn’t. Or are we supposed to calc distance as Ax+By+Cz-Dw? I’m really starting to think it ISN’T a linear algebra day for me.

OK, it’s slowly coming back to me. When transforming a plane, don’t you transform by the transpose of the inverse of the matrix? The transpose part is just the same as how you switched the plane from a column vector on the right to a row vector on the left, so you still need to invert the matrix. Thus the plane (1,0,0,0) in eye space gets multiplied by the inverse of the eye-to-world matrix, which is:

|1 0 0 5|
|0 1 0 0|
|0 0 1 -5|
|0 0 0 1|

And we get a world space plane of (1,0,0,5), and that fits everything nicely.

At least, this is starting to fit my fuzzy recollection.

Well, I don’t understand this.
The docs about GL_EYE_LINEAR say that the texture coordinate g = p0*x + p1*y + p2*z + p3*w, where (p0, p1, p2, p3) is the reference plane after transformation with the current ‘inverse’ modelview matrix and (x, y, z, w)^T is the point in eye coords. So I don’t see a transformation with a ‘transpose of the inverse’ matrix…

Since they say that (x,y,z,w) is the point in eye coords, the plane (p0,p1,p2,p3) must also be in eye coords when they dot the two together. So I think they mean that (p0,p1,p2,p3) is the world-space reference plane after being transformed by the inverse modelview matrix. Strange that they would say it this way, since the plane that you input into eye-linear texgen is already transformed, but that’s the only thing that makes sense to me right now.

Have a look at http://msdn.microsoft.com/library/default.asp?url=/library/en-us/opengl/glfunc03_73u6.asp
perhaps you get it - I’m totally confused…

I see what you mean, it doesn’t make sense. I think their documentation is incorrect. First of all, they say:

(p1’ p2’ p3’ p4’) = (p1’ p2’ p3’ p4’)M^-1
which doesn’t make sense unless M is the identity.

Second, they say that “p1, p2, p3, and p4 are the values supplied in params” but then go on to say “Note that the values in params define a reference plane in eye coordinates”, which is contradictory.

So now there are two of us who think that this makes no sense :)
May I also ask you to have a look at Appendix A of http://developer.nvidia.com/docs/IO/1830/ATT/shadow_mapping.pdf

I took a quick read and I didn’t get it all. At the moment I was reading through this document http://www.nps.navy.mil/cs/sullivan/MV4470/resources/projective_texture_mapping.doc

which also comes from nvidia, though I’m not sure where it is on their site (I found it through google).

I’ve never been very good at texgen. I’ve tried it before, and after a lot of experimenting, the few times I tried it I was able to hack my way into getting it to work eventually (you know, like when your matrix transform doesn’t work: first you try flipping a sign bit, then you flip 2, that doesn’t work so you invert a matrix… etc., until it eventually works). When I was done, I never quite understood why it eventually worked. I’ve been meaning to learn texgen inside and out eventually, so this discussion is helping.

Here is what I think is going on (at least what I can pull together from what I read). Keep in mind that EVERYTHING I say below might be wrong.

In 3D, you usually:
start with object space coords
transform by model matrix M to get world space coords
transform by view matrix V to get eye space coords
transform by the projection matrix and so on (the rest isn’t important for this discussion)

When you use object linear texgen, the texgen takes place before the M or V transformation. When you use “eye” linear texgen, the texgen takes place after the M but before the V transformation.

The tricky thing here is that OpenGL only has a combined modelview matrix MV. So what happens is that when you specify the texgen plane, that plane gets transformed by V. This means that at that very moment, you must have a valid V transformation matrix (i.e. camera matrix) as your modelview matrix.
Then when you draw a vertex, it gets transformed by MV. Hence both the plane and the vertex are in eye space when the texture coordinates are generated.

That’s where the tricky part of the name comes from. Eye linear doesn’t mean that you specify the plane in eye space. It means that when the texgen takes place, both the plane and the vertex have already been transformed into eye space. The texgen plane is actually specified in world space (and the vertex was specified in object space).
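To make that order of operations concrete, here is a little simulation of the sequence (plain Python with assumed semantics, not actual OpenGL), reusing the numbers from earlier in the thread: camera at (-5,0,5), world plane x=0, world point (10,0,-10):

```python
def row_times_matrix(p, m):
    """Row vector p times 4x4 matrix m (given as a list of rows)."""
    return [sum(p[i] * m[i][j] for i in range(4)) for j in range(4)]

def matrix_times_col(m, v):
    """4x4 matrix m times column vector v."""
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# View matrix V for a camera at (-5, 0, 5): translate world by (+5, 0, -5).
V = [[1, 0, 0,  5],
     [0, 1, 0,  0],
     [0, 0, 1, -5],
     [0, 0, 0,  1]]
V_inv = [[1, 0, 0, -5],
         [0, 1, 0,  0],
         [0, 0, 1,  5],
         [0, 0, 0,  1]]

# Step 1: glTexGenfv is called while the modelview matrix holds only V.
# The supplied world-space plane is multiplied by the inverse modelview:
world_plane = [1, 0, 0, 0]                        # the plane x = 0
eye_plane = row_times_matrix(world_plane, V_inv)  # -> [1, 0, 0, -5]

# Step 2: a vertex is drawn and transformed by the full modelview
# (the model part is the identity here, so just V):
world_vertex = [10, 0, -10, 1]
eye_vertex = matrix_times_col(V, world_vertex)    # -> [15, 0, -15, 1]

# Step 3: both are now in eye space, so the texcoord is their dot product:
s = dot(eye_plane, eye_vertex)                    # -> 10
```

This reproduces the 10 from the start of the thread: the generated coordinate is the distance of the vertex from the plane x=0, not from the eye point.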

This brings me to a point I hate to come to (sorry about this, especially if I’m completely wrong), but I think Cass had a misunderstanding when he wrote that document. He says:

OpenGL will automatically multiply the planes specified with (modelview_p0)^-1, i.e. the inverse of the modelview matrix in effect when the planes are specified. From Equation 1 we see that the net effect is to map the vertex position in eye coordinates [x_e, y_e, z_e, w_e]^T back to the ‘object space’ defined by (modelview_p0)^-1.

If I am right (and he is wrong) I think his confusion might come from the fact that you are multiplying by the inverse of the modelview matrix. As I mentioned earlier, I seem to recall that transforming a plane requires that you multiply by the inverse transpose of a matrix, not the matrix itself. When you see that you multiply by the inverse modelview matrix, the natural conclusion is that you are reverse-transforming back into world or object space. However, if I’m right about this, then transforming by the inverse modelview actually transforms you from world space into eye space.

I’ll have to sit down later and calculate some of this out and experiment to see if I’m right, but it suddenly makes a heck of a lot more sense to me. Most everything seems to pop into place.

Another note on this. As I said, when you specify the plane for the eye linear transform you are actually specifying it in world space and it gets transformed to eye space by the (inverse of) the current modelview matrix. If you actually want to specify your plane in true eye space, you can just make the modelview matrix the identity before calling glTexGenfv and that will do it for you.

And once again, EVERYTHING above may be incorrect. However, since everything now really fits into place in my head, I have this gut feeling that I am correct (or at least mostly correct).

I think pretty soon I am going to try putting together a comprehensive texgen tutorial to go along with my per-pixel lighting tutorials. I learned so much writing those tutorials that I think the same thing would happen if I wrote some on texgen.

I see now that document was written by Cass Everitt, Ashu Rege, and Cem Cebenoyan (I thought it was just Cass), so I extend my potential apologies to them too.

[This message has been edited by LordKronos (edited 05-30-2002).]

First of all thanks for taking so much time for this problem. I’ll have a look at the document and will tell you what I think.

Well, actually I want to thank you for bringing this up. It gave me a chance to learn a ton about texgen, and suddenly I do understand it all. I got a chance to make a really small sample app to play around with texgen modes and stuff, and now I can see accurately how it works. What I said in my last post is pretty much correct.

If you use object linear texgen, none of the matrices matter. The provided object plane is dotted with the vertex coords before any transformation takes place, and that is the resulting tex coord. You could also call this an object space texgen.
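As a sketch (assumed semantics, and a hypothetical plane and vertex), object linear is just a dot product with the untransformed vertex:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

object_plane = [1, 0, 0, 0]    # hypothetical plane passed to glTexGenfv
object_vertex = [3, 0, 0, 1]   # raw object-space vertex, before any transform

s = dot(object_plane, object_vertex)  # -> 3, no matter what the matrices hold
```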

If you use eye linear texgen but have an identity modelview matrix at the time you call glTexGenfv, you are providing the eye plane in eye space (theoretically it is being provided in world space, but since the modelview transformation is the identity, it is effectively eye space). When a vertex is drawn, it is transformed by whatever the modelview matrix is at that time (so it goes from object space, through world space, and directly to eye space), then the transformed vertex coord is dotted with the eye plane you specified to get the final tex coord. You could also call this an eye space texgen.

If you use eye linear texgen but have a view matrix (aka camera matrix) as your modelview matrix, you actually supply the eye plane in world space. The plane that you provide via glTexGenfv is transformed by the current modelview matrix (actually, since this is a plane it uses the inverse transpose of the modelview, but that’s transparent to you), and the supplied plane goes from world space to eye space.

Assuming you don’t reset the modelview matrix from here (because that gets more confusing) and only do additional translations, rotations, and scaling (which represent the model transformation portion of the modelview matrix), then a vertex drawn gets transformed by the modelview matrix (the combined model and view transformation) to take it from object space, through world space, and directly to eye space. Once in eye space it gets dotted with the transformed eye plane to give you the final tex coord.

Theoretically, this process is the same as transforming the vertex from object space to world space using the model matrix, dotting the transformed vertex with the world space plane you specified to give us the tex coord, then continuing on by transforming the world space vertex by the view matrix to give us the eye space vertex. It’s just because OpenGL doesn’t have separate model and view matrices that it has to go about it in this round-about way. Essentially, you could also call this a world space texgen.

You will notice that what I call the eye space texgen is just a specialized case of the world space texgen (which is why OpenGL only gives us the 2 options). In actuality, you could even simulate object linear texgen using eye linear, but it’s less efficient because you would have to reset the texgen plane every time you modify the modelview matrix.

All in all, I think it makes a lot of sense now. Now I just have to teach myself the other modes like sphere map texgen, projective texture mapping, etc. I think all projective texture mapping needs is for the modelview matrix to be set to the inverse of the projector’s projection matrix when you specify the plane. This way the vertex goes from object space to world space, to eye space, and gets unprojected to place it into the projector’s eye space. I’ll try to figure this out in my head a little more before I read some of the nvidia docs (for me it’s always a little more rewarding to figure it out myself, and it clicks a little better).

Anyway, hopefully this helps you out. It sure as heck did for me. If you want a copy of my small little test app to play around with (if you think it will help you), let me know.

Ok, you really write LOOOONG messages :) It’s better than the short, not very detailed messages I’m used to!
I also played a little bit with these planes and matrices, and I think we both got the same solution:
You have a point in eye coords and you pass the reference plane in world(!) coords. This plane is multiplied by the current modelview matrix (not the inverse one!!!), which results in a plane at eye position.

Originally posted by gerogerber:
this plane is multiplied by the current modelview matrix (not the inverse one!!!) which results in a plane at eye position.

Just to be clear, since it is a plane it DOES get multiplied by the inverse matrix, but this results in a normal transformation from world to eye, not a reverse transform from eye to world.

To see that this is correct, say we have a world-space plane of x=2, which would be (1,0,0,-2). If our eye is at (-2,0,0), then we know that the equation of the eye space plane should be (1,0,0,-4). Given that the eye is at (-2,0,0), we know that the view transformation matrix is:

|1 0 0 2|
|0 1 0 0|
|0 0 1 0|
|0 0 0 1|

If you try to multiply the world-space plane by this matrix, you get

|1 0 0 -2| x |1 0 0 2| = |1 0 0 0|
             |0 1 0 0|
             |0 0 1 0|
             |0 0 0 1|

which is incorrect. However, if we multiply by the inverse:

|1 0 0 -2| x |1 0 0 -2| = |1 0 0 -4|
             |0 1 0 0|
             |0 0 1 0|
             |0 0 0 1|

The inverse matrix gives us what we want. So it’s tricky with a plane, because the inverse matrix gives you the forward transformation, and the regular matrix actually gives you the reverse transformation.
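The same two multiplications, written out in plain Python as a check:

```python
def row_times_matrix(p, m):
    """Row vector p times 4x4 matrix m (given as a list of rows)."""
    return [sum(p[i] * m[i][j] for i in range(4)) for j in range(4)]

V = [[1, 0, 0, 2],       # view matrix for an eye at (-2, 0, 0)
     [0, 1, 0, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 1]]
V_inv = [[1, 0, 0, -2],  # its inverse
         [0, 1, 0, 0],
         [0, 0, 1, 0],
         [0, 0, 0, 1]]

world_plane = [1, 0, 0, -2]                   # the plane x = 2
wrong = row_times_matrix(world_plane, V)      # -> [1, 0, 0, 0], incorrect
right = row_times_matrix(world_plane, V_inv)  # -> [1, 0, 0, -4], the eye plane
```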

And yes, I know I write long responses. I have people complain about that quite often (although I don’t know why… I guess they just want an answer but don’t really want to learn).

[This message has been edited by LordKronos (edited 05-30-2002).]

YES, you’re right, I have to use the inverse matrix!!! Thanks!!
So the texture coord which is generated is actually the distance between the reference plane in eye space and the vertex in eye space!!!
So let me give you an example, too:
eye(-2,0,0,1)^T, point(10,0,0,1)^T in eye coords, ref plane(1,0,0,0) in world coords.
So the texture coord generated for s will be 8!
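Checking that example the same way (plain Python, assumed semantics):

```python
def row_times_matrix(p, m):
    """Row vector p times 4x4 matrix m (given as a list of rows)."""
    return [sum(p[i] * m[i][j] for i in range(4)) for j in range(4)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Eye at (-2, 0, 0): the view matrix translates by +2, so its inverse
# (the inverse modelview at glTexGenfv time) translates by -2.
V_inv = [[1, 0, 0, -2],
         [0, 1, 0, 0],
         [0, 0, 1, 0],
         [0, 0, 0, 1]]

eye_plane = row_times_matrix([1, 0, 0, 0], V_inv)  # -> [1, 0, 0, -2]
s = dot(eye_plane, [10, 0, 0, 1])                  # -> 8
```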

PS: Please send me your sample app!!!

Yep, that is correct. And for simple cases like that, you can always graph it all on graph paper and verify the distances.

I’ll send that app your way shortly.