light state - is it object or eye space

OK, this may sound kind of stupid or newbie, but I seem to be having some problems with my shadow volume program. What I'm doing right now is just drawing the volume to see if it's correct before I do all the stencil stuff. Well, most of the time the volume looks very screwed up; only at a certain position of my light does it look correct. So I'm wondering if maybe it's the space my light is in that is causing the problem.

Now, the infinite shadow volume paper said the light has to be in object space. Yes, this makes sense. Every frame I change the x and z position of the light with some sin and cos computations to make it orbit the occluder. This light vector never gets multiplied by the MODELVIEW matrix. The position is just changed like I said, then it's used to determine which polys are in front of or behind the light, to determine the silhouette edge.

So my question is: is my light still in object space, or does it change to eye space when I move it using sin & cos? I'm thinking it is in object space since I never multiply it by the modelview, but for some reason I'm a bit confused. I guess I'm just searching for any possible explanation why my volume looks like crap.

-SirKnight

I just say one word:
PICCIES!
And in case you possibly don't understand, again in English:
PICTURES!
hehe
Depending on how you generate the volumes, you need to handle that stuff in object space (especially if you extrude in software…)…
For doing so: transform the light by its own matrix (if it has one, hehe), then inverse-transform it by the object matrix. Then expand the volume, multiply the scene with the object matrix (if not yet done), and put the shadow volume in…
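In code it boils down to something like this (just a sketch, using glm for the matrix math to keep it short; the function and parameter names are made up):

#include <glm/glm.hpp>

// light's own matrix -> world, then inverse object matrix -> the occluder's object space
glm::vec4 lightToObjectSpace(const glm::mat4& lightMatrix,
                             const glm::mat4& objectMatrix,
                             const glm::vec4& lightLocalPos)
{
    glm::vec4 lightWorld = lightMatrix * lightLocalPos;   // transform the light by its own matrix
    return glm::inverse(objectMatrix) * lightWorld;       // then inverse-transform by the object matrix
}
// feed the result to the extrusion code; the finished volume gets drawn with the
// object matrix on the stack, same as the occluder itself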

Well, I presume you already know this stuff, but just in case you forgot or it slipped your mind, I'll go ahead and state it anyway…

Where exactly do you want the light? It sounds to me like you have the light in world space, not object space. The modelview matrix is essentially two matrices, the object -> world and the world -> eye matrices, like so:

World2Eye * Object2World = Modelview

I am not familiar with a lot of the shadowing stuff out there, but I assume you want to do the lighting calculation in object space, right? So if the light is in world space, you need the inverse of the Object2World matrix, right?! Or maybe the inverse-transpose of the Object2World?! One of the two, at least. Or maybe you need the calculations done in world space, in which case you need to multiply both the vertex and the light (if it is in object space) by just the Object2World matrix. Like I said, I don't really know that much about it; I could be babbling on about nothing.
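Something like this, I think (just a sketch; glm used for the matrix math, and the names are mine):

#include <glm/glm.hpp>

// the modelview is really two transforms glued together:
//   Modelview = World2Eye * Object2World
// so a light that is sitting in WORLD space only needs inverse(Object2World)
// (not the inverse of the whole modelview) to get into object space
glm::vec4 worldLightToObjectSpace(const glm::mat4& object2World,
                                  const glm::vec4& lightWorld)
{
    return glm::inverse(object2World) * lightWorld;
}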

Hope it helps.

Dan

Thanks for your replies, guys. There are a few more things I should have stated also, but I forgot. OK, I am using the plain ol' OpenGL lights for one thing. Also, I tried to transform the light by the inverse modelview, but the volume got even worse! I first used a matrix inverse function from Nate Miller, then I tried one from the md2shader demo. Both give me different results, which make my volume even more screwed up.

OK Dave, what I'll do is post a few pictures on my website and even an exe so you can see what it looks like. I do all the vertex extruding work and whatnot in software too. I'm not using any vertex programs or anything, just pure ol' OpenGL. My shadow extruding code looks pretty much exactly like that in the shadow volume paper by Cass and Mark. The only difference, I think, is that a few variables might have different names. So I'm assuming that part is fine. It looks OK.

I thought everything I'm doing is in object space, but maybe I'm overlooking something. OK, enough blabbing; let me go post some stuff on my page and I'll reply back with the links to the goods.

-SirKnight

OK, what I did is put the exe and the source into a zip and uploaded it to my page. This way you can see it in action. You will notice that I am not doing the stencil stuff right now, just drawing the volumes. The stencil code is in there, but all commented out.

OK, the URL is: http://sirknighttg.tripod.com/shadow.zip

It's only 200KB, so it's pretty small. Hopefully by watching it run you can see what I mean. It's weird because at one point the volume looks OK, but most of the time it's not. Sometimes it doesn't even exist!

-SirKnight

Oh yeah, you will also notice the lighting of my box (the occluder) is reversed for some reason. When the light is in front of the box it's dark, yet when it's behind it, the box is lit up. The opposite of what it should be. My normals seem OK to me though. Silly OpenGL.

-SirKnight

Yup, this does sound like a newbie question.

A point light can be thought of as a vertex. If you call glLight(xxxx, GL_POSITION, xxx), then it gets multiplied by the current modelview.

So now you know your “vertex”'s position, so compute your shadow volume.
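E.g. (a sketch; the gluLookAt numbers are made up):

#include <GL/gl.h>
#include <GL/glu.h>

void positionLightInWorldSpace()
{
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0.0, 2.0, 10.0,    // eye (made-up numbers)
              0.0, 0.0, 0.0,     // center
              0.0, 1.0, 0.0);    // up

    // GL_POSITION goes through the CURRENT modelview, so with only the view
    // transform loaded this position is treated as world space (and ends up
    // stored in eye space internally)
    const GLfloat lightPos[] = { 2.0f, 5.0f, 0.0f, 1.0f };
    glLightfv(GL_LIGHT0, GL_POSITION, lightPos);
}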

Easy, no?

PS: the link is not valid.
V-man

A point light can be thought of as a vertex. If you call glLight(xxxx, GL_POSITION, xxx), then it gets multiplied by the current modelview.

Yes, correct. But I do all my computations that have to do with finding whether each polygon is facing the light or not, and all that, before that function is called. I just update the position with trig, figure out whether the polys are front- or back-facing w.r.t. the light, then draw the volume and the other geometry with the light. But the light pos never gets touched by the modelview before I do the front/back facing test to find the silhouette edge.

Link doesn't work, eh? Strange, because it works for me.

-SirKnight

There are 3 spaces: object, world and eye space.

Eye space is fairly clear: it is transformation relative to the eye. World space exists conceptually, but is really inconsistent in OpenGL because the modelview has both model and view transforms concatenated.

Object space is transformation w.r.t. the object being drawn (raw untransformed vertex data).

I think your problem stems from confusing object with world space.

For static objects (no model matrix manipulations) world space == object space.

For an object under transformation, object space exists while the object matrix is on the stack, and world space doesn't exist.

Now for lights in OpenGL (GL lights, as opposed to any stencil volume calcs you are doing): lights are transformed through the modelview to eye space. This means that when you specify the light position, the appropriate matrix needs to be on the stack for that light WHEN you position it.

Typically lights in a scene are in WORLD space; this means they must be positioned while the view matrix is on the stack, but before any model matrix is multiplied onto the stack.

Now we come to stencil calculations, bump mapping etc., and this is where it gets a bit tricky. To position a light in object space when you have the value in world space, you must transform the light position through the inverse model matrix transformation of the object in question, IF indeed that's what your algorithm requires. This is usually a software transformation where you use the inverse model matrix ONLY, not the model*view transform.
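Roughly, in code (a sketch only; glm is used just for the matrix math, and drawOccluder() is a stand-in name):

#include <GL/gl.h>
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

void drawFrame(const glm::mat4& viewMatrix, const glm::mat4& modelMatrix,
               const glm::vec4& lightWorld)
{
    glMatrixMode(GL_MODELVIEW);
    glLoadMatrixf(glm::value_ptr(viewMatrix));      // only the view transform on the stack...
    glLightfv(GL_LIGHT0, GL_POSITION,
              glm::value_ptr(lightWorld));          // ...so the GL light is positioned in WORLD space

    glPushMatrix();
    glMultMatrixf(glm::value_ptr(modelMatrix));     // object space exists while this is on the stack
    // drawOccluder();                              // whatever draws the object
    glPopMatrix();

    // for the silhouette / stencil volume math done in software, bring the light
    // into the occluder's object space with the inverse MODEL matrix only:
    glm::vec4 lightObject = glm::inverse(modelMatrix) * lightWorld;
    (void)lightObject;                              // feed this to the volume extrusion code
}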

I think your problem stems from confusing object with world space.

Now I believe this is what was confusing me. You're right. After your explanation and what Dave said (which was similar, about transforming by the inverse object, or model if you will, matrix), what the nvidia demo of infinite shadow volumes does makes more sense. This is the code part I'm talking about:

matrix4f ml = object[mindex].get_inverse_transform() * light.get_transform();
ml.mult_matrix_vec(light_position, olight);

This is just what you guys are talking about. Before now I looked at that and was like 'wth are they doing here.' So I think I see what I'm going to have to do now. I hope the space of my light is my problem with this; my other code looks fine, unless I have overlooked something I have not caught yet. I knew there had to be more to transforming the light to object space than what I knew of. After trying many attempts at using the inverse MODELVIEW and it blowing up on me, I figured that what I was transforming by (InvModelView) was not right. Well, coming up with the model matrix I need to transform the light by shouldn't be too bad; right now my occluder is just a simple cube. The thing I'll have to figure out now is just how to come up with this matrix. Looks like more digging into some code and this message board and/or Google will take place now.

Oh BTW, was anyone able to get my demo and see what it looks like? If so, it looks funky, doesn't it? That is probably what it looks like when object space and world space duke it out.

-SirKnight

Oops, forgot to also thank Dan for his input. So Dan, you were not just babbling on about nothing; what you said (same for everyone else) is (more than likely) my biggest problem: just the space conflicts and the wrong transformations being done to my light and stuff.

-SirKnight

Originally posted by SirKnight:
Oops, forgot to also thank Dan for his input. So Dan, you were not just babbling on about nothing; what you said (same for everyone else) is (more than likely) my biggest problem: just the space conflicts and the wrong transformations being done to my light and stuff.

-SirKnight

Hey, I was just looking for anything better to do than make ER diagrams for my Oracle DBMS class in lab this afternoon. Hope you got it all figured out now.

Dan

I saw your demo and it sort of looked like the faces of the shadow volume were being incorrectly back-face culled because of a changing winding order, but what do I know, I didn't even look at the code :)

OK, you see that code I posted up there from the nvidia shadow demo? (Of course you do.) Well, there is something that confuses me. Now dorbie, you said that in order to take the light, which is in world space, to object space, you have to transform the light position through the inverse model matrix of the object. OK, the thing that is confusing me is that in this nvidia code they compute the inverse model matrix from the object (just the opposite of all the dolly, trackball and pan transforms), and they then multiply that, not by the light position, but by the light transform (all the dolly, pan and trackball transforms, I guess, done to the light), and then take this matrix and multiply it with the light position. The part I'm a little shaky on is why they do the light.get_transform() multiply. I thought I would just have to take the inverse model transform from the object which is casting the shadow and multiply this by the light position to get the object-space light.

-SirKnight

[This message has been edited by SirKnight (edited 09-17-2002).]

OMG GUYS, GUESS WHAT!!! I JUST got it to render a CORRECT volume! Guess what the problem was. My NORMALS!! OK, see, this is what was going on. I had an array of normals, one for each vertex, but for every quad they were the same, of course. Take a look at my code to see what I mean. In the function that stores the normals from this array into my object data structure, I was filling it in wrong. Now I have fixed that and I have a perfect volume!

What I was doing was just staring at the messed-up volume with the light at a static location, trying to figure out why it wasn't right. I looked at it and saw with my eyes where the volume should be extruding from. Then I went into the debugger and stepped through the code that checks whether each poly is in front of or behind the light. After one loop iteration I noticed the normal showing up for the next poly didn't seem right. That's when it hit me.

But anyway, the discussion about the whole object and world space stuff is still great. I didn't know some of that, and now that I understand it better, I can put some shadow volumes into a more general 3D engine of mine and it should work fine.

Well, now that that is done, I'm very happy, as you can prolly tell. Besides per-pixel bump mapping, this is one of the coolest advanced 3D things I have done so far. OK, well, besides creating a 3D engine and being able to move around in a world I make.

OK, I ask of you one more thing. Could someone still please answer my previous post? I'm still curious about what I talked about in there. I mention this here because I remember that before, after I make a post saying my program works, I get no more replies, even though I still have a few more questions about things.

Anyway, thanks everyone again. This message board is awesome!

EDIT: Also, one thing that wasn't helping my program at all was that I had the glVertex calls mixed around, which was causing a few polys to be culled when they shouldn't have been. I thought what they had in the shadow volume paper wasn't right, but now I can see that it is. Silly me!

-SirKnight

[This message has been edited by SirKnight (edited 09-17-2002).]

Hmm, now that I enable the stencil stuff to actually make a shadow, and not just visualize the volume, it doesn't seem to work. It looks like it did when I first ran the program. I still need to do the world->object space stuff and add that in there, but I guess I didn't do the stenciling stuff correctly. I thought I copied the shadow paper exactly. Back to the drawing board.

Well, at least I know my algorithm for making the volume is OK, minus the world->object space calc.

-SirKnight

Originally posted by SirKnight:
OK, the thing that is confusing me is that in this nvidia code they compute the inverse model matrix from the object (just the opposite of all the dolly, trackball and pan transforms), and they then multiply that, not by the light position, but by the light transform (all the dolly, pan and trackball transforms, I guess, done to the light), and then take this matrix and multiply it with the light position. The part I'm a little shaky on is why they do the light.get_transform() multiply. I thought I would just have to take the inverse model transform from the object which is casting the shadow and multiply this by the light position to get the object-space light.

Well, I've never done this, but what you're talking about is similar to per-pixel lighting.

The light position is multiplied by the light transform to get the world position of the light.
After this, the world position is multiplied by the inverse object transform.
Of course, in the case you describe, both matrix mults are concatenated before they're used.

If you keep your light in world space (or your light transform is the identity), you don't need the first transform.
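Mapping that onto the two lines you posted (my guess at what each piece is doing):

matrix4f ml = object[mindex].get_inverse_transform()   // world -> the occluder's object space
            * light.get_transform();                    // the light's own space -> world
ml.mult_matrix_vec(light_position, olight);             // olight = light position in the occluder's object space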

I’m a little hazy with this mind you…

I guess the reason I was a little confused is that I have never used a model transform matrix with a light before. All I have ever done is change the position of the light manually, like I do in my current program, so I never had to bother with multiplying a light model transform matrix with the light position.

So instead of computing a rotation and translation modeling matrix to transform the light with, to move it around and stuff, I usually just did something like this to the light position vector:

lpos[0] = R * cos( t );
lpos[2] = R * sin( t );

Something similar to that.
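If I ever do want it as an actual light transform matrix, I guess it would be something like this (a sketch, with glm for the matrix math; the sign of the angle depends on which way it should orbit):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// equivalent of lpos[0] = R*cos(t), lpos[2] = R*sin(t):
// rotate about Y, then push the light out along X by the orbit radius R
glm::mat4 lightTransform(float R, float t)
{
    return glm::rotate(glm::mat4(1.0f), -t, glm::vec3(0.0f, 1.0f, 0.0f))
         * glm::translate(glm::mat4(1.0f), glm::vec3(R, 0.0f, 0.0f));
}
// lightTransform(R, t) * glm::vec4(0, 0, 0, 1) comes out as (R*cos(t), 0, R*sin(t), 1)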

-SirKnight

I think my GeForce 4 doesn't like me anymore. Something is really gimped. :-/ I just can't seem to get the stencil stuff to work at all. I know my volume finding and rendering code is good; I can set #define SHOW_VOLUME 1 and it draws just fine. This is how I have my stencil stuff set up (I have tried a few other combos too), so tell me if it's OK:

*Draw my scene to fill zbuffer (GL_LESS used of course)
*Disable depth buffer writes (DepthMask(0))
*Set DepthFunc to EQUAL
*Disable color writes (glColorMask(0,0,0,0))
*Enable Stencil test
*Set StencilFunc to ALWAYS, 0, ~0
*Set StencilMask to ~0
*Set CullFace to FRONT
*Set StencilOp to KEEP, INCR, KEEP

*Determine the facing of the polys w.r.t. the light
*Draw the silhouette edges projected to infinity
*Render the volume caps

*Set CullFace to BACK
*Set StencilOp to KEEP, DECR, KEEP

*Redraw the silhouette edges and caps as above

*Enable lighting
*Disable CULL_FACE, or set to BACK

*Set StencilFunc to EQUAL, 0, ~0
*Set StencilOp to KEEP, KEEP, KEEP
*Set DepthFunc to EQUAL
*Restore color buffer writes
*Set DepthMask to 1

*Draw scene

*Now disable stuff that was enabled
*Set DepthFunc to LESS

*Draw GL_POINT to show where light is

OK, so there you have it. It looks fine to me. It's what the paper shows and what I noticed in Cass's demo, well, for most of the stuff.
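In GL calls, the two volume passes boil down to this (a sketch; drawVolumeSidesAndCaps() is just a stand-in name for my extrusion and cap drawing code):

#include <GL/gl.h>

void drawVolumeSidesAndCaps();   // stand-in for my silhouette-extrusion and cap drawing code

void stencilVolumePasses()
{
    // color and depth writes are already off, stencil test is on,
    // glStencilFunc(GL_ALWAYS, 0, ~0) and glStencilMask(~0) are already set

    // z-fail pass 1: draw back faces of the volume, INCR where the depth test fails
    glCullFace(GL_FRONT);
    glStencilOp(GL_KEEP, GL_INCR, GL_KEEP);   // (stencil fail, depth fail, depth pass)
    drawVolumeSidesAndCaps();

    // z-fail pass 2: draw front faces, DECR where the depth test fails
    glCullFace(GL_BACK);
    glStencilOp(GL_KEEP, GL_DECR, GL_KEEP);
    drawVolumeSidesAndCaps();
}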

-SirKnight

Depth fill pass - use LEQUAL for the depth func.
Stencil passes - use LESS for the depth func.
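i.e. roughly (sketch):

glDepthFunc(GL_LEQUAL);   // before the depth fill pass
glDepthFunc(GL_LESS);     // before the stencil (volume) passes, instead of GL_EQUAL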

Works now?