Reading the depth buffer into texture memory

Hello,

I’m trying to get a simple shadowmapping demo up and running but I’ve run into a bit of a problem. I need to translate to the light’s position, save the depth values into texture memory and finally generate texture coordinates based on the depth values. Now, my current code isn’t working so I’ve been debugging it the whole day and I suspect the problem is with the transfer of depth buffer info into a texture.

Here’s my code:

init:

glGenTextures(1, &shadowmap_);
glBindTexture(GL_TEXTURE_2D, shadowmap_);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 512, 512, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);

render:

// position the light
glLightfv(GL_LIGHT0, GL_POSITION, lightPos_);
	
// set up the projection parameters from the light's POV
glMatrixMode(GL_PROJECTION);
glPushMatrix();
	glLoadIdentity();
	gluPerspective(lightFOV_, lightAspect_, lightNear_, lightFar_);

glMatrixMode(GL_MODELVIEW);
glPushMatrix();
	glLoadIdentity();
	// translate to the light's position
	gluLookAt(lightPos_[0], lightPos_[1], lightPos_[2], -1.0f, 0.0f, 5.0f, 0.0f, 1.0f, 0.0f);

	// render the scene to get the depth information
	renderSceneElements();
glPopMatrix();
	
// end the projection modification
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
	
// copy over the depth information
glBindTexture(GL_TEXTURE_2D, shadowmap_);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, 512, 512);

// render a simple quad with the shadowmap for debugging
glPushMatrix();
	glEnable(GL_TEXTURE_2D);
	glBindTexture(GL_TEXTURE_2D, shadowmap_);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

	glTranslatef(3.0f, 2.0f, 5.0f);
	glBegin(GL_QUADS);
		glTexCoord2f(0.0f, 0.0f);
		glVertex3f(0.0f, 0.0f, 0.0f);

		glTexCoord2f(1.0f, 0.0f);
		glVertex3f(3.0f, 0.0f, 0.0f);

		glTexCoord2f(1.0f, 1.0f);
		glVertex3f(3.0f, 3.0f, 0.0f);

		glTexCoord2f(0.0f, 1.0f);
		glVertex3f(0.0f, 3.0f, 0.0f);
	glEnd();
	glDisable(GL_TEXTURE_2D);
glPopMatrix();

The result is a white quad :confused:

renderSceneElements() contains a bunch of VAOs.

Also, I know there’s a way to copy over the depth buffer using FBOs. I want to implement that afterwards but first I’m curious as to what on Earth I’m doing wrong here.

Thanks in advance!

[QUOTE=dr4cula;1253054]I’m trying to get a simple shadowmapping demo up and running but I’ve run into a bit of a problem. I need to translate to the light’s position, save the depth values into texture memory and finally generate texture coordinates based on the depth values. Now, my current code isn’t working so I’ve been debugging it the whole day and I suspect the problem is with the transfer of depth buffer info into a texture.
[/QUOTE]

Why do you think that the problem is with the transfer? If it’s because it’s a white quad, have you analysed what the expected range of values should be? The mapping between Z and depth is highly non-linear, particularly if the near plane is too close.

Also: try storing your own data in the depth texture, to make sure that your debug code is working.
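For example (a sketch, not the poster's actual code; the 512×512 size matches the texture created in the init code above, and the horizontal ramp is just an arbitrary known pattern), you could upload predictable data into the depth texture and check whether the debug quad shows it:

```c
/* Upload a known horizontal gradient into the depth texture so the debug
   quad has predictable content. GL_FLOAT data in [0,1] is valid for a
   GL_DEPTH_COMPONENT texture. Assumes <stdlib.h> for malloc/free. */
GLfloat *data = malloc(512 * 512 * sizeof(GLfloat));
for (int y = 0; y < 512; ++y)
    for (int x = 0; x < 512; ++x)
        data[y * 512 + x] = (GLfloat)x / 511.0f;  /* left = 0, right = 1 */

glBindTexture(GL_TEXTURE_2D, shadowmap_);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 512, 512,
                GL_DEPTH_COMPONENT, GL_FLOAT, data);
free(data);
```

If the quad then shows the ramp, the display path works and the problem is in the copy; if it is still uniformly white, the problem is in the display path.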

Thanks for your reply!

I decided to use FBOs: I can actually see the shadowmap when I map it onto a quad (it's faint, but at least it's visible). However, my problems don't end there: now the entire scene is black (except for the textured quad and the brownish background set by glClearColor()). I'm guessing my texture coordinate generation is wrong, but I'm not sure. Any help would be greatly appreciated!


new init:

glGenTextures(1, &shadowmap_);
glBindTexture(GL_TEXTURE_2D, shadowmap_);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 512, 512, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, NULL);

//glGenRenderbuffers(1, &renderbuffer_);
//glBindRenderbuffer(GL_RENDERBUFFER, renderbuffer_);
//glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, 512, 512);

glGenFramebuffers(1, &framebuffer_);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer_);
//glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, renderbuffer_);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, shadowmap_, 0);

new render:

glBindFramebuffer(GL_FRAMEBUFFER, framebuffer_);
//glDrawBuffer(GL_NONE);
//glReadBuffer(GL_NONE);

glClearColor(0.5, 0.2, 0.1, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// position the light
glLightfv(GL_LIGHT0, GL_POSITION, lightPos_);

// set up the projection parameters from the light's POV
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
gluPerspective(lightFOV_, lightAspect_, lightNear_, lightFar_);

glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
// translate to the light's position
gluLookAt(lightPos_[0], lightPos_[1], lightPos_[2], -1.0f, 0.0f, 5.0f, 0.0f, 1.0f, 0.0f);

// render the scene to get the depth information
renderSceneElements();
glPopMatrix();

// end the projection modification
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);

glBindFramebuffer(GL_FRAMEBUFFER, 0);

// copy over the depth information
//glBindTexture(GL_TEXTURE_2D, shadowmap_);
//glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, 512, 512);

// matrix defining the planes for S, Q, R, T components for texture generation
float planeMatrix[16];
glPushMatrix();
glLoadIdentity();
// compensate for the eye-coordinate to texture coordinate conversion: [-1,1] to [0,1]
glTranslatef(0.5f, 0.5f, 0.0f);
glScalef(0.5f, 0.5f, 1.0f);

// do the perspective projection and translate to the light's position
gluPerspective(lightFOV_, lightAspect_, lightNear_, lightFar_);
gluLookAt(lightPos_[0], lightPos_[1], lightPos_[2], -1.0f, 0.0f, 5.0f, 0.0f, 1.0f, 0.0f);

glGetFloatv(GL_MODELVIEW_MATRIX, planeMatrix);
glPopMatrix();

// go from OpenGL's column-major to row-major matrix form
transposeMatrix16(planeMatrix); 

// set up the type for texture generation
glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR);
glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR);
glTexGeni(GL_R, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR);
glTexGeni(GL_Q, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR);

// data for texture generation
glTexGenfv(GL_S, GL_OBJECT_PLANE, &planeMatrix[0]);
glTexGenfv(GL_T, GL_OBJECT_PLANE, &planeMatrix[4]);
glTexGenfv(GL_R, GL_OBJECT_PLANE, &planeMatrix[8]);
glTexGenfv(GL_Q, GL_OBJECT_PLANE, &planeMatrix[12]);

glEnable(GL_TEXTURE_GEN_S);
glEnable(GL_TEXTURE_GEN_T);
glEnable(GL_TEXTURE_GEN_R);
glEnable(GL_TEXTURE_GEN_Q);


glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, shadowmap_);

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_LUMINANCE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE);

glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);

renderSceneElements();

glDisable(GL_LIGHTING);
glDisable(GL_LIGHT0);

glDisable(GL_TEXTURE_2D);

glDisable(GL_TEXTURE_GEN_Q);
glDisable(GL_TEXTURE_GEN_R);
glDisable(GL_TEXTURE_GEN_T);
glDisable(GL_TEXTURE_GEN_S);

glPushMatrix();
glEnable(GL_TEXTURE_2D);
//glBindTexture(GL_TEXTURE_2D, shadowmap_);
glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_NONE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_NONE);
glTranslatef(3.0f, 2.0f, 5.0f);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f);
glVertex3f(0.0f, 0.0f, 0.0f);

glTexCoord2f(1.0f, 0.0f);
glVertex3f(3.0f, 0.0f, 0.0f);

glTexCoord2f(1.0f, 1.0f);
glVertex3f(3.0f, 3.0f, 0.0f);

glTexCoord2f(0.0f, 1.0f);
glVertex3f(0.0f, 3.0f, 0.0f);
glEnd();
glDisable(GL_TEXTURE_2D);
glPopMatrix();

Note that I had to explicitly bind the texture - I thought it would be automatically related to the framebuffer and hence enabling texturing would have caused that texture to be used? If I don’t have that binding there, OpenGL selects my previously used texture.

Thanks in advance!

You need to perform the same signed-to-unsigned conversion for the Z coordinate, as the depth values in the texture are returned as 0…1.

Also, you should bind a renderbuffer (or texture) to GL_COLOR_ATTACHMENT0 even if you’re not using it.
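A sketch of what that could look like (the renderbuffer name `colorRb` is made up, and the 512×512 size is assumed to match the depth texture), together with a completeness check, which is worth doing in any case:

```c
/* Attach a throwaway color renderbuffer so the FBO is complete on
   implementations that require a color attachment. */
GLuint colorRb;  /* hypothetical name */
glGenRenderbuffers(1, &colorRb);
glBindRenderbuffer(GL_RENDERBUFFER, colorRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, 512, 512);

glBindFramebuffer(GL_FRAMEBUFFER, framebuffer_);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, colorRb);

/* Always verify completeness after changing attachments. */
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    fprintf(stderr, "FBO incomplete\n");
```

Alternatively, a depth-only FBO can be made complete by calling glDrawBuffer(GL_NONE) and glReadBuffer(GL_NONE) while it is bound, which is exactly what the commented-out lines in the render code above would do.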

[QUOTE=dr4cula;1253110]
Note that I had to explicitly bind the texture - I thought it would be automatically related to the framebuffer and hence enabling texturing would have caused that texture to be used?[/QUOTE]
No. glFramebufferTexture2D() causes rendered output to be directed to the texture (or rather, a specific mipmap level of it). It doesn’t associate the texture with a texture unit. In fact, having a texture used as both a source and destination simultaneously is undefined.

[QUOTE]In fact, having a texture used as both a source and destination simultaneously is undefined.[/QUOTE]

Only if you sample from the same image as you’re writing to. Sampling from one mipmap level and writing to another is fine.

That depends upon how you define “sampling”. My understanding of the 4.3 specification (§8.14.2.1) is that the behaviour is undefined if the attached level is within the range of levels available for reading, regardless of which levels are actually read. So if mipmapping is disabled and the attached level is GL_TEXTURE_BASE_LEVEL, or mipmapping is enabled and the attached level is within the range GL_TEXTURE_BASE_LEVEL to GL_TEXTURE_MAX_LEVEL, the behaviour is undefined.

You know, while I was looking at this part of the spec, something occurred to me. They never updated the feedback language to handle view textures. Just look at the way it keeps talking about “texture object T”; it never takes into account the possibility of “texture object T” having an image attached and reading from “texture object VT”, which is a view of T.

I was behind on my bug quota, not having submitted one since, well, yesterday, so I fired that one off.

But in any case, yes, you must actively prevent sampling from being at all possible from any image attached to the framebuffer in order to avoid undefined behavior. That doesn't mean you can't sample from the same texture you're rendering to. You just need to know how to do it correctly.

Though to be honest, it’d be great if the rules were a bit more reasonable. The way it’s specified now, you can’t even access a different array layer in the same mipmap. In fact, it’s undefined behavior even if you can’t render to the attached image (because it’s not in the glDrawBuffers list).

Though I’ll grant that the last may be a performance optimization. To allow that to work, changing the glDrawBuffers set would have to clear the framebuffer cache. And that would kill lots of optimization possibilities.

[QUOTE=GClements;1253116]You need to perform the same signed-to-unsigned conversion for the Z coordinate, as the depth values in the texture are returned as 0…1.

Also, you should bind a renderbuffer (or texture) to GL_COLOR_ATTACHMENT0 even if you’re not using it.
[/QUOTE]

The Red Book suggested translating only in the x and y directions, which I found a bit odd. I changed it to translate in the z direction as well and added the recommended renderbuffer. However, everything in the scene is still black :confused:

[QUOTE=GClements;1253116]
No. glFramebufferTexture2D() causes rendered output to be directed to the texture (or rather, a specific mipmap level of it). It doesn’t associate the texture with a texture unit. In fact, having a texture used as both a source and destination simultaneously is undefined.[/QUOTE]

Ah, thanks for clarifying!

I’ve uploaded the code to pastebin since the forum editing kinda sucks: http://pastebin.com/G1jT0FfR

To be honest, I’m really confused as to how OpenGL will know how to map the shadowmap to the scene if, for example, I can’t use it for texture mapping a quad. I suppose it’s got something to do with the GL_COMPARE_R_TO_TEXTURE but I’m a bit confused :smiley: Thoughts anyone?

Thanks in advance!

Are you performing a glClear() for the physical framebuffer? I don’t see it in the code, but that may just be because it’s not part of the render function.

You need to call glViewport(0, 0, 512, 512) for the FBO (then set it back to cover the window for the second pass).

The depth is being offset by 0.5 but still scaled by 1.0. All six values (the three glTranslatef() components and the three glScalef() factors) should be 0.5.
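In code, the bias part of planeMatrix would then read (a sketch of the corrected calls):

```c
/* Map clip-space [-1,1] to texture-space [0,1] in x, y *and* z, so the
   generated R coordinate is comparable with the depth values stored in
   the shadow map. */
glTranslatef(0.5f, 0.5f, 0.5f);
glScalef(0.5f, 0.5f, 0.5f);
```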

When GL_TEXTURE_COMPARE_MODE is GL_COMPARE_R_TO_TEXTURE, the first two texture coordinates are used to sample the texture, and the third texture coordinate is compared to the sampled value using GL_TEXTURE_COMPARE_FUNC. If the test passes, the luminance (R,G,B), intensity (R,G,B,A) or alpha are one, otherwise they’re zero.

[QUOTE=GClements;1253179]Are you performing a glClear() for the physical framebuffer? I don’t see it in the code, but that may just be because it’s not part of the render function.

You need to call glViewport(0, 0, 512, 512) for the FBO (then set it back to cover the window for the second pass).

The depth is being offset by 0.5 but still scaled by 1.0. All six values (the three glTranslatef() components and the three glScalef() factors) should be 0.5.[/QUOTE]

Thanks for your reply once again! My viewport and window size are both 512x512 so the calls to glViewport() should be redundant. I added them in (just in case) and nothing changed (as expected). Also changed the scale but nothing. glClear() is called on the physical buffer before entering the rendering state of this particular scene but just in case, I added another clear after switching to it for the second pass.

New render: http://pastebin.com/MHGDxsSf

Any other ideas? :smiley:

Thanks in advance!

You’ll need to post more complete code. There’s nothing inherently wrong with the code you’ve posted, but it’s missing a few key pieces, e.g. the setup of the camera projection, and renderSceneElements().

Here is a working example based upon the parts which you posted:
http://pastebin.com/JQzGr1Rk

[QUOTE=GClements;1253222]You’ll need to post more complete code. There’s nothing inherently wrong with the code you’ve posted, but it’s missing a few key pieces, e.g. the setup of the camera projection, and renderSceneElements().

Here is a working example based upon the parts which you posted:
http://pastebin.com/JQzGr1Rk[/QUOTE]

Hm… I changed my renderSceneElements() to the following for testing purposes and it almost seems to work (screenshot: http://tinypic.com/view.php?pic=14jw8xi&s=5)

glPushMatrix();
	glBegin(GL_QUADS);
		glNormal3f(0.0f, 1.0f, 0.0f);
		glVertex3f(0.0f, 0.0f, 0.0f);
		glVertex3f(0.0f, 0.0f, 10.0f);
		glVertex3f(20.0f, 0.0f, 10.0f);
		glVertex3f(20.0f, 0.0f, 0.0f);
	glEnd();

	glTranslatef(10.0f, 0.0f, 5.0f);
	glBegin(GL_QUADS);
		glNormal3f(1.0f, 0.0f, 0.0f);
		glVertex3f(0.0f, 0.0f, 0.0f);
		glVertex3f(0.0f, 0.0f, 2.0f);
		glVertex3f(0.0f, 2.0f, 2.0f);
		glVertex3f(0.0f, 2.0f, 0.0f);
	glEnd();
glPopMatrix();

As for the camera’s projection stuff, I just use gluLookAt() from a set of calculated vectors based on the camera’s pitch, yaw and roll. I’ve got a collection of scenes that I can switch between and this is where the camera’s projection is set up:

glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // Clear The Screen And The Depth Buffer
glLoadIdentity(); // load Identity Matrix

// position the camera
Vector3 position = cam_.getPositionVec();
Vector3 lookAt = cam_.getLookAtVec();
Vector3 up = cam_.getUpVec();
gluLookAt(position.x_, position.y_, position.z_, lookAt.x_, lookAt.y_, lookAt.z_, up.x_, up.y_, up.z_); // Where we are, What we look at, and which way is up

// check for polygon mode
if(wireframe_) {
	glPolygonMode(GL_FRONT, GL_LINE);
}
else {
	glPolygonMode(GL_FRONT, GL_FILL);
}

// render the currently selected scene
p_currentScene_->render();

And that’s it. From there it goes into the render code that I’ve posted.

Thank you so much for your help already! I’m just completely stumped as to why I’m getting these odd results…

EDIT: realized I forgot to post the overall projection stuff (this is set up only once and called only again if the window is resized):

void WindowHandler::ResizeGLWindow(int width, int height) {
	if (height==0) { // Prevent A Divide By Zero error
		height=1; // Make the Height Equal One
	}

	glViewport(0,0,width,height);

	glMatrixMode(GL_PROJECTION);
	glLoadIdentity();

	//calculate aspect ratio
	gluPerspective(45.0f,(GLfloat)width/(GLfloat)height, 0.1 ,1500.0f);

	glMatrixMode(GL_MODELVIEW);// Select The Modelview Matrix
	glLoadIdentity();// Reset The Modelview Matrix
}

EDIT2: Now I’m even more confused. Decided to make the “floor” a bit more detailed and then this happened instead (on the right): http://i42.tinypic.com/zxwbuu.png

I swapped the single big quad with this:

glPushMatrix();
	for(int i = 0; i < 20; i++) {
		glTranslatef(1.0f, 0.0f, 0.0f);
		glPushMatrix();
			for(int j = 0; j < 10; j++) {
				glBegin(GL_QUADS);
					glNormal3f(0.0f, 1.0f, 0.0f);
					glVertex3f(0.0f, 0.0f, 0.0f);
					glVertex3f(0.0f, 0.0f, 1.0f);
					glVertex3f(1.0f, 0.0f, 1.0f);
					glVertex3f(1.0f, 0.0f, 0.0f);
				glEnd();
				glTranslatef(0.0f, 0.0f, 1.0f);
			}
		glPopMatrix();
	}
glPopMatrix();

What on Earth is going on?

FWIW, I find that the Z component of the glTranslate() call needs to be a fraction below 0.5 to avoid depth-fighting. I used 0.499 in the example I posted, but the optimum value depends upon the near plane and other factors.
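An alternative to tuning that constant by hand is polygon offset during the depth pass (a sketch; the factor/units values are common starting points, not values tuned for this scene):

```c
/* First pass: render into the shadow map with a small depth offset, so
   the stored depths are pushed slightly away from the light and the
   lit-side comparison doesn't self-shadow. */
glEnable(GL_POLYGON_OFFSET_FILL);
glPolygonOffset(1.1f, 4.0f);  /* typical starting values; tune per scene */
renderSceneElements();
glDisable(GL_POLYGON_OFFSET_FILL);
```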

[QUOTE=dr4cula;1253228]EDIT2: Now I’m even more confused. Decided to make the “floor” a bit more detailed and then this happened instead (on the right): http://i42.tinypic.com/zxwbuu.png

I swapped the single big quad with this:

glPushMatrix();
	for(int i = 0; i < 20; i++) {
		glTranslatef(1.0f, 0.0f, 0.0f);

[/QUOTE]
You can’t change the model-view matrix when drawing the scene, because you’re setting GL_TEXTURE_GEN_MODE to GL_OBJECT_LINEAR, so the texture coordinates are based upon the values passed to glVertex() without any model-view transformation applied, and planeMatrix only includes the “camera” transformations (i.e. those from the gluPerspective() and gluLookAt() calls).

You would need to either switch to GL_EYE_LINEAR (and omit the model-view matrix from the calculation of planeMatrix), or apply every transformation to both the model-view matrix and the texture matrix simultaneously, or update the tex-gen planes whenever you update the vertex transformation, or transform the vertices in the program before passing them to glVertex().

Or you could switch to using shaders, where you get to control the transformations directly.
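A sketch of the eye-linear variant (assuming planeMatrix now holds only the light's bias, projection and look-at transforms, and that the appropriate model-view matrix is in place when glTexGen() is called; note the pname changes from GL_OBJECT_PLANE to GL_EYE_PLANE):

```c
/* Eye-linear: coordinates are generated from eye-space positions, so
   per-object model-view transforms are accounted for automatically. */
glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
glTexGeni(GL_R, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
glTexGeni(GL_Q, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);

glTexGenfv(GL_S, GL_EYE_PLANE, &planeMatrix[0]);
glTexGenfv(GL_T, GL_EYE_PLANE, &planeMatrix[4]);
glTexGenfv(GL_R, GL_EYE_PLANE, &planeMatrix[8]);
glTexGenfv(GL_Q, GL_EYE_PLANE, &planeMatrix[12]);
```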

[QUOTE=GClements;1253230]FWIW, I find that the Z component of the glTranslate() call needs to be a fraction below 0.5 to avoid depth-fighting. I used 0.499 in the example I posted, but the optimum value depends upon the near plane and other factors.

You can’t change the model-view matrix when drawing the scene, because you’re setting GL_TEXTURE_GEN_MODE to GL_OBJECT_LINEAR, so the texture coordinates are based upon the values passed to glVertex() without any model-view transformation applied, and planeMatrix only includes the “camera” transformations (i.e. those from the gluPerspective() and gluLookAt() calls).

You would need to either switch to GL_EYE_LINEAR (and omit the model-view matrix from the calculation of planeMatrix), or apply every transformation to both the model-view matrix and the texture matrix simultaneously, or update the tex-gen planes whenever you update the vertex transformation, or transform the vertices in the program before passing them to glVertex().

Or you could switch to using shaders, where you get to control the transformations directly.[/QUOTE]

I tried switching to GL_EYE_LINEAR, however if I omit the gluLookAt() (which is the model-view part of the planeMatrix) then I'm not getting the results I'm looking for. If I keep that there then it sorta looks OK (the shadow is translated away from the object for whatever reason). I tried this version with my full scene as well and the shadowmap was all over the place there. If I can get the shadowmap working for these 2 panels then I can start looking into the construction of renderSceneElements() more critically, but as it stands now, I'm still not happy with the two-planes result: http://tinypic.com/view.php?pic=16ive6e&s=5

Did I even understand your idea with the GL_EYE_LINEAR correctly? The reason I’m going with this solution is that it seems the easiest out of the other options in the fixed-function pipeline OpenGL.

Thank you so much for your help in advance!

EDIT: So I tried adding 2 cubes to the scene (GL_EYE_LINEAR with gluLookAt() in planeMatrix) and with only 1 cube it looked OK. Once I added another, one of the following happened: 1) the 2nd cube didn't cast a shadow, or 2) the 2nd cube cast a massive shadow. Here's what I mean: http://i40.tinypic.com/smghtf.png

Thanks in advance!

[QUOTE=dr4cula;1253262]I tried switching to GL_EYE_LINEAR, however if I omit the gluLookAt() (which is the model-view part of the planeMatrix) then I'm not getting the results I'm looking for.[/QUOTE]
My mistake. planeMatrix needs to contain the part of the model-view transformation which is specific to the light, but not subsequent transformations which are applied to the object.

[QUOTE=dr4cula;1253262]Did I even understand your idea with the GL_EYE_LINEAR correctly? The reason I'm going with this solution is that it seems the easiest out of the other options in the fixed-function pipeline OpenGL.[/QUOTE]

I’ve posted an updated version which uses eye-linear coordinates. The object can be moved/rotated using shift/control and the arrow/page keys.

The eye planes are transformed by the inverse-transpose of the model-view matrix at the point that glTexGen() is called, so the model-view matrix needs to be set to the identity matrix at that point (at least, it matters what it’s set to; see below).

The bottom line is that the texture coordinates actually used for the lookup in the second pass need to exactly match (other than the 0…1 -> -1…+1 conversion) the clip coordinates from the first pass. So any transformations which are applied to the vertex coordinates in the first pass must also be applied to the texture coordinates in the second pass.

“Constant” transformations (i.e. the perspective and look-at transformations which define the view) are dealt with by planeMatrix, but dynamic transformations (transforming objects within the scene) also need to be included, and using eye-linear texture generation does that.

I think that if you want to apply a gluLookAt() for the camera, you will need to have that transformation in place for the glTexGen() calls, so that using eye-linear coordinates doesn’t result in it being applied twice.
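So the order of operations in the second pass would be roughly (a sketch; the camera vector names are placeholders):

```c
/* 1. Put the camera transform on the model-view stack first. */
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(camX, camY, camZ, lookX, lookY, lookZ, 0.0f, 1.0f, 0.0f);

/* 2. Specify the eye planes now: they are transformed by the inverse of
   the model-view matrix in effect at this call, i.e. the camera
   transform, so it isn't applied twice to the generated coordinates. */
glTexGenfv(GL_S, GL_EYE_PLANE, &planeMatrix[0]);
glTexGenfv(GL_T, GL_EYE_PLANE, &planeMatrix[4]);
glTexGenfv(GL_R, GL_EYE_PLANE, &planeMatrix[8]);
glTexGenfv(GL_Q, GL_EYE_PLANE, &planeMatrix[12]);

/* 3. Per-object transforms after this point affect the vertices and the
   generated texture coordinates consistently. */
renderSceneElements();
```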

Ok, so the only difference I could find between our codes was the light FOV. As soon as I changed it to 90.0, the massively long shadows disappeared. This is why I think it was happening: due to the shadowmapping method, everything behind an object from the light’s limited POV is shadowed, hence the long shadows from different angles than the light’s original angle. Kinda hard to explain what I mean :stuck_out_tongue: But yeh, once I added the call to glLoadIdentity(), the shadow map got stuck in the camera and floated around with it. But like you said, the gluLookAt() for the camera needs to be in place before glTexGen() calls (which it was anyways due to the program’s setup) so all I had to do was fix the angle (besides GL_EYE_LINEAR mapping).

Now, I thought I was done with the problems but ran into 2 odd artifacts:

  1. shadows seem to be translated a bit from the object that casts them: changing the light’s near plane changes this. Going from 0.1 to 1.0 gives perfect results distance wise but produces another problem: incorrect texture mapping on some objects. Here’s what I mean: http://i40.tinypic.com/an1un9.png
    Also, this is independent of the distance to the light source as there’s another cube in the scene further back that has the same problem with the top face texture.

  2. there are weird mappings behind the light: http://i44.tinypic.com/2qnnpjm.png
    One way I can think of to remove those mappings is to disable texturing before rendering the back wall but that doesn’t seem like the best idea.

I can’t thank you enough for your invaluable insight! Hope you can help me cross the finish line! :slight_smile:

The FoV angle only affects how much of the scene gets rendered. So long as both the shadow caster and shadow target fit within the frustum used in the first pass, the FoV angle won’t have any effect.
However, if any part of the scene lies outside of the frustum, then you’ll be getting depth values based upon the texture’s wrap mode, which will invariably produce the wrong results.
Essentially, the frustum used for rendering the depth map needs to encompass all objects which can cast or receive a shadow which is within the camera's frustum. For a simple scene, you can just set the light's frustum so that it bounds the scene. For more complex scenes, it's common to use multiple depth maps, with one covering the entire region of interest and another only covering areas closer to the viewpoint. The former is used as a fall-back if the texture coordinates for the latter are out of range.

[QUOTE=dr4cula;1253309]
Now, I thought I was done with the problems but ran into 2 odd artifacts:

  1. shadows seem to be translated a bit from the object that casts them: changing the light’s near plane changes this.[/QUOTE]
Offsetting the Z translation to avoid depth fighting can cause this.

The ratio of the far plane to the near plane determines the degree of non-linearity in the depth buffer. Too high a ratio will result in nearly all of the depth range being used for points close to the near plane, resulting in a loss of depth precision for the rest of the scene.

The problem can be avoided by using an orthographic projection for the light (i.e. a directional light rather than a point light), or using a linear depth buffer (which requires shaders).
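For the directional-light variant, the change is confined to the projection used when rendering the depth map and when building planeMatrix (a sketch; the ortho extents are placeholders and must be chosen to bound the scene):

```c
/* Directional light: a parallel projection makes depth linear in eye Z,
   so the near/far-ratio precision problem disappears. The extents must
   enclose every shadow caster and receiver. */
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-20.0, 20.0, -20.0, 20.0, 1.0, 100.0);  /* placeholder extents */
```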

[QUOTE=dr4cula;1253309]but produces another problem: incorrect texture mapping on some objects. Here’s what I mean: http://i40.tinypic.com/an1un9.png
Also, this is independent of the distance to the light source as there’s another cube in the scene further back that has the same problem with the top face texture.[/QUOTE]
This looks like depth fighting. When using a reciprocal depth buffer, the Z offset has to be tuned based upon the various parameters (light distance, near/far plane distance, scene dimensions, etc).

Anything which is outside of the frame rendered in the first pass will be wrong. If you’re using point lights which are “inside” the scene, things get more complex. Using a cube map should be viable, but it requires rendering 6 views for each light, and I don’t know whether it can be done without using shaders.

Thanks for explaining everything in such detail! Really appreciate it.

[QUOTE=GClements;1253319]
This looks like depth fighting. When using a reciprocal depth buffer, the Z offset has to be tuned based upon the various parameters (light distance, near/far plane distance, scene dimensions, etc).[/QUOTE]

Yep, I thought that as well first but then I enabled multitexturing and mapped the shadowmap onto texture unit 1 and I’m still getting the same odd pattern: http://i39.tinypic.com/opcjkn.png
The image on the right is the top face of the cube (kinda hard to see but it’s there).

EDIT: or actually wait, I was getting z-fighting beforehand with just TU0 as well (0.499 modification)… So what, am I casting shadows on top of each other or?

EDIT2: Nevermind, I moved the light in the y-direction and it works fine now: http://i44.tinypic.com/33ug6rq.png

There aren’t enough words to describe how grateful I am for your help GClements: seriously, thank you so much. The internet needs more people like you :smiley:

Sorry if I'm interrupting.
Not having done shadow mapping, I've got a question: in the image linked above, the shadow cast by the box seems to jump out of the plane it is projected on. Is the image just tricking my eye?
In nature, would the density of the shadow vary with the distance between the shadowing surface and the light source, because of light diffusion?

Which one?

Possibly, or it might be caused by the depth offset required to avoid depth fighting.

In its simplest form (used here), shadow mapping results in hard shadows, although there are various techniques which can be used to soften them.