I am trying to implement shadows with cubemaps but I ran into some problems. I put up the code, some screenshots and a short movie to show you what I mean. I have removed textures etc. to minimize the code. Available here:
The first problem is shown in pointlight_Error2.jpg: every object I draw comes out half black and half white?
Another problem is shown in Pointligt_Error1.jpg. Whenever a shadow is cast on the floor it disappears, depending on the rotation of the object, but shadows cast on any other side of the room work fine. The movie shows this better. The red cube is the cubemap.
When I render the cubemap I use Distance.vert and Distance.frag to write the squared distance, and then Shadows.vert/Shadows.frag to draw with lighting and shadows. GL_RGB16F_ARB is the internal format of the cubemap texture.
Any help is appreciated!
Thanks in advance
You are using the wrong space to calculate the cubemap lookups and lighting. The light position and camera position appear to be in world space, yet the shaders use them as if they were in object space, and then use the result to access the world-space-aligned cubemap.
This is also most likely the reason for the holes in the shadows. It seems you misunderstood gl_Vertex: it is exactly what you pass from the application when you call glVertex, not yet affected by any matrix. You need to transform it by the modelview matrix before you compute the distance.
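To make the space mix-up concrete, here is a small pure-Python sketch with made-up numbers (a plain 4x4 matrix standing in for the GL model transform, a light placed by hand): the squared distance to a world-space light is only meaningful after the object-space vertex has been transformed into world space.

```python
def mat_vec(m, v):
    # multiply a 4x4 row-major matrix (nested lists) by a 4-component vector
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

# model matrix: translate the object +5 along x
model = [[1, 0, 0, 5],
         [0, 1, 0, 0],
         [0, 0, 1, 0],
         [0, 0, 0, 1]]

vertex_obj  = [1, 0, 0, 1]   # what gl_Vertex holds: object space
light_world = [5, 0, 0]      # light position in world space

vertex_world = mat_vec(model, vertex_obj)   # -> [6, 0, 0, 1]

wrong = [l - v for l, v in zip(light_world, vertex_obj[:3])]    # spaces mismatched
right = [l - v for l, v in zip(light_world, vertex_world[:3])]  # both in world space

sq = lambda d: sum(c * c for c in d)
print(sq(wrong))  # prints 16
print(sq(right))  # prints 1
```

The two squared distances differ by a factor of 16 here, which is exactly the kind of error that makes the shadow comparison pass or fail depending on where the object happens to sit.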
Thanks for pointing this out to me! I have a hard time understanding all these spaces and I seem to mix them together: camera space, world space, eye space (maybe the same as camera space?), object space, texture space, etc.
If you know any tutorials and/or diagrams that show the different spaces, that would be great, because sometimes I think I understand and then it turns out I don't…
I've done the lighting in world space now (since the cubemap is world space aligned), as you guys said. Or at least I think I have, because there are still problems, but the lighting itself seems to work now (if I remove the shadows).
I put up another video and code to show you what I mean:
Multiplying gl_Vertex by the modelview matrix takes it into eye space, so I stored the inverse view matrix in the texture matrix and multiply the eye-space vertex by that to get back to world space.
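As a sanity check of that round trip, here is a minimal pure-Python sketch. It uses translation-only matrices (so the inverse is just the negated translation, a deliberate simplification): the view matrix takes a world-space point into eye space, and the inverse view matrix recovers the world-space point.

```python
def mat_vec(m, v):
    # multiply a 4x4 row-major matrix (nested lists) by a 4-component vector
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def translate(x, y, z):
    return [[1, 0, 0, x],
            [0, 1, 0, y],
            [0, 0, 1, z],
            [0, 0, 0, 1]]

view     = translate(0, 0, -10)   # camera 10 units back: world -> eye
view_inv = translate(0, 0, 10)    # inverse of a pure translation

pos_world = [2, 3, 4, 1]                 # vertex already in world space
pos_eye   = mat_vec(view, pos_world)     # what the modelview transform produces
recovered = mat_vec(view_inv, pos_eye)   # inverse view takes it back to world space

assert recovered == pos_world
```

The same identity is what the shader relies on: (inverse view) * (view * model * vertex) = model * vertex, i.e. the world-space position.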
Everything works fine if I move the camera to the light's position. As you can see in the video, the shadows work when I do that, even on the bottom face, which was the problem in my first post. Do you know what I'm doing wrong here? Probably a lot
You store the inverse of the view matrix only while applying the shadows. When you render the individual faces of the cubemap, you do not set the corresponding inverse view matrices, so the world-space coordinates used to compute the distances inside the shadowmap come out wrong.
Yes, that was it! It works now, thanks a lot, Komat. I should have thought of that
I tried implementing your code on my Quadro FX 4000 but the driver is complaining that it’s using an unsupported format…
Can anyone confirm that cubic floating point FBO attachments aren’t supported on NV40-based cards?
flamz: You can use integer RGBA textures and pack the depth into the color channels. I followed the GPU Gems article and got it working by doing this in Distance.frag:
varying vec3 vLight;
float sqDepth = dot( vLight, vLight );
float frDepth = fract( sqDepth );
// scale the integer part by 1/255 so the 255.0 unpack weight recovers it exactly
gl_FragColor = vec4( floor( sqDepth ) / 255.0, frDepth, frDepth, frDepth );
And then unpack it in Shadows.frag:
varying vec3 vLight;
uniform samplerCube uShadowCube;

const vec3 unpack = vec3( 255.0, 1.0, 0.0 );
float sqDepth = dot( vLight, vLight );
vec3 cubeShadow = textureCube( uShadowCube, vLight ).rgb;
float cubeShadowUnpacked = dot( unpack, cubeShadow );
float shadow = ( sqDepth < cubeShadowUnpacked ) ? 1.0 : 0.4;
I don’t really get why this works though. Got to read more.
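The trick is essentially base-255 encoding: the integer part of the squared distance goes into one 8-bit channel (scaled by 1/255 so the pack and unpack scales match, i.e. storing floor(d)/255 pairs with the 255.0 unpack weight), and the fractional part goes into another channel at full weight. The dot product in the unpack is then just r*255 + g*1 + b*0. A pure-Python round trip that emulates the 8-bit quantization of each channel:

```python
import math

def quantize8(x):
    # emulate storing a value in [0, 1] in an 8-bit texture channel
    return round(x * 255) / 255

def pack(sq_depth):
    # R holds the integer part / 255, G holds the fractional part;
    # valid as long as sq_depth < 256 so the R channel stays in [0, 1]
    fr = sq_depth - math.floor(sq_depth)
    return quantize8(math.floor(sq_depth) / 255.0), quantize8(fr)

def unpack(r, g):
    # dot(vec3(255.0, 1.0, 0.0), rgb) from the shadow shader
    return r * 255.0 + g * 1.0

d = 137.62                       # a made-up squared distance in [0, 256)
r, g = pack(d)
error = abs(unpack(r, g) - d)    # quantization error, well under 1/255
```

So two 8-bit channels give you roughly 16 bits of precision for the squared distance, which is why this works as a stand-in for a floating-point cubemap.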