This is actually more than one question, so I’ll enumerate:
Now I’ve come to understand that you need the inverse of the camera view matrix to generate a proper shadow map texture.
But: this tutorial states that it’s unnecessary. So which is it? His example seems to work fine, but he only uses the bias, light view and light projection matrices - so shouldn’t he be stuck in light space, whereas he needs camera space?
The other question is: if I do need the inverse of the camera view matrix, how do I have to rewrite my code, which is very much based on the tutorial?
What up vector does the light source need when it’s positioned above the scene? I used (0,0,-1), because the up vector mustn’t be parallel to the lookAt direction, and negative because positive looked quite wrong.
I’m experiencing trouble with the shadowed result: it seems like self-shadowing occurs on the dragon model, but I don’t understand yet how to correct that.
I’m including an image to illustrate. Can anyone tell me what’s going on?
I’m still fairly new to OpenGL, so it’s still a bit hard to understand all the fine print…
The inverse of the viewMatrix is the camera’s position and rotation in world space. He actually is in camera space and never leaves it. I personally do my shadow mapping in world space because it’s easier for me to visualize in my head, so in my case I would actually need the inverse of the viewMatrix; but I can skip that and just feed in a vec3 of my camera’s position instead.
2) OpenGL doesn’t have an easy inverse function. I manage all my matrix math myself, so it’s very easy for me to pick out my viewMatrix and call a function (my own math code) to get its inverse. OpenGL actually likes to think of the viewMatrix and the model/world matrix as the same thing, hence the modelview matrix. I would use the GLM math library (unless you want to write your own - I did, to learn them), which has those functions available to you, instead of relying on OpenGL.
In his tutorial he used the +y axis as his up vector, since OpenGL’s default up is +y.
When you do the shadow pass you should cull the front faces; that way only the back faces are stored, getting rid of that problem.
You still have issues with surfaces that are near-tangent to the light rays (i.e. near silhouettes, where dot(N,L) is > 0 but still very near 0).
The above trick assumes 3D closed surfaces. You still have issues with two-sided (2D) surfaces (e.g. surfaces you’d render with CULL_FACE disabled and usually two-sided lighting).
And this all assumes you only attenuate the N dot L lighting terms (not anything displayed for light back faces). Otherwise you’ll get acne on back faces. If you aren’t clamping specular with N dot L, for example, your specular can peek around the backside and give you back-face shadow acne in specular areas.
Some judicious use of PolygonOffset helps. There’s also a projection matrix munge trick you can use. And for tangent surfaces, you can fade out lighting and/or fade in shadowing near dot(N,L) = 0.
Thank you so far for your comments and suggestions.
Apart from the polygon offset part, I’ve FINALLY been able to get my shadows into the shape and place they’re supposed to be.
I’ve come to the following very interesting conclusion, however:
My problem was not specifically OpenGL, but Qt.
You see: I used to build the FBO for the shadow map and the shader program right in the constructor of my widget. But when the constructor is called, the window still has its minimum size and not, as needed for a proper shadow map, the size of the actually displayed window.
So finally (yeah, that’s a mess of code, because it doesn’t take into account future resizes of the shadow map - how should I handle that?) I decided to create my shadow map the first time the window is resized to a size not equal to its minimum size.
Concluding: I guess that because of the mismatch between the shadow texture and the real image size, my shader either ran out of depth values to compare to, or was fed wrong shadow coordinates, resulting in a misplaced shadow on and away from my model.
Still: it works. But I didn’t expect complications from Qt!