Hi,
I am currently working on a small deferred rendering engine just to get in touch with the technique. It actually works pretty well so far, as long as I write the position of each fragment to a texture.
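For context, here is roughly what my working G-buffer pass does (a simplified sketch; vViewPos is just my name for the view-space position varying):

// Fragment shader of the geometry pass (simplified sketch).
varying vec3 vViewPos; // view-space position passed in from the vertex shader

void main() {
    // Write the view-space position straight into a color attachment.
    // This is the path that works, but it costs a full float texture.
    gl_FragData[0] = vec4(vViewPos, 1.0);
    // normals, albedo etc. go into gl_FragData[1], gl_FragData[2], ...
}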
I read a lot about the possibilities of reconstructing the fragment position from the depth buffer and thought that would be a good thing to try (to save bandwidth and leave room for other things in my FBO).
I came across the paper from the Leadwerks engine, which presented a method that seemed easy and fast. It is basically this:
uniform vec2 buffersize;   // screen resolution in pixels
uniform vec2 camerarange;  // near and far plane distances

// Convert the non-linear depth-buffer value back to a linear eye-space Z.
float DepthToZ(in float depth) {
    return camerarange.x / (camerarange.y - depth * (camerarange.y - camerarange.x)) * camerarange.y;
}

// Reconstruct the view-space position from the fragment coordinate and depth.
vec3 getPosFromDepth(in float depth) {
    vec3 screencoord;
    screencoord = vec3(((gl_FragCoord.x / buffersize.x) - 0.5) * 2.0,
                       ((-gl_FragCoord.y / buffersize.y) + 0.5) * 2.0 / (buffersize.x / buffersize.y),
                       DepthToZ(depth));
    screencoord.x *= screencoord.z;
    screencoord.y *= -screencoord.z;
    return screencoord;
}
// This is how I read the depth and call the function:
float depth = texture2D(depthTex, texCoord).x;
vec3 position = getPosFromDepth(depth);
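One way to sanity-check the depth texture on its own would be to output the linearized depth as grayscale (dividing by camerarange.y remaps it into roughly [0,1]):

// Debug view: linear eye-space depth as grayscale (near = dark, far = bright).
float linearZ = DepthToZ(texture2D(depthTex, texCoord).x);
gl_FragColor = vec4(vec3(linearZ / camerarange.y), 1.0);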
The camerarange uniform is set to 0.1 (near) and 100.0 (far), which is also what I use to set up my camera.
buffersize is set to 800.0 and 600.0, which is my screen size. Unfortunately this does not give me correct results; here are two images to show the problem. The first one shows the correct positions when rendered to a texture and read back; the second one is my attempt to reconstruct the position from depth.
I use a depth texture, which I read from in the shader. Am I missing something here? Is there an easy alternative to this solution? Any tips welcome!
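The only alternative I have seen mentioned so far is unprojecting with the inverse projection matrix, roughly like this (untested sketch; invProjection would be a new uniform I'd have to upload myself):

uniform mat4 invProjection; // inverse of the camera's projection matrix

vec3 getPosFromDepthUnproject(in float depth) {
    // Rebuild the clip-space position in [-1,1] from the fragment coordinate
    // and the stored depth value, then undo the projection.
    vec4 clip = vec4((gl_FragCoord.xy / buffersize) * 2.0 - 1.0,
                     depth * 2.0 - 1.0,
                     1.0);
    vec4 view = invProjection * clip;
    return view.xyz / view.w; // perspective divide yields the view-space position
}

Would that be the more robust way to go?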
Thanks