What I’m attempting to do is recreate the world-space position from a depth texture.
I render the scene’s depth into a texture using an FBO, then draw a full-screen quad with the shaders below.
I am using libGDX (Java), but I cannot find anything about this for libGDX, so I’m hoping an expert can point out a fatal flaw.
Here is my vertex shader:
attribute vec4 a_Position;
attribute vec4 a_Color;
attribute vec2 a_texCoords;

varying vec4 v_Color;
varying vec2 v_texCoords;

void main()
{
    v_Color = a_Color;
    v_texCoords = a_texCoords;
    gl_Position = a_Position;
}
And the fragment shader:
#ifdef GL_ES
precision mediump float;
#endif
varying vec4 v_Color;
varying vec2 v_texCoords;
uniform sampler2D u_depthMap;
uniform mat4 u_invProjView;
vec3 getPosition(vec2 uv, float depth) {
    // Map uv and depth from [0,1] to NDC [-1,1]; w stays 1.0 (1.0*2.0 - 1.0 == 1.0).
    vec4 pos = vec4(uv, depth, 1.0) * 2.0 - 1.0;
    pos = u_invProjView * pos;
    pos /= pos.w;
    return pos.xyz;
}
void main()
{
    float depth = texture2D(u_depthMap, v_texCoords).r;
    gl_FragColor = vec4(getPosition(v_texCoords, depth), 1.0);
}
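To sanity-check the shader math on its own, the unproject can be reproduced on the CPU. Below is a self-contained plain-Java sketch (no libGDX; a standard OpenGL perspective matrix and its closed-form inverse are built by hand, with the view matrix taken as identity so projView == proj). It projects a known world-space point down to the (uv, depth) a depth texture would hold, then recovers the point exactly the way getPosition() does:

```java
public class DepthReconstruction {
    // Multiply a row-major 4x4 matrix by a 4-component vector.
    static double[] mul(double[][] m, double[] v) {
        double[] r = new double[4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                r[i] += m[i][j] * v[j];
        return r;
    }

    // Project (x, y, z) with a hand-built perspective matrix, convert to
    // the (uv, depth) the shader samples, then reconstruct the point the
    // same way getPosition() does in the fragment shader.
    static double[] roundTrip(double x, double y, double z) {
        double fov = Math.toRadians(67.0), aspect = 16.0 / 9.0;
        double zn = 0.1, zf = 100.0;                 // near/far planes
        double f = 1.0 / Math.tan(fov / 2.0);
        double A = (zn + zf) / (zn - zf);
        double B = 2.0 * zn * zf / (zn - zf);

        double[][] proj = {
            { f / aspect, 0,  0, 0 },
            { 0,          f,  0, 0 },
            { 0,          0,  A, B },
            { 0,          0, -1, 0 },
        };
        // Closed-form inverse of `proj`; with view = identity this plays
        // the role of u_invProjView.
        double[][] invProj = {
            { aspect / f, 0,       0,     0 },
            { 0,          1.0 / f, 0,     0 },
            { 0,          0,       0,    -1 },
            { 0,          0,       1 / B, A / B },
        };

        // Forward: world -> clip -> NDC -> [0,1] range.
        double[] clip = mul(proj, new double[] { x, y, z, 1.0 });
        double u     = (clip[0] / clip[3]) * 0.5 + 0.5;
        double v     = (clip[1] / clip[3]) * 0.5 + 0.5;
        double depth = (clip[2] / clip[3]) * 0.5 + 0.5; // depth-texture value

        // Backward: exactly the getPosition() sequence.
        double[] pos = mul(invProj,
            new double[] { u * 2 - 1, v * 2 - 1, depth * 2 - 1, 1.0 });
        return new double[] { pos[0] / pos[3], pos[1] / pos[3], pos[2] / pos[3] };
    }

    public static void main(String[] args) {
        double[] p = roundTrip(1.5, -2.0, -10.0);
        System.out.printf("%.4f %.4f %.4f%n", p[0], p[1], p[2]);
        // prints 1.5000 -2.0000 -10.0000 — the original world point
    }
}
```

If this round trip is exact on the CPU (it is, up to floating-point noise), the GLSL logic itself is sound, which points the finger at the matrix being uploaded or at the depth texture contents rather than at getPosition().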
Before and after I move the camera, respectively:

[two screenshots attached]
It seems to me that something is wrong with the matrices, but I cannot see what. I am passing in the camera’s inverted projection-view matrix (libGDX’s camera.invProjectionView).
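For reference, this is roughly how I wire up the uniforms on the Java side (a sketch, not my exact code; `cam`, `shader`, and `depthTexture` stand in for my own objects, and `cam.update()` is what refreshes `invProjectionView` in libGDX):

```java
cam.update();                          // recomputes cam.invProjectionView
depthTexture.bind(0);                  // the FBO's depth attachment on unit 0
shader.bind();
shader.setUniformi("u_depthMap", 0);
shader.setUniformMatrix("u_invProjView", cam.invProjectionView);
// ...then render the full-screen quad with this shader bound.
```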