i want to switch over to deferred shading. i have set up my fat buffer, but i’m not sure what exactly i have to store and in which way. i guess i need at least diffuse color, normal and depth (to reconstruct the original vertex position).

how do i store the normal? should i simply pass the vertex normal as a varying to the fragment shader, or should i transform it into view space or tangent space first?

how do i store depth? should i use a depth texture, or is it sufficient to use the alpha channel of one of the other textures?
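
roughly the kind of write i have in mind (just a sketch; two RGBA16F targets, names like uDiffuseMap are placeholders):
// geometry pass fs: sketch of the fat-buffer write
uniform sampler2D uDiffuseMap;
varying vec3 vNormal;       // vertex normal, in whatever space i settle on
varying vec4 vViewPosition; // view-space position from the vertex shader

void main()
{
    gl_FragData[0] = texture2D( uDiffuseMap, gl_TexCoord[0].st );    // diffuse color
    gl_FragData[1] = vec4( normalize( vNormal ), -vViewPosition.z ); // normal + depth in alpha
}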

I would use 16-bit floating-point formats for all the render targets.

You could simply store the vertex normal in world space and do the lighting in the deferred pass in world space as well.
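
A sketch of the vertex shader side (gl_NormalMatrix would only give you view space; for world space you have to supply the model’s normal matrix yourself, the uniform uModelNormalMatrix here is made up):
// vs: sketch, world-space normal via a user-supplied uniform
uniform mat3 uModelNormalMatrix; // inverse-transpose of the model matrix
varying vec3 vNormal;

void main()
{
    vNormal     = uModelNormalMatrix * gl_Normal; // world-space vertex normal
    gl_Position = ftransform();
}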

For depth I would not use a depth texture (I don’t even know if you can easily read from them). You can simply use the alpha channel of the texture where you store the normal, for instance.
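
Reading it back in the deferred pass is then a single fetch (sketch; uNormalDepth is whatever you call that target):
// deferred pass fs: sketch of the read-back
uniform sampler2D uNormalDepth;

vec4 nd     = texture2D( uNormalDepth, gl_TexCoord[0].st );
vec3 normal = normalize( nd.xyz ); // stored normal
float depth = nd.a;                // stored depth value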

You could store your vertex position as three floats, but that is not really necessary, as it is possible to recover the original vertex position from depth alone.
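
If you store the raw depth-buffer value, the recovery is essentially an unproject (sketch; uInvProjection would be the inverse of the perspective projection, uploaded as a uniform, and d the stored value in [0,1]):
// sketch: view-space position from a stored depth-buffer value
vec2 ndcXY   = gl_TexCoord[0].st * 2.0 - 1.0; // assumes the quad’s texcoords cover [0,1]
vec4 clipPos = vec4( ndcXY, d * 2.0 - 1.0, 1.0 );
vec4 viewPos = uInvProjection * clipPos;
viewPos     /= viewPos.w;                     // undo the perspective divide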

I guess you’ll have to look it up in the spec though; I am not sure how DX and OpenGL agree on device coordinates and depth representation (the w coordinate in particular).

It’s simpler to calculate the distance from the Z-value. The direction is already known from the pixel position (if the bounding volume of a light is drawn).

yeah, i read that article, but they’re passing the position as a whole. what i’d like to do is reconstruct it from depth only. atm i need 3 textures to store position, normal and diffuse color, and that’s just too much. i can compute normal.z from normal.xy, so i could drop that… if i could drop position.xy as well, then it’d all fit into 2 textures.
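
for reference, getting normal.z back is just this (sketch; the sign is the only catch, but in view space the visible normals mostly point towards the camera, so i’d assume positive z):
// sketch: rebuild normal.z from the stored xy components
vec2 nxy    = storedNormal.xy;
float nz    = sqrt( max( 0.0, 1.0 - dot( nxy, nxy ) ) );
vec3 normal = vec3( nxy, nz );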

DepthParameter.x/.y are two uniform values that are required to calculate the distance from the depth-buffer value; both depend on the far and near planes. For more information read this: http://www.sjbaker.org/steve/omniv/love_your_z_buffer.html

unpro.xy/unpro.z is the direction to the pixel. It’s a varying that has to be filled with the view-space vertex position (the modelview-transformed vertex) in the vertex shader.
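
Put together it looks roughly like this (my assumption for the two parameters, derived from the standard GL projection, would be DepthParameter = vec2( far / ( far - near ), far * near / ( far - near ) )):
// deferred pass fs: sketch; d is the depth-buffer value read from the fat buffer
float dist   = DepthParameter.y / ( DepthParameter.x - d ); // linear view-space distance
vec3 viewPos = unpro.xyz * ( dist / -unpro.z );             // scale the ray so its depth matches dist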

i have two projection matrices: a perspective one (for rendering the geometry) and an orthographic one (for rendering the full-screen quads). i need to use the perspective one here, right?

i would, but i’d first have to get unpro.x/.y and DepthParameter.x/.y right, which means more potential sources of error. i don’t care much about speed atm, as long as it works.

does the code look ok to you? the position is still dependent on the camera’s position/orientation.
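
if that’s because the result lives in view space, i guess pulling it back with the inverse view matrix should do (sketch; uInvViewMatrix being a uniform i’d have to upload myself):
// sketch: view space -> world space
uniform mat4 uInvViewMatrix; // inverse of the camera’s view matrix
vec3 worldPos = ( uInvViewMatrix * vec4( viewPos.xyz, 1.0 ) ).xyz;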

ok, i gave another approach a try. a professor at university suggested the following:

initial pass, vs:
vViewPosition = gl_ModelViewMatrix * gl_Vertex;

initial pass, fs:
// varyings are read-only in the fragment shader, so copy before the divide (w is normally 1 here anyway)
vec4 viewPos = vViewPosition / vViewPosition.w;
float Distance = -viewPos.z; // positive linear view-space depth
// then i store Distance in the MRT

deferred lighting pass, fs:
// Depth = distance computed in the initial pass, read back from the MRT
vec3 ray;
float invTanHalfFOV = 1.0 / tan( radians( 22.5 ) ); // half of a 45 degree fov
// fragment position mapped from window coordinates to NDC, [-1,1] on both axes
ray = vec3( ( ( gl_FragCoord.xy / vec2( 1024.0, 768.0 ) ) - 0.5 ) * 2.0, -invTanHalfFOV );
ray /= invTanHalfFOV; // scale so that ray.z == -1
Position = vec4( ray * Depth, 1.0 ); // view-space position = ray * linear depth

y & z are correct, but x is always a bit too small. it cannot be due to the depth stored in the MRT, as only one component is wrong, so there’s probably something wrong in the way i calculate the ray. i checked the fov and the resolution… do you have any clue what might be wrong? thanks
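
edit: one thing i haven’t ruled out is the aspect ratio. if the 45° is the vertical fov, then x should additionally be scaled by 1024/768. a sketch of the ray with that folded in:
// sketch: same ray, but with the aspect ratio applied to x
float aspect = 1024.0 / 768.0;
vec2 ndc     = ( gl_FragCoord.xy / vec2( 1024.0, 768.0 ) ) * 2.0 - 1.0;
vec3 ray     = vec3( ndc.x * aspect, ndc.y, -invTanHalfFOV ) / invTanHalfFOV;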