Deferred Rendering + Normal Mapping + SSAO

Don’t know if this is a beginner or an advanced problem, but here goes.

During construction of the AO buffer I need face normals for the normal-oriented hemisphere sampling kernel.

The problem is that during the G-buffer pass I store normals taken from the normal map, so the AO pass will read the per-fragment (bumped) normal, not the face normal.
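Roughly, this is what my G-buffer pass does with normals (a simplified sketch; identifier names are made up for illustration):

uniform sampler2D u_normalMap;
in vec2 v_texCoord;
in mat3 v_TBN; // tangent-bitangent-normal basis from the vertex shader
layout(location = 1) out vec4 o_gNormal;

void main()
{
    // Unpack the tangent-space normal and rotate it into view space.
    vec3 n = texture(u_normalMap, v_texCoord).xyz * 2.0 - 1.0;
    // Only the bumped normal survives; the face normal is never stored.
    o_gNormal = vec4(normalize(v_TBN * n), 0.0);
}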

I want to avoid attaching another texture to the G-buffer just for face normals; it seems redundant to have two textures holding normals.

Is there any way to avoid this?

How do I combine deferred rendering + normal mapping + SSAO in a more efficient way?

Thanks!

Why do you specifically need a face normal? This SSAO algorithm, for example, uses per-pixel normals, which you have in your normal buffer.

For example, on triangles facing the camera (parallel to the view plane): if the sampled normals are skewed because of the normal map, I get weird results. And yes, I’m using John Chapman’s SSAO technique.

Example: [attached screenshot]

Well, liars get caught eventually. And normal mapping is ultimately a lie: a change to the normal without changing the underlying geometry. The lie will manifest itself in many different ways; this is one of them.

However, I really don’t think it’s a problem. Consider the example you show.

If you used proper surface normals, but the bump map had a dip in it, perhaps an edge of some form, your SSAO would not be able to detect it. Whereas if you use the biased bumped normals, SSAO would detect it just fine.

In any case, if you really want the normal based on the geometry, you still don’t have to write and read it. Just as (I hope) you’re not writing your positions to your G-buffer, you can simply compute the normal from the local depth values. You may need some heuristic to deal with edges between objects.
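For example, something along these lines in the SSAO pass (a sketch only; u_depthMap and u_invProj stand for whatever your depth texture and inverse projection matrix are called):

uniform sampler2D u_depthMap; // depth attachment from the G-buffer pass
uniform mat4 u_invProj;       // inverse of the projection matrix

vec3 viewPosFromDepth(vec2 uv)
{
    // Hardware depth -> NDC -> view space (standard GL [0, 1] depth range).
    float depth = texture(u_depthMap, uv).r;
    vec4 ndc = vec4(uv * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0);
    vec4 view = u_invProj * ndc;
    return view.xyz / view.w;
}

vec3 faceNormalFromDepth(vec2 uv)
{
    // Screen-space derivatives of the reconstructed position give two
    // vectors lying on the surface; their cross product is the geometric
    // (face) normal. Fragment shader only; you may need to flip the sign
    // depending on your conventions.
    vec3 p = viewPosFromDepth(uv);
    return normalize(cross(dFdx(p), dFdy(p)));
}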

Good:
gbuffer:
position = V * M * position;
ssao:
position = texture(positionmap, texcoord);

Bad:
gbuffer:
position = M * position;
ssao:
position = V * texture(positionmap, texcoord);

If, during the G-buffer pass, I export positions in camera (view) space, all is well and the shader works. If I export them in world space (M * position) and transform them to view space during the SSAO pass, everything looks weird.

I know I should use the depth buffer to retrieve positions, but for now I need to understand what’s happening.

Any ideas?

[QUOTE=radu1986;1280668]If, during the G-buffer pass, I export positions in camera (view) space, all is well and the shader works. If I export them in world space and transform them to view space during the SSAO pass, everything looks weird. [...] Any ideas?[/QUOTE]

The issue occurred in the occlusion pass: my “bad” results were caused by a missing view-matrix multiplication when sampling positions at the kernel offsets. (Mathematically V * (M * p) = (V * M) * p, so the two export paths should have matched; the difference came from this bug, not from the math.)

// Transform the sampled world-space position into view space before taking z:
float fSampleDepth = (mV * vec4(texture(u_sPositionMap, vOffset.xy).xyz, 1.0)).z;
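In context, the sampling loop looks roughly like this (a simplified sketch of my code; uniform names are illustrative, and vOriginView and mTBN, the fragment’s view-space position and the kernel-orienting basis, are assumed to be computed earlier in the shader):

float occlusion = 0.0;
for (int i = 0; i < KERNEL_SIZE; ++i)
{
    // Kernel sample positioned in view space, oriented by the TBN basis.
    vec3 samplePos = vOriginView + mTBN * u_kernel[i] * u_radius;

    // Project the sample to texture coordinates.
    vec4 vOffset = mP * vec4(samplePos, 1.0);
    vOffset.xy = (vOffset.xy / vOffset.w) * 0.5 + 0.5;

    // The fix: the position map holds world-space positions, so each
    // sampled position must be brought into view space before its depth
    // can be compared.
    float fSampleDepth = (mV * vec4(texture(u_sPositionMap, vOffset.xy).xyz, 1.0)).z;

    float rangeCheck = smoothstep(0.0, 1.0, u_radius / abs(vOriginView.z - fSampleDepth));
    occlusion += (fSampleDepth >= samplePos.z ? 1.0 : 0.0) * rangeCheck;
}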

Meanwhile I switched to reconstructing positions from the depth buffer and successfully integrated SSAO with the deferred renderer.