I’m rendering an underwater landscape in two passes.
The first pass is the simple landscape, while the second is the caustic textures over the landscape.
The problem is that in 16-bit mode everything is fine, but in 32-bit mode there is a mess of z-fighting.
How can I fix it?
Push the near clip plane further away from the viewpoint.
Or use the stencil buffer to determine where the second layer should be rendered, rather than the depth buffer.
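A minimal sketch of the stencil idea (drawLandscape/drawCaustics are placeholder names for your own passes; assumes the context was created with a stencil buffer). Note it sidesteps depth precision entirely by disabling the depth test on the second pass, which is fine while the landscape is the only occluder but would let caustics bleed through other geometry:

```c
/* Pass 1: landscape, tagging every covered pixel with stencil value 1. */
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
drawLandscape();                        /* placeholder for your first pass */

/* Pass 2: caustics only where stencil == 1, no depth comparison at all. */
glStencilFunc(GL_EQUAL, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
glDisable(GL_DEPTH_TEST);
glDepthMask(GL_FALSE);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE);      /* additive caustics, one possible choice */
drawCaustics();                         /* placeholder for your second pass */

glDisable(GL_BLEND);
glDepthMask(GL_TRUE);
glEnable(GL_DEPTH_TEST);
glDisable(GL_STENCIL_TEST);
```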
Thanks fellows, but, that won’t help.
Why would I need to move the znear clip plane if it works in 16-bit mode?
And what's interesting is that on my GeForce3 it works great in both modes, while on the GeForce2 it only works in 16-bit mode and fails in 32-bit mode…
Any ideas?
Hmm, I think I read your question a bit wrong. Do you render the terrain in the first pass, and a second texture on top of the terrain in a second pass? I thought the second pass was some kind of water plane, and you got Z-fighting at the intersection between this plane and the terrain.
Anyways. It sounds like you indeed have Z-fighting, but in 16-bit mode the precision is so coarse that it happens to look good (but isn't actually correct). Sounds strange, but I can imagine such a situation.
Do the two passes have identical geometry? That is, do you do anything that could possibly affect the coordinates? For example, translating and then translating back with negated values does not guarantee identical coordinates.
A screenshot of the problem would be great. If you can’t upload it somewhere, send me a shot by mail (address in profile).
A tip can be to use polygon offset.
I have uploaded 2 screenshots.
The shot where you can see the z-fighting is this one: http://brunomtc.no.sapo.pt/messedZ.JPG
It was taken on a gf2 while in window mode.
The correct one, is here: http://brunomtc.no.sapo.pt/FixZ.jpg
It was taken on the gf3 in window mode too.
I couldn’t take the good shot on the gf2, because it has z-fighting in window mode too…, so on the gf2 the only mode that works is fullscreen 16-bit.
Yes, the landscape is in the same place; I just render it again without anything that could affect the vertices.
Are you rendering the landscape polys in the same order during the second pass? And are you using a depth test of GL_EQUAL on the second pass?
Any difference with 24-bit z-buffer depth?
If you increase the color depth then, depending on your video card, the z-buffer depth (in bits) may change, resulting in less precise values, and that can cause z-fighting.
Ok, I think I fixed it like this:
glDepthRange (0, 0.5);
glDepthRange (0, 1);
No, you haven’t solved this. Your landscape will not depth buffer correctly with other stuff. I dunno why this even worked unless it’s because your caustics are not drawn with your landscape, in which case the caustics will ‘punch through’ both the terrain and other stuff in the scene under a lot of circumstances.
Use glPolygonOffset for this, unfortunately various OpenGL implementors have made a right royal mess of this feature because they can’t read a spec and the glPolygonOffsetEXT confused them. The conformance test is also a bag of bollocks (and still is despite my protestations). Now we may even be in a legacy situation where fixing implementations might break apps. Just experiment with a range of values, and test on different hardware if you can (and curse under your breath that you have to live with the mess).
Here’s a decent demo of this effect with a good animated texture you might want to ‘borrow’; everyone does. It’s owned by Jos Stam, but he’s been generous with permission for its use in the past.
Depth buffer offsets are a tradeoff between excessive punchthrough and sufficient offset to avoid z fighting. glPolygonOffset when implemented correctly is designed to be the perfect mechanism to reach the ideal compromise.
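A minimal sketch of that approach (the offset values below are only starting points to experiment with per hardware, in line with the advice above; drawLandscape/drawCaustics are placeholder names):

```c
drawLandscape();                        /* pass 1 as usual, writes depth */

/* Pass 2: pull the caustic layer slightly toward the eye so it wins
 * the depth test over the coplanar landscape polygons. */
glEnable(GL_POLYGON_OFFSET_FILL);
glPolygonOffset(-1.0f, -2.0f);          /* tune per hardware; no magic numbers */
glDepthMask(GL_FALSE);                  /* don't pollute depth with offset values */
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE);      /* additive caustics, one possible choice */
drawCaustics();

glDisable(GL_BLEND);
glDepthMask(GL_TRUE);
glDisable(GL_POLYGON_OFFSET_FILL);
```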
[This message has been edited by dorbie (edited 01-13-2002).]
Why would you need glPolygonOffset for such a technique ? It’s just a simple multipass effect and rendering the second pass with glDepthFunc( GL_EQUAL ) should do it.
Drawing the exact same triangles should produce the exact same depth values ( according to the GL specs ).
Even the demo that dorbie posted a link to uses glDepthFunc( GL_EQUAL ) for its second pass and not glPolygonOffset. Did I miss something here?
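A minimal sketch of that GL_EQUAL idiom (placeholder draw calls; the blend function is just one choice for layering caustics, and it assumes both passes submit identical vertices through identical transforms):

```c
glDepthFunc(GL_LESS);
drawLandscape();                        /* pass 1: base texture, writes depth */

/* Pass 2: exact same geometry, so only fragments whose depth matches
 * pass 1 exactly are accepted. */
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE);      /* additive caustics, one possible choice */
glDepthMask(GL_FALSE);                  /* no need to rewrite identical depths */
glDepthFunc(GL_EQUAL);
drawLandscapeWithCaustics();            /* placeholder for the caustic pass */

glDepthFunc(GL_LESS);
glDepthMask(GL_TRUE);
glDisable(GL_BLEND);
```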
You should test it with a simple triangle, drawn moving from the back toward the front. No?
It’s true that the same triangles MUST return the same depth values… strange…
PH, I’m not sure whether it still works with vertex programs. I don’t think different vertex programs will produce the same exact result even if in theory they should.
Bruno, if you can apply both in one pass, use multitexturing.
Otherwise, solutions have been given: Poly Offset or Stencil Buffer.
It can work in vertex programs. However, I’ve also heard people say that the order of operations in vertex programs can affect the value.
If all your HPOS values are calculated in the same way in all your different vertex programs, then there is no reason why you shouldn’t get perfect multi-pass rendering without Zbuffer artifacts.
This will only become an issue if you calculate your HPOS values slightly differently, or at different places within the actual program for subsequent passes. Or if you mix vertex programs and fixed function pipelines in multipass. This will almost certainly result in the Z values of each pass being different.
I’ve done multipass on terrain, and never had any problems on any Geforce card, in 16bit and 32bit. You do not need to use glPolygonOffset, for simple multipass.
GPSnoopy, you can’t mix vertex programs and ordinary passes, that’s true. But different vertex programs can still be mixed ( if the exact same transformation is used for vertex positions).
PH, gaby, GPSnoopy & Nutty,
in OpenGL you are not guaranteed exact Z fragment reproduction if you change the pipeline path (set of state). You might be OK, but another OpenGL implementation may trip you up later, and if the implementation you are using today is showing a problem then you KNOW that you need to fix it. BLENDING to add a caustic effect on a second pass is potentially the kind of thing that might mess this up on some implementations.
OTOH, this may not be the cause, but I know what the solution is, see my other post.
[This message has been edited by dorbie (edited 01-13-2002).]
Ok, I see what you mean. I just had a quick look at the specs about invariance. I’m surprised blending and depth testing are not required for fragment generation to be invariant, but only strongly suggested (that’s how I interpreted it; that’s what you mean, right?).