Clearing the ZBuffer of a PBuffer doesn't work

Hi there… I’m having a pretty weird problem, and I want to rule out a driver problem before I keep racking my brain trying to fix it.

I have a pbuffer implementation (in C#) that works fine in all cases… EXCEPT this one.

I have no clue why, but for some reason, on this one pbuffer (just one of them; it works fine with the others), the z-buffer doesn’t get cleared correctly.

I clear it, and it just doesn’t work: only a band, or some squares, or nothing at all actually gets cleared, and there’s garbage left in it…

HOWEVER, if I switch to another application and back, it instantly starts working OK.

I’ve tried creating the pbuffer without a depth buffer, and it works correctly (except that I do need a z-buffer for it)… I’ve also tried different depth-clear values (via glClearDepth), and they work fine, except that the garbage is still there (that is, the parts that do actually get cleared are cleared with the correct values).

What makes me suspect it’s a driver problem is precisely that switching applications makes it work just fine (I can switch back to my application and it’ll keep working fine).

I’m using an NVIDIA GeForce FX 5600 XT with the 61.77 ForceWare drivers, on a dual-display setup, although I have the acceleration settings on single-display mode (I don’t need the secondary screen to be accelerated).

Here are a couple of snapshots of the problem:

Of course, nothing changed in the code between those two snapshots (it is, in fact, the same execution).

The difference between the images is just that the scene is moving… the movement itself works OK.

The lines and garbage you see definitely come from garbage in the z-buffer (switching off depth testing removes the problem).

I’m definitely clearing it before drawing, depth-buffer writes are on, and there’s no masking or alpha test going on at all.

I’ve been using OpenGL for 5 years and kinda know all the basic stuff and trust me, I’ve tested everything I can imagine before posting this :frowning:

Thank you!

Maybe this is stupid advice, but try clearing the z-buffer two times, like:


Why do I advise this? It’s a bug in the NVIDIA driver (in newer drivers too). Look at the window style in the function where you create your window. If you have WS_POPUP there (this triggers the bug), try setting it to WS_POPUPWINDOW. When you use only WS_POPUP you get a bad z-buffer (when you draw the z-buffer to color you see flickering, and the values are not in [0, 1] but in [0, 0.5]). So either clear two times (but you’ll still have a bad z-buffer), or set WS_POPUPWINDOW, and then you don’t have to clear twice and you get a correct z-buffer :slight_smile: . If someone is interested in this bug, just say so.

Try glDepthMask(true) before clearing your buffers.


I saw a similar problem in a game a friend of mine made. The problem was bad usage of the gluPerspective call: he put 0.1 for the near plane and 5000.0 for the far plane. When he changed the near plane to a bigger value, the artifacts were gone.


Thanks for the replies, although none of them has helped so far… I think I’m having an unmanaged-memory problem somewhere that is affecting the pbuffer.

@Matt Zamborsky: I already tried clearing several times when creating the pbuffer… although it’s not directly related to a window problem (a pbuffer is a completely separate context), and besides, my windows are created using Windows.Forms (it’s a .NET application). Nonetheless, clearing the z-buffer twice (or three times, or 500 times) didn’t work.

@Jens Scheddin: as I said, there’s no masking going on… the masks are set by my Material and Scene classes, and that very same scene with the very same materials works on other pbuffers, and when rendering directly to the main context. There’s no problem with the rendering of the 3D scene at all (or with clearing the buffer). Just in case, I ran the test and disabled all possible masks and tests before clearing the depth buffer… no help :frowning:

@yoyoyo: you can get z-artifacts from z-buffer precision if your near and far clipping planes are too far apart (I’m not using gluPerspective, though; I compute my own perspective matrix from my Camera class), but that’s not the problem here… it’s garbage in the z-buffer. The rendering works all right in any other buffer (with the very same precision and bit depths for color/depth).

I also tried upgrading my drivers to 71.something (can’t really remember… the latest ones on the NVIDIA site) and the problem is still there…

I should check for memory leaks in the unmanaged part of the application, or maybe OpenGL reading past the end of locked (pinned) buffers from the managed heap… I don’t know, but it’s driving me absolutely nuts :frowning:

Thanks for the replies guys, any other help would be appreciated

I’ve uploaded my pbuffer implementation (it won’t compile as-is; it needs tons of other classes) to:


The code is quite messy tho

More news… now it just won’t create my buffers at all, yippee :slight_smile:

It’s returning (randomly) GL_INVALID_ENUM upon the call to wglCreatePbufferARB.

Thing is, looking at the WGL_ARB_pbuffer spec for the wglCreatePbufferARB function… GL_INVALID_ENUM isn’t listed as a possible error (if we go by that paper, that is).

This is driving me completely nuts

I feel your pain.
Roll back to older drivers.
61.22 seemed to be a stable release (even though they seem a bit long in the tooth now).
Latest official drivers seem screwed for me; thankfully nothing subtle like your problem, more of the blue-screen variety when doing something crazy like creating a stereo GL context on a Quadro FX 1000.
I’m using some beta drivers on a gf6800 at the mo: 76.44, which seem very stable, but are specific for the 6800 as far as I know.

I just fixed the thing… for some reason I had added support (it should have been off) for texture_rectangle_nv in the texture implementation, YET I was not taking that into account in the pbuffer attributes or pixel format (for the BIND_TO_TEXTURE).

I thought I had removed TEXTURE_RECTANGLE_NV support, yet I hadn’t (that’s in a completely different part of the code).

I just added support for it on the pBuffer code and it worked fine, on first try.

I wonder why it wasn’t just ****ing up before… probably it was just overwriting memory somewhere and not swapping with the vidcard properly.

Thanks for all the help guys, as always, I feel stupid now :frowning:

(Couldn’t it just -COMPLAIN- instead of going on and ****ing everything up? sometimes I hate OpenGL :frowning: )