How do you enable a w-buffer?

Do you have to use an extension of some sort? In Direct3D it’s really simple: m_Device->SetRenderState( D3DRS_ZENABLE, D3DZB_USEW ). I’m sure it’s more complicated with OpenGL, but how do you do it? The Red Book does not even allude to the existence of w-buffers.
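
On the D3D side, by the way, not every card can rasterize a w-buffer, so it pays to check the caps bit first. A rough sketch against a Direct3D 8 device, matching the call above:

    D3DCAPS8 caps;
    m_Device->GetDeviceCaps( &caps );                           // query rasterizer caps
    if( caps.RasterCaps & D3DPRASTERCAPS_WBUFFER )
        m_Device->SetRenderState( D3DRS_ZENABLE, D3DZB_USEW );  // w-buffer supported
    else
        m_Device->SetRenderState( D3DRS_ZENABLE, D3DZB_TRUE );  // fall back to z-buffer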

AFAIK OpenGL really only deals with a “depth buffer”. It does not specifically say whether that’s Z or W (only that “very far away” = 1.0 and “very close” = 0.0). I have found no way of controlling the depth buffer format under OpenGL. Perhaps some drivers allow changing this on a global basis (in the display settings under Windows, for instance).

Well, there’s no option for me (with a GeForce2 Go) to select whether OpenGL uses a z-buffer or w-buffer. But I’m sure OpenGL must support this somehow, right?

There may be an OpenGL extension for w-buffer support, but none that I’m aware of.

You could implement your own with the depth replace texture shaders on GeForce3 and GeForce4 Ti.

Thanks -
Cass

I have a GeForce2 which doesn’t support pixel shaders, but anyway I’d want to support w-buffers on any card that allows it, not just cards that have pixel shading.

>>I have a GeForce2 which doesn’t support pixel shaders, but anyway I’d want to support w-buffers on any card that allows it, not just cards that have pixel shading.<<

I don’t believe all cards support it (even with D3D), so it looks like you’re stuck with the z-buffer,
which IMHO is a lot better than a w-buffer anyway.
If you want more precision at the far ranges of the z-buffer, there are a few things you can do.
Search this group for examples.
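
For example, here’s a quick sketch (standard perspective-projection math, made-up numbers) of why pushing the near plane out is the usual advice:

    // window depth for a standard perspective projection:
    //   z_w = ( f / (f - n) ) * ( 1 - n / z_eye )
    // so far-range resolution scales roughly with the near plane distance.
    #include <cstdio>

    static double WindowDepth( double n, double f, double zEye )
    {
        return ( f / ( f - n ) ) * ( 1.0 - n / zEye );
    }

    int main()
    {
        const double f = 10000.0;
        // depth separation of two surfaces 1 unit apart, 2000 units out:
        double dNear01 = WindowDepth( 0.1, f, 2001.0 ) - WindowDepth( 0.1, f, 2000.0 );
        double dNear1  = WindowDepth( 1.0, f, 2001.0 ) - WindowDepth( 1.0, f, 2000.0 );
        // one step of a 24-bit z-buffer is 1/2^24, about 6.0e-8:
        printf( "near=0.1: %g (z-fights)  near=1.0: %g (resolves)\n", dNear01, dNear1 );
        return 0;
    }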

I wasn’t planning on requiring w-buffers; my point was that I should allow them on any card that supports w-buffers without necessarily supporting pixel shading.

Anyway, I read a tutorial online and discovered that w-buffers are inaccurate at close range, which is definitely Not Good. So I’ll just stick with a z-buffer, I guess, and see how good/bad the w-buffer is in D3D.

It is my understanding that the precision of the W buffer is equal regardless of distance, while the precision of the Z buffer is better at close distances and worse at far distances. In general, you want the Z buffer behaviour (closer is more important).

Cass, when I was writing a Glide -> OpenGL wrapper, I was searching high and low for an OpenGL extension that allowed selection of depth buffer format (in Glide you can select W or Z), but I didn’t find any.

You’re right, Marcus; that’s what I meant. The W-buffer is more inaccurate at close distances than the z-buffer. Its absolute precision is even throughout, since only the floating-point precision limits it, but close up that fixed-size error might be very noticeable. For the terrain it won’t be noticeable, but for other objects it probably will.

CGameProgrammer,

Careful. W varies linearly in eye space, so if you stored it in fixed point, the accuracy is independent of the depths being resolved. If you store W in floating point, you do change the way precision is distributed. Floating point naturally packs more precision toward zero. That’s why some people advocate having a floating point z buffer with a glDepthRange(1,0). The idea would be to try to balance the uneven distribution of the z buffer with the uneven precision distribution of floating point.
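
A minimal sketch of that idea (it only pays off if the hardware actually stores depth in floating point, which core OpenGL gives you no way to request):

    glClearDepth( 0.0 );             // "far" clears to 0.0 once the range is flipped
    glDepthRange( 1.0, 0.0 );        // map near -> 1.0, far -> 0.0
    glDepthFunc( GL_GREATER );       // nearer fragments now have larger window z
    glClear( GL_DEPTH_BUFFER_BIT );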

Thanks -
Cass

The relationship would seem arbitrary if you did this, depending on where the near and far clip planes landed… but I suppose it’s already arbitrary.

How about a floating-point W buffer? You still want and need more precision towards the viewer. An FP W buffer would give a more consistent and pleasing depth-precision distribution over a broader (and more typical) range of near and far clip values, without the nasty side effects you get when the near clip is too close to the eye, which is the real reason most people have precision problems with the Z buffer.
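
To see how a float naturally packs precision toward the viewer, compare the spacing of adjacent 32-bit float values (one ULP) at a near and a far W. A tiny sketch, with made-up distances:

    #include <cmath>
    #include <cstdio>

    int main()
    {
        // one ULP at two eye-space depths, assuming W is stored as a 32-bit float:
        float ulpNear = std::nextafterf( 1.0f, 2.0f ) - 1.0f;              // ~1.2e-7
        float ulpFar  = std::nextafterf( 10000.0f, 20000.0f ) - 10000.0f;  // ~9.8e-4
        printf( "ulp at w=1: %g, ulp at w=10000: %g\n", ulpNear, ulpFar );
        return 0;
    }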

The w-buffer is always floating-point, which is why I was thinking of using it. W = 1/Z, so if W were an integer, it would always be zero.

CGameProgrammer,

There’s nothing that says that w-buffering must be done in floating point. That may be the way it’s done in Direct3D, but that doesn’t mean it can’t be done in fixed point.

Cass

Actually, you’re right; I was thinking of taking the inverse of an integer… but just as the floating-point Z coordinates are scaled to (0…65535) or (0…16777215), so too could the W coordinates be converted to fixed point.
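
Something like this, say (just a sketch; wNear/wFar are hypothetical application-chosen bounds, and 24 bits is just one plausible storage size):

    // quantize eye-space W to 24-bit fixed point; linear in eye space
    unsigned int WToFixed24( float w, float wNear, float wFar )
    {
        float t = ( w - wNear ) / ( wFar - wNear );  // 0..1 across the view range
        if( t < 0.0f ) t = 0.0f;                     // clamp values outside the range
        if( t > 1.0f ) t = 1.0f;
        return (unsigned int)( t * 16777215.0f );    // scale to 2^24 - 1
    }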

There’s a paper about the various z-buffer and w-buffer precision issues (and the glDepthRange hack Cass talks about); it’s called “Optimal Depth Buffer for Low-Cost Graphics Hardware” (look for it on Google). Might give some more insight…

Michael

The issue is how W is stored in the framebuffer and used for comparison, not simply what comes off the transformed vertex.

As Cass says, the stored representation is everything, and a float representation isn’t a given. That takes floating-point fragment interpolators. It is wrong to describe a w-buffer as linear if it is stored as a float.

My apologies to Cass; I never read your post fully when I wrote what I did. An fp W intuitively seems to have some inherently attractive properties, although perhaps scaling the fp ‘linear’ value between a zero near value and any programmable far value before storing to limited precision would permit almost complete control of precision throughout the ‘linear’ range.

It could be a lot of extra work though.

I have to ask: why are there no 32-bit z-buffers? I thought it was because of the video memory such a buffer would take up, but since newer graphics cards offer plenty of video memory, surely it would at least be an option by now if that were the only problem.

Do the cards use some sort of optimization for comparison that depends on the size of each entry in the z-buffer? I would have thought a 32-bit comparison would be faster than a 24-bit one, since it’s a dword.

On nVidia cards, a 24-bit Z-buffer also gives 8 bits of stencil, so it becomes 32-bit aligned.

j

What about this extension? http://oss.sgi.com/projects/ogl-sample/registry/EXT/wgl_depth_float.txt

According to delphi3D, no card supports it
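
For reference, this is roughly how it would be used if a driver ever exposed it (untested for exactly that reason; assumes WGL_ARB_pixel_format is available and hdc is your device context):

    // ask for a floating-point depth buffer via wglChoosePixelFormatARB;
    // WGL_DEPTH_FLOAT_EXT is the one attribute WGL_EXT_depth_float adds
    int attribs[] = {
        WGL_DRAW_TO_WINDOW_ARB, GL_TRUE,
        WGL_SUPPORT_OPENGL_ARB, GL_TRUE,
        WGL_DEPTH_BITS_ARB,     32,
        WGL_DEPTH_FLOAT_EXT,    GL_TRUE,
        0
    };
    int format;
    UINT numFormats;
    wglChoosePixelFormatARB( hdc, attribs, NULL, 1, &format, &numFormats );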

CGameProgrammer, are you aware of any video card on the market whose core design actually started shipping less than ~3 years ago?

All we have to do is wait and good things will come to us.

My comments about scaling would not apply to a 32-bit float (but would to a 32-bit int and maybe fixed point), since any messing around with the value would be bad for precision unless you get better than 32 bits from the vertex transformation. So you’d probably want to interpolate right off the coordinate and store it. A few extra bits in the evaluation of fragment depth become desirable. You end up deep in fp arithmetic precision issues, which then have a greater effect on fragment depth than the representation you choose to store. One principle should probably be to modify the data as little as possible after the transform, but you have to consider the fragment-evaluation arithmetic precision too.

One thing seems likely: fp 32-bit depth buffers will have limitations elsewhere, and those limitations will be virtually impossible to know without detailed knowledge of the hardware and more smarts than I have.