No 32bit zBuffer anymore?

I just realized a very strange thing: on my GeForce4 Ti it is not possible to obtain a 32bit zBuffer (even when stencil bits are specified as 0) using wglChoosePixelFormatARB.

Strange.

Can you give a comment on that? I think I am doing something wrong.
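
For reference, here is a minimal sketch of how such a request typically looks (not my exact code): it assumes a valid device context named hdc and that the wglChoosePixelFormatARB entry point has already been fetched with wglGetProcAddress.

[code]
/* Sketch: request a pixel format with at least 32 depth bits and 0 stencil
   bits, then see whether the driver can match it. Assumes hdc is valid and
   the WGL_ARB_pixel_format entry points are loaded. */
int attribs[] = {
    WGL_DRAW_TO_WINDOW_ARB, GL_TRUE,
    WGL_SUPPORT_OPENGL_ARB, GL_TRUE,
    WGL_DOUBLE_BUFFER_ARB,  GL_TRUE,
    WGL_PIXEL_TYPE_ARB,     WGL_TYPE_RGBA_ARB,
    WGL_COLOR_BITS_ARB,     24,
    WGL_DEPTH_BITS_ARB,     32,   /* the request that finds no match here */
    WGL_STENCIL_BITS_ARB,   0,
    0                             /* terminator */
};

int  format     = 0;
UINT numFormats = 0;

if (!wglChoosePixelFormatARB(hdc, attribs, NULL, 1, &format, &numFormats)
    || numFormats == 0)
{
    /* no pixel format with at least 32 depth bits is available */
}
[/code]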

The gf4 has never had a 32bit z buffer; it operates in 16bit z, or 24bit z + 8 bit stencil. Even if you requested that before you couldn't get anything higher than 24bit (you just chose a pixel format, you never checked the result).
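
For what it's worth, one way to check the result (a sketch, assuming the same hdc and the pixel format index you already picked, with the WGL_ARB_pixel_format entry points loaded) is to query the format back:

[code]
/* Sketch: ask the driver how many depth bits the chosen format really has. */
int attrib    = WGL_DEPTH_BITS_ARB;
int depthBits = 0;

if (wglGetPixelFormatAttribivARB(hdc, format, 0, 1, &attrib, &depthBits))
    printf("depth bits actually available: %d\n", depthBits); /* 16 or 24 on a GF4 */
[/code]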

Originally posted by Mazy:
The gf4 has never had a 32bit z buffer; it operates in 16bit z, or 24bit z + 8 bit stencil.

This sucks!

Even if you requested that before you couldn't get anything higher than 24bit (you just chose a pixel format, you never checked the result).

To tell the truth, I did check the result; this is why I found it strange and posted this.

EDIT: I forgot to thank you for the fast answer! Thank you!

[This message has been edited by Obli (edited 09-16-2003).]

Originally posted by Obli:
[b][quote]Originally posted by Mazy:
The gf4 has never had a 32bit z buffer; it operates in 16bit z, or 24bit z + 8 bit stencil.[/quote]

This sucks![/b]

No it doesn't.

Since z values are normalized between 0.0 and 1.0 anyway, 24 bits are just as precise as 32 bits (the other bits are only required for a sign and for numbers > 1.0).

Check the IEEE specs on floating point numbers.

Hi !

Well, that's not correct: OpenGL does not use floating point values for depth, they are saved as integer values, and the difference between 24 and 32 bit integer values is pretty huge (about 256 times).

Mikael

Originally posted by Honk:
No it doesn't.

If checking whether that specific video card supports 32bit Z or not does not suck, then I guess you have a way to know when to use 32bit z and when not to use it. Could you please tell me how to do that? I would be really interested.
Right now, I have a piece of code that goes for the maximum precision possible, which is 32 bits unless stencil is required. So, say 32bit Z is not supported: I lost 30 minutes of work finding this out. If this does not suck, I really don't know what does, especially considering I have a 1st-generation PowerVR video accelerator with 32bit Z.

@mikael_aronsson

As far as I know, OpenGL doesn't care whether it's an integer or a float; that's an implementation-specific detail.
I strongly believe that on all the recent cards I have checked so far it's a float (or a 24 bit sub-float).

@Obli

Ahhh, the PowerVR. It's a tile renderer which, by its very concept, doesn't need a z buffer and hence doesn't have one. In fact a tile renderer always operates with zero overdraw, so you could specify whatever z buffer precision you want and you'd always get the same result.

In fact all z buffer operations on a tile renderer (e.g. clear) are completely discarded by the driver. I am not sure how calls that change the z buffer comparison (less, equal, greater) are handled; I guess they only change the way the triangles get sorted inside a tile.

As for me, I always request a 24bit zbuffer + 8 bit stencil. Works for me…

Originally posted by Honk:
[b]Ahhh, the PowerVR. It's a tile renderer which, by its very concept, doesn't need a z buffer and hence doesn't have one. In fact a tile renderer always operates with zero overdraw, so you could specify whatever z buffer precision you want and you'd always get the same result.

In fact all z buffer operations on a tile renderer (e.g. clear) are completely discarded by the driver. I am not sure how calls that change the z buffer comparison (less, equal, greater) are handled; I guess they only change the way the triangles get sorted inside a tile.[/b]

I am quite sure the NEC PowerVR PCX2 is not a tile renderer. I am speaking of a video card which was trashed by the Voodoo2; do you remember it? It is not the Kyro.

As for me, I always request a 24bit zbuffer + 8 bit stencil. Works for me…

Well, it also works for me and I am fine with it 80% of the time. For the rest, I really would like to have my 32bit Z back - I won't worry if I can't, but I still think not having it is bad.

Does the Radeon support 32bit Z? I was thinking about dropping nVidia.

I don't think any consumer card to date really has a 32bit zbuffer… and for ATI, all their internal calculations seem to be 24bit, so storing that in a 32bit zbuffer would be a waste of bits.

Originally posted by Obli:
I am quite sure the NEC PowerVR PCX2 is not a tile renderer. I am speaking of a video card which was trashed by the Voodoo2; do you remember it? It is not the Kyro.

Yes, I remember that card pretty well (I found it extremely annoying that it only interpolated one color channel when you enabled Gouraud shading).

It's an add-on card with no video connector (it transferred the tiles to the main video card via PCI), it was slower than a Voodoo1, the drivers made me pull my hair out, and it's a tile renderer (like all PowerVR chips are).

The SGL interface was uhhh interesting as well.

They often advertised the card as having a 32bit zbuffer, but that was just marketing hype. Since the card/driver splits the triangles in a tile with floating point precision, it sort of has a 32bit floating point z buffer… bah! Marketing bullsh!t…

Trust me on that, I have a HUGE collection of 3D cards.

Originally posted by Obli:
[b]Well, it also works for me and I am fine with it 80% of the time. For the rest, I really would like to have my 32bit Z back - I won't worry if I can't, but I still think not having it is bad.

Does the Radeon support 32bit Z? I was thinking about dropping nVidia.[/b]

No, unfortunately all consumer-level cards I know of only support 24 bits (ATI, NVIDIA, …). You can get around z fighting by adjusting the near and far clipping planes in most cases.
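
To put a rough number on that, here is a back-of-the-envelope sketch (depth_step is a made-up helper; it assumes a standard perspective projection writing to an integer depth buffer, which is not necessarily how any particular driver works):

[code]
#include <math.h>
#include <stdio.h>

/* Approximate eye-space distance covered by one depth-buffer step at eye
   distance z, for near plane n, far plane f and a B-bit integer depth buffer.
   From d(z) = f*(z - n) / (z*(f - n)) it follows that
   dz ~ 2^-B * z^2 * (f - n) / (f * n),
   so precision improves roughly linearly as the near plane is pushed out. */
static double depth_step(double n, double f, double z, int bits)
{
    return pow(2.0, -bits) * z * z * (f - n) / (f * n);
}

int main(void)
{
    printf("24 bit, near 0.1: %g\n", depth_step(0.1, 1000.0, 100.0, 24));
    printf("24 bit, near 1.0: %g\n", depth_step(1.0, 1000.0, 100.0, 24));
    return 0;
}
[/code]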

Well, not entirely true…
Of course the Kyros have a ‘virtual’ 32 bit z buffer as they are tile based as well.

Originally posted by Mazy:
I don't think any consumer card to date really has a 32bit zbuffer… and for ATI, all their internal calculations seem to be 24bit, so storing that in a 32bit zbuffer would be a waste of bits.

This is interesting. I'll consider doing a MAX(24, zBufferBits) or (even better) I'll add a double-check procedure so that if 32bit is not there, it will silently roll back to 24 or 24+8. I still think that's bad, but if everyone is doing it then I can only work with that. At least now I know it's perfectly normal to get this kind of failure.
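
Roughly something like this, say (a sketch only; ChooseBestDepthFormat is a hypothetical helper, and it assumes a valid hdc with the WGL_ARB_pixel_format entry points already loaded):

[code]
/* Sketch of the silent fallback: try depth/stencil pairs from most to least
   precise and keep the first one the driver accepts. */
static const int tries[][2] = { {32, 0}, {24, 8}, {24, 0}, {16, 0} };

int ChooseBestDepthFormat(HDC hdc)
{
    for (int i = 0; i < 4; ++i)
    {
        int attribs[] = {
            WGL_DRAW_TO_WINDOW_ARB, GL_TRUE,
            WGL_SUPPORT_OPENGL_ARB, GL_TRUE,
            WGL_DOUBLE_BUFFER_ARB,  GL_TRUE,
            WGL_PIXEL_TYPE_ARB,     WGL_TYPE_RGBA_ARB,
            WGL_COLOR_BITS_ARB,     24,
            WGL_DEPTH_BITS_ARB,     tries[i][0],
            WGL_STENCIL_BITS_ARB,   tries[i][1],
            0
        };
        int  format = 0;
        UINT count  = 0;
        if (wglChoosePixelFormatARB(hdc, attribs, NULL, 1, &format, &count) && count > 0)
            return format;   /* first combination the driver can satisfy */
    }
    return 0;                /* nothing matched */
}
[/code]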

As for the tile renderer, if you are so sure it must be true, even if I had some quite different experiences… oh well, it was years ago, maybe I am not remembering it right.

“Honk” is right…

All right, I still don’t believe this is possible, WOW!

Originally posted by Obli:
All right, I still don’t believe this is possible, WOW!

Exactly how I feel about the current NVIDIA fiasco (read some Beyond3D threads; they're not the usual fanboy wars). But it's not exactly related to the zbuffer.

I agree. They used to be good, they used to go fast, they used to…

they now bite the dust ^_^!
Smells like 3dfx (ack!)

OpenGL uses a 32-bit float to compute the final depth value stored in the depth buffer.
The 32-bit float is composed of 1 bit of sign, 23 bits of mantissa and 8 bits of exponent, so 24 bits to store the final depth value is clearly sufficient (because the final depth of a pixel is between 0 and 1, so the exponent is equal to 0).
A 32 bit z-buffer is NOT needed. IT DOESN'T SUCK. You don't need 32 bits if your value is computed with 24 bits. And it works like this for NVIDIA, ATI and even the good old SGI Reality Engine.

Originally posted by tayo:
IT DOESN'T SUCK.

Fine, there's no problem with this; I understood completely, but I still think a silent rollback would have been better since it's the same thing.

As for the precision which is not lost: the point is not precision, the point is that having a query which fails is bad, and since this query failure somewhat disoriented me I wanted to post about it. I never said "it sucks because it loses precision"; I just hate queries which fail, since I'm lazy right now and I don't want to add a double-check procedure. Really, there's no problem with the precision, I understand the meaning of IEEE s1m23e8 (still, thanks to everyone who replied, I learned something from this thread).

As soon as I want to put some effort into it again, I will do a double check, just in the (unlikely) case that one day a vendor finds a way to use 32 bits at "full precision" (knowing about half-float and fx12, who knows what will happen), but right now, I simply hate having to do it.

You know, the whole point of querying is to learn if a feature is available or not. The result of the query is not good or bad. It is merely information.

Originally posted by tayo:
[b]OpenGL uses a 32-bit float to compute the final depth value stored in the depth buffer.
The 32-bit float is composed of 1 bit of sign, 23 bits of mantissa and 8 bits of exponent, so 24 bits to store the final depth value is clearly sufficient (because the final depth of a pixel is between 0 and 1, so the exponent is equal to 0).
A 32 bit z-buffer is NOT needed. IT DOESN'T SUCK. You don't need 32 bits if your value is computed with 24 bits. And it works like this for NVIDIA, ATI and even the good old SGI Reality Engine.

[/b]

Sorry, but this is nonsense on two separate counts:
1) OpenGL nowhere specifies what data type shall be used for depth calculation. Stating that it uses floats (perhaps even IEEE-754) is just wrong. An implementation can use anything it sees fit. After all, you can't touch the depth buffer directly; you don't have surface locks in OpenGL.

2) Even if z were a single precision float, just discarding the exponent and sign bits would not give an equivalent integer representation. If you believe it would, you should read up on floating point formats. Floats do use their exponent bits when constrained to the [0…1] range. There's only one class of floating point numbers that can represent more than a power of two of magnitude range with a constant exponent: denormals. Single precision denormals are all smaller than 2^(-126). That's surely not what we're talking about here.
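
A quick way to convince yourself of the exponent point is this small stand-alone sketch (nothing OpenGL-specific, just printing the exponent field of a few single precision values between 0 and 1):

[code]
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Print the raw 8-bit exponent field of a single precision float to show
   that values inside [0..1] do use different exponents. */
static void show_exponent(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);       /* reinterpret the bit pattern */
    unsigned e = (bits >> 23) & 0xFFu;    /* biased exponent field */
    printf("%-10g exponent field %3u (unbiased %4d)\n", f, e, (int)e - 127);
}

int main(void)
{
    show_exponent(1.0f);      /* 127 (unbiased  0) */
    show_exponent(0.5f);      /* 126 (unbiased -1) */
    show_exponent(0.001f);    /* around 117        */
    show_exponent(1e-30f);    /* much smaller still */
    return 0;
}
[/code]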