32-bit Z-buffer

Hi,
On my 8800 GTX, I cannot get a framebuffer with a 32-bit Z buffer (with full acceleration, 32-bit RGBA, multisampling…).
Is this possible on more recent cards?

Sorry, this is not a pure OpenGL question; it is rather NVIDIA-specific.

That looks strange; I have used float32 depth easily on G80+ hardware. I was creating an FBO, binding a floating-point depth buffer to it, and everything worked very well.
Maybe you are trying to create a plain 32-bit integer Z-buffer, as for the screen?
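For reference, here is a minimal sketch of that kind of setup, assuming a GL 3.0 context (or ARB_framebuffer_object plus ARB_depth_buffer_float, which G80 exposes); the function name and error handling are just illustrative:

```c
/* Sketch: FBO with a 32-bit floating-point depth attachment.
 * Requires a current GL 3.0+ context (or ARB_framebuffer_object +
 * ARB_depth_buffer_float). Function name is illustrative. */
#include <GL/glew.h>
#include <stdio.h>

GLuint create_depth32f_fbo(GLsizei width, GLsizei height)
{
    GLuint fbo, depth_rb;

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);

    /* 32-bit float depth renderbuffer (D32F). */
    glGenRenderbuffers(1, &depth_rb);
    glBindRenderbuffer(GL_RENDERBUFFER, depth_rb);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT32F, width, height);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, depth_rb);

    /* Depth-only FBO: disable color output so it can be complete. */
    glDrawBuffer(GL_NONE);
    glReadBuffer(GL_NONE);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
        fprintf(stderr, "D32F FBO incomplete on this driver\n");

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    return fbo;
}
```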

There is a difference between fixed-point 32-bit and floating-point 32-bit.

If I run glxinfo on Linux with my NVIDIA Quadro FX 3600M (equivalent to the GeForce 8 series), the visuals and FBConfigs it returns (the ones used to create default framebuffers) only offer a 24-bit fixed-point depth buffer.
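You can confirm this from inside a program too; with a context current, the classic GL_DEPTH_BITS query (deprecated in core profiles, but fine on this generation) reports what the visual actually provides:

```c
/* With a current (compatibility) GL context: report the depth size of the
 * default framebuffer. On GeForce 8 visuals this prints 24, even if the
 * pixel format requested 32. */
GLint depth_bits = 0;
glGetIntegerv(GL_DEPTH_BITS, &depth_bits);
printf("default framebuffer depth bits: %d\n", depth_bits);
```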

If you look at NVIDIA's texture format document:

http://developer.nvidia.com/object/nv_ogl_texture_formats.html

you can see (at the beginning of the first page) that a format like DEPTH_COMPONENT32 is actually implemented as DEPTH_COMPONENT24 (D24).

At the end of the first page, a format like DEPTH_COMPONENT32F (32-bit floating-point values) is supported.

(My) conclusion: you cannot get a default framebuffer with a 32-bit depth buffer (fixed-point or floating-point), but you can build a framebuffer object (FBO) with a depth buffer of 32-bit floating-point values.
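You can even watch the demotion happen: allocate a renderbuffer with the fixed-point DEPTH_COMPONENT32 format and ask the driver what it actually stored (a sketch assuming GL 3.0 / ARB_framebuffer_object; the size is arbitrary):

```c
/* Sketch (GL 3.0 / ARB_framebuffer_object): request fixed-point
 * DEPTH_COMPONENT32 and query what the driver actually allocated.
 * Per the format document above, expect this to print 24 on G80. */
#include <GL/glew.h>
#include <stdio.h>

void report_fixed_depth32_demotion(void)
{
    GLuint rb;
    GLint actual_bits = 0;

    glGenRenderbuffers(1, &rb);
    glBindRenderbuffer(GL_RENDERBUFFER, rb);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT32, 512, 512);
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_DEPTH_SIZE,
                                 &actual_bits);
    printf("requested 32 fixed-point depth bits, got %d\n", actual_bits);

    glDeleteRenderbuffers(1, &rb);
}
```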

I don't know about the 8800,
but the most that earlier NV40-class cards supported was a 24-bit depth buffer; if you ask for 32, they'll give you 16 or 24.

IIRC ATI OTOH (AOTA) have used 32-bit depth buffers for a long time.

So perhaps 32-bit just isn't supported.

Jackis: 32-bit Z is only supported on FBOs, not the default framebuffer.

Is this still the case for GT2xx-class hardware?
Could anybody with such a card tell me?

zed: you seem to like acronyms :wink:

golem:
Yes, I meant that for now it's still not possible to create a 'default' framebuffer with 32-bit depth precision.
Even with an FBO, you can't create 32-bit fixed-point depth, but you can do 32-bit float.