JOGL float pbuffers and NVIDIA cards

I'm programming a little Mandelbrot/Julia set renderer that uses two floating-point pbuffers to do some ping-pong rendering. The whole thing is implemented in Java, using the newest version of JOGL to access OpenGL. My problem is that on ATI hardware everything runs fine, but on NVIDIA hardware (tested on a GeForce FX 5950 and a 5800) I get an error when I try to create my float pbuffers (JOGL reports that it can't find a matching pixel format…). Here is the initialization code:

GLCapabilities cap=new GLCapabilities();
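The rest of the capability setup looks roughly like this (typed from memory, so the setter names may not exactly match the current JOGL API; the float-buffer flag in particular I'm not sure about):

```java
// Sketch of the pbuffer capability setup (JOGL setter names from
// memory; they may differ between JOGL versions).
GLCapabilities cap = new GLCapabilities();
cap.setDoubleBuffered(false);  // pbuffers are single-buffered
cap.setRedBits(32);            // request 32-bit float channels
cap.setGreenBits(32);
cap.setBlueBits(32);
cap.setAlphaBits(32);
cap.setDepthBits(24);          // z-buffer for the early-z experiments
// If your JOGL build has a float-buffer flag (later versions call it
// setPbufferFloatingPointBuffers), it would be set here as well.
```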



I'm not quite sure if I set all the attributes correctly…

Oh, and on a side note… is it possible that killing a fragment in a fragment program, when rendering into a floating-point pbuffer that has a z-buffer and z-test enabled, still results in the z value being written anyway? I'm asking because I tried adding some early-z rejection optimizations to the renderer, but I simply wasn't able to find a way to prevent z writes on a per-fragment basis (the alpha test seems to slow everything down as if it were running in software, and manually setting result.depth didn't work either…)

The GeForce FX 5800/5950 cards only support floating-point buffers via the GL_NV_float_buffer extension. When calling wglChoosePixelFormatARB, you need to make sure WGL_FLOAT_COMPONENTS_NV is in the attribute list.
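For reference, the attribute list on the native side would look something like this (constant values taken from wglext.h; written here as a Java int array, since that is what a JOGL-level chooser would eventually hand to the native call — the class and method names are just for illustration):

```java
public class FloatPbufferAttribs {
    // Constants from wglext.h (WGL_ARB_pixel_format / NV_float_buffer)
    static final int WGL_SUPPORT_OPENGL_ARB  = 0x2010;
    static final int WGL_PIXEL_TYPE_ARB      = 0x2013;
    static final int WGL_RED_BITS_ARB        = 0x2015;
    static final int WGL_GREEN_BITS_ARB      = 0x2017;
    static final int WGL_BLUE_BITS_ARB       = 0x2019;
    static final int WGL_ALPHA_BITS_ARB      = 0x201B;
    static final int WGL_DEPTH_BITS_ARB      = 0x2022;
    static final int WGL_TYPE_RGBA_ARB       = 0x202B;
    static final int WGL_DRAW_TO_PBUFFER_ARB = 0x202D;
    static final int WGL_FLOAT_COMPONENTS_NV = 0x20B0; // the key one

    static int[] attribs() {
        return new int[] {
            WGL_SUPPORT_OPENGL_ARB,  1,
            WGL_DRAW_TO_PBUFFER_ARB, 1,
            // NV3x float formats keep WGL_TYPE_RGBA_ARB and are flagged
            // via WGL_FLOAT_COMPONENTS_NV; a separate float pixel type
            // is the ATI variant, which is why ATI boards match but
            // NVIDIA boards find no format without this attribute.
            WGL_PIXEL_TYPE_ARB,      WGL_TYPE_RGBA_ARB,
            WGL_FLOAT_COMPONENTS_NV, 1,
            WGL_RED_BITS_ARB,   32,
            WGL_GREEN_BITS_ARB, 32,
            WGL_BLUE_BITS_ARB,  32,
            WGL_ALPHA_BITS_ARB, 32,
            WGL_DEPTH_BITS_ARB, 24,
            0 // list terminator
        };
    }
}
```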

I'm not familiar with JOGL, but there might be a GLCapabilities.setFloatComponentsNV(boolean) method you can use to set this. If not, you'll need to add it or create the pixel format manually.

Keep in mind that on GeForce FX 5xxx GPUs you can only use the TEXTURE_RECTANGLE texture target for floating-point textures. This means no mipmapping, no filtering, and your texture coordinates must be in the [0…texture width]×[0…texture height] range rather than [0…1].
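Since the unnormalized coordinates trip up a lot of ping-pong code, here is a tiny (purely illustrative, the helper name is made up) conversion from the usual normalized coordinates to what TEXTURE_RECTANGLE expects:

```java
public class RectCoords {
    // Hypothetical helper: NV_texture_rectangle targets address texels
    // in [0,width]x[0,height], so normalized [0,1] coordinates must be
    // scaled by the buffer dimensions before being sent to GL.
    static float[] toRect(float s, float t, int width, int height) {
        return new float[] { s * width, t * height };
    }
}
```

For a full-buffer ping-pong pass over a 256×128 pbuffer, the quad's corner coordinates would thus run from (0, 0) to (256, 128) instead of (0, 0) to (1, 1).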

Ah, thanks for the fast reply. OK, that means my little program simply won't run on NV3x hardware, since JOGL doesn't expose any additional functions to set pixel format parameters… oh well :slight_smile: