Floating point FBO ping pong

I needed the precision of a floating point FBO when ping-ponging with OpenGL – otherwise strange patterns were forming. It works on my Radeon HD 2600 XT and NVIDIA GeForce 8800 GT. The chain ping-pongs about six times: SSAO, bloom, and depth blur.
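
For reference, roughly the kind of setup I mean – simplified to a single GL_RGBA16F_ARB color attachment per FBO, with illustrative names, and header paths that assume the Apple OpenGL framework; this is not the actual code:

#include <OpenGL/gl.h>
#include <OpenGL/glext.h>

GLuint fbo[2], tex[2];

/* two float textures, each attached to its own FBO */
void create_ping_pong(int w, int h)
{
    glGenFramebuffersEXT(2, fbo);
    glGenTextures(2, tex);
    for (int i = 0; i < 2; ++i) {
        glBindTexture(GL_TEXTURE_RECTANGLE_ARB, tex[i]);
        glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGBA16F_ARB,
                     w, h, 0, GL_RGBA, GL_FLOAT, NULL);
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo[i]);
        glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                                  GL_TEXTURE_RECTANGLE_ARB, tex[i], 0);
    }
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
}

/* each pass reads tex[src] and writes fbo[1 - src]; the caller swaps src */
void run_pass(int src)
{
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo[1 - src]);
    glBindTexture(GL_TEXTURE_RECTANGLE_ARB, tex[src]);
    /* ... draw a fullscreen quad with the post-process shader bound ... */
}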

Anyhow, I run the exact same thing on a GeForce 7300 GT. I check for extensions like GL_ARB_texture_float, and the card indicates it can handle them. I use only 4 color attachments, which does not exceed GL_MAX_COLOR_ATTACHMENTS_EXT (4 on this card). I also checked GL_MAX_TEXTURE_SIZE, and the buffer dimensions are under 4096. The depth buffer is GL_DEPTH_COMPONENT24, confirmed with glGetIntegerv on GL_DEPTH_BITS.
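
The checks themselves look roughly like this (a simplified sketch, assuming a legacy context where glGetString(GL_EXTENSIONS) and GL_DEPTH_BITS are still valid):

#include <stdio.h>
#include <string.h>
#include <OpenGL/gl.h>
#include <OpenGL/glext.h>

static int has_extension(const char *name)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    return ext != NULL && strstr(ext, name) != NULL;
}

void report_fbo_limits(void)
{
    GLint max_attachments = 0, max_tex_size = 0, depth_bits = 0;

    glGetIntegerv(GL_MAX_COLOR_ATTACHMENTS_EXT, &max_attachments);
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, &max_tex_size);
    glGetIntegerv(GL_DEPTH_BITS, &depth_bits);

    printf("GL_ARB_texture_float:  %s\n",
           has_extension("GL_ARB_texture_float") ? "yes" : "no");
    printf("max color attachments: %d\n", max_attachments);
    printf("max texture size:      %d\n", max_tex_size);
    printf("depth bits:            %d\n", depth_bits);
}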

I chose an internal format of GL_RGBA16F_ARB – is there a way to check support for this?

So it passes all these checks. Yet the moment my shader starts writing into the FBO – the glBegin(GL_POLYGON) that does the ping pong – OpenGL Profiler indicates the program has reverted to software rendering, and it runs painfully slow. :slight_smile: Unlike the other two cards.

I think the Radeon 9800 also claims it can do it.

I suppose it's a matter of turning this feature off if the card can't do it.

To answer your question, glGetTexLevelParameter(…GL_TEXTURE_INTERNAL_FORMAT…) will tell you if RGBA16F is natively supported.
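
Something along these lines – create the level-0 image with the format you want, then ask the driver what it actually stored (the rectangle target and size here are just placeholders):

#include <OpenGL/gl.h>
#include <OpenGL/glext.h>

/* Returns nonzero if the driver really kept GL_RGBA16F_ARB for a
   level-0 image of the given size, rather than silently substituting
   another format such as GL_RGBA8. */
int rgba16f_is_kept(int w, int h)
{
    GLuint tex;
    GLint actual = 0;

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_RECTANGLE_ARB, tex);
    glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGBA16F_ARB,
                 w, h, 0, GL_RGBA, GL_FLOAT, NULL);

    glGetTexLevelParameteriv(GL_TEXTURE_RECTANGLE_ARB, 0,
                             GL_TEXTURE_INTERNAL_FORMAT, &actual);

    glDeleteTextures(1, &tex);
    return actual == GL_RGBA16F_ARB;
}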

As for the rest of your post, what makes you think your fallback has anything to do with your render target setup? Shaders can fall back for many reasons.

And, for your last comment – no, it's not about turning off features that can't be completely supported in hardware. If that were the policy, nobody could export GLSL at all, or, for that matter, many other basic features like multitexture or VBOs. Software fallback is a necessary evil of staying OpenGL compliant.

I followed your suggestion and tested the internal texture format with:

glGetTexLevelParameteriv(GL_TEXTURE_RECTANGLE_ARB, 0, GL_TEXTURE_INTERNAL_FORMAT, &test_format);

The only difference it detected was that when the program passed in GL_RGBA, it returned GL_RGBA8. Otherwise it returned GL_RGBA16F_ARB for the floating point attachments.

All of the GLSL shaders compiled and linked properly. I'm starting to suspect it might be a texture indirection issue, though. I had a similar problem with shadow mapping, where I had to move the texture lookups into subroutines to get around it.

I found the issue. My SSAO shader has a loop with a count of 32, and that is not going to work too well on some cards. When I take that shader out, everything is hardware accelerated again. :slight_smile:
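
One way to deal with it (an illustrative sketch, not my actual shader – the uniform names and sample counts are made up) is to bake the sample count into the source as a compile-time constant, so weaker cards can be handed a loop small enough to unroll:

#include <stdio.h>
#include <OpenGL/gl.h>
#include <OpenGL/glext.h>

/* fragment shader template; NUM_SAMPLES is filled in per card */
static const char *ssao_frag_fmt =
    "#define NUM_SAMPLES %d\n"
    "uniform sampler2D depthTex;\n"
    "uniform vec2 sampleOffsets[NUM_SAMPLES];\n"
    "void main() {\n"
    "    float occlusion = 0.0;\n"
    "    for (int i = 0; i < NUM_SAMPLES; ++i) {\n"
    "        occlusion += texture2D(depthTex,\n"
    "            gl_TexCoord[0].st + sampleOffsets[i]).r;\n"
    "    }\n"
    "    gl_FragColor = vec4(occlusion / float(NUM_SAMPLES));\n"
    "}\n";

/* e.g. 8 samples on the 7300 GT, 32 on the 8800 GT / HD 2600 XT */
void load_ssao_source(GLuint shader, int low_end_card)
{
    char src_buf[1024];
    const char *src = src_buf;

    snprintf(src_buf, sizeof src_buf, ssao_frag_fmt, low_end_card ? 8 : 32);
    glShaderSource(shader, 1, &src, NULL);
}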

There is so much more performance on high-end cards these days…

Glad that you figured it out.

But while the newer cards are more capable, they still have their limits. Any hardware limit can be exceeded if you write a big enough shader (or simply use the noise() function), so we still need software fallback.
