Rendering to an FBO: values are clamped

I’m rendering to multiple render targets through a framebuffer object, and my output values are being clamped to [0, 1]. Can anyone spot an issue, or is there a state variable set incorrectly? Can I prevent the clamping?

I’m using a Quadro FX 1600M w/ driver version 191.66.


glBindTexture(GL_TEXTURE_2D, m_texture1);
glTexImage2D(GL_TEXTURE_2D, 0, 4,  width, height, 0, GL_RGBA, GL_FLOAT, 0);

glBindTexture(GL_TEXTURE_2D, m_texture2);
glTexImage2D(GL_TEXTURE_2D, 0, 4,  width, height, 0, GL_RGBA, GL_FLOAT, 0);

glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, m_name);

glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, m_pDepthBuffer->ID());

glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, m_texture1, 0);

glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT1_EXT, GL_TEXTURE_2D, m_texture2, 0);

glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
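A quick way to rule out attachment problems is to check completeness while the FBO is still bound (a sketch using the same EXT_framebuffer_object entry points as above):

glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, m_name);
GLenum status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
if (status != GL_FRAMEBUFFER_COMPLETE_EXT)
    printf("FBO incomplete: 0x%04x\n", status); // any value other than COMPLETE means the FBO is unusable
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);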

You’ve requested internal format “4”. If you query which internal format was actually chosen (with glGetTexLevelParameteriv(… GL_TEXTURE_INTERNAL_FORMAT …)), it will most likely be GL_RGBA8, so of course it is clamped.

If you want a float internal format you must request a float internal format.

Use GL_RGBA16F_ARB or GL_RGBA32F_ARB for the internal format. Those won’t get clamped.
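For example, against the snippet above (a sketch; the tokens come from ARB_texture_float):

glBindTexture(GL_TEXTURE_2D, m_texture1);
// Request a genuinely floating-point internal format instead of "4":
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F_ARB, width, height, 0, GL_RGBA, GL_FLOAT, 0);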

Actually you need to call:
glClampColor(GL_CLAMP_FRAGMENT_COLOR, GL_FALSE);
glClampColor(GL_CLAMP_VERTEX_COLOR, GL_FALSE);
glClampColor(GL_CLAMP_READ_COLOR, GL_FALSE);

From my own experience, it doesn’t matter what your texture format is; the values will still get clamped if you don’t make the calls above.
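If you’re on a pre-3.0 context, the same state is exposed by ARB_color_buffer_float (a sketch using the ARB entry point and tokens):

glClampColorARB(GL_CLAMP_VERTEX_COLOR_ARB, GL_FALSE);   // don't clamp vertex-stage color outputs
glClampColorARB(GL_CLAMP_FRAGMENT_COLOR_ARB, GL_FALSE); // don't clamp fragment colors
glClampColorARB(GL_CLAMP_READ_COLOR_ARB, GL_FALSE);     // don't clamp on readback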

That’s untrue. The default state will clamp color varyings output by the vertex stage, but it will not clamp fragment colors if the framebuffer is floating point.

See the documentation about GL_FIXED_ONLY.
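You can query which clamp mode is actually in effect (a sketch; the default for GL_CLAMP_FRAGMENT_COLOR is GL_FIXED_ONLY):

GLint clamp = 0;
glGetIntegerv(GL_CLAMP_FRAGMENT_COLOR, &clamp);
// GL_FIXED_ONLY means fragment colors are clamped only when the target
// buffer is fixed-point; a float FBO should be left unclamped.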

No, it definitely is true (on NVIDIA drivers, anyway). I know because I’ve run into this too, doing HDR into a float FBO. What’s in the spec is neither here nor there.

What’s in the spec is very relevant. Because if the spec says that it should work, and NVIDIA’s drivers don’t allow it, then it’s a driver bug. One that should be reported to NVIDIA.

It should not be ignored just because it’s how NVIDIA does it.

You forgot to put the smilie at the end of that, Alfonse, for you couldn’t be anything other than joking.

Everyone’s joking unless otherwise specified. Well, I can only speak for myself.

You sure you’re seeing a clamp and not an artifact of some other condition? I don’t own a Quadro, but for what it’s worth, MRT MSAA on a G80 is all gravy. I’m talking GL 3.2 core and the 195.62 drivers.
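One quick test (a sketch, assuming the float FBO is bound and ARB_color_buffer_float is present, so the clear color is stored unclamped): clear to a value above 1.0 and read it back as floats, with read clamping off so glReadPixels doesn’t clamp on the way out.

glClampColor(GL_CLAMP_READ_COLOR, GL_FALSE); // don't clamp during readback
glClearColor(4.0f, 0.0f, 0.0f, 1.0f);        // deliberately out-of-range red
glClear(GL_COLOR_BUFFER_BIT);
GLfloat pixel[4];
glReadPixels(0, 0, 1, 1, GL_RGBA, GL_FLOAT, pixel);
// pixel[0] == 4.0f -> the attachment really is float and unclamped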

> You forgot to put the smilie at the end of that, Alfonse, for you couldn’t be anything other than joking.

Yes, you’re right. We should forget about anything except how NVIDIA drivers behave. Don’t bother filing that bug report; we’ll just rename OpenGL to NVIDIA_GL and complain to ATI about each and every difference from NVIDIA’s driver behavior. Maybe we’ll even get NVIDIA to publish an NVIDIA_GL specification.

And the only reason there is no smilie at the end of that is because this forum doesn’t have a proper sarcasm/rolleye smilie.

I acknowledge that NVIDIA’s OpenGL drivers are very good. But that doesn’t mean you should accept driver bugs from them.

File a bug report at:

https://nvdeveloper.nvidia.com/

In my experience they’re very responsive to bugs filed using this form.
