Floating-point textures and mipmapping

Bit of a simple question here. In general, is mipmapping supported for floating point textures on modern desktop hardware?

I am using a mipmapped texture in a reduction operation to gather the average/max/min scene luminance for tone mapping. I use the standard code for custom generation of mipmaps along with a custom downsampling shader.

However, the operation fails on ATI hardware, with NaNs cropping up at random in the mipmapped texture. It also appears that rendering into the mipmap levels doesn’t work (I can’t overwrite the NaNs).

However, if I change the data type back to GL_RGBA the results are sensible, but at unusable precision.

The whole implementation works fine on NVidia hardware.

If requested I can post code.

I use the standard code for custom generation of mipmaps along with a custom downsampling shader.

Which “standard code for custom generation of mipmaps”? Are you using the old GL_GENERATE_MIPMAP texture parameter, or glGenerateMipmap?

However, the operation fails on ATI hardware

Which ATI hardware?

The standard code I meant is usage example (5) of GL_EXT_framebuffer_object. My code uses the OpenGL 3.3 core profile.

Short story is that, on initialisation, I use glGenerateMipmap to ensure the mipmap chain is available/allocated. Then, when I actually want to generate the mipmaps, I use render-to-texture to fill each successively smaller mipmap level, reading from the previous one. I need a custom shader for this rather than glGenerateMipmap because I want to store the log-average, the minimum, and the maximum of the original texture values.
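As background for the loop below: the number of levels in a complete chain and the per-level sizes follow the usual halving rule, floor(log2(max(w, h))) + 1 levels with each dimension clamped to a minimum of 1. A minimal standalone sketch (plain C++, no GL; `mipLevels` and `mipDimensions` are my own illustrative names, not from the repository):

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// Number of levels in a complete mipmap chain: floor(log2(max(w, h))) + 1.
int mipLevels(int w, int h)
{
  int levels = 1;
  int m = std::max(w, h);
  while (m > 1) { m /= 2; ++levels; }
  return levels;
}

// Per-level dimensions: each level halves the previous one,
// clamped so neither dimension drops below 1.
std::vector<std::pair<int, int> > mipDimensions(int w, int h)
{
  std::vector<std::pair<int, int> > dims;
  dims.push_back(std::make_pair(w, h));
  while (w > 1 || h > 1)
    {
      w = std::max(1, w / 2);
      h = std::max(1, h / 2);
      dims.push_back(std::make_pair(w, h));
    }
  return dims;
}
```

For an 800×600 texture this gives a 10-level chain ending at 1×1, which is what the reduction below walks through.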

The AMD hardware I tested on and had problems with is the HD 6850. I confirmed the algorithm works on a Quadro 4000, a GTX 560M, and a GTX 260M. The platform is Linux (Gentoo/Ubuntu) and I’m using the latest binary drivers.

There’s no way to tell what’s going on without actually seeing what you’re doing. You haven’t shown that any attempt to use mipmapped float textures fails; you’ve shown that something isn’t working in the specific code you’re running.

We can’t really do anything without seeing what your code is specifically doing. Not just a description of the algorithm, but the shaders themselves.

If it’s ATI, it may be an instance of this: http://www.opengl.org/wiki/Common_Mistakes#Automatic_mipmap_generation

Warning: It has been reported that on some ATI drivers, glGenerateMipmap(GL_TEXTURE_2D) has no effect unless you precede it with a call to glEnable(GL_TEXTURE_2D) in this particular case. Once again, to be clear, bind the texture, glEnable, then glGenerateMipmap. This is a bug and has been in the ATI drivers for a while. Perhaps by the time you read this, it will have been corrected. (glGenerateMipmap doesn’t work on ATI as of 2011)

Thanks for the heads-up. I tried adding glEnable(GL_TEXTURE_2D); after my context was set up, and it did not fix the problems I’m having.

I can appreciate that, but my question is more general than my code: is mipmapping of floating-point textures well supported on modern ATI hardware? This article on the wiki states that it is not available on the X300 and above, but the article is quite old. I just want to know if it is still correct.

But I’m happy to share my code, so I’ll paste the relevant sections below, though with all of the boilerplate I don’t think I can make a minimal example. Full sources are available here.

I initialise the floating-point buffer like so:

  colorTexture(new magnet::GL::Texture2D);
  colorTexture->init(_camera.getWidth(), _camera.getHeight(), GL_RGB16F);
  colorTexture->parameter(GL_TEXTURE_MAG_FILTER, GL_NEAREST);
  colorTexture->genMipmaps(); //Ensure the mipmap chain is built/available for writing into

  _luminanceBuffer.attachTexture(colorTexture, 0);

I then fill it with luminance data using the following shader:

uniform sampler2D colorTex;
in vec2 screenCoord;
layout (location = 0) out vec4 L_out;

void main()
{
  vec4 color = texture(colorTex, screenCoord).rgba;
  float L = dot(color.rgb, vec3(0.265068, 0.67023428, 0.06409157));
  //Prevent negative logarithms
  L_out = vec4(log(max(10.0e-8, L)), L, L, 1.0);
}
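The reason for storing log(L) rather than L is that averaging the logs and then exponentiating yields the geometric mean, the standard “log-average” luminance used in tone mapping; the epsilon clamp guards against log(0) on black pixels. A CPU sketch of the same idea (my own illustration, not code from the repository):

```cpp
#include <cmath>
#include <vector>

// Log-average (geometric mean) luminance: exp(mean(log(max(eps, L)))).
// The epsilon guards against taking log(0) on black pixels.
double logAverageLuminance(const std::vector<double>& lum)
{
  const double eps = 1.0e-8;
  double sum = 0.0;
  for (std::size_t i = 0; i < lum.size(); ++i)
    sum += std::log(std::max(eps, lum[i]));
  return std::exp(sum / lum.size());
}
```

For example, luminances 0.25 and 4.0 give a log-average of 1.0 (the geometric mean), where a plain arithmetic mean would give 2.125 and over-weight the bright pixel.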

I generate the mipmaps using this code:

//Now we need to generate the mipmaps containing the scene
//average, minimum and maximum luminances
magnet::GL::Texture2D& tex = *_luminanceBuffer.getColorTexture();
GLsizei currentWidth = tex.getWidth();
GLsizei currentHeight = tex.getHeight();
GLint numLevels = tex.calcMipmapLevels();

//Ensure the luminance buffer is both attached and its color
//texture bound

//Attach the mipmapping shader
_luminanceMipMapShader["inputTex"] = 0;
for (int i = 1; i < numLevels; ++i)
  {
    GLsizei oldWidth = currentWidth;
    GLsizei oldHeight = currentHeight;
    //Halve the size of the textures, ensuring they never drop below 1
    currentWidth /= 2; currentWidth += !currentWidth;
    currentHeight /= 2; currentHeight += !currentHeight;
    _glContext->setViewport(0, 0, currentWidth, currentHeight);

    //Restrict sampling to the previous level only
    tex.parameter(GL_TEXTURE_BASE_LEVEL, i - 1);
    tex.parameter(GL_TEXTURE_MAX_LEVEL, i - 1);
    //Attach mipmap level i as the render target
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
			   tex.getGLType(), tex.getGLHandle(), i);

    //Now generate the mipmap level using a shader
    std::tr1::array<GLfloat, 2> oldInvDimensions = {{1.0 / oldWidth,
						     1.0 / oldHeight}};
    _luminanceMipMapShader["oldInvDimensions"] = oldInvDimensions;
    std::tr1::array<GLint, 2> oldDimensions = {{oldWidth, oldHeight}};
    _luminanceMipMapShader["oldDimensions"] = oldDimensions;
    //(A fullscreen-quad draw runs the shader here; elided for brevity)
  }

//Rebind mipmap 0 to the framebuffer
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
		       tex.getGLType(), tex.getGLHandle(), 0);
_glContext->setViewport(0, 0, tex.getWidth(), tex.getHeight());

tex.parameter(GL_TEXTURE_BASE_LEVEL, 0);
tex.parameter(GL_TEXTURE_MAX_LEVEL, numLevels - 1);
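As an aside, the `currentWidth /= 2; currentWidth += !currentWidth;` idiom above is just a branch-free way of writing max(1, w / 2), since `!w` is 1 only when the halved value has reached 0. A quick standalone check of the equivalence:

```cpp
#include <algorithm>

// Branch-free halving that never drops below 1: w/2, then +1 iff the
// result was 0 (because !0 == 1 and !nonzero == 0).
int halveClamped(int w)
{
  w /= 2;
  w += !w;
  return w;
}
```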

And this shader:

layout (location = 0) out vec4 L_out;
uniform sampler2D inputTex;
uniform ivec2 oldDimensions;
uniform vec2 oldInvDimensions;
uniform float downscale = 2.0;

vec3 data = vec3(0.0);
float divider = 0.0;

//The argument is named samp, as "sample" is a reserved word in GLSL
void combine(in vec4 samp)
{
  //If this is the first sample, just copy the min/max values
  if (divider == 0.0)
    {
      data.r = 0.0;
      data.g = samp.g;
      data.b = samp.b;
    }

  //Accumulate the value for averaging
  data.r += samp.r;
  divider += 1.0;

  //Store the maximum value
  data.g = max(samp.g, data.g);

  //Store the minimum value
  data.b = min(samp.b, data.b);
}

vec4 output_frag()
{
  return vec4(data.r / divider, data.g, data.b, 1.0);
}

void main()
{
  vec2 oldPixelOrigin = (downscale * gl_FragCoord.xy - vec2(0.5, 0.5)) * oldInvDimensions;

  combine(textureOffset(inputTex, oldPixelOrigin, ivec2(0, 0)));
  combine(textureOffset(inputTex, oldPixelOrigin, ivec2(0, 1)));
  combine(textureOffset(inputTex, oldPixelOrigin, ivec2(1, 0)));
  combine(textureOffset(inputTex, oldPixelOrigin, ivec2(1, 1)));

  //Now determine if we need to add extra samples in case of
  //non-power-of-two textures
  bool extraXSamples = (2 * (int(gl_FragCoord.x) + 1) == oldDimensions.x - 1);
  bool extraYSamples = (2 * (int(gl_FragCoord.y) + 1) == oldDimensions.y - 1);
  if (extraXSamples)
    {
      combine(textureOffset(inputTex, oldPixelOrigin, ivec2(2, 0)));
      combine(textureOffset(inputTex, oldPixelOrigin, ivec2(2, 1)));
    }
  if (extraYSamples)
    {
      combine(textureOffset(inputTex, oldPixelOrigin, ivec2(0, 2)));
      combine(textureOffset(inputTex, oldPixelOrigin, ivec2(1, 2)));
    }
  if (extraXSamples && extraYSamples)
    combine(textureOffset(inputTex, oldPixelOrigin, ivec2(2, 2)));

  L_out = output_frag();
}
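For anyone debugging a similar reduction, here is a CPU reference of what one pass computes (a sketch of my own, not the repository's code): each output texel combines a 2×2 block of the previous level, averaging the log-luminance channel while tracking max and min, and when an input dimension is odd the last output texel absorbs the leftover row/column. Out-of-range reads clamp to the edge, roughly mimicking GL_CLAMP_TO_EDGE sampling.

```cpp
#include <algorithm>
#include <vector>

// A texel of the luminance chain: r = log-luminance (summed, then averaged),
// g = running maximum, b = running minimum.
struct Texel { double r, g, b; };
typedef std::vector<std::vector<Texel> > Image;

// Reduce one mip level on the CPU.
Image reduceLevel(const Image& in)
{
  int oldH = (int)in.size(), oldW = (int)in[0].size();
  int newH = std::max(1, oldH / 2);
  int newW = std::max(1, oldW / 2);
  Image out(newH, std::vector<Texel>(newW));

  for (int y = 0; y < newH; ++y)
    for (int x = 0; x < newW; ++x)
      {
        //Odd input dimensions fold an extra column/row into the last texel
        int xSpan = (2 * (x + 1) == oldW - 1) ? 3 : 2;
        int ySpan = (2 * (y + 1) == oldH - 1) ? 3 : 2;

        Texel acc = { 0.0, in[2 * y][2 * x].g, in[2 * y][2 * x].b };
        int count = 0;
        for (int dy = 0; dy < ySpan; ++dy)
          for (int dx = 0; dx < xSpan; ++dx)
            {
              //Clamp reads to the image edge
              int sy = std::min(2 * y + dy, oldH - 1);
              int sx = std::min(2 * x + dx, oldW - 1);
              acc.r += in[sy][sx].r;
              acc.g = std::max(acc.g, in[sy][sx].g);
              acc.b = std::min(acc.b, in[sy][sx].b);
              ++count;
            }
        acc.r /= count;
        out[y][x] = acc;
      }
  return out;
}
```

Repeatedly applying this until the image is 1×1 gives the same (mean log, max, min) triple the shader chain produces, which makes it easy to diff against a glReadPixels of each level.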

This article on the wiki states that it is not available for x300 and above, but this article is quite old. I just want to know if the article is still correct.

The page has been given an enema. Accurate information can now be found there.

And yes, ATI hardware from the HD era onwards should be fine with floating-point mipmaps and filtering.

Right, thanks for the updates to the wiki page and the clarification. I’ll have to make a minimal example after all just to find out what I’m doing wrong.