Luminance alpha float16 on Mac not working?

Hello,

I’m on Mac OS X 10.6.3 with a GeForce GTX 260 (though the problem seems to be the same on other Macs too). I’m using textures with the GL_LUMINANCE_ALPHA16F_ARB format and they don’t seem to work: they either don’t retain anything or show noise. If I change the format to GL_RGBA16F_ARB, it works.

Is there a workaround, a trick, or an Apple-specific LA16F format?

Thanks

This format works correctly for me on all ATI and NV renderers, in 10.6.x.

Can you post some code demonstrating the problem?

Here’s the code that causes an error:

glGenFramebuffersEXT(1, &patternfbo);
glGenTextures(1,&patterntex);
glBindTexture(GL_TEXTURE_2D, patterntex);
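// Allocate float16 luminance-alpha storage with no initial data; this is the internal format that later fails as a color attachment.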
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE_ALPHA16F_ARB, width, height, 0, GL_RGBA, GL_FLOAT, 0);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S,GL_CLAMP_TO_BORDER);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T,GL_CLAMP_TO_BORDER);
glBindTexture(GL_TEXTURE_2D, 0);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, patternfbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT,GL_COLOR_ATTACHMENT0_EXT,GL_TEXTURE_2D,patterntex,0);
glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT); // returns GL_FRAMEBUFFER_UNSUPPORTED_EXT on Mac OS X 10.6.3!

Now if I replace GL_LUMINANCE_ALPHA16F_ARB with GL_RGBA16F_ARB, the FBO is created with no problem.
I don’t have this problem on Windows.

I found that replacing it with GL_RG16F works. Why doesn’t it work with LA16F? Is RG16F compatible with ATI-based Macs?
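
For reference, the only line that changed is the allocation call; a minimal sketch of the RG16F variant (same width/height and FBO setup as the code above; I also switched the client format to GL_RG, though with a NULL data pointer that argument doesn’t matter much):

// Two-channel half-float storage; with this the FBO reports complete here.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RG16F, width, height, 0, GL_RG, GL_FLOAT, 0);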

It wasn’t clear from your first post that you were talking about rendering, and not sampling.

Rendering to L/LA/A/I formats is not supported by any Mac renderer. These formats weren’t allowed as color attachments by EXT_framebuffer_object. They are allowed by ARB_framebuffer_object, but not mandated, and they’re deprecated in GL3.

R/RG formats are allowed by ARB_framebuffer_object and mandated by GL3. They are renderable on all Mac renderers that export ARB_texture_rg.
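
If you want to be defensive about it, you can also probe renderability directly instead of trusting the extension string; a rough sketch (the helper name is mine; error checking is omitted and it clobbers the current texture/FBO bindings):

#include <OpenGL/gl.h>
#include <OpenGL/glext.h>

// Attach a tiny texture of the given internal format to a throwaway FBO and
// ask the driver whether the result is framebuffer-complete.
static GLboolean isRenderable(GLenum internalFormat, GLenum format, GLenum type)
{
    GLuint fbo, tex;
    GLenum status;

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, internalFormat, 4, 4, 0, format, type, 0);

    glGenFramebuffersEXT(1, &fbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, tex, 0);
    status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);

    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
    glDeleteFramebuffersEXT(1, &fbo);
    glDeleteTextures(1, &tex);

    return status == GL_FRAMEBUFFER_COMPLETE_EXT;
}

For example, isRenderable(GL_RG16F, GL_RG, GL_FLOAT) versus isRenderable(GL_LUMINANCE_ALPHA16F_ARB, GL_LUMINANCE_ALPHA, GL_FLOAT) should show the difference you are seeing.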

I’m also using LA16 (unsigned int): should I render to RG16UI instead? That would be a problem, since it is only supported since G80 (http://developer.download.nvidia.com/opengl/texture_formats/nv_ogl_texture_formats.pdf) and I want my code to run on older GPUs…

Actually, LA16 (unsigned int) is not supported in hardware by earlier GPU generations. Integer textures are only supported on SM4.0-level hardware (i.e. G80 and later).
OpenGL has advertised some texture formats since GL 1.0 that are not hardware accelerated; LA16 is one of them.

Oh, OK. So I’ll stick with RG16F. That format has been supported since NV40, so that’s fine for me, but do you know about ATI? I’m a bit worried, since I don’t see any reference to RG16F in Appendix G of this document:
ATI OpenGL Programming and Optimization Guide

As far as I know, NVIDIA supports RG16F only since G80 (and the texture format support list you linked says the same).
Actually, ATI is the one that supports GL_ARB_texture_rg from an earlier generation, namely since the Radeon X1000 series.

Indeed, NVIDIA has supported RG16F only since G80, but looking at the table they have also supported FLOAT_RG16 since NV40 (NV_float_buffer), which looks like the same thing to me?

True, but then you must be careful to use the proper texture format for each generation (see the sketch after this list):

  1. On GeForce 6000 series (NV40) and later GPUs that support NV_float_buffer, use FLOAT_RG16_NV.
  2. On Radeon 9550 and later GPUs that support ATI_texture_float, use LUMINANCE_ALPHA_FLOAT16_ATI.
  3. On GeForce 8000 series (G80) or Radeon X1000 and later GPUs that support ARB_texture_rg, use the RG16F format.
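
A rough selection sketch along those lines (the extension and format names are the ones listed above; the helper name and fallback order are just illustrative, not a definitive implementation):

#include <OpenGL/gl.h>
#include <OpenGL/glext.h>
#include <OpenGL/glu.h>   // gluCheckExtension

// Pick a two-channel half-float internal format based on what the renderer exports,
// preferring the most portable choice first.
static GLenum pickTwoChannelHalfFloatFormat(void)
{
    const GLubyte *ext = glGetString(GL_EXTENSIONS);

    if (gluCheckExtension((const GLubyte *)"GL_ARB_texture_rg", ext))
        return GL_RG16F;                        // G80 / Radeon X1000 and later
    if (gluCheckExtension((const GLubyte *)"GL_NV_float_buffer", ext))
        return GL_FLOAT_RG16_NV;                // GeForce 6000 (NV40) and later;
                                                // NV_float_buffer formats are restricted to texture rectangle targets
    if (gluCheckExtension((const GLubyte *)"GL_ATI_texture_float", ext))
        return GL_LUMINANCE_ALPHA_FLOAT16_ATI;  // Radeon 9550 and later (per the earlier post, not renderable on Mac)
    return 0;                                   // nothing suitable found
}

The returned token then goes into the internalformat argument of glTexImage2D, as in the earlier snippets.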
