Actually, I’m rendering to fp16 pbuffers and using them as textures, via WGL_ARB_render_texture on Windows and glCopyTexSubImage2D on Linux.
I would have used ATI_texture_float, but I just could not get a floating-point pbuffer created on Linux without NV_float_buffer, and I don’t want to render into an NV_float_buffer pbuffer and then copy it over to an ATI float texture on both Windows and Linux. Here’s some code that should make it clearer.
On Windows, I did this:
int attr[] =
{
    WGL_SUPPORT_OPENGL_ARB,       TRUE,
    WGL_DRAW_TO_PBUFFER_ARB,      TRUE,
    WGL_BIND_TO_TEXTURE_RGBA_ARB, TRUE,
    WGL_PIXEL_TYPE_ARB,           WGL_TYPE_RGBA_FLOAT_ATI,
    WGL_RED_BITS_ARB,             16,
    WGL_GREEN_BITS_ARB,           16,
    WGL_BLUE_BITS_ARB,            16,
    WGL_ALPHA_BITS_ARB,           16,
    WGL_STENCIL_BITS_ARB,         8,
    WGL_DEPTH_BITS_ARB,           24,
    0
};

unsigned int count = 0;
int pixelFormat = 0;
float fattributes[1] = { 0 };  // no float attributes

// should check that count > 0 afterwards
wglChoosePixelFormatARB(hGLDC, attr, fattributes, 1, &pixelFormat, &count);

// pbuffer attributes: bind as a 2D RGBA texture
int pAttrib[] =
{
    WGL_PBUFFER_LARGEST_ARB, TRUE,
    WGL_TEXTURE_FORMAT_ARB,  WGL_TEXTURE_RGBA_ARB,  // was GL_TEXTURE_RGBA_ARB, which doesn't exist
    WGL_TEXTURE_TARGET_ARB,  WGL_TEXTURE_2D_ARB,
    0
};

hPBuffer = wglCreatePbufferARB(hGLDC, pixelFormat, width, height, pAttrib);
On Linux, I’m not sure what to do. I don’t see a GLX equivalent of WGL_PIXEL_TYPE_ARB / WGL_TYPE_RGBA_FLOAT_ATI, so I had to use GLX_FLOAT_COMPONENTS_NV, which requires the rectangle texture target (GL_TEXTURE_RECTANGLE_NV).
int attr[] =
{
    GLX_RENDER_TYPE_SGIX,   GLX_RGBA_BIT_SGIX,    // or GLX_RENDER_TYPE, GLX_RGBA_BIT
    GLX_DRAWABLE_TYPE_SGIX, GLX_PBUFFER_BIT_SGIX, // or GLX_DRAWABLE_TYPE, GLX_PBUFFER_BIT
    GLX_FLOAT_COMPONENTS_NV, True,
    GLX_RED_SIZE,     16,
    GLX_GREEN_SIZE,   16,
    GLX_BLUE_SIZE,    16,
    GLX_ALPHA_SIZE,   16,
    GLX_STENCIL_SIZE, 8,
    GLX_DEPTH_SIZE,   24,
    None
};

// desired pbuffer attributes
int pAttrib[] =
{
    GLX_PBUFFER_WIDTH,      width,
    GLX_PBUFFER_HEIGHT,     height,
    GLX_LARGEST_PBUFFER,    True,
    GLX_PRESERVED_CONTENTS, True,
    None
};
So this boils down to the question:
Can you create an fp16 pbuffer on Linux without using GLX_FLOAT_COMPONENTS_NV?