True 10-bit frame buffer ... ?

Is there any other card that supports a true 10-bit per color channel frame buffer besides Matrox’s Parhelia?

I’m trying to do some real 10-bit rendering in OpenGL, but I found that the frame buffer of my NV30 card operates in 8-bit mode.

Although I can create a 16-bit per channel (64 bits per pixel) P-Buffer and render to that off-screen buffer, I have a lot of trouble retrieving the result back from the P-Buffer. Has anybody used a 16-bit P-Buffer before?

Any thoughts would be greatly appreciated.

I use them all the time in my HDR renderer.

I use glCopyPixels and wglMakeContextCurrentARB to copy the pbuffer to the frame buffer (the card will do the conversion from 64bit to 32bit).
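For example (a minimal sketch; pbuffer.hDC, screen.hDC and screen.hRC stand in for whatever handles your setup code produced):

// Make the pbuffer the READ drawable and the window the DRAW drawable,
// then glCopyPixels converts the 64-bit pixels down to the 32-bit window.
wglMakeContextCurrentARB(screen.hDC, pbuffer.hDC, screen.hRC);
glRasterPos2f(-1.0f, -1.0f); // bottom-left corner under identity transforms
glCopyPixels(0, 0, width, height, GL_COLOR);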

Another way to do it (since nVidia’s cards don’t support WGL_ARB_make_current_read) is to use the render texture extension; that will automatically give you a 64-bit texture (16-bit/channel fixed point, unless you use a floating-point pbuffer), or whatever matches the pixel format used. Then just draw a full-screen quad and map that texture onto it.
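Roughly like this (a sketch, assuming the pbuffer was created with the WGL_TEXTURE_FORMAT_ARB/WGL_TEXTURE_TARGET_ARB attributes and pbuffer.hBuf is its HPBUFFERARB handle):

wglMakeCurrent(screen.hDC, screen.hRC);
glBindTexture(GL_TEXTURE_2D, tex);
wglBindTexImageARB(pbuffer.hBuf, WGL_FRONT_LEFT_ARB); // pbuffer becomes the texture image
glEnable(GL_TEXTURE_2D);
glBegin(GL_QUADS);
glTexCoord2f(0, 0); glVertex2f(-1, -1);
glTexCoord2f(1, 0); glVertex2f( 1, -1);
glTexCoord2f(1, 1); glVertex2f( 1,  1);
glTexCoord2f(0, 1); glVertex2f(-1,  1);
glEnd();
glDisable(GL_TEXTURE_2D);
wglReleaseTexImageARB(pbuffer.hBuf, WGL_FRONT_LEFT_ARB); // release before rendering to the pbuffer again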

Thank you NitroGL.

I’m using an NV card, so I can’t use wglMakeContextCurrentARB, and I also failed with several other solutions, including wglShareLists, ARB_render_texture and NV_render_texture_rectangle.

The weirdest thing, which I can’t understand, is that even the most basic glReadPixels does not work as I expect with the P-Buffer:

// Draw into the P-Buffer with glDrawPixels, then read it back:
wglMakeCurrent(pbuffer.hDC, pbuffer.hRC);
glDrawPixels(width, height, GL_RGB, GL_UNSIGNED_SHORT, pbuf);
memset(pbuf, 0, bufsize); // clear the client memory so the read-back is verifiable
glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_SHORT, pbuf);
wglMakeCurrent(screen.hDC, screen.hRC);

// glReadPixels retrieves the content of the P-Buffer successfully
// when the content was rendered with glDrawPixels.

but,
// Render the same data as a textured quad instead:
wglMakeCurrent(pbuffer.hDC, pbuffer.hRC);

glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_SHORT, pbuf);
glEnable(GL_TEXTURE_2D);
glBegin(GL_QUADS);
glTexCoord2f(0, 0); glVertex2f(-1, -1);
glTexCoord2f(1, 0); glVertex2f( 1, -1);
glTexCoord2f(1, 1); glVertex2f( 1,  1);
glTexCoord2f(0, 1); glVertex2f(-1,  1);
glEnd();
glDisable(GL_TEXTURE_2D);
memset(pbuf, 0, bufsize);
glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_SHORT, pbuf);
wglMakeCurrent(screen.hDC, screen.hRC);

// Simply replacing glDrawPixels with the glTexImage2D/textured-quad
// path makes glReadPixels fail to retrieve the content of the P-Buffer!

Any comments?

You might want to be more specific about how it doesn’t work. I could imagine that the texture you attempted to render with was inconsistent, so texturing got disabled. (This typically leads to reading back white.) Also, I can’t see what your current color or texture environment mode is set to.

Finally, you probably want to provide a sized internal format to the driver. Specifying GL_RGB16 will make it more likely that the texture format selected will have enough bits to accurately represent the data.
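For example (a sketch; the query afterwards just confirms what the driver actually allocated):

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16, width, height, 0, GL_RGB, GL_UNSIGNED_SHORT, pbuf);

GLint redBits = 0;
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_RED_SIZE, &redBits);
// redBits should come back as 16 if the driver honored GL_RGB16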

-Evan

Thank you ehart.

The problem I met with the 16-bit P-Buffer rendered via glTexImage2D and the textured quad is:
glReadPixels fails with error code 0x502 (GL_INVALID_OPERATION).

If I remove the line wglMakeCurrent(pbuffer.hDC, pbuffer.hRC);,
the glTexImage2D code works perfectly with the 16-bit frame buffer (apart from the problem that glReadPixels can only retrieve 8 bits of meaningful data per channel from the frame buffer).

I tried GL_RGB16 with glTexImage2D but the problem is the same:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16, width, height, 0, GL_RGB, GL_UNSIGNED_SHORT, pbuf);

If the P-Buffer has the same color depth as the frame buffer (8-bit) and the same glTexImage2D code is used, glReadPixels returns successfully but nothing is actually read.

Originally posted by shadow9:
If I remove the line wglMakeCurrent(pbuffer.hDC, pbuffer.hRC);,
the glTexImage2D code works perfectly with the 16-bit frame buffer

I’m a little confused about your formats, so correct me if I’m wrong: you use (or try to use) a 16-bit/channel floating-point pbuffer with 16-bit/channel fixed-point textures. And you also work with a 16-bit (5/6/5) framebuffer?

(apart from the problem that glReadPixels can only retrieve 8 bits of meaningful data per channel from the frame buffer).

8-bit as in one channel is correct and the other two are garbage?

Hi roffe,

I’m just using a 16-bit/channel P-Buffer; I didn’t find any parameter to specify floating point or fixed point while creating the P-Buffer. My frame buffer is 32-bit (8-bit/channel RGBA).

I think I can only retrieve 8 bits of meaningful data per channel from the frame buffer, because:
I draw a pixel with the 16-bit/channel color (0x17c0, 0x1a00, 0x1a40) to the frame buffer, then read it back, and the color comes back as (0x1818, 0x1a1a, 0x1a1a). It looks like each value was rounded to its high byte, and the low byte was then set equal to the high byte.
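In other words (my reading of it; the rounding shown is only approximate):

// 0x17c0 quantized to an 8-bit channel, then read back as GL_UNSIGNED_SHORT:
unsigned short written  = 0x17c0;
unsigned char  stored   = (unsigned char)((written + 0x80) >> 8);   // rounds to 0x18
unsigned short readback = (unsigned short)((stored << 8) | stored); // 0x1818 = 0x18 * 257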

Thanks.

Originally posted by shadow9:
Hi roffe,

I’m just using a 16-bit/channel P-Buffer; I didn’t find any parameter to specify floating point or fixed point while creating the P-Buffer.

If I’m not mistaken, NVIDIA cards only support floating-point pbuffers at 16-bit or 32-bit/channel. For fixed point it is 8-bit/channel max. If you had created a fp pbuffer you would know about it, unless you are using someone else’s code. Post your pbuffer creation code so we can look at it.

If it really was the readpixels that failed, then I would suggest checking the pbuffer size and making sure you aren’t trying to read back more than the size of the allocated buffer.

As far as formats go, I don’t know what the latest NVIDIA cards do when allocating a pbuffer with the color depth set to 16 bits per channel. You probably ought to query the buffer for how many bits it actually has, to make sure you are getting what you think you are.
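For example, while the pbuffer context is current (a quick sanity-check sketch):

GLint redBits = 0;
glGetIntegerv(GL_RED_BITS, &redBits); // expect 16 for a true 16-bit/channel buffer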

-Evan

You are right, roffe. NVIDIA cards only support floating point 16bit or 32bit/channel pbuffers.

Previously, I just enumerated all the pbuffer formats and used wglGetPixelFormatAttribivARB to find the pixel format index I wanted. When I later switched to wglChoosePixelFormatARB, I found the problem: WGL_FLOAT_COMPONENTS_NV must be set to TRUE in order to find a pixel format with 16 or more bits per channel.
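The attribute list now looks roughly like this (a sketch; hDC is the window DC used for format selection, error handling omitted):

int attribs[] = {
    WGL_DRAW_TO_PBUFFER_ARB, TRUE,
    WGL_FLOAT_COMPONENTS_NV, TRUE, // required on NV for more than 8 bits/channel
    WGL_PIXEL_TYPE_ARB,      WGL_TYPE_RGBA_ARB,
    WGL_RED_BITS_ARB,   16,
    WGL_GREEN_BITS_ARB, 16,
    WGL_BLUE_BITS_ARB,  16,
    WGL_ALPHA_BITS_ARB, 16,
    0
};
int format;
UINT count;
wglChoosePixelFormatARB(hDC, attribs, NULL, 1, &format, &count);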

According to the spec of NV_float_buffer:

“INVALID_OPERATION is generated by Begin, DrawPixels, Bitmap, CopyPixels, or a command that performs an explicit Begin if the color buffer has a floating-point RGBA format and FRAGMENT_PROGRAM_NV is disabled.”
Does that mean I must enable FRAGMENT_PROGRAM_NV and use pixel shaders with the 16-bit P-Buffer? Is there any sample using the float buffer?

Thanks!

Originally posted by shadow9:
Does that mean I must enable FRAGMENT_PROGRAM_NV and use pixel shaders with the 16-bit P-Buffer?

Yes
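
Even a trivial pass-through program satisfies the spec requirement. A minimal sketch using NV_fragment_program (error checking omitted; id is just a fresh program name):

static const char prog[] =
    "!!FP1.0\n"
    "MOV o[COLR], f[COL0];\n" // just pass the interpolated color through
    "END\n";
GLuint id;
glGenProgramsNV(1, &id);
glLoadProgramNV(GL_FRAGMENT_PROGRAM_NV, id, (GLsizei)strlen(prog), (const GLubyte*)prog);
glBindProgramNV(GL_FRAGMENT_PROGRAM_NV, id);
glEnable(GL_FRAGMENT_PROGRAM_NV);
// ...render to the float pbuffer...
glDisable(GL_FRAGMENT_PROGRAM_NV);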

Originally posted by shadow9:
Is there any sample using the float buffer?

Try NVIDIA’s developer site. http://cvs1.nvidia.com/DEMOS/OpenGL/src/

This guy also has a pbuffer class that handles most of the pbuffer setup. http://www.cs.unc.edu/~harrism/misc/rendertexture.html

For shader/cg related questions regarding NVIDIA hw, be sure to check out http://www.cgshaders.org

Thank you roffe!

It’s my first time asking a question here, and I got immediate feedback. I’m very satisfied.