I’m using this format for fake HDR on SM 2.0 cards.
10-bit precision gives a 0–1023 color range, so I render everything 4 times darker and multiply the color back by 4 in the final stage. Unfortunately this leads to a loss of color precision: I get the same precision as with the GL_RGB8 format.
Hardware: both a GeForce 7800 GT and a Radeon X850.
Yeah, I have that PDF too. It explains why the GeForce loses precision, but I’m more interested in the Radeons, since the GeForce supports true HDR anyway.
I’ve read the description of ATI’s ‘Toy Shop’ demo. They use the RGB10_A2 format for HDR, so I guess the Radeon X1k series supports it.
Well, I’ll just leave it at that - my application will use the RGB10 format, and if the GPU supports it, color precision will be fine.
My Radeon X800 XT seems to support RGB10_A2 textures without precision loss.
Creating a texture with this format and then querying the actual component sizes (GL_TEXTURE_RED_SIZE, …) with glGetTexLevelParameter yields 10 bits for RGB and 2 bits for alpha.
The question is whether you can bind it as a render target and actually render to it in this format.
Binding the RGB10_A2 texture as a render target worked as well.
glCheckFramebufferStatusEXT returns GL_FRAMEBUFFER_COMPLETE_EXT.
It even returns GL_FRAMEBUFFER_COMPLETE_EXT for RGBA16 textures.
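For reference, the create-query-attach sequence described above looks roughly like this (a sketch, not a complete program — it assumes a live GL context and the EXT_framebuffer_object entry points loaded, e.g. via GLEW; the function name `check_rgb10_a2_rendertarget` is my own):

```c
#include <GL/glew.h>
#include <stdio.h>

void check_rgb10_a2_rendertarget(int w, int h) {
    GLuint tex, fbo;
    GLint r, a;

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB10_A2, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_INT_10_10_10_2, NULL);

    /* Query what the driver actually allocated. */
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_RED_SIZE, &r);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_ALPHA_SIZE, &a);
    printf("red bits: %d, alpha bits: %d\n", r, a); /* 10 / 2 on the cards above */

    /* Try it as a render target. */
    glGenFramebuffersEXT(1, &fbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, tex, 0);
    if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) ==
        GL_FRAMEBUFFER_COMPLETE_EXT)
        printf("RGB10_A2 render target is framebuffer-complete\n");
}
```

Note that GL_FRAMEBUFFER_COMPLETE_EXT only means the attachment combination is renderable, not that the driver kept the full 10-bit precision internally — hence the usefulness of the size queries above.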