hi guys, I have a problem and am in great need of your help :-(

I have an image as a 2D texture, and I want to rotate it by rotating the texture matrix. The image has 12 bits per channel, but the framebuffer only has 8. While performing the rotation, OpenGL must be doing some internal computation, and I want to know the precision of that computation. Simply reading the pixel values back from the framebuffer won't work, because by then they have already been rounded off to 8 bits.

Does anybody have an idea how I can get at the pixel values before they are written to the framebuffer?

thanx a lot

alex

Reading 12 bits per channel from an 8 bit per channel display obviously doesn't preserve 12 bits of precision, so you can ditch that idea.

It’s also not possible to get data from the internal pipeline before it has been written to a color buffer.

This assumes your hardware supports a texture format with 12 or more bits; the GeForce 6, for example, supports 16 bit integer and float textures.

Shift your 12 bit data into the highest 12 bits of a 16 bit integer. You might also want to spread the color spectrum into the four lower bits so the values cover the full range.

Download that data as a 16 bit integer texture.

Allocate a floating point pixel buffer with at least 16 bits per channel. Draw your rotated image, read it back as unsigned shorts, and there you are: higher precision than 8 bits per channel (colors are in the range 0.0 to 1.0, and the mantissa of a 16 bit float is 10 bits).

For 23 bits of precision, use a 32 bit per channel floating point buffer. That gives you enough precision for the 16 bit integer data.

hi Relic, thanx for your help:-)

Sure, with an 8 bit framebuffer something must get rounded off if the texture has 12 bits. What I am doing now is mapping the high byte onto the R channel and the low byte onto the G channel, so I only need 8 bits for R and 8 bits for G. Afterwards I read the framebuffer back and recover the 12 bits of precision; that is, the low bits of the G channel are not all 0.

Then I tried a 16 bit texture, and something confusing occurred. For most values in the G channel, the last 4 bits are 0; only some pixel values have non-zero bits there. The texture is artificial, with every pixel value equally probable. Can I conclude that the internal OpenGL computation on my graphics card uses 12 bits? 12 bits of precision sounds really strange, and if it really is 12 bits, how can I explain the pixel values whose last 4 bits are not 0?

btw, my framebuffer is actually 10 bits.

Do you think that is correct? I really don't feel comfortable with the 12 bits.

thanx a lot

alex

I cannot say why some of your pixels generate data in the lower bits without seeing the OpenGL code.

What exactly do you want to achieve that makes you care about the least significant four bits?

Maybe you misunderstood: by a 16 bit integer texture I meant GL_ALPHA16 or GL_LUMINANCE16. Rendering that into a 32 bit per channel floating point p-buffer gives you plenty of precision.

hi Relic:

I am doing some image processing. All the images we have are 12 bits, which means the computation on the graphics card must have at least 12 bits of precision. Actually I am just testing this right now; in the future I may have 16 bit images, so I checked that as well.

From your experience, do you think 12 bits of precision on a graphics card is possible? Normally the precision is supposed to be a multiple of 8, right?

thanx again

alex

You should have enough precision if you migrate textures and framebuffers to 32 bit floats per channel.

The IEEE single precision float format uses 1 sign bit, 8 exponent bits, and 23 mantissa bits.

This gives you 23 bits of precision in most color cases (range 0.0 to 1.0).