more than 8 bits display

Hi everyone,
I'm quite new to OpenGL. I'm working on a video player that needs to handle 10-bit frames.
Currently my app can convert a YUV frame to RGB, but only for 8-bit frames.
I would like to extend it to 10-bit frames (stored as 16-bit little-endian values).
The idea is to do the conversion in the shader:
16-bit YUV -> 8-bit RGB.
This is the point where I'm stuck…
Here is how it works today for one component:


            glActiveTexture(GL_TEXTURE0); //select active texture unit
            glBindTexture(GL_TEXTURE_2D, id_y); //bind a named texture to a texturing target
            glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, size_plane[0][0], size_plane[0][1], 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, plane2Display[0]); //specify a two-dimensional texture image
            glUniform1i(textureUniformY, 0);//Specify the value of a uniform variable for the current program object

And here is the code of my shader:

varying vec2 textureOut;
uniform sampler2D tex_y;
uniform sampler2D tex_u;
uniform sampler2D tex_v;
void main(void)
{
    vec3 yuv;
    vec3 rgb;    
    yuv.x = texture2D(tex_y, textureOut).r;
    yuv.y = texture2D(tex_u, textureOut).r - 0.5;
    yuv.z = texture2D(tex_v, textureOut).r - 0.5;
    rgb = mat3( 1,       1,         1,
                0,       -0.39465,  2.03211,
                1.13983, -0.58060,  0) * yuv;    
    gl_FragColor = vec4(rgb, 1);
}

What I want to do is simply a shift and cast of my 16-bit values down to 8 bits, but I really don't understand how the formats are managed…

thank you for your help!

Gabriel

For an 8-bit unsigned normalised texture (GL_R8), a byte with a value of 255 corresponds to a result of 1.0 from texture(). For a 16-bit unsigned normalised texture (GL_R16), a word with a value of 65535 corresponds to a result of 1.0 from texture(), while a word with a value of 1023 corresponds to a result of 1023/65535 ~= 0.0156 from texture(). So scale the results from texture() by 65535/1023 if the texture contains 10-bit values.

So should I just replace the “gl_FragColor = vec4(rgb, 1);” line with something like “gl_FragColor = vec4(rgb, 1) * 256;”, for example?
(Or fold the 8-bit to 16-bit scale factor into the 3x3 matrix used to compute the rgb variable.)

Conceptually, applying the scale directly to the value returned from texture() is easiest. If you don’t do that, you’ll also need to scale the -0.5 offset applied to U and V.
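In the shader, that would look something like the sketch below (assuming the 10-bit samples occupy the low bits of an unsigned normalised 16-bit GL_R16 texture). If you prefer to scale at the end instead, the -0.5 offsets on U and V have to be scaled the same way:

varying vec2 textureOut;
uniform sampler2D tex_y;
uniform sampler2D tex_u;
uniform sampler2D tex_v;
void main(void)
{
    // texture2D() returns value/65535 for a GL_R16 texture, so rescale so
    // that the 10-bit maximum of 1023 maps back to 1.0.
    const float scale = 65535.0 / 1023.0;  // ~64.06
    vec3 yuv;
    vec3 rgb;
    yuv.x = texture2D(tex_y, textureOut).r * scale;
    yuv.y = texture2D(tex_u, textureOut).r * scale - 0.5;
    yuv.z = texture2D(tex_v, textureOut).r * scale - 0.5;
    rgb = mat3( 1,       1,         1,
                0,       -0.39465,  2.03211,
                1.13983, -0.58060,  0) * yuv;
    gl_FragColor = vec4(rgb, 1);
}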

Hello,
Thank you. Actually my shader was OK:

    yuv.x = texture2D(tex_y, textureOut).r*64.0;
    yuv.y = texture2D(tex_u, textureOut).r*64.0 - 0.5;
    yuv.z = texture2D(tex_v, textureOut).r*64.0 - 0.5;   

My problem was more about how to manage the data, and it works now with:

glTexImage2D(GL_TEXTURE_2D, 0, GL_R16, size_plane[0][0], size_plane[0][1], 0, GL_LUMINANCE, GL_UNSIGNED_SHORT, plane2Display[0]);
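For completeness, the full upload sequence for the Y plane mirrors the 8-bit version above, something like this (the glPixelStorei call is only needed if a row of 16-bit samples is not a multiple of 4 bytes, and on a core profile the external format would be GL_RED rather than GL_LUMINANCE):

glActiveTexture(GL_TEXTURE0);                       // select the texture unit for the Y plane
glBindTexture(GL_TEXTURE_2D, id_y);
glPixelStorei(GL_UNPACK_ALIGNMENT, 2);              // rows of 16-bit samples may only be 2-byte aligned
glTexImage2D(GL_TEXTURE_2D, 0, GL_R16,              // 16-bit normalised internal format
             size_plane[0][0], size_plane[0][1], 0,
             GL_LUMINANCE, GL_UNSIGNED_SHORT,       // source data: one unsigned 16-bit value per texel
             plane2Display[0]);
glUniform1i(textureUniformY, 0);                    // tell the shader tex_y is on unit 0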

Thank you for your help!

What is the exact format of your initial data? Is it a packed format (all components of a pixel's colour are stored in the same contiguous block of data) or a planar format (each colour component is stored in a distinct plane)?

Or is it more like the YUYV format, or another of the formats listed at https://linuxtv.org/downloads/v4l-dvb-apis/pixfmt.html ?

At first sight it seems to be a planar format, because you use 3 distinct textures in your shader, cf. tex_y, tex_u and tex_v:


yuv.x = texture2D(tex_y, textureOut).r*64.0;
yuv.y = texture2D(tex_u, textureOut).r*64.0 - 0.5;
yuv.z = texture2D(tex_v, textureOut).r*64.0 - 0.5;

But something tells me that you actually want to handle something like “V4L2_PIX_FMT_SBGGR10 (‘BG10’) — 10-bit Bayer formats expanded to 16 bits”…


I would like to extend it to 10-bit frames (stored as 16-bit little-endian values).