I am using OpenGL to convert YUV to RGB. I don’t use Xv because it has some pretty awful tearing artifacts, and I need to be able to run a lot of video through my application.
I am wondering if there's a way I can convince my Linux implementation to let me use a true 1-component texture. I don't want the behavior where OpenGL fans the value out to all components, for example,
GL_RED { R }
becomes
GL_RGBA { R 0 0 1 }
This conversion happens on the "CPU side" of the bus, and the number and size of textures I am pushing each frame make it prohibitively expensive.
I have full control over my application’s environment, so any X setting adjustments or tricks are perfectly acceptable.
Look at the NVIDIA developer site: you should easily find a document listing the texture formats supported on NVIDIA GPUs. 1-component textures are supported; if you use ALPHA8 or LUMINANCE8 you'll get an 8-bit texture.
You can probably find a similar document for ATI GPUs.
Don't use 1-component textures; they are not efficient.
I guess you have YCrCb 4:2:2 in 8 bits. In that case you can load each pixel using the GL_LUMINANCE_ALPHA pixel format. Your fragment program can then expand the 4:2:2 data to 4:4:4, followed by a simple matrix multiplication to get RGB.
Then think about using GL_RGBA to store two YUV pixels in one RGBA texel (your homework).
If you do it this way you can convert several HD1080 streams in real time.