I’m trying to stream video using OpenGL textures. Some features of my architecture:

- Direct GPU memory access using PBOs - works great
- A stack of pre-reserved PBOs that get recycled - works great
- Textures hold the LUMA and CHROMA planes, which are given to a shader program; the shader interpolates from YUV to RGB - works nicely
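For reference, the per-pixel math my shader does boils down to something like this (a minimal C sketch assuming full-range BT.601 coefficients; your shader may use a different matrix):

```c
#include <stdint.h>

/* Clamp a float result into the [0, 255] byte range. */
static uint8_t clamp_u8(float x) {
    if (x < 0.0f)   return 0;
    if (x > 255.0f) return 255;
    return (uint8_t)(x + 0.5f);
}

/* Full-range BT.601 YUV -> RGB for a single pixel - the same conversion
   the fragment shader performs on the sampled LUMA/CHROMA texels. */
void yuv_to_rgb(uint8_t y, uint8_t u, uint8_t v,
                uint8_t *r, uint8_t *g, uint8_t *b) {
    float yf = (float)y;
    float uf = (float)u - 128.0f;   /* chroma is centered on 128 */
    float vf = (float)v - 128.0f;
    *r = clamp_u8(yf + 1.402f * vf);
    *g = clamp_u8(yf - 0.344136f * uf - 0.714136f * vf);
    *b = clamp_u8(yf + 1.772f * uf);
}
```

With U = V = 128 the chroma terms vanish, so a luma-only input maps straight to gray, which is a handy sanity check when debugging the planes.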
However, there is a bottleneck that is driving me nuts when I copy from PBOs to textures.
I have traced this issue into the format/internal_format pair in
glTexImage2D(GL_TEXTURE_2D, 0, internal_format, w, h, 0, format, GL_UNSIGNED_BYTE, 0)
As we know, OpenGL converts everything to RGBA. The documentation states that:
glTexImage2D - OpenGL 4 Reference Pages : “GL_RED : each element is a single red component. The GL converts it to floating point and assembles it into an RGBA element by attaching 0 for green etc.”
My problem is that I don’t want RGB-whatever, just single channel …! The “texture” here is just a blob of bytes that is given to the shader program. I don’t need any conversion.
The only format/internal_format pair that gives me decent results is GL_RGBA/GL_RGBA (but I can’t use that).
My case would be GL_RED/GL_RED … but that sucks big time. With that pair, glTexSubImage2D is a hundred times slower than with GL_RGBA/GL_RGBA.
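One thing that might be biting here: GL_RED as the *internal* format is a legacy unsized format, which leaves the driver free to pick a storage layout and convert on upload. A sized single-channel internal format (GL_R8), plus a 1-byte unpack alignment (luma widths are rarely multiples of 4), may behave very differently. A sketch, untested against this setup - `tex`, `pbo`, `w`, `h` stand in for your own objects:

```cpp
// Luma rows are tightly packed, so drop the default 4-byte row alignment.
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

// Allocate storage once with the sized format GL_R8 (not the unsized GL_RED)...
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, w, h, 0,
             GL_RED, GL_UNSIGNED_BYTE, 0);

// ...then stream each frame from the bound PBO with glTexSubImage2D.
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                GL_RED, GL_UNSIGNED_BYTE, 0);
```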
So, I tried to drop PBOs altogether and start using TBOs (texture buffer objects) … with TBOs there is no conversion - they are more like plain byte buffers, right?
However, TBOs don’t give me DMA to the GPU (that works with PBOs). This one:
payload = (GLubyte*)glMapBuffer(GL_TEXTURE_BUFFER, GL_WRITE_ONLY)
gives me a null pointer.
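In my experience, glMapBuffer returning NULL on GL_TEXTURE_BUFFER usually means no buffer object is bound to that target (binding the *texture* is not enough), or the buffer’s data store was never created with glBufferData. A sketch of the ordering that should work - `tbo` and `size` are placeholders:

```cpp
GLuint tbo;
glGenBuffers(1, &tbo);
glBindBuffer(GL_TEXTURE_BUFFER, tbo);        // bind the *buffer*, not the texture
glBufferData(GL_TEXTURE_BUFFER, size, NULL,  // create the data store before mapping
             GL_STREAM_DRAW);

GLubyte *payload = (GLubyte *)glMapBuffer(GL_TEXTURE_BUFFER, GL_WRITE_ONLY);
if (!payload) {
    // Check glGetError(): GL_INVALID_OPERATION here typically means no buffer
    // is bound to the target, or the buffer is already mapped.
}
// ... fill payload ...
glUnmapBuffer(GL_TEXTURE_BUFFER);
```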
I am starting to run out of ideas - please help!
A small stand-alone test program can be found here … it just benchmarks texture uploading (it doesn’t visualize anything):
P. S. A related stack overflow question is here: c++ - Streaming several (YUV) videos using OpenGL - Stack Overflow