How to do color conversion on decoded video without downloading from GPU memory

Decoding video results in different pixel formats on different GPUs and drivers. I have an application which can render (using OpenGL) only the RGB8 pixel format, so I need to do color conversion from the decoded pixel format to RGB8. I can access the gl… functions in this program, but I cannot edit the shader, so I’m stuck with RGB8.

This ffmpeg example demonstrates how to do hardware decoding: FFmpeg/hw_decode.c at release/4.2 · FFmpeg/FFmpeg · GitHub

At line 109 it does this:

/* retrieve data from GPU to CPU */
if ((ret = av_hwframe_transfer_data(sw_frame, frame, 0)) < 0) {

I want to avoid this because it takes time. Instead, I want to reuse the decoded video that is already in GPU memory: do the color conversion on the GPU and then render the result, without ever copying it back to the CPU.

Is it possible to create a shader program that takes its input from GPU memory rather than CPU memory? If I want to pass an image from the CPU to my shader, I simply use glTexImage2D. Is there a similar way to do it from GPU memory?

Can I go even further and render the color-converted video on screen without copying from GPU memory either?

If what I want to do is possible, are there shader programs available that do this color conversion for some major graphics cards/drivers? I don’t need to support all of them, but I’d like to support at least the Jetson Nano board from NVIDIA.

If a buffer object is bound to GL_PIXEL_UNPACK_BUFFER, glTexImage2D (and other glTexImage* and glTexSubImage* commands) reads data from the buffer object rather than client memory. In that case, the data argument is treated as an offset into the buffer.
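
For example, here is a minimal sketch of that path in plain desktop GL; the texture, buffer name and frame size are placeholders, not anything tied to a particular decoder:

/* Allocate an RGB8 texture and a pixel unpack buffer (placeholder sizes). */
int width = 1920, height = 1080;
GLuint tex, pbo;

glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, width, height, 0,
             GL_RGB, GL_UNSIGNED_BYTE, NULL);          /* storage only */

glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
glBufferData(GL_PIXEL_UNPACK_BUFFER, width * height * 3, NULL, GL_STREAM_DRAW);

/* ... fill the buffer (e.g. via glMapBufferRange, or by another GPU job) ... */

/* With a buffer bound to GL_PIXEL_UNPACK_BUFFER, the last argument is an
 * offset into that buffer, not a client-memory pointer. */
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_RGB, GL_UNSIGNED_BYTE, (const void *)0);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);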

In theory, you can copy the raw MPEG data to GPU memory and do all of the decoding on the GPU. The initial decompression isn’t something that a GPU is really suited to, but the inverse DCT and YCrCb-to-RGB conversion are. The colour-space conversion can reasonably be done as part of the rendering process rather than by storing a converted texture, but the inverse DCT really needs to be done as a separate step.
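
To make that concrete, a fragment shader doing the YCrCb-to-RGB step during rendering might look roughly like the sketch below. It assumes the decoded frame has been imported as two textures in NV12 layout (an R8 luma plane and an RG8 interleaved chroma plane) and uses BT.601 limited-range coefficients; the sampler names are made up for the example:

static const char *yuv_to_rgb_frag =
    "#version 330 core\n"
    "uniform sampler2D tex_y;   /* R8  luma plane              */\n"
    "uniform sampler2D tex_uv;  /* RG8 interleaved Cb/Cr plane */\n"
    "in  vec2 uv;\n"
    "out vec4 frag_color;\n"
    "void main() {\n"
    "    float y = texture(tex_y,  uv).r;\n"
    "    vec2  c = texture(tex_uv, uv).rg - vec2(0.5);\n"
    "    y = (y - 16.0/255.0) * (255.0/219.0);   /* limited -> full range */\n"
    "    float r = y + 1.596 * c.y;              /* c.y = Cr */\n"
    "    float g = y - 0.391 * c.x - 0.813 * c.y;\n"
    "    float b = y + 2.018 * c.x;              /* c.x = Cb */\n"
    "    frag_color = vec4(r, g, b, 1.0);\n"
    "}\n";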

Extensions such as SGIX_ycrcb, APPLE_ycbcr_422 and MESA_ycbcr_texture allow you to upload YCrCb 4:2:2 textures which return RGB when sampled, although I don’t know whether those extensions are commonly supported on modern hardware.
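
If one of those extensions is exposed by your driver, the upload itself is just an ordinary glTexImage2D call with the extension’s format enums. An untested sketch using the APPLE_ycbcr_422 enums, with placeholder dimensions and data pointer:

#ifdef GL_APPLE_ycbcr_422
/* Packed 4:2:2 YCbCr data; the driver converts to RGB when the texture is sampled. */
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
             GL_YCBCR_422_APPLE, GL_UNSIGNED_SHORT_8_8_APPLE, frame_data);
#endif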

Thank you for your answer. If I understood correctly, a buffer object is a large data object that I pass from the CPU to the GPU, and then I can simply make glTexSubImage2D read its data from this buffer instead of from CPU memory. This is great. However, I still have to find a way to turn the decoded video that ffmpeg stored on the GPU into a buffer object in OpenGL. I mean, the decoded video is on the GPU, but how do I make it behave like an OpenGL buffer object?

A buffer object is a block of GPU memory. You can transfer data between CPU memory and a buffer with explicit function calls (glBufferSubData, glGetBufferSubData) or by mapping the buffer into the process’ address space (glMapBuffer, glMapBufferRange).
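
For example (a fragment, not a complete program; the buffer name and size are placeholders, and <string.h> is assumed for memcpy):

GLuint buf;
unsigned char cpu_data[4096];

glGenBuffers(1, &buf);
glBindBuffer(GL_ARRAY_BUFFER, buf);
glBufferData(GL_ARRAY_BUFFER, sizeof cpu_data, NULL, GL_DYNAMIC_DRAW);

/* Explicit copies */
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof cpu_data, cpu_data);     /* CPU -> GPU */
glGetBufferSubData(GL_ARRAY_BUFFER, 0, sizeof cpu_data, cpu_data);  /* GPU -> CPU */

/* Or map the buffer into the process's address space */
void *ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, sizeof cpu_data, GL_MAP_WRITE_BIT);
if (ptr) {
    memcpy(ptr, cpu_data, sizeof cpu_data);
    glUnmapBuffer(GL_ARRAY_BUFFER);
}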

That I don’t know. You’ll probably be better off asking on a forum dedicated to FFmpeg.