Using a Framebuffer to Render Video

As an OpenGL newbie, I am working on a project that requires rendering multiple videos in an OpenGL window. Can I use a framebuffer to store all the video frames and then render them? Or is it possible to define a layer-based OpenGL framebuffer over the video frames, such as video 1 as layer 1 and video 10 as layer 10? In short: can a framebuffer be used to store multiple video frames and then render them?

First, a framebuffer is something you render to. For storage, you’d use textures. Second, you aren’t going to be able to store many frames of uncompressed video in video RAM. Unless the video is short, low-resolution and/or low frame rate, storing the entire video as a sequence of uncompressed frames isn’t a viable approach. If you’re decoding the video on the CPU, you’d typically just transfer the frames as needed. If you’re decoding on the GPU, you might store an entire slice (a sequence of frames starting and ending with an I-frame).

@GClements, the videos that I am using are less than 1 minute long, but the resolution is high. I am using FFmpeg to decode the video frames according to the videos’ timestamps. Is there any alternative to a framebuffer for doing video rendering?

Create a pool of 4 textures; as each new frame is displayed, upload the next frame to the texture containing the previous frame. You can reduce the bandwidth/storage somewhat by using YCbCr 4:2:0 or 4:2:2 rather than raw RGB. There’s some support for 4:2:2 via extensions (SGIX_ycrcb, APPLE_ycbcr_422, MESA_ycbcr_texture); 4:2:0 could be implemented via shaders. But you probably aren’t going to fit a minute of video in video RAM; just do the math: width * height * bytes/pixel * frames/second * seconds.
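To make that math concrete (my numbers, just as an example, assuming 1080p RGBA frames at 30 fps): 1920 * 1080 * 4 * 30 * 60 is roughly 14.9 GB for a single one-minute clip, far more than you want to dedicate to one video. So the usual approach is to stream frames into a small pool of reused textures. Below is only a minimal sketch of such a pool; the function names are placeholders, an OpenGL loader and current context are assumed, and frames are assumed to be decoded to RGBA in system memory.

// Sketch of a small round-robin texture pool for streaming video frames.
// Assumes an OpenGL loader (GLEW here) and a current GL context.
#include <GL/glew.h>
#include <cstdint>

constexpr int kPoolSize = 4;
GLuint g_pool[kPoolSize];
int g_next = 0;

void createFramePool(int frameWidth, int frameHeight)
{
    glGenTextures(kPoolSize, g_pool);
    for (int i = 0; i < kPoolSize; ++i) {
        glBindTexture(GL_TEXTURE_2D, g_pool[i]);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        // Allocate storage once; per-frame uploads reuse it via glTexSubImage2D.
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, frameWidth, frameHeight, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    }
}

// Call once per displayed frame: upload the newly decoded frame into the
// least recently used texture and return it for drawing.
GLuint uploadNextFrame(const uint8_t* rgba, int frameWidth, int frameHeight)
{
    GLuint tex = g_pool[g_next];
    g_next = (g_next + 1) % kPoolSize;
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, frameWidth, frameHeight,
                    GL_RGBA, GL_UNSIGNED_BYTE, rgba);
    return tex;
}

With pixel buffer objects you can also make the upload asynchronous, but the basic idea is the same: only a handful of frames live in video RAM at any one time.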

You could fit a minute of compressed video in video RAM, but video decoding is complex and most of the earlier stages aren’t particularly GPU-friendly.

Consider using a library that will do the video decode on-the-fly on the GPU (using its on-board video decode hardware) and give you a pointer to the data in a GPU memory buffer such as an OpenGL texture to do whatever you want with. Then you don’t need to care about the total decoded size.

One such library (for NVIDIA GPUs) is the NVIDIA Video Codec SDK (NVDEC).

Among others, FFmpeg has built-in support for NVIDIA GPU HW-accelerated decode and encode (LINK), so using its libraries might work for you as well.
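If you go the FFmpeg route, hardware decode is selected through libavcodec’s hwaccel device API. The following is only a rough sketch of the relevant calls under the assumption of an NVIDIA/CUDA setup; demuxing, error handling and the CUDA/OpenGL interop step are omitted (FFmpeg’s hw_decode example shows the full flow).

// Rough sketch: NVIDIA-accelerated decode via FFmpeg's libavcodec.
// Demuxing, error handling and frame consumption are omitted.
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavutil/hwcontext.h>
}

static AVBufferRef* g_hw_device_ctx = nullptr;

// Pick the CUDA hardware pixel format when the decoder offers it.
static enum AVPixelFormat pickCudaFormat(AVCodecContext*, const enum AVPixelFormat* fmts)
{
    for (const enum AVPixelFormat* p = fmts; *p != AV_PIX_FMT_NONE; ++p)
        if (*p == AV_PIX_FMT_CUDA)
            return *p;
    return fmts[0];   // fall back to the first software format offered
}

void enableHwDecode(AVCodecContext* codec_ctx)
{
    // Create a CUDA device context and attach it before avcodec_open2();
    // decoded frames then arrive with format == AV_PIX_FMT_CUDA (GPU memory).
    av_hwdevice_ctx_create(&g_hw_device_ctx, AV_HWDEVICE_TYPE_CUDA,
                           nullptr, nullptr, 0);
    codec_ctx->hw_device_ctx = av_buffer_ref(g_hw_device_ctx);
    codec_ctx->get_format = pickCudaFormat;
}

void drainFrames(AVCodecContext* codec_ctx)
{
    AVFrame* frame = av_frame_alloc();
    AVFrame* sw_frame = av_frame_alloc();
    while (avcodec_receive_frame(codec_ctx, frame) == 0) {
        if (frame->format == AV_PIX_FMT_CUDA) {
            // Either copy back to system memory (shown here) and upload with
            // glTexSubImage2D, or keep it on the GPU via CUDA/OpenGL interop.
            av_hwframe_transfer_data(sw_frame, frame, 0);
        }
        av_frame_unref(frame);
    }
    av_frame_free(&sw_frame);
    av_frame_free(&frame);
}

Whether the copy back to system memory is acceptable depends on your frame rate and how many streams you play at once; the interop path avoids the round trip at the cost of more setup.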


Fair enough. I have an NVIDIA GPU extracting the video frames. The next step: if I want to define layers for video, text, and graphics and just render them in OpenGL, is that possible? For example, layer 1 is the video layer, layer 2 is text, and layer 3 is an image. Think of 1001 layers: the top is closest to the viewer, then 999, then downwards to 1, then the bottom; any content assigned to the top layer plays on top of everything else currently running.

I am using the NVIDIA video codec to extract the frames from the videos. When it comes to rendering the frames, rendering slows down. Is there a layer mechanism that can separate video frames from the other graphics objects, so the content is easier to render and play in the OpenGL application? How would that be possible? (Same layer model as described above: content on the top layer plays on top of everything else.)

Ok. Have you profiled it? What’s the primary bottleneck?

Do you see a slowdown with the OpenGL decode example in the SDK? If not, then I would look at the deltas between what you are doing and what the NVIDIA example code is doing for clues.

It’s unclear why you think this is not possible or difficult. Could you clarify?

I am sorry, I didn’t clarify at the beginning. Let me start from the beginning.
I am working on an OpenGL application which is used for content rendering purposes only. In this application there are three major kinds of content I am rendering: graphics, video, and text. If you know what QLab does, I am working on a similar application, but in OpenGL.

I am trying to implement a layer function like the one QLab has, but sometimes I cannot explain it very well; sorry about that. I can share some code written for object rendering.

void SpriteRenderer::DrawSprite(Texture2D &texture, glm::vec2 position, glm::vec2 size, float rotate, glm::vec3 color)
{
    // prepare transformations
    this->shader.Use();
    glm::mat4 model = glm::mat4(1.0f);
    model = glm::translate(model, glm::vec3(position, 0.0f));  // first translate (transformations are: scale happens first, then rotation, and then final translation happens; reversed order)

    model = glm::translate(model, glm::vec3(0.5f * size.x, 0.5f * size.y, 0.0f)); // move origin of rotation to center of quad
    model = glm::rotate(model, glm::radians(rotate), glm::vec3(0.0f, 0.0f, 1.0f)); // then rotate
    model = glm::translate(model, glm::vec3(-0.5f * size.x, -0.5f * size.y, 0.0f)); // move origin back

    model = glm::scale(model, glm::vec3(size, 1.0f)); // last scale

    this->shader.SetMatrix4("model", model);

    // render textured quad
    this->shader.SetVector3f("spriteColor", color);

    glActiveTexture(GL_TEXTURE0);
    texture.Bind();

    glBindVertexArray(this->quadVAO);
    glDrawArrays(GL_TRIANGLES, 0, 6);
    glBindVertexArray(0);
}

to render a frame. My fragment shader is:

#version 330 core
in vec2 TexCoords;
out vec4 color;

uniform sampler2D sprite;
uniform vec3 spriteColor;

void main()
{
    
    color = vec4(spriteColor, 1.0) * texture(sprite, TexCoords);
}

and the vertex shader:

#version 330 core
layout (location = 0) in vec4 vertex; // <vec2 position, vec2 texCoords>

out vec2 TexCoords;

uniform mat4 model;
// note that we're omitting the view matrix; the view never changes so we basically have an identity view matrix and can therefore omit it.
uniform mat4 projection;

void main()
{
    TexCoords = vertex.zw;
    gl_Position = projection * model * vec4(vertex.xy, 0.0, 1.0);
}

Basically, I am following

to learn OpenGL. The NVIDIA code example plays only a minor role, extracting the frames. My main focus is on rendering frames in OpenGL as multiple surfaces: multiple layers or surfaces that render everything at the same time without errors, for example a video playing in the background, an image on top of that, and text on top of the image.
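For the layering itself, one simple approach (this is only a sketch; the LayerItem struct and layer numbers are mine, not something from QLab or the tutorial) is to treat the layer as a sort key and draw back-to-front with blending enabled, reusing the SpriteRenderer shown above:

// Sketch: back-to-front "layer" rendering built on the SpriteRenderer above.
// Assumes the same Texture2D/SpriteRenderer/glm headers as the DrawSprite code.
// Higher layer number = closer to the viewer.
#include <algorithm>
#include <vector>

struct LayerItem {
    int        layer;      // e.g. 1 = video background, 2 = image, 3 = text
    Texture2D* texture;    // video frame, image, or pre-rendered text texture
    glm::vec2  position;
    glm::vec2  size;
};

void drawLayers(SpriteRenderer& renderer, std::vector<LayerItem>& items)
{
    // Draw the lowest layer first so higher layers end up on top.
    std::sort(items.begin(), items.end(),
              [](const LayerItem& a, const LayerItem& b) { return a.layer < b.layer; });

    glDisable(GL_DEPTH_TEST);            // painter's algorithm: draw order decides visibility
    glEnable(GL_BLEND);                  // needed for text and images with alpha
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    for (const LayerItem& item : items)
        renderer.DrawSprite(*item.texture, item.position, item.size,
                            0.0f, glm::vec3(1.0f));
}

The point is that a layer does not need its own framebuffer: the draw order plus alpha blending already puts the image over the video and the text over the image.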

After looking at this link: fast transfer of video frames into graphics card - #2 by scratt

I think I found the answer to how to implement multiple video rendering functionality.