Video Rendering in OpenGL

Hi,

I'm an avid graphics programmer.
I have written a simple screensaver-style application: a cube textured on all six faces, with rotational and translational forces applied, that moves around inside the screen boundaries with collision checks against the edges.

Now I'm trying to render a video on all six faces (the same video on each) instead of a static texture.

How do I accomplish this?

I searched the web but couldn't find any tutorial on this.
Can you guys help me out?

It would be great if I could get this working.
Waiting for your response.

Thanks & Regards
mailmessb

You can use the library of your choice to decode the video, then upload each frame's data to video memory through a PBO (pixel buffer object), which is the more elegant and performant solution IMO. Then you just map the texture to the cube faces as you already do.

A good introduction to PBO:
http://www.songho.ca/opengl/gl_pbo.html
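To make the suggestion above concrete, here is a minimal sketch of the per-frame upload path through a PBO. It assumes a current OpenGL context with the pixel-buffer-object extension available, that `pbo` and `tex` were created beforehand, and that your decoder hands you an RGBA frame in `frameData` (that name, and the RGBA format, are assumptions; your video library may differ).

```c
#include <GL/glew.h>
#include <string.h>

/* Upload one decoded RGBA frame to `tex` through `pbo`.
 * Requires a current OpenGL context. */
void uploadFrame(GLuint pbo, GLuint tex,
                 const void *frameData, int width, int height)
{
    GLsizeiptr frameBytes = (GLsizeiptr)width * height * 4;

    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);

    /* Orphan the old storage so the driver need not stall if the
     * previous transfer is still in flight. */
    glBufferData(GL_PIXEL_UNPACK_BUFFER, frameBytes, NULL, GL_STREAM_DRAW);

    void *ptr = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
    if (ptr) {
        memcpy(ptr, frameData, (size_t)frameBytes);
        glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
    }

    /* With a PBO bound, the last argument to glTexSubImage2D is an
     * offset into the PBO, not a client-memory pointer. */
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_RGBA, GL_UNSIGNED_BYTE, 0);

    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
}
```

Call this once per frame before drawing; the cube-drawing code with its texture coordinates stays exactly as it is.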

Hi,

Thanks for the information.
I'll try that and come back to you with the results.

Thanks
mailmessb

Hi,

I read the tutorial at http://www.songho.ca/opengl/gl_pbo.html

The PBO buffer filling is not detailed.
As this concept is entirely new to me, I'm unable to follow the full flow.

Can anyone guide me on how to do that, i.e. how to fill the PBO with the content of the video I'm trying to render?

Thanks
mailmessb

The PBO buffer filling is not detailed.

That’s because it’s simple; you use the same commands you use to fill any buffer object. You can use glBufferSubData, or you can map the buffer and write the decompressed data directly into the mapped memory. I wouldn’t suggest the latter unless you know for a fact that the decompression routine won’t write to its output in a random-access pattern (you should generally assume that mapped pointers should be filled sequentially).
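The two options above look roughly like this in code. This is a sketch, not a complete program: it assumes the PBO is already bound to `GL_PIXEL_UNPACK_BUFFER` and sized for one frame, and `decodeNextFrame` / `scratch` are stand-ins for whatever your video library actually provides.

```c
/* Option 1: decode into your own scratch buffer, then copy it
 * into the bound PBO with glBufferSubData. */
decodeNextFrame(decoder, scratch);                 /* hypothetical decoder call */
glBufferSubData(GL_PIXEL_UNPACK_BUFFER, 0, frameBytes, scratch);

/* Option 2: map the PBO and write the frame straight into it.
 * Only do this if the data is written sequentially. */
void *dst = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
if (dst) {
    memcpy(dst, scratch, frameBytes);   /* or decode directly into dst */
    glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
}
```

Option 2 saves one copy, which is why it is usually preferred for streaming, but option 1 is the safer default when you don't control how the decoder writes its output.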

Also, look at the texture streaming upload example on that page, especially the displayCB() callback in main.cpp.
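The key idea in that callback is a double-buffered ("ping-pong") scheme: each frame, one PBO feeds the texture update while the other is being refilled, so the CPU copy and the DMA transfer overlap. A hedged sketch of that pattern, with `copyNextVideoFrame` standing in for your decoder hook:

```c
#include <GL/glew.h>

static int pboIndex = 0;

/* Called once per displayed frame, with a current OpenGL context.
 * pbo[0] and pbo[1] are two pre-created pixel unpack buffers. */
void streamFrame(GLuint pbo[2], GLuint tex, int w, int h)
{
    int nextIndex = (pboIndex + 1) % 2;

    /* 1. Update the texture from the PBO that was filled last frame. */
    glBindTexture(GL_TEXTURE_2D, tex);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo[pboIndex]);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                    GL_RGBA, GL_UNSIGNED_BYTE, 0);

    /* 2. Meanwhile, fill the other PBO with the next decoded frame. */
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo[nextIndex]);
    glBufferData(GL_PIXEL_UNPACK_BUFFER, (GLsizeiptr)w * h * 4,
                 NULL, GL_STREAM_DRAW);        /* orphan old storage */
    GLubyte *ptr = (GLubyte *)glMapBuffer(GL_PIXEL_UNPACK_BUFFER,
                                          GL_WRITE_ONLY);
    if (ptr) {
        copyNextVideoFrame(ptr, w, h);         /* hypothetical decoder hook */
        glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
    }
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);

    pboIndex = nextIndex;
    /* ...then draw the textured cube as before. */
}
```

Note the update lags the decode by one frame, which is invisible for video playback but buys the overlap that makes PBO streaming fast.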