Here’s a short description of my problem:
I’m developing a viewer for video sequences. I use a scan-line based approach with concurrent computation (fetching) for each row. Every row I get is separated into several buffers (channels), namely R, G, B, A.
So imagine the problem as: I want to combine all my channels into one texture. I thought of doing it myself, by allocating a temporary buffer, uploading the several components into it, and handing it to OpenGL with glTexSubImage2D(…), but that solution is very expensive… especially for image streaming!
Then I thought of multi-texturing: in short, make a separate texture (GL_LUMINANCE?) for each channel, then combine them with a GLSL shader. I think this could work; unfortunately, I have no experience with multi-texturing. Could somebody give me a link to a complete tutorial on multi-texturing, please?
The last thing I could dig up on the internet is using an FBO with a buffer texture, and in the buffer texture, a buffer object with several indices. But I couldn’t find much more detail about indices in a buffer object, or whether it would meet my needs.
Anyway, here is the final question: is it possible to create one texture from 4 separate components (buffers)? Or more precisely: how do I draw 4 combined buffers efficiently, in the context of video sequences?
Thanks in advance,
Could somebody give me a link to a complete tutorial for multi-texturing please?
For your exercise you only need to supply one set of texture coordinates to the shader, so any texture-rendering tutorial will cover that part.
But you will load 4 textures (probably on GL_TEXTURE0, GL_TEXTURE1, GL_TEXTURE2, GL_TEXTURE3) into 4 different samplers. Then fetch from each of these samplers with the same texture u,v and combine the red (.r) components from each (assuming you create the textures with, say, an internal format of GL_RED).
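A minimal fragment shader along those lines might look like this (sampler and varying names are placeholders, and this uses legacy GLSL; it is a sketch, not a tested implementation):

```glsl
// Four single-channel textures bound to units 0-3,
// sampled at the same coordinate and recombined into RGBA.
uniform sampler2D texR;   // bound to GL_TEXTURE0
uniform sampler2D texG;   // bound to GL_TEXTURE1
uniform sampler2D texB;   // bound to GL_TEXTURE2
uniform sampler2D texA;   // bound to GL_TEXTURE3
varying vec2 vTexCoord;

void main()
{
    float r = texture2D(texR, vTexCoord).r;
    float g = texture2D(texG, vTexCoord).r;
    float b = texture2D(texB, vTexCoord).r;
    float a = texture2D(texA, vTexCoord).r;
    gl_FragColor = vec4(r, g, b, a);
}
```

On the application side, you point each sampler uniform at its texture unit with glUniform1i (e.g. glActiveTexture(GL_TEXTURE1); glBindTexture(GL_TEXTURE_2D, …); glUniform1i(glGetUniformLocation(prog, "texG"), 1);).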
I cannot say whether OpenGL would be quicker than just combining on the CPU, given that you have to copy the textures up to the graphics card for each frame.
You may also have a look at texture streaming. I have not used it, but I believe it is designed for your type of task.
Hey, thanks tonyo_au for your answer.
Indeed, that’s the path I have taken since yesterday: I load 4 different textures into the fragment shader, combine the .r of each texture (except the alpha texture), and multiply by the alpha, as such:
color = r*a + g*a + b*a
I have read a little around the internet about texture streaming, and apparently I could upload the texture to the GPU while avoiding the call to glTexSubImage2D().
I also have a question: since I call glTexSubImage2D every time I compute a row of one frame (or whatever the sequence of calls would be with the texture-streaming technique), would it be possible to draw every time I compute a row? It would sort of fill up the window until the texture is full, wouldn’t it?
Otherwise I think I would fall back on drawing coupled with a timer, to maintain a frame rate of 25 fps.
Again, thanks for your answer.
If you want to stream, then look into PBOs (pixel buffer objects). A PBO still uses glTexSubImage2D; you just bind the PBO before you call glTexSubImage2D. It helps hide the latency of the driver actually uploading the texture to VRAM.
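The call sequence for a PBO upload looks roughly like this (a sketch, not a complete program: `pbo`, `tex`, `WIDTH`, `HEIGHT`, and `fillFrame` are assumed to be set up elsewhere, and it needs a live OpenGL context supporting pixel buffer objects):

```cpp
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);

// Orphan the previous buffer contents so the driver need not stall on them.
glBufferData(GL_PIXEL_UNPACK_BUFFER, WIDTH * HEIGHT * 4, NULL, GL_STREAM_DRAW);

// Map, write the new frame (row by row, in your case), unmap.
void* ptr = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
if (ptr) {
    fillFrame(ptr);   // your per-row writes go here
    glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
}

// With a PBO bound, the last argument is an offset into the PBO, not a
// client pointer, and the call can return before the transfer completes.
glBindTexture(GL_TEXTURE_2D, tex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, WIDTH, HEIGHT,
                GL_RGBA, GL_UNSIGNED_BYTE, 0);

glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);   // back to normal uploads
```

A common refinement is to use two PBOs in ping-pong fashion, filling one while the other is being transferred.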