I’m trying to render live video to an OpenGL surface. Currently this is done by using DirectShow to grab the frames, then glTexImage2D and glTexSubImage2D to create and update the textures for display. Is there a faster way to do this? Are there faster alternatives to glTexImage2D and glTexSubImage2D? And finally, am I right in assuming that these two functions use the GPU on the ASUS card rather than the PC’s CPU?
Note that texture rectangle coordinates aren’t normalized, to render a 768x576 rectangle texture, you wouldn’t use [0…1] in x/y, but instead use [0…768] in x and [0…576] in y.
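To make the coordinate difference concrete, here is a minimal sketch of filling the four corner texture coordinates for a full 768x576 frame. The helper `rect_tex_coords` is a hypothetical name for illustration; the commented-out draw calls show roughly how it would feed glTexCoord2f in an immediate-mode quad (no GL context is created here):

```c
#include <assert.h>

#define FRAME_W 768
#define FRAME_H 576

/* Fill out[8] with the four (s,t) corner coordinates for a
 * GL_TEXTURE_RECTANGLE_NV texture: texel coords in [0..w]/[0..h],
 * not the normalized [0..1] used by GL_TEXTURE_2D.
 * Corner order: bottom-left, bottom-right, top-right, top-left. */
static void rect_tex_coords(int w, int h, float out[8])
{
    out[0] = 0.0f;      out[1] = 0.0f;
    out[2] = (float)w;  out[3] = 0.0f;
    out[4] = (float)w;  out[5] = (float)h;
    out[6] = 0.0f;      out[7] = (float)h;
}

/* In the draw loop this would become something like:
 *
 *   glBegin(GL_QUADS);
 *   glTexCoord2f(0,       0      ); glVertex2f(-1, -1);
 *   glTexCoord2f(FRAME_W, 0      ); glVertex2f( 1, -1);
 *   glTexCoord2f(FRAME_W, FRAME_H); glVertex2f( 1,  1);
 *   glTexCoord2f(0,       FRAME_H); glVertex2f(-1,  1);
 *   glEnd();
 */
```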
Rect textures are potentially faster for highly dynamic data because the memory layout doesn’t need to be swizzled around for optimum mipmap access. You get linear transfers.
You also don’t need padding regions to hit the next power-of-two dimension. That makes them very easy to work with.
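To illustrate the padding point: with ordinary power-of-two textures, a 768x576 frame would have to be padded up to 1024x1024, roughly 2.37x the memory of the frame itself. A quick sketch (the `next_pow2` helper is hypothetical, just for the arithmetic):

```c
#include <assert.h>

/* Smallest power of two >= n. */
static unsigned next_pow2(unsigned n)
{
    unsigned p = 1;
    while (p < n)
        p <<= 1;
    return p;
}

/* For a 768x576 video frame:
 *   next_pow2(768) == 1024, next_pow2(576) == 1024
 * so a padded GL_TEXTURE_2D would hold 1024*1024 = 1048576 texels
 * versus 768*576 = 442368 actually used (~2.37x overhead).
 * A rectangle texture is allocated at exactly 768x576. */
```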
I was just about to change the code, and then realised it’s virtually identical to yours.
So, it seems that this is possibly the fastest it’s going to be.
It’s funny really, because the MS DirectShow capture example in the DXSDK can render live (i.e. captured) video in a window with the CPU running between 0% and 4%. But when using glTexImage2D and glTexSubImage2D with the GL_TEXTURE_RECTANGLE_NV extension to convert the captured data to an OpenGL texture for use in my OpenGL app, the CPU averages 95%! Admittedly it’s doing a lot of work, but this is where I assumed the GPU on the ASUS card would do the work, and the CPU would be doing virtually nothing.