When video data is used for texture mapping, as you already know, the video's width and height are usually not powers of two (and the frame is not square).
This question came to mind after looking at the examples on the NeHe page.
I am now trying to display camera video in real time using texture mapping.
But the data for texture mapping must be square, with a width and height of 2^n pixels.
How can I make this work?
Try using the GL_NV_texture_rectangle extension if you have an NVIDIA card.
If you need to display a 720x486 image, you can pre-allocate a 1024x512 texture with glTexImage2D, then use glTexSubImage2D each frame to update only the 720x486 pixels in the lower-left corner.
Note that your U/V range then goes from 0 to 720/1024 and from 0 to 486/512, with a black band outside it. Also, if you use linear interpolation, coordinate 0 falls on the boundary between the outermost texel and the texture border/padding, so samples near the edge will blend with the border color; you'll want to inset these coordinates by half a texel in each direction.