Optimal rendering of video as texture

I have an application that is being ported from XNA to OpenGL. It renders QuickTime movies as textures which are overlaid on various objects.
It does this by importing the movie and creating as many textures as it can, with each texture holding one frame. When the scene is rendered, the objects switch textures whenever a new frame is due.

My first question is whether there is a more elegant way to do this in OpenGL.
My second question is how, in OpenGL, you ensure that a created texture resides in the device's texture memory and not in system memory. I am not even sure if this is an issue in OpenGL, but in DirectX you can create n textures while the device may only hold n-x of them; in that situation the API swaps textures to and from the device.

You cannot ensure that textures are placed in device memory. I believe you can provide hints, but hints guarantee nothing.

I don’t have any experience showing video in OpenGL, but personally I would try an approach where I upload a few frames (say 5, for example) into textures. Whenever a frame has been shown, load a new frame into its texture. This way you have a small buffer (you may have to experiment to see how many frames the buffer needs for smooth playback; perhaps even 2 frames is enough).
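As a sketch of that idea, the frames can cycle through a small fixed pool of texture slots. The actual GL calls are abstracted into a callback here so the cycling logic stands alone; the class name, pool size, and `upload` callback are all illustrative, not from any particular API:

```cpp
#include <cstddef>
#include <functional>

// Small ring of texture slots: while one slot is on screen, the
// decoder refills the slots that have already been shown.
class FrameRing {
public:
    // 'slots' is the buffer depth (e.g. 5); 'upload' stands in for the
    // real GL upload, e.g. glTexSubImage2D on that slot's texture name.
    FrameRing(std::size_t slots, std::function<void(std::size_t)> upload)
        : upload_(std::move(upload)), next_(0), count_(slots) {}

    // Call once per displayed frame: refill the oldest slot with the
    // next decoded frame and return its index (bind that texture next).
    std::size_t advance() {
        std::size_t slot = next_;
        upload_(slot);                 // decode + upload into this slot
        next_ = (next_ + 1) % count_;  // cycle through the pool
        return slot;
    }

private:
    std::function<void(std::size_t)> upload_;
    std::size_t next_, count_;
};
```

In a real renderer the pool would be created once with glGenTextures/glTexImage2D and the callback would call glTexSubImage2D; as noted above, even a depth of 2 may be enough.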

For streaming upload of video frames to a texture, use Pixel Buffer Objects (PBOs):
http://www.songho.ca/opengl/gl_pbo.html

Otherwise I would use a ring buffer with a few textures, as Heiko suggested. This ensures a texture can be uploaded in the background before it is used, avoiding stalls.
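Roughly, the double-buffered PBO upload described in that article looks like the sketch below. It assumes a current GL context with pixel-buffer-object support (GLEW is used here just for the headers); `decodeNextFrameInto` is a hypothetical stand-in for your movie decoder, and the BGRA format and buffer sizes are placeholders:

```cpp
#include <GL/glew.h>   // assumption: GLEW provides the PBO entry points
#include <cstddef>

extern void decodeNextFrameInto(void* dst);  // hypothetical decoder hook

static GLuint pbo[2];
static int pboIndex = 0;

void initPbos(std::size_t frameBytes) {
    glGenBuffers(2, pbo);
    for (int i = 0; i < 2; ++i) {
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo[i]);
        glBufferData(GL_PIXEL_UNPACK_BUFFER, frameBytes, nullptr, GL_STREAM_DRAW);
    }
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
}

// Each frame: the GPU copies one PBO into the texture while the CPU
// writes the next frame into the other PBO.
void uploadFrame(GLuint texture, int width, int height, std::size_t frameBytes) {
    int next = (pboIndex + 1) % 2;

    // 1. Kick off the transfer from the PBO filled last frame.
    glBindTexture(GL_TEXTURE_2D, texture);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo[pboIndex]);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_BGRA, GL_UNSIGNED_BYTE, nullptr);  // nullptr = offset 0 into PBO

    // 2. Meanwhile, orphan and map the other PBO for the decoder.
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo[next]);
    glBufferData(GL_PIXEL_UNPACK_BUFFER, frameBytes, nullptr, GL_STREAM_DRAW);
    if (void* dst = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY)) {
        decodeNextFrameInto(dst);
        glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
    }
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);

    pboIndex = next;
}
```

The glBufferData call before mapping "orphans" the old storage so the map does not stall waiting for the previous transfer; combining this with the texture ring above gives both background upload and a few frames of slack.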

First, I am sorry for not reacting sooner; I was suddenly busy elsewhere.

I agree that a ring buffer using PBOs seems to be the correct way of doing this without bogging down the CPU.

I do wonder, however, what is wrong with simply creating a texture ID for each movie frame and binding a different ID as time progresses. In my tests I have yet to run out of textures, and the driver simply swaps the texture data in when needed.
How does this differ from moving the pixels to device memory myself?