glCopyTexImage2D limited to viewport?

I am trying to implement environment mapping. For now, I have a quad in the middle of my environment (4 walls and a floor), and I want to texture-map the part of the wall facing the quad onto the quad. Eventually, I hope to build a disco ball made up of several quads, each reflecting the environment in this way.

Having done A LOT of online research, it seems that glCopyTexImage2D is a good function to use, and I’ve actually got it working… somewhat.

The problem I am having is that glCopyTexImage2D seems to be limited to whatever is currently rendered in the viewport.

So a call like,

glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 0, 0, 128, 64, 0);

copies the bottom-left 128×64 region of the screen into the texture.

However, the camera will always be inside my environment, so it won’t see the wall behind it, which is the wall that should be mapped to my quad.

How do I capture the area of the back wall that needs to be mapped to the quad when glCopyTexImage2D only copies from the current scene?

I’ve also tried this,


gluLookAt(x, y, z,		// middle of quad
	  x, y, z+1,		// look along +z axis (for now)
	  0.0, 1.0, 0.0);	// up vector

glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 0, 0, 128, 64, 0);

expecting to at least get the bottom-left 128×64 portion of the screen from this view, but I still only get the view from the current scene. I’ve provided an image.

Any suggestions? Perhaps I should use a different function?

Think about it this way: The image that you see in the mirror is the same as if you were on the opposite side of the mirror looking through a glass window.
So you need to render your scene from that point of view, store it in a texture, and then render the scene from the user’s point of view, using that texture on the mirror quad.
You may want to look into framebuffer objects (FBOs) for rendering the mirror texture instead of using the back buffer and glCopyTexImage2D.
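Concretely, the viewpoint for that first pass is the real eye position mirrored across the plane of the quad. Here is a minimal sketch of just that math (the helper name is mine, not part of any GL API); with the reflected position in hand, you aim gluLookAt from it back through the quad, render the scene, copy the result with glCopyTexImage2D (or render into an FBO), then clear and render the normal view:

```c
#include <math.h>

/* Reflect the eye position across the mirror plane so the scene can be
 * re-rendered from "behind the mirror".  The plane is described by any
 * point on it and its unit-length normal. */
static void reflect_eye(const float eye[3],
                        const float plane_point[3],
                        const float plane_normal[3],  /* must be unit length */
                        float out[3])
{
    /* signed distance from the eye to the plane */
    float d = (eye[0] - plane_point[0]) * plane_normal[0]
            + (eye[1] - plane_point[1]) * plane_normal[1]
            + (eye[2] - plane_point[2]) * plane_normal[2];

    /* step twice that distance back through the plane */
    out[0] = eye[0] - 2.0f * d * plane_normal[0];
    out[1] = eye[1] - 2.0f * d * plane_normal[1];
    out[2] = eye[2] - 2.0f * d * plane_normal[2];
}
```

One practical note if you stick with the back-buffer copy: legacy glCopyTexImage2D requires power-of-two texture dimensions (128×64 is fine), and the region you copy must fit inside the window.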

However, you mentioned environment mapping. That is often done with a static texture (not updated at runtime) that captures the whole scene, using some form of spherical mapping or a cube texture, and then doing lookups into that texture based on the user’s position. This is often good enough if the reflecting object is small compared to the reflected environment, and it is much faster since you don’t need to render the scene multiple times.

I originally applied a cubemap of the environment to my discoball, and with the help of GLSL it worked…

However, the texture needs to be dynamic because the discoball spins. That, or the reflection needs to stay fixed to the environment while the discoball spins…

I’m lost about what to do at this point…

For the fixed-function pipeline there is the texture matrix stack, which can be used to transform the texture coordinates before the lookup is performed.
You can get the same effect in a shader by passing in another matrix and applying it to the texture coordinates to counteract the rotation of the ball.