I was reading this article which states that environment mapping works poorly for flat surfaces such as mirrors, or when reflecting close objects. I am sure this is right but I’d like to understand why.
Because environment mapping depends solely on direction and not on position, it works poorly on flat reflective surfaces such as mirrors, where the reflections depend heavily on position.
But isn’t the direction determined by the positions? I mean, every fragment has a unique position and so a unique direction pointing to it.
Furthermore, if I move the mesh and/or the camera, the direction to the fragment will change, resulting in a different reflected ray. If I rotate the camera then both the direction to the fragment and its normal will change, but they should compensate for each other, resulting in the same reflected ray. This is how a real mirror behaves.
So even if the positions are not directly used, they actually produce the directions. Why then is this technique inappropriate when applied to flat reflective surfaces, or when reflecting close objects? What should one use instead?
Environment mapping uses the reflected ray to perform a lookup in a cube map (or another environment texture, such as a sphere map). The world reflected in the mirror is the world viewed from whichever viewpoint was used to render the cube map. So a cube map is only a good approximation in the case where changes to the viewpoint have negligible effect (i.e. all objects are far from the viewpoint).
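To make the direction-only nature of the lookup concrete, here is a rough Python sketch (the helper names are mine, and the per-face (s,t) projection is simplified relative to the actual OpenGL sign conventions). Note that the fragment's world position never enters the lookup, only the reflected direction does:

```python
def reflect(d, n):
    """Reflect incident direction d about unit normal n: r = d - 2(d.n)n."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2.0 * dot * ni for di, ni in zip(d, n))

def cube_map_lookup(r):
    """Pick the cube face from the largest-magnitude component of r, then
    project the other two components onto that face. (Simplified: the real
    OpenGL per-face (s,t) sign conventions differ, but the principle is the
    same: the texel depends only on the direction r.)"""
    axis = max(range(3), key=lambda i: abs(r[i]))
    major = r[axis]
    face = ('+x', '-x', '+y', '-y', '+z', '-z')[2 * axis + (0 if major > 0 else 1)]
    u, v = (c / abs(major) for c in (r[(axis + 1) % 3], r[(axis + 2) % 3]))
    return face, u, v

# Two fragments at different positions but with the same view direction and
# normal produce the same reflected ray, hence sample the same texel:
r = reflect((0.0, 0.0, -1.0), (0.0, 0.0, 1.0))
print(r)                   # (0.0, 0.0, 1.0): bounced straight back
print(cube_map_lookup(r))  # ('+z', 0.0, 0.0): centre of the +Z face
```

This is exactly why a flat mirror looks wrong with this technique: on a flat mirror every fragment has the same normal, so fragments only differ by view direction, and nearby objects (whose reflection should shift strongly with position) are sampled as if they were infinitely far away.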
If you re-render the cube map every frame, the viewpoint will be correct, but this either is extremely inefficient or will result in reduced quality (due to texture filtering) or both.
Render the scene reflected in the plane of the mirror, clipped to the mirror’s outline.
[QUOTE=GClements;1279327]So a cube map is only a good approximation in the case where changes to the viewpoint have negligible effect (i.e. all objects are far from the viewpoint).[/QUOTE]
If the scene itself has not changed but only the position of the camera, then the cubemap should not be affected, right? Are the reflections proper in this case?
[QUOTE=GClements;1279327]If you re-render the cube map every frame, the viewpoint will be correct, but this either is extremely inefficient or will result in reduced quality (due to texture filtering) or both.[/QUOTE]
Wrong. A cubemap is only correct for a specific viewpoint. This is why static cube maps are normally only used for a) “skyboxes”, where everything rendered into the cube map is far away compared to the range of movement of the viewpoint, and b) environment mapping, where the reflections are heavily distorted by the irregularities of the surface, so that the cube map only needs to roughly approximate the scene.
Shadow maps are rendered with the light source as the viewpoint. If the light source moves, you need to re-render the shadow map.
I am not sure I understand. The cubemap is rendered with its own camera, placed where the reflecting surface is and facing, in turn, each of the three axes in both the positive and negative directions. If the reflecting mesh doesn’t move and the scene around it has not changed, why should I regenerate it when I move only the final OpenGL camera?
I never implemented it, but I suppose one has to rebuild the shadow map whenever the scene seen by the light has changed. In that case it is likely to happen every frame.
The cube map would need to be rendered from the point which is the reflection of the actual viewpoint in the plane of the mirror.
E.g. if the plane of the mirror is the X=0 plane, and the viewpoint (camera position) is (X,Y,Z), the cube map would need to be rendered with the viewpoint at (-X,Y,Z).
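The general case can be sketched in a few lines of Python (the helper is mine, assuming the mirror plane is given as a unit normal n and offset d, so that points p on the plane satisfy n·p = d; the camera's view direction would need to be reflected the same way):

```python
def reflect_point(p, n, d):
    """Reflect point p across the plane n.x = d (n must be unit length)."""
    dist = sum(pi * ni for pi, ni in zip(p, n)) - d  # signed distance to plane
    return tuple(pi - 2.0 * dist * ni for pi, ni in zip(p, n))

# The X=0 mirror plane example: camera at (X, Y, Z) reflects to (-X, Y, Z).
print(reflect_point((3.0, 1.0, 2.0), (1.0, 0.0, 0.0), 0.0))  # (-3.0, 1.0, 2.0)

# A point lying on the mirror plane maps to itself:
print(reflect_point((0.0, 5.0, 0.0), (1.0, 0.0, 0.0), 0.0))  # (0.0, 5.0, 0.0)
```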
But rendering a cube map for a perfect mirror is the wrong approach, as it will be far more computationally expensive than simply rendering the scene with the camera reflected in the mirror (both the position and direction should be reflected). Rendering a cube map requires that you render the scene six times; rendering the scene normally only requires rendering it once. And each face of the cube map would need to have comparable resolution to the screen, otherwise the reflected image is going to be blurred by texture filtering.
For dynamic cube maps (where you re-render the cube map each frame), you normally render a greatly simplified version of the scene (lower-detail meshes, lower-resolution textures, “billboards” in place of meshes, etc), and at a lower resolution. But you can only get away with that if the reflections are imperfect. For a perfect mirror, the rendering needs to be at full detail.
[QUOTE=GClements;1279336]The cube map would need to be rendered from the point which is the reflection of the actual viewpoint in the plane of the mirror.
E.g. if the plane of the mirror is the X=0 plane, and the viewpoint (camera position) is (X,Y,Z), the cube map would need to be rendered with the viewpoint at (-X,Y,Z).[/QUOTE]
Got it. So one has to pay attention when designing scenes… if the x=0 plane is a 3D wall between two accessible rooms, there must be a hole in the wall where the mirror is; otherwise, instead of the reflection, you’d see the wall from the other side.
One last thing… how exactly do you produce the texture coordinates for the reflecting mesh?
The mesh itself doesn’t necessarily need texture coordinates.
In the fragment shader, the eye vector is reflected about the surface normal and the resulting vector is used as the texture coordinates for the cube map lookup. For a perfect mirror, the normal will be constant, but environment mapping typically uses a normal map to model the shape of the surface (in which case, the mesh will need texture coordinates to indicate how the normal map is mapped to the mesh).
I was thinking more of a planar mirror, where I render the reflected scene into a framebuffer object. When rendering the actual scene, I thought I would use the position of the fragment within the viewport to perform an absolute texture lookup into the FBO.
That will work. It’s basically a blit, clipped to the mirror’s outline.
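In a GLSL fragment shader that lookup is roughly `texture(mirrorTex, gl_FragCoord.xy / viewportSize)` (the `mirrorTex` and `viewportSize` names are assumptions, not from the thread). The window-to-texture-coordinate mapping it relies on can be sketched in Python:

```python
def window_to_uv(frag_x, frag_y, width, height):
    """Map window-space fragment coordinates to [0,1] texture coordinates,
    mirroring what gl_FragCoord.xy / viewport size does in a fragment shader.
    This only lines up with the FBO contents if the FBO was rendered at the
    same resolution as the viewport."""
    return (frag_x / width, frag_y / height)

# A fragment at the centre of an 800x600 viewport samples the centre of the
# FBO texture:
print(window_to_uv(400.0, 300.0, 800.0, 600.0))  # (0.5, 0.5)
```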
Or you could render the non-reflected scene first, render the mirror’s surface into the stencil buffer, clear the depth buffer, then render the reflected scene through the stencil. That avoids rendering into an FBO and then reading back out.