Reflections look bad!!!

I’ve done quite a bit of work over the last few weeks on dynamic cube mapping and I have been very pleased with the results, but most of my testing has been using curved objects like spheres. I’ve been trying to make a demo with a lake, but flat objects seem to look crap.

I only get a problem if the reflected object is relatively close to the cube-mapped object, compared to the distance to the viewpoint. The reflection of the close object seems to be unrealistically large; in fact, as you zoom in and out, the reflection stays the same size in pixels. Using the same code with a cube-mapped sphere, this doesn't happen.

I read somewhere that cube mapping assumes the reflected objects are an infinite distance from the cube map origin, and in an Alan Watt book that humans are not disturbed by ‘wrong’ reflections on curved objects.

So, after this long-winded explanation, is this something I'm going to have to live with, or am I doing something wrong???

I probably have no idea how to solve your problem, but…

As far as reflections not being the right size go, I noticed this with a sphere.

The dynamic cube maps were generated with the camera at the centre of the sphere, and then the six renders took place. However, the actual reflecting surface was 8 units away from the camera point (the outside of the sphere), so when an object was practically touching the sphere, the reflection was too small, because I wasn't rendering the reflection from the point on the sphere's surface but from inside the sphere.

Hope that makes sense. I haven't found a solution to this yet… or confirmed that it's true, but that's how it looked to me…


I have this problem too, but it is much less noticeable than the problem with the flat surface.

I think it all depends on the way the (s,t,r) coordinates are calculated. I assume they are calculated from the vertex normals AND the position of each vertex, as I think that should be enough to give accurate reflections; but if it is fudged some other way in hardware, maybe there is no fix. If the vertex position were not taken directly into account, a flat surface with all the vertex normals parallel could give odd effects, but I am guessing.

I have just seen that nvidia has a demo (waves) which does the sort of thing I'm looking for (correctly), so I'll work through their code.


This is a fundamental limitation of “environment mapping” in general. A cube map (or sphere map, or dual paraboloid map) captures the environment at a single point in space. This is a pretty good approximation if the environment is “very far away” from the object – relative to the size of the object. It becomes a very bad approximation when things in the environment are very close to the object (again, relative to the size of the object).

Does this make sense?

Thanks -

when you have a completely flat sea, it's simple to do it completely correctly… just look up stencil-buffer mirrors… then you can copy the mirrored render to a texture and use, hm… view-aligned texgen or something like that to remap the reflection onto the same place on screen, and then distort it with a grid which you move around ( that part is done per-vertex… if you want to do it per-pixel you need a gf3… but for per-pixel reflections with dynamic cubemaps you need a gf3 too… currently you can only do per-vertex reflections… )
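The core of the stencil-mirror trick is just rendering the scene reflected across the water plane. A minimal sketch of that reflection (the function name and the plane form a·x+b·y+c·z+d = 0, with (a,b,c) unit length, are my assumptions, not from the post):

```c
/* Sketch of the planar-mirror half of the stencil technique: reflect a
 * point across the plane a*x + b*y + c*z + d = 0.  In practice you would
 * bake this into the modelview matrix before drawing the mirrored pass. */
static void reflect_point(const float plane[4], const float p[3],
                          float out[3])
{
    /* signed distance from p to the plane */
    float dist = plane[0]*p[0] + plane[1]*p[1] + plane[2]*p[2] + plane[3];
    int k;
    for (k = 0; k < 3; ++k)
        out[k] = p[k] - 2.0f * dist * plane[k];
}
```

For a lake at y = 0 the plane is {0,1,0,0}, so (1,2,3) reflects to (1,-2,3); the stencil buffer then clips the mirrored render to the water polygon before it is copied to a texture or drawn directly.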

davepermen, I assume that, when you speak of the difference between “per-pixel” cubemapping and “per-vertex” cubemapping, you are talking about the general problems with interpolating the 3D texture coordinates across the cube map, right? And the particular hardware upgrade in a GF3 that fixes this is the texture shaders, correct?

I just wanted to be clear on this issue, since, from what I can gather, texture mapping itself is purely per-pixel.

no… i mean you can't do reflections on a per-pixel basis, meaning reflecting each pixel off a per-pixel normal ( -map )… now you know what i mean?

and yes, it's interpolated wrongly… doesn't look nice sometimes…

you can fix it with the gf3, yes… but it's the same as with lighting… the gf3 supports phong shading ( well, the effect, not the real thing… damn nvidia… ) and with that you can solve the problem of per-vertex lighting… and you can add bumpmapping at the same time, too…

and it's the same for reflections… you can add “bumped” reflections… sometimes called blinn bump reflections… or environment cube-mapped per-pixel reflection mapping ( on a per-pixel basis etc… i like long names for in fact simple things… )

i can give you some picture links if you want…

you simply can't get real reflections on today's hardware except for planes… so you just choose the best technique for each specific object you want reflections on… ( far away, not colliding → cubemap; flat, huge → stencil buffer + glCopyTexSubImage map, or perhaps some new technique… )

we'll have to wait for a raytracer for real reflections… then we wouldn't have any of these problems at all… but for now no one will take that step… ( it would be great, but it's just not possible today, i think… someday i'll buy a dual-processor board, put two durons overclocked to one gig on it, and try to get a realtime raytracer ( a fast one ) rendering some 3ds meshes… would be great… we will see… ) but for now every reflection is just a simple texture effect… nothing less, nothing more… and texture effects aren't real…

today's gpus lack global effects… that's the whole problem of the rasterizer ideology… every polygon is on its own… that way you can't get real lighting ( i.e. shadows ), and no real reflections…

Ok, on the subject of generating a dynamically rendered texture.

most people seem to:
-resize the window to the size of the texture
-render the image
-use glCopyTexImage2D() or such to copy it to a texture
-resize the window back to its original size

I was thinking of:
-create a second frame buffer (same size as the texture)
-use glReadBuffer() to select it for reading
-render to the second buffer
-use glCopyTexImage2D() to copy it to a texture
-draw the texture to my original frame buffer
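A middle ground that avoids both the window resizing and a second colour buffer is to shrink only the viewport and copy out of the back buffer before clearing it. A non-runnable sketch (it assumes an existing double-buffered GL context, and `tex`, `TEX_W`, `TEX_H`, `win_w`, `win_h` are placeholders of mine, not names from the thread):

```c
/* Sketch: render-to-texture via the back buffer.  tex is an existing
 * TEX_W x TEX_H GL_TEXTURE_2D; the window must be at least that big. */
glViewport(0, 0, TEX_W, TEX_H);            /* shrink the viewport only   */
/* ... render the scene destined for the texture here ... */
glBindTexture(GL_TEXTURE_2D, tex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0,      /* copy back buffer -> tex    */
                    0, 0, 0, 0, TEX_W, TEX_H);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glViewport(0, 0, win_w, win_h);            /* restore, draw the real frame */
```

glCopyTexSubImage2D reads from whatever buffer glReadBuffer() selects, which defaults to GL_BACK on a double-buffered context, so no window resize is needed and nothing ever hits the screen.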

Any comments?

If my second frame buffer has an alpha value in the glClearColor(), does it get sent to the dynamic texture? I.e. can I make the background of the frame buffer a transparent section in my new texture?