Photon maps (like in www.gk.dtu.dk/~hwj by Henrik Wann Jensen) should be possible with a powerful GPU. That would make it possible to render shadows, reflections, refractions, caustics…
Doesn’t photon mapping belong in a retained-mode API? OpenGL is an immediate-mode API. As far as I know, to perform photon mapping you have to know the ENTIRE scene, which is not the case in OpenGL. OpenGL tosses each primitive into the rendering pipeline and remembers nothing about previous primitives, and keeping the whole scene around is exactly what photon mapping requires.
Sure, you can use a photon map, but that has nothing to do with a powerful GPU. You must compute the photon map first, of course, and even a great GPU doesn’t help you compute a caustic. Maybe with programmable vertex transformation you could do refraction through a SINGLE interface layer, render to the framebuffer, and read the result back into a texture, but that wouldn’t work in many situations.
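For what it’s worth, the single-interface refraction mentioned above is just the vector form of Snell’s law, which is the kind of per-vertex math a programmable vertex unit could evaluate. A small sketch in Python (the function name and tuple conventions are mine, not from any API):

```python
import math

def refract(I, N, eta):
    """Refract unit incident direction I at a surface with unit normal N.
    eta is the ratio of indices of refraction n1/n2.
    Returns the refracted direction, or None on total internal reflection."""
    # cosine of the angle between the incoming ray and the surface normal
    cosi = -(I[0] * N[0] + I[1] * N[1] + I[2] * N[2])
    k = 1.0 - eta * eta * (1.0 - cosi * cosi)
    if k < 0.0:
        return None  # total internal reflection: no transmitted ray
    f = eta * cosi - math.sqrt(k)
    return (eta * I[0] + f * N[0],
            eta * I[1] + f * N[1],
            eta * I[2] + f * N[2])
```

A ray hitting the interface head-on passes straight through for any eta, and a shallow ray going from glass to air (eta = 1.5) is totally internally reflected, which is exactly why one interface layer is the easy case: you only ever apply this formula once per vertex.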
But a photon map is really just a light map, and those have been used in OpenGL since the days of Quake. The only difference is that a photon map is, by definition, traced one photon at a time from the light to a surface. That is why it handles caustics well: the many photons fired at an object build up a map on the surface the caustic falls on. The caustic is effectively sampled at the surface, with a density determined by the various refraction effects.
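A minimal sketch (mine, not from Jensen’s implementation) of what “traced one photon at a time” means: fire photons from a point light, find where each one hits a receiving plane, and accumulate the hits into a light-map grid. Texels where photons bunch up are the bright ones; a caustic is just a region of high photon density.

```python
import math
import random

def trace_photons(n=10000, grid=32, seed=1):
    """Fire n photons from a point light at (0, 0, 1) in a cone toward the
    plane z = 0, and accumulate hits into a grid x grid light map covering
    x, y in [-1, 1]. Each cell's count is proportional to its brightness."""
    random.seed(seed)
    lightmap = [[0] * grid for _ in range(grid)]
    for _ in range(n):
        # random direction inside a cone of half-angle 0.4 rad, pointing down
        theta = random.uniform(0.0, 0.4)
        phi = random.uniform(0.0, 2.0 * math.pi)
        dx = math.sin(theta) * math.cos(phi)
        dy = math.sin(theta) * math.sin(phi)
        dz = -math.cos(theta)
        # intersect the ray (0,0,1) + t*(dx,dy,dz) with the plane z = 0
        t = -1.0 / dz
        x, y = dx * t, dy * t
        # deposit the photon into the light-map texel it landed in
        i = int((x + 1.0) / 2.0 * grid)
        j = int((y + 1.0) / 2.0 * grid)
        if 0 <= i < grid and 0 <= j < grid:
            lightmap[j][i] += 1
    return lightmap
```

In a real photon mapper each photon would also be reflected and refracted on its way down, which is what warps the uniform cone into a caustic pattern, but the storage step is the same: the surface just collects photons, exactly like baking a light map.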