I am implementing some shadow mapping algorithms, all of which need to split the frustum along the z direction. So the first problem I run into is how to calculate the minimum and maximum distance from the light to an object, so that I can split the light's frustum at min + (max - min) / n * i, where i is the index of the i-th split plane and n is the total number of layers.
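For concreteness, the uniform split scheme above can be sketched like this (a minimal, self-contained sketch; real engines often blend this with a logarithmic scheme):

```cpp
#include <vector>

// Uniform cascade splits: split plane i (0..n) lies at
// zMin + (zMax - zMin) * i / n. Entries 0 and n are the
// overall near and far bounds; the interior entries are
// the boundaries between adjacent cascades.
std::vector<float> uniformSplits(float zMin, float zMax, int n) {
    std::vector<float> splits(n + 1);
    for (int i = 0; i <= n; ++i)
        splits[i] = zMin + (zMax - zMin) * static_cast<float>(i) / static_cast<float>(n);
    return splits;
}
```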
One idea is to do this on the CPU: transform every vertex into light space and pick out the minimum and maximum z values. However, the object I am rendering contains several thousand vertices, so I would have to do the matrix multiplication several thousand times, which costs a large amount of time.
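The brute-force CPU approach described above looks roughly like this (the `Vec3`/`Mat4` types and `transformPoint` helper are hypothetical stand-ins for whatever math library you already use):

```cpp
#include <algorithm>
#include <array>
#include <limits>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };
using Mat4 = std::array<float, 16>; // row-major 4x4, last row assumed (0,0,0,1)

// Transform a point by an affine row-major matrix.
Vec3 transformPoint(const Mat4& m, const Vec3& p) {
    return { m[0]*p.x + m[1]*p.y + m[2]*p.z  + m[3],
             m[4]*p.x + m[5]*p.y + m[6]*p.z  + m[7],
             m[8]*p.x + m[9]*p.y + m[10]*p.z + m[11] };
}

// Transform every vertex into light space and track the min/max z.
std::pair<float, float> lightSpaceZRange(const Mat4& lightView,
                                         const std::vector<Vec3>& verts) {
    float zMin =  std::numeric_limits<float>::max();
    float zMax = -std::numeric_limits<float>::max();
    for (const Vec3& v : verts) {
        const float z = transformPoint(lightView, v).z;
        zMin = std::min(zMin, z);
        zMax = std::max(zMax, z);
    }
    return { zMin, zMax };
}
```

Note that a common cheaper variant is to transform only the eight corners of the object's bounding box instead of every vertex; the resulting range is conservative (slightly too large) but costs eight transforms per object rather than thousands.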
Does anybody have better suggestions?
It sounds like Sample Distribution Shadow Maps (SDSM) is what you want. More on this here:
Basically, use the GPU to render a depth map (possibly at reduced resolution), then crunch it on the CPU or on the GPU – whatever ends up being fastest on your target hardware – to get your statistics. In my experience, on older cards (e.g. NV GTX285), readback and a CPU crunch is faster; on newer cards (e.g. GTX480), where readback has been crippled, a GPU crunch is faster. Either way, implement readback plus CPU crunch first, because it's ridiculously easy to do.
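The CPU-crunch step after readback is just a scan over the depth buffer. A minimal sketch, assuming depths normalized to [0, 1] with 1.0 as the clear value (i.e. texels where nothing was rendered):

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// Scan a read-back depth map and return the min/max depths of
// rendered texels. Texels equal to the clear value (1.0) are
// skipped so empty sky does not drag the far bound out.
std::pair<float, float> depthMinMax(const std::vector<float>& depth) {
    float dMin = 1.0f;
    float dMax = 0.0f;
    for (float d : depth) {
        if (d >= 1.0f) continue; // cleared texel, nothing rendered
        dMin = std::min(dMin, d);
        dMax = std::max(dMax, d);
    }
    return { dMin, dMax };
}
```

The same reduction maps straightforwardly to the GPU (e.g. repeated min/max downsampling passes) when readback turns out to be the bottleneck.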
Another approach is to just pick fixed near and far distances that should always work, compute static split distances from them, and use Valient's stable cascaded shadow maps technique to avoid shadow crawling on stationary objects.
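The core of the stabilization idea is snapping each cascade's light-space origin to whole shadow-map texels, so the sampling grid doesn't slide under the scene as the camera moves. A rough sketch (the parameter names are illustrative, not from any particular engine):

```cpp
#include <cmath>
#include <utility>

// Snap a cascade's light-space XY origin to the shadow-map texel grid.
// worldUnitsPerTexel = cascade extent in light space / shadow map resolution.
// Because the origin only ever moves in whole-texel steps, static geometry
// keeps hitting the same texels frame to frame, eliminating edge crawl.
std::pair<float, float> snapToTexelGrid(float lightX, float lightY,
                                        float worldUnitsPerTexel) {
    const float sx = std::floor(lightX / worldUnitsPerTexel) * worldUnitsPerTexel;
    const float sy = std::floor(lightY / worldUnitsPerTexel) * worldUnitsPerTexel;
    return { sx, sy };
}
```

For this to work the cascade's extent must also stay fixed in size (e.g. a bounding sphere of the split rather than a tight box), otherwise worldUnitsPerTexel changes every frame and the snapping buys you nothing.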
Valuable resources and a detailed explanation. Thank you very much.