Reading the depth buffer into texture memory

I mean the first image linked in the last post before mine.
Is this effect generally caused by hard shadows, since there is no softening?
I’ve read some articles about soft shadows, but if I remember right, they did some arbitrary blurring of the shadow edges that did not try to take into account the distance between the light and the shadow caster or the distance between the shadow caster and the shadowed surface.
The thought is that the blurring would have to be based on 1. the light’s area/volume, 2. the distance between the light and the shadow caster (those two supposedly influence the blurring the most), and 3. the distance between the shadow caster and the shadowed surface, since light crossing a medium will diffuse further.

EDIT: By “jumping out of the plane” I mean the shadow cast onto the larger box. The shadow seems to have depth on its right side.

I think it’s just the shape of the shadow, combined with the fact that the shadow is completely black.

There are several different types of softening. One is to anti-alias edges by comparing adjacent texels in order to avoid the pixellation (the projected shadow pixels may be much larger than a screen pixel); recent hardware may do this automatically for shadow samplers. Another is to vary the shadow intensity based upon the distance between the caster and the receiver and the distance between the caster and the light (this still gives hard edges but the shadow fades with distance). Yet another is to cast multiple shadows to simulate umbra and penumbra regions.

There are probably others. Shadows can’t be done both efficiently and correctly, so there’s a lot of research into getting better approximations with reasonable performance.
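For the first of those approaches, here’s a minimal percentage-closer-filtering sketch in GLSL. The names are placeholders: `shadowMap` is the depth texture bound as a shadow sampler, and `shadowCoord` is the fragment’s position in the light’s clip space, already divided and mapped to [0,1]:

```glsl
// 3x3 percentage-closer filtering: average nine depth-comparison
// results instead of taking a single binary in/out decision.
// A depth bias is omitted for brevity.
uniform sampler2DShadow shadowMap;

float pcf3x3(vec3 shadowCoord) // xy = map coords, z = reference depth
{
    vec2 texelSize = 1.0 / vec2(textureSize(shadowMap, 0));
    float lit = 0.0;
    for (int x = -1; x <= 1; ++x)
        for (int y = -1; y <= 1; ++y)
            lit += texture(shadowMap,
                           vec3(shadowCoord.xy + vec2(x, y) * texelSize,
                                shadowCoord.z));
    return lit / 9.0; // 0 = fully shadowed, 1 = fully lit
}
```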

The low resolution of shadow maps may prevent this, but wouldn’t it work to detect the edges of the shadow by fetching a 3x3 (or, better, larger) matrix of depth values, for simplicity, and determining how far away from the shadow’s edge the current fragment is? This would require knowing the depth value of the current fragment with respect to the light. If no value in the depth matrix is near the fragment’s depth, the fragment is far away from the shadow’s edge, and hence the shadow is at maximum intensity. If at least one value is near, the fragment is close to the edge and the blurring parameters have to be taken into account.
Or something like that. I guess I’ll have a look at this more thoroughly.
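If it helps, here is a rough GLSL sketch of what I have in mind, assuming `depthMap` is the shadow map sampled without the comparison mode and `shadowCoord` is the fragment’s position as seen from the light:

```glsl
// Fetch a 3x3 matrix of stored depths and count how many occlude the
// fragment. 0 or 9 means we are well outside or well inside the
// shadow; anything in between means we are near the edge, where the
// blurring parameters would have to be applied.
uniform sampler2D depthMap;

float edgeFactor(vec3 shadowCoord)
{
    vec2 texelSize = 1.0 / vec2(textureSize(depthMap, 0));
    int occluded = 0;
    for (int x = -1; x <= 1; ++x)
        for (int y = -1; y <= 1; ++y) {
            float stored = texture(depthMap,
                shadowCoord.xy + vec2(x, y) * texelSize).r;
            if (stored < shadowCoord.z)
                ++occluded;
        }
    return float(occluded) / 9.0;
}
```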

Yes, that would make the edges of the shadow fuzzy. But it wouldn’t be correct soft shadows.

Soft shadows happen because lights aren’t point lights. They have area, and thus an occluding surface can partially block a light. However, the size of the shadow’s soft region depends on a number of factors, such as the distance between the surface being potentially shadowed and the potential occluder(s). The farther the point on the surface is from the occluder(s), the “softer” the shadowing will be. Your method doesn’t take that into account; it just samples a 3x3 matrix of pixels. This will cause shadows to be soft even when the distance to the occluder is small.

There’s also the relative size of the light source, as seen from the surface point. Your method comes close to approximating that, but it’s doing it from the wrong end. What you want is to have different occlusions, partially offset from one another, for various points within the light source.
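One established way to approximate this from the right end is the “percentage-closer soft shadows” (PCSS) scheme: first search the shadow map for the average blocker depth, then widen the filter in proportion to the estimated penumbra. A sketch, assuming linear light-space depths and a hypothetical `lightSize` uniform giving the light’s radius in shadow-map UV units:

```glsl
// PCSS-style blocker search: average the depths of the texels that
// occlude the fragment, then estimate the penumbra width by similar
// triangles: penumbra = lightSize * (receiver - blocker) / blocker.
uniform sampler2D depthMap;
uniform float lightSize; // light radius in shadow-map UV units (assumed)

float penumbraRadius(vec3 shadowCoord)
{
    vec2 texelSize = 1.0 / vec2(textureSize(depthMap, 0));
    float blockerSum = 0.0;
    int blockers = 0;
    for (int x = -2; x <= 2; ++x)
        for (int y = -2; y <= 2; ++y) {
            float stored = texture(depthMap,
                shadowCoord.xy + vec2(x, y) * texelSize).r;
            if (stored < shadowCoord.z) {
                blockerSum += stored;
                ++blockers;
            }
        }
    if (blockers == 0)
        return 0.0; // fully lit: no penumbra to filter
    float avgBlocker = blockerSum / float(blockers);
    return lightSize * (shadowCoord.z - avgBlocker) / avgBlocker;
}
```

The returned radius would then drive the width of a PCF loop, so the shadow stays hard where the occluder touches the receiver and softens with distance.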

Right. Properly, there is some kind of blurring whose area is defined by extrapolating the lines from the light’s outer points past the edge of the shadow caster onto the plane being shadowed.
But this area of varying degrees of light occlusion only holds ideally, i.e. for light traveling through a vacuum.
Or isn’t the effect of shadows being softer farther away from the occluder due to the diffusion of light traveling through a medium?

I didn’t mean to imply that those were the only factors in soft shadows. They’re just the two biggest contributors. But things like the diffusion of light through the medium deal with things that impact a lot more than just shadows. That’s starting to solve global illumination.

Which, in more real-time terms, means that this should be covered via the ambient term, or you fake the ambient with a lot of small, weak, non-shadowing lights that you put everywhere. Or some hack of that kind.

Yes, and recent hardware may do this automatically. From §8.22.1:

If the value of TEXTURE_MAG_FILTER is not NEAREST, or the value of TEXTURE_MIN_FILTER is not NEAREST or NEAREST_MIPMAP_NEAREST, then r may be computed by comparing more than one depth texture value to the texture reference value. The details of this are implementation-dependent, but r should be a value in the range [0,1] which is proportional to the number of comparison passes or failures.

Although I suspect that 2x2 might be more likely, as that can re-use the 2x2 gather for bilinear filtering.
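In other words, once the comparison mode and LINEAR filtering are set on the depth texture, a single lookup through a shadow sampler may already return a filtered value in [0,1] rather than a binary one, roughly:

```glsl
// With TEXTURE_COMPARE_MODE = COMPARE_REF_TO_TEXTURE and LINEAR
// filtering, one lookup may average several depth comparisons
// (typically the 2x2 bilinear footprint), per the quoted spec text.
uniform sampler2DShadow shadowMap;

float hardwarePcf(vec3 shadowCoord)
{
    // xy selects the texel, z is the reference depth to compare against
    return texture(shadowMap, shadowCoord);
}
```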

The primary reason for “soft” shadows is that real lights aren’t points, they have finite radius, resulting in umbra (regions of the receiving surface where the entire light source is occluded) and penumbra (regions where only part of the light source is occluded).

An extension of this principle is radiosity. In the presence of light, most surfaces which aren’t either matt black or perfect mirrors can be treated as diffuse light sources. Any light which is approximately omnidirectional will tend to illuminate the surface(s) in the immediate vicinity of the light quite brightly. The effect is to enlarge the light source, which will enlarge the penumbra and shrink the umbra (i.e. make the shadows softer).

Atmospheric diffusion no doubt plays some part, but unless the atmosphere is extremely hazy, it isn’t likely to be particularly significant compared to the above.

I tried to plot the idea here.
Is it actually worth trying to implement real interpolation towards the shadow’s edge based on these parameters?
This would involve, as far as I can tell, fetching an area of depth values at each fragment to see whether it is in the yellow-bordered area. That is what I meant by fetching a matrix. Sorry if I don’t properly understand what has been said previously, but how would the implementation, i.e. the hardware, be able to provide functionality to support this directly? One would need to know the size of one depth texel in the scene to know the distance to the discontinuity of the depth values in world space.

This is not meant to be the only factor accounting for the light’s contribution at the shadow’s (soft) edge.
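Regarding the size of one depth texel in the scene: for a directional light rendered with an orthographic projection I suppose it would be constant, something along these lines (with `frustumWidth` a hypothetical uniform for the width of the light’s frustum in world units):

```glsl
// World-space footprint of one shadow-map texel under an
// orthographic light projection: frustum width / map resolution.
uniform sampler2D depthMap;
uniform float frustumWidth; // light frustum width in world units (assumed)

float texelWorldSize()
{
    return frustumWidth / float(textureSize(depthMap, 0).x);
}
// For a perspective (spot) light the footprint instead grows
// linearly with distance from the light.
```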