# fragment position

Lengyel wrote a paper on soft shadows via penumbra wedges. In this paper he determines whether a fragment is inside a bounding volume, the wedge, by looking at the 4D plane vectors that define the wedge and the fragment's ⟨x, y, z, w=1⟩ position (z supplied via a depth map).

By computing the dot product between each plane vector and the fragment, the fragment is discarded if it lies outside, i.e. if any dot product is negative.
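As a minimal sketch of that test (Python; the wedge planes here are illustrative stand-ins, not taken from the paper):

```python
# A plane is stored as a 4-vector (a, b, c, d), meaning ax + by + cz + d = 0,
# with the normal (a, b, c) pointing toward the inside of the wedge.
def dot4(plane, point):
    return sum(p * q for p, q in zip(plane, point))

def inside_wedge(planes, point):
    # point is (x, y, z, 1); the fragment counts as outside as soon as
    # any plane gives a negative dot product.
    return all(dot4(plane, point) >= 0.0 for plane in planes)

# Illustrative planes bounding the slab 0 <= x <= 1 plus the half-space
# y >= 0; all normals point inward.
planes = [
    (1.0, 0.0, 0.0, 0.0),   # x >= 0
    (-1.0, 0.0, 0.0, 1.0),  # x <= 1
    (0.0, 1.0, 0.0, 0.0),   # y >= 0
]
print(inside_wedge(planes, (0.5, 0.2, 3.0, 1.0)))  # True: inside
print(inside_wedge(planes, (2.0, 0.2, 3.0, 1.0)))  # False: violates x <= 1
```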

My question is now: why is this possible? Wouldn't it be necessary to transform the fragments back to world space, since this is where the plane normals are defined? The position of a fragment in the frame buffer is different from its position in world space, so how is one able to perform this test on the fragment's position? Or do the normals stay invariant when transformed to clip space?

Could someone please explain this to me?

I don’t know how he did it, but you could, for example, project texture coordinates onto a volume and use those in any space to perform your test.

In a vertex shader you can put the world position in a texture coordinate, which gets interpolated across the triangle, giving you the world-space position of each fragment.
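As a sketch of why that works (Python; `interpolate_world_position` is an illustrative helper, not code from the paper):

```python
def interpolate_world_position(v0, v1, v2, bary):
    # bary = (b0, b1, b2) with b0 + b1 + b2 = 1: the fragment's
    # (perspective-corrected) barycentric coordinates in the triangle.
    # Interpolating the per-vertex world positions with these weights is
    # what the hardware does for any varying/texture coordinate, so the
    # fragment receives its own world-space position.
    b0, b1, b2 = bary
    return tuple(b0 * x0 + b1 * x1 + b2 * x2
                 for x0, x1, x2 in zip(v0, v1, v2))

# A fragment halfway weighted toward v0, a quarter toward v1 and v2:
p = interpolate_world_position((0.0, 0.0, 0.0),
                               (4.0, 0.0, 0.0),
                               (0.0, 4.0, 0.0),
                               (0.5, 0.25, 0.25))
print(p)  # (1.0, 1.0, 0.0)
```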

Wouldn't it be necessary to transform the fragments back to world space, since this is where the normals are defined?
If I’m not mistaken, he does all this in screen-space.

This is what he does (from his slides):

Penumbral Wedge Rendering

• What's a viewport-space point in the frame buffer?
– The x and y viewport coordinates are available to fragment programs in the fragment.position register
– We need to read the z coordinate from a depth texture
– The coordinates (x, y, z, 1) give the location of the point already rendered

Bounding plane tests

• In the preceding code, texture[0] is a copy of the depth buffer
• Texture coordinates 0, 1, 2 hold the 4-component plane vectors for the three outside bounding planes
– If the dot product between the surface point and any plane is negative, then the point is outside the half-wedge

I wrote an email to Mr. Lengyel, and this is his response:

Hi –

The way that this is possible is that the bounding planes are transformed
all the way into viewport space.

I actually construct the bounding planes in object space and then multiply
them by the product of the matrices going from object space to world space to camera space,
followed by the projection matrix.

To get to viewport space, you still need to multiply by the matrix
that goes from normalized device coordinates to screen coordinates.
This is simply the transform that maps x from [-1,1] to [0,w], maps y from [-1,1] to
[0,h], and maps z from [-1,1] to [dmin,dmax], where w and h are the width
and height of the viewport, and [dmin,dmax] is the depth range (ordinarily 0
to 1).
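Written out as a 4×4 matrix, that last step looks like this (a sketch in Python, row-major with column vectors; w, h, dmin, dmax as in the email):

```python
def viewport_matrix(w, h, dmin=0.0, dmax=1.0):
    # Maps NDC x from [-1,1] to [0,w], y from [-1,1] to [0,h],
    # z from [-1,1] to [dmin,dmax]; the w component passes through.
    return [
        [w / 2, 0, 0, w / 2],
        [0, h / 2, 0, h / 2],
        [0, 0, (dmax - dmin) / 2, (dmax + dmin) / 2],
        [0, 0, 0, 1],
    ]

def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

M = viewport_matrix(800, 600)
print(mat_vec(M, [-1, -1, -1, 1]))  # near lower-left corner -> (0, 0, dmin)
print(mat_vec(M, [1, 1, 1, 1]))     # far upper-right corner -> (w, h, dmax)
```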

The plane is actually transformed from object space to viewport
space by multiplying it by the inverse transpose of the product of all of
these matrices.

The nice thing is that you don’t need to divide by the
w-coordinate to do the plane tests. So basically, the vertex program only
has to perform a matrix-vector multiplication for each plane, and then the
plane tests can be done in the fragment program with one dot product each.
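The reason this works is the standard rule for transforming planes: if points transform as p' = Mp, planes must transform as L' = (M⁻¹)ᵀL, because then L'·p' = Lᵀ M⁻¹ M p = L·p, so the plane test gives the same value in the new space. A minimal numeric check (Python; the matrix is an arbitrary scale-and-translate example, not one of the actual wedge transforms):

```python
# Points transform as p' = M p; planes as L' = (M^-1)^T L.
# Then L' . p' = L^T M^-1 M p = L . p, so in/out tests survive the transform.

def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

def transpose(m):
    return [[m[j][i] for j in range(4)] for i in range(4)]

def dot4(a, b):
    return sum(x * y for x, y in zip(a, b))

# Example transform: scale by (2, 3, 4), then translate by (5, 6, 7).
M = [[2, 0, 0, 5],
     [0, 3, 0, 6],
     [0, 0, 4, 7],
     [0, 0, 0, 1]]
# Its inverse, written out by hand for this simple scale+translate case.
M_inv = [[0.5, 0,     0,    -2.5],
         [0,   1 / 3, 0,    -2.0],
         [0,   0,     0.25, -1.75],
         [0,   0,     0,     1]]

plane = [1.0, 2.0, -1.0, 4.0]   # arbitrary plane (a, b, c, d)
point = [0.5, -1.0, 2.0, 1.0]   # arbitrary point (x, y, z, 1)

plane_t = mat_vec(transpose(M_inv), plane)  # (M^-1)^T L
point_t = mat_vec(M, point)                 # M p

# Both dot products agree (up to rounding), so the sign test is unchanged.
print(dot4(plane, point), dot4(plane_t, point_t))
```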

I just noticed that I had another email in my inbox from you (I knew I’d
seen the name before). I can’t remember if I had returned an answer to your
questions, however. Go ahead and send them again if you haven't already.