Admittedly, this is not directly a GL question, but I guess the advanced forum gathers the smartest people…

Well, what I need is to determine the footprint of a pixel in the world at a certain distance from the viewer. That is, I want to determine how large a pixel is at this distance (in world or object coordinates). I need its side length(s) and could also use the area.

I tried to just compute the inverse of the matrix chain:

X=(V*P*C*O)^-1

V viewport-transform

P projection

C camera transform (world to eye)

O object transform (object to world)

and then just multiply (1,0,0,0) and (0,1,0,0) (“one pixel”) by it. This didn’t work out as soon as a scaling was involved in either C or O.
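The failure is reproducible numerically: a vector with w = 0 bypasses the perspective divide entirely, so the length of its transformed image is not the object-space pixel step. Below is a pure-Python sketch of this; the GL-style perspective/viewport constructions, the uniform scale of 2 in C, and all helper names are my assumptions, not from the setup above:

```python
import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def mat_vec(A, v):
    return [sum(A[i][k] * v[k] for k in range(4)) for i in range(4)]

def mat_inv(M):
    # Gauss-Jordan elimination with partial pivoting on [M | I]
    a = [row[:] + [float(i == j) for j in range(4)] for i, row in enumerate(M)]
    for c in range(4):
        p = max(range(c, 4), key=lambda r: abs(a[r][c]))
        a[c], a[p] = a[p], a[c]
        piv = a[c][c]
        a[c] = [x / piv for x in a[c]]
        for r in range(4):
            if r != c:
                f = a[r][c]
                a[r] = [x - f * y for x, y in zip(a[r], a[c])]
    return [row[4:] for row in a]

# assumed setup: 90-degree GL-style perspective, 512x512 window,
# and a camera matrix C carrying a uniform scale of 2
fovy, near, far = math.pi / 2.0, 0.1, 100.0
f = 1.0 / math.tan(fovy / 2.0)
P = [[f, 0.0, 0.0, 0.0],
     [0.0, f, 0.0, 0.0],
     [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
     [0.0, 0.0, -1.0, 0.0]]
V = [[256.0, 0.0, 0.0, 256.0],   # NDC [-1,1] -> window [0,512]
     [0.0, 256.0, 0.0, 256.0],
     [0.0, 0.0, 0.5, 0.5],       # NDC z [-1,1] -> depth [0,1]
     [0.0, 0.0, 0.0, 1.0]]
C = [[2.0, 0.0, 0.0, 0.0],
     [0.0, 2.0, 0.0, 0.0],
     [0.0, 0.0, 2.0, 0.0],
     [0.0, 0.0, 0.0, 1.0]]

X = mat_inv(mat_mul(mat_mul(V, P), C))

# "one pixel" fed through X as a direction (w = 0) -- the attempt above:
d = mat_vec(X, [1.0, 0.0, 0.0, 0.0])
naive = math.hypot(d[0], d[1], d[2])

# the same pixel step from two window-space POINTS on the near plane,
# each divided by its w after the transform:
def unproject(px, py):
    x, y, z, w = mat_vec(X, [px, py, 0.0, 1.0])
    return (x / w, y / w, z / w)

true_step = math.dist(unproject(0.0, 0.0), unproject(1.0, 0.0))
print(naive, true_step)  # the two values disagree once C carries a scale
```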

I ended up with this method:

1. Using X, project the three points P1(0,0,0,1), P2(1,0,0,1) and P3(0,1,0,1) into object space and divide them by w.

2. Calculate dx = P2 - P1 and dy = P3 - P1.

3. Determine the lengths ldx of dx and ldy of dy.

This gives us the side lengths of a pixel in object coordinates, placed at the near plane.

4. Divide ldx and ldy by the near-plane distance.

Now ldx and ldy act as factors which I can simply multiply by some distance to get the pixel size at that distance.
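The four steps above can be sketched in plain Python; the 4×4 helpers, the GL-style perspective and viewport constructions, and all function names are my assumptions, not part of the original method. One caveat I'd note in code: with a scale in C or O, the near-plane distance in step 4 has to be measured in the same (object) space as ldx and ldy, otherwise the factors come out wrong.

```python
import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def mat_vec(A, v):
    return [sum(A[i][k] * v[k] for k in range(4)) for i in range(4)]

def mat_inv(M):
    # Gauss-Jordan elimination with partial pivoting on [M | I]
    a = [row[:] + [float(i == j) for j in range(4)] for i, row in enumerate(M)]
    for c in range(4):
        p = max(range(c, 4), key=lambda r: abs(a[r][c]))
        a[c], a[p] = a[p], a[c]
        piv = a[c][c]
        a[c] = [x / piv for x in a[c]]
        for r in range(4):
            if r != c:
                f = a[r][c]
                a[r] = [x - f * y for x, y in zip(a[r], a[c])]
    return [row[4:] for row in a]

def perspective(fovy, aspect, near, far):
    # GL-style symmetric perspective frustum (assumption)
    f = 1.0 / math.tan(fovy / 2.0)
    return [[f / aspect, 0.0, 0.0, 0.0],
            [0.0, f, 0.0, 0.0],
            [0.0, 0.0, (far + near) / (near - far),
             2.0 * far * near / (near - far)],
            [0.0, 0.0, -1.0, 0.0]]

def viewport(width, height):
    # NDC [-1,1]^2 -> window [0,w]x[0,h], NDC z [-1,1] -> [0,1] (assumption)
    return [[width / 2.0, 0.0, 0.0, width / 2.0],
            [0.0, height / 2.0, 0.0, height / 2.0],
            [0.0, 0.0, 0.5, 0.5],
            [0.0, 0.0, 0.0, 1.0]]

def pixel_footprint_factors(V, P, C, O, near):
    """Steps 1-4: per-unit-distance pixel side lengths in object space.

    `near` must be the near-plane distance measured in the SAME space as
    the unprojected points; with a scale in C or O, rescale it first.
    """
    X = mat_inv(mat_mul(mat_mul(V, P), mat_mul(C, O)))

    def unproject(px, py):                           # step 1: window -> object, /w
        x, y, z, w = mat_vec(X, [px, py, 0.0, 1.0])  # window z = 0 -> near plane
        return (x / w, y / w, z / w)

    p1, p2, p3 = unproject(0.0, 0.0), unproject(1.0, 0.0), unproject(0.0, 1.0)
    ldx = math.dist(p1, p2)           # steps 2+3: pixel width at the near plane
    ldy = math.dist(p1, p3)           #            pixel height at the near plane
    return ldx / near, ldy / near     # step 4: per-unit-distance factors

# example: 90-degree perspective, 512x512 viewport, near plane at 0.1
identity = [[float(i == j) for j in range(4)] for i in range(4)]
fx, fy = pixel_footprint_factors(viewport(512.0, 512.0),
                                 perspective(math.pi / 2.0, 1.0, 0.1, 100.0),
                                 identity, identity, 0.1)
# fx * d (and fy * d) is then the pixel footprint at distance d
```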

Is there a more elegant way to achieve the same? In particular, I’d like to know a method that is completely agnostic of the projection type and can handle any kind of scaling…

Thanks in advance!