Another way to do shadows

This idea occurred to me today. Maybe it’s already been done and I’m reinventing the wheel …

It’s broken, and slow :-).

Consider polygon B. How do you know it doesn’t merely occlude the line of illumination as viewed from the eye, as opposed to actually being intersected by the line of illumination?

[This message has been edited by dorbie (edited 02-25-2003).]

Originally posted by dorbie:
It’s broken, and slow :-).

It will be slow as hell, but we’re exploring possibilities here.

If polygon B occludes the “center area” of the ray, then along those pixels belonging to the line, the depth test will fail. The depth test will only succeed at Polygon A’s pixel (let’s assume precision is not an issue for now).

Total number of depth test passes = 1, so pixel must be lit.

PS: I have another radical idea, but more troublesome to implement. Want to give it a try?

reminds me of a method i wrote a while ago
note it doesn’t give perfect results (or anywhere near), but i was looking at using it to reject some pixels quickly with a raytracer

depth values gained from glReadPixels (once per frame)

A/ get depth of light
B/ get depth of the pixel you want
C/ with a 2d line (eg a Bresenham line) compare the depth values on this line

i did write up an app but IIRC it only got ~10fps with a small window (+ of course didn’t give accurate results, just the early rejects), so i gave up on it
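The three steps above amount to something like this CPU-side sketch (Python, all names hypothetical; the depth grid stands in for the buffer grabbed with glReadPixels, and a smaller stored depth means nearer to the eye):

```python
def bresenham(x0, y0, x1, y1):
    """Integer Bresenham line from (x0, y0) to (x1, y1), inclusive."""
    points = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    x, y = x0, y0
    while True:
        points.append((x, y))
        if (x, y) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x += sx
        if e2 <= dx:
            err += dx
            y += sy
    return points

def maybe_shadowed(depth, pixel, pixel_z, light, light_z):
    """Walk the 2D line from the pixel to the light, interpolating the
    ray's depth, and report True if any buffered depth lies in front of
    the ray.  This is only an early reject: False means 'not occluded by
    anything visible in the depth buffer', not 'definitely lit'."""
    line = bresenham(*pixel, *light)
    n = len(line) - 1
    for i, (x, y) in enumerate(line):
        t = i / n if n else 0.0
        ray_z = pixel_z + t * (light_z - pixel_z)
        if depth[y][x] < ray_z - 1e-6:   # something in front of the ray
            return True
    return False
```

As zed says, this only rejects pixels that are plainly occluded in screen space; it says nothing reliable about the rest.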

It’s basically ray casting only instead of doing the calc yourself you’re “sketching” the results to find the answer.

You’d probably find that you get errors due to inaccuracies in the rendering of the lines.

V-man, I still say it’s broken.

This is so obviously wrong as described that now I’m thinking stage 3 is just misinterpreted.

Please define the test in stage 3 more clearly so I can try to see what you’re proposing. I can think of several interpretations, none of which work well, especially in scenarios with > 2 polygons.

So what is the test in stage 3, exactly?

[This message has been edited by dorbie (edited 02-25-2003).]

You have to understand stage 2, because that is the key to this thing. There are 3 cases for stage 2, so I will explain the case of a ray shooting through Polygon B.

In stage 2, imagine a line getting rasterized from the Polygon A’s pixel towards the light source.

When the first pixel is rasterized, the depth of the line pixel = depth of Polygon A pixel.

COUNTER++; <— this happens as a result, since the depth test passed

Then a series of pixels is rasterized, but in no case does the depth test pass.

As soon as it reaches the point of intersection (Polygon B and the GL_LINE),
the depth of the Polygon B pixel = the depth of the line pixel

COUNTER++; <— again!

In stage 3, I say that if 2 or more depth passes occur, then the pixel is not lit, as is the case of a ray going through some other polygon.

The rasterization of the line could terminate here.
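Stages 2 and 3 together look something like this (a CPU-side Python sketch, not real GL; all names are made up, and the GL_EQUAL depth test is simulated with a small epsilon):

```python
def line_pixels(p0, p1):
    """All integer pixels on the 2D segment (simple DDA; a stand-in
    for the hardware's GL_LINE rasterization)."""
    (x0, y0), (x1, y1) = p0, p1
    steps = max(abs(x1 - x0), abs(y1 - y0), 1)
    return [(round(x0 + (x1 - x0) * i / steps),
             round(y0 + (y1 - y0) * i / steps)) for i in range(steps + 1)]

def counter_test(depth, pixel, pixel_z, light, light_z, eps=1e-4):
    """Stage 2: rasterize the line from the surface pixel toward the
    light, interpolating the ray's depth; every GL_EQUAL pass bumps
    COUNTER.  Stage 3: COUNTER >= 2 means the ray hit another surface,
    so the pixel is shadowed."""
    counter = 0
    pts = line_pixels(pixel, light)
    n = len(pts) - 1
    for i, (x, y) in enumerate(pts):
        ray_z = pixel_z + (light_z - pixel_z) * (i / n if n else 0.0)
        if abs(depth[y][x] - ray_z) < eps:   # GL_EQUAL (precision ignored)
            counter += 1
            if counter >= 2:                 # rasterization can stop here
                return "shadowed"
    return "lit"
```

The first pass always fires at Polygon A’s own pixel, so exactly one pass means lit, and two or more means the ray crossed some other polygon.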

Nothing pisses me off more than precision problems. Grrrr. Perhaps lowering the precision of the depth buffer would help in this case.

As far as I can tell, it’s conceptually flawed.

Suppose these are cubes, 1 meter in size. Suppose the cube on the left is 10 meters into the screen, the cube on the right is 5 meters into the screen, and the light source is 7 meters into the screen. In that case, the light ray should pass BEHIND the rightmost cube. Because the depth complexity of Z-buffers is exactly 1, you cannot possibly store enough information to make this case work.

With the method, as suggested, you will only be able to render “pinboard” scenes, where all polygons extend infinitely far into the screen.
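To put numbers on the counterexample (made-up depths; the buffer only ever stores the cube’s front face at 5 meters, while the cube itself spans 5 to 6):

```python
# The depth buffer holds ONE value per pixel: the nearest front face.
stored_depth = 5.0   # front face of the right cube (cube spans 5..6)
ray_through = 5.5    # this ray really is blocked by the cube
ray_behind = 6.5     # this ray passes behind the cube, unblocked

for ray_z in (ray_through, ray_behind):
    equal_pass = abs(ray_z - stored_depth) < 1e-6   # EQUALS variant
    greater_pass = ray_z > stored_depth             # GREATER variant
    print(ray_z, equal_pass, greater_pass)
```

The EQUALS test misses both rays (neither ever equals 5.0 inside the cube’s footprint), while the GREATER test flags both, wrongly shadowing the ray that passes behind the cube. No per-pixel comparison against a single stored depth can tell the two rays apart.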

Originally posted by jwatte:
With the method, as suggested, you will only be able to render “pinboard” scenes, where all polygons extend infinitely far into the screen.

An excellent point. You only have depth information for the front facing polygons, so you will only have shadows for those polygons, which means half your shadows will be missing.

Yep, it’s clearly broken. Sorry V-man :-)

I don’t think it only renders pinboards (that was my first interpretation, and why I initially thought it was broken). The z EQUALS is the test (let’s assume precision is not the issue). I think it would render pinboards incorrectly, but eye-space occlusion is clearly the big issue: all occluded objects would fail to cast shadows, and the back faces of objects would not cast shadows correctly.

There are several variations on this, but I can’t think of one that works; the pinboard-rendering version is one variation, where you change the test from EQUALS to GREATER.

[This message has been edited by dorbie (edited 02-25-2003).]

Another question: what happens for an object that is not in view, although its shadow is?

Let’s say you have a cube outside the view frustum, casting a shadow on a ground plane just in front of you. The light is also on the other side of the cube, outside the view too.

Get a shadowed pixel from the ground and trace a line to the light source. That line is supposed to intersect the cube occluder, but that intersection is actually outside the viewport. And I don’t think pixels outside the viewport have a Z-buffer value… your intersection counter will be seriously messed up.
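In other words, clipping the line to the viewport silently drops exactly the pixels where the intersection would have been counted. A toy sketch (hypothetical names):

```python
def clip_to_viewport(line_pixels, width, height):
    """Keep only the line pixels that fall inside the viewport; anything
    outside it has no Z-buffer value to test against."""
    return [(x, y) for (x, y) in line_pixels
            if 0 <= x < width and 0 <= y < height]

# Ray from an on-screen ground pixel toward an off-screen light.
# The occluder's pixel at (-2, 4) is clipped away, so its
# intersection can never be counted.
ray = [(1, 4), (0, 4), (-1, 4), (-2, 4), (-3, 4)]
visible = clip_to_viewport(ray, 8, 8)   # only (1, 4) and (0, 4) survive
```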


[This message has been edited by Ysaneya (edited 02-26-2003).]

Ok, it won’t work perfectly because of

  • backfaces being occluded
  • back objects being occluded
  • the intersection point being outside the buffer

I’ll add it to the page later on