Line-of-Sight algorithms/methods

I need to add a line-of-sight overlay to our military map application. The general idea is that the user will select a location on the map and an altitude above ground at that location at which point this function will create a line-of-sight shadow map which will transparently overlay the map to indicate which areas of the map do and do not have line-of-sight visibility to the selected point. I have a complete database of terrain height fields for all pertinent areas.

I attempted several searches through the OpenGL advanced section using keywords LOS and Line Of Sight but found no relevant postings.

My general plan is to start by rendering the terrain with the camera at the selected location and altitude, with each triangle of the terrain given a unique color. (Depth buffer updates would be on; lighting, blending and anti-aliasing would be turned off.) As each triangle is rendered, its color ID is added to a list. I would probably render with a 120-degree-wide view frustum and perform 3 passes for a total of 360 degrees. After each pass, I would grab the contents of the color buffer and dig through it one pixel at a time, looking for any color IDs other than the glClearColor. As I find each color ID, I look through the list of triangle color IDs and remove it from the list if it is found. Once all 3 passes have completed, I will have a list of all triangle color IDs which are NOT visible from the viewpoint. Then I render the entire scene from above using only the triangles referenced in that list, and I have an LOS shadow map.
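One detail this scheme depends on is assigning each triangle a color that survives readback exactly. A minimal sketch of the ID-to-color mapping (lighting, blending and anti-aliasing must stay off so these exact byte values reach the framebuffer; the function names are mine, not from any library):

```cpp
#include <cstdint>

// Pack a triangle index into a unique RGB color.  24 bits gives room for
// about 16.7 million triangles; index 0 should be reserved if it collides
// with the glClearColor.
inline void idToColor(uint32_t id, uint8_t& r, uint8_t& g, uint8_t& b) {
    r = (id >> 16) & 0xFF;
    g = (id >> 8) & 0xFF;
    b = id & 0xFF;
}

// Recover the triangle index from a pixel read back from the color buffer.
inline uint32_t colorToId(uint8_t r, uint8_t g, uint8_t b) {
    return (uint32_t(r) << 16) | (uint32_t(g) << 8) | uint32_t(b);
}
```

The round trip must be exact, which also means the framebuffer needs a full 8 bits per channel; a 16-bit color buffer would silently corrupt the IDs.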

This seems to be a functionally workable approach but, as I am hoping to use this real-time (or nearly so), I would like to be sure it is as optimized as possible.

Are there tricks of which I am unaware, perhaps using the stencil buffer, which might speed this process?

Thanks in advance.


I think your best bet might be to go for a stencil-shadow approach. Treat the point from which you want to determine visibility as a point light source. Anything that is in shadow is not visible from that point. The steps would be:

  1. render terrain like normal
  2. render shadowed areas into stencil buffer (there’s lots of info out there about how to do this)
  3. blend a semi-transparent quad over the stenciled pixels using whatever color/highlighting you want to represent the shadows.

I feel that you’ll be disappointed with the performance of your readback idea, and some triangles may rasterize to less than 1 pixel, so you might get holes in your visibility map.

– Zeno

Originally posted by Machiavelli:
I attempted several searches through the OpenGL advanced section using keywords LOS and Line Of Sight but found no relevant postings.

What you are trying to calculate is commonly known as visibility. The algorithm you describe is a variation of the hemi-cube algorithm, but using the full cube (or sphere) instead of a hemi-cube. You should search for keywords such as hemi-cube, radiosity and visibility; that should turn up more detailed papers on the various hemi-cube algorithms.

As Zeno said, this is just a different way of phrasing the standard per-pixel shadowing problem for a point light source. All of the techniques for per-pixel shadowing of a point source would work for this problem, including stencil shadows and shadow maps. Stencil shadows would probably be easier in this case.

One other thing: I would strongly suggest avoiding texture-based solutions to your problem (shadowmapping).

The first reason is that your “visibility origin” is like a point light, which would require six shadow map textures (although you could probably get away with 5, since you won’t need to look up).

The second reason is that you’re doing this on a large terrain database, not a small indoor scene. Shadowmaps will start to exhibit horrible aliasing when you get a good distance away from the point whose visibility you’re interested in.
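The aliasing claim is easy to quantify: for a cube face with a 90-degree field of view and an N-texel-wide map, one texel covers 2·d·tan(45°)/N world units at distance d from the viewpoint. A back-of-the-envelope sketch (the resolutions and distances are hypothetical examples, not anything from the thread):

```cpp
#include <cmath>

// World-space width covered by one shadow-map texel at distance d from the
// viewpoint, for a cube face with a 90-degree field of view and res texels
// across: footprint = 2 * d * tan(fov / 2) / res.
double texelFootprint(double d, int res) {
    const double kHalfFov = 3.14159265358979323846 / 4.0;  // 45 degrees
    return 2.0 * d * std::tan(kHalfFov) / res;
}
```

At 10 km out, a 1024-texel face covers roughly 19.5 m of terrain per texel, so on a large terrain database the visibility boundary turns into coarse blocks long before you run out of map.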

Just a comment…

I have made an implementation of LOS and Area Coverage in Gizmo3D, and the experience from that is that shadow-volume extraction doesn't work in real time for large database sets. You also probably have the problem of LOD levels that are dependent on view position: in most cases you calculate the LOS from a different LOD position than your actual camera position, which means the generated shadow volume and the viewed geometry are not the same, and you get differences in the shadow-volume intersection -> faulty LOS.

There are ways to do correct LOS in real time but not with shadow volumes…

If you do decide to use the readback idea, glReadPixels will be your bottleneck. Search for 'readpixels' in this forum; there are a few posts on how to speed it up. If you are using NVIDIA hardware you can use the pixel data range extension, which will let you perform readbacks asynchronously. On NVIDIA cards you should get about 50 MB/sec.
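To see why readback dominates, here is the arithmetic for the three-pass scheme at the ~50 MB/sec figure quoted above (the window resolution is a hypothetical example):

```cpp
#include <cstdint>

// Bytes transferred by one glReadPixels of an RGBA8 color buffer.
uint64_t readbackBytes(int width, int height) {
    return uint64_t(width) * uint64_t(height) * 4;
}

// Seconds per frame spent on readback alone, for a given number of passes
// at a sustained transfer rate in bytes per second.
double readbackSeconds(int width, int height, int passes, double bytesPerSec) {
    return passes * double(readbackBytes(width, height)) / bytesPerSec;
}
```

At 1024x768, three passes move about 9.4 MB per frame, which is roughly 0.19 s at 50 MB/sec, i.e. about 5 fps before you have rendered or scanned a single pixel.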

Alternatively you could use the NV_occlusion_query extension (which works on ATI cards too). Using this extension you could draw bounding boxes around chunks of your terrain geometry to determine visibility. You could continually subdivide those chunks and repeat the test until your framerate drops to a minimum acceptable level.
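The subdivide-and-retest loop described above can be sketched on the CPU. Here the actual bounding-box draw plus NV_occlusion_query sample count is stood in for by a callback, and the chunk layout and names are hypothetical:

```cpp
#include <functional>
#include <vector>

// Axis-aligned square terrain chunk in 2D map coordinates.
struct Chunk { int x, y, size; };

// Hierarchically refine visibility over the terrain.  anyVisible stands in
// for rendering the chunk's bounding box inside an occlusion query and
// checking whether any samples passed.  Fully occluded chunks are rejected
// whole; chunks that reach minSize and still pass are reported visible.
void refineVisibility(Chunk c, int minSize,
                      const std::function<bool(const Chunk&)>& anyVisible,
                      std::vector<Chunk>& visibleOut) {
    if (!anyVisible(c)) return;          // whole chunk occluded: early out
    if (c.size <= minSize) {             // fine enough: accept this chunk
        visibleOut.push_back(c);
        return;
    }
    int h = c.size / 2;                  // split into four quadrants
    refineVisibility({c.x,     c.y,     h}, minSize, anyVisible, visibleOut);
    refineVisibility({c.x + h, c.y,     h}, minSize, anyVisible, visibleOut);
    refineVisibility({c.x,     c.y + h, h}, minSize, anyVisible, visibleOut);
    refineVisibility({c.x + h, c.y + h, h}, minSize, anyVisible, visibleOut);
}
```

In a real renderer you would issue the queries for all siblings first and read the results afterwards, to avoid stalling the pipeline on each individual query.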

What about a variation of the shadow buffer technique? Possibly combined with a stencil buffer operation for the visibility “cone” (aka frustum)?

Thanks to all for the feedback. I will be looking into your suggestions and writing some prototype code to check out the feasibility of using the stencil shadows. I have some reading to do, though, before I completely understand the process.

Thanks again.

It seems counterintuitive, but a stencil shadow approach will not answer your question. How do you determine whether an object is in shadow or not, and therefore visible? You'd have to draw it and read back from a viewpoint that is (possibly) prone to errors, etc. If you're going to go to that trouble, it's much simpler to draw from the viewer without shadows and see if something is visible. The starting assumption is that something computed on the CPU can be faster than a brute-force approach; after all, the point may be to draw less stuff, not more.

[This message has been edited by dorbie (edited 07-17-2003).]

Some type of beam tracing might do it.

You speed these things up by possibly approximating the test or making it hierarchical.

Compute some rough minimally enclosing volume for the occluders, or some fast voxel or quad-tree grid for the terrain. Test the beam from the eye to the bounds of the object against the structure you create. You can then (optionally) refine the result if you get a visible result, or better, if you get an ambiguous one; early rejection of definitely-visible and definitely-invisible cases should be the goal of a decent scheme, but you'd need min & max enclosing volumes on the occluders and perhaps max & min on the target. You really have so many choices about how you do this, and your decisions might depend on the number of tests, the complexity of the scene and the required accuracy of the results.

Some approaches might project simple bounds to screen space, others stick to 3D. Pick your poison.
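The baseline that any of these hierarchical or beam-traced schemes accelerates is a direct CPU test: march the sight line across the heightfield and compare it against the terrain underneath. A minimal sketch, assuming a simple regular grid and fixed-step sampling (all names are mine):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Regular-grid heightfield: heights[y * width + x].
struct HeightField {
    int width, height;
    std::vector<float> heights;
    float at(int x, int y) const { return heights[y * width + x]; }
};

// True if the target cell is visible from an eye placed eyeAlt above the
// terrain at (eyeX, eyeY).  Steps along the sight line one cell at a time,
// comparing its linearly interpolated height against the terrain under it.
bool lineOfSight(const HeightField& hf, int eyeX, int eyeY, float eyeAlt,
                 int tgtX, int tgtY) {
    float ez = hf.at(eyeX, eyeY) + eyeAlt;
    float tz = hf.at(tgtX, tgtY);
    float dx = float(tgtX - eyeX), dy = float(tgtY - eyeY);
    int steps = int(std::max(std::fabs(dx), std::fabs(dy)));
    for (int i = 1; i < steps; ++i) {
        float t = float(i) / steps;
        int x = int(std::lround(eyeX + t * dx));
        int y = int(std::lround(eyeY + t * dy));
        float rayZ = ez + t * (tz - ez);        // sight line at parameter t
        if (hf.at(x, y) > rayZ) return false;   // terrain blocks the ray
    }
    return true;
}
```

This is O(path length) per target cell, which is exactly why the hierarchy or an enclosing-volume early-out matters when you need a whole shadow map of these tests.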

[This message has been edited by dorbie (edited 07-17-2003).]

After spending some amount of time mulling this over, something occurred to me that sounds like it might be a good solution.

I render using the method I described in the original post, except with glRenderMode(GL_FEEDBACK).

Then I read each element of the feedback buffer, each of which will be a visible triangle (or a portion thereof). I simply use the feedback buffer in place of digging through the color buffer one pixel at a time. The more I think about this, the more I like it. Unless someone points out a critical show-stopper, I will likely make this my first attempt.



The feedback buffer will contain all polys within the viewing frustum, including the occluded ones. It’s also very slow.

If your occluders are just terrain and buildings, take a look at horizon maps or occlusion horizons, they might serve you better.
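Occlusion horizons map naturally onto a heightfield: sweep outward from the eye along each radial line, tracking the steepest elevation angle seen so far; any sample below that running horizon is hidden. A minimal single-ray sketch (the sampling layout and names are hypothetical):

```cpp
#include <cmath>
#include <vector>

// Visibility along one radial line of heightfield samples, using a running
// horizon angle.  samples[i] is the terrain height at distance
// (i + 1) * cellSize from the eye; a sample is visible only if its elevation
// angle from the eye clears the maximum angle of every nearer sample.
std::vector<bool> horizonSweep(const std::vector<float>& samples,
                               float eyeZ, float cellSize) {
    std::vector<bool> visible(samples.size());
    float horizon = -1e30f;                     // lowest possible angle
    for (size_t i = 0; i < samples.size(); ++i) {
        float dist = float(i + 1) * cellSize;
        float angle = std::atan2(samples[i] - eyeZ, dist);
        visible[i] = angle >= horizon;          // clears the running horizon?
        horizon = std::max(horizon, angle);
    }
    return visible;
}
```

Each ray is a single O(n) pass with no per-sample occluder search, which is what makes the horizon family attractive for terrain compared with per-triangle shadow volumes.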



Thanks for the reality check. Sure enough, after checking the Red Book, there it is in black and white: I may still see fragments even if they have failed scissor, alpha, depth, or stencil tests… (3rd Edition, pg. 560)


back to stencil shadows…


Did a quick search/review of Occlusion Horizons. The technique looks like a promising way to speed up the rendering and make this LOS test in real-time a possibility.

Thanks for the tip.


I can give you a hint for what I have done…

Imagine you could render your object view into a depth buffer… and then create a volume out of this depth buffer that adapts to the LOD level you have in your scene. Then you could use the depth volume (which is so much faster and smaller than any volume created from your original terrain geometry; see silhouette rendering) in a simple stencil shadow volume…

Imagine you could have arbitrarily complex scenes, and all the time spent in your algorithm would be rendering the depth buffer from your observer's POV, which is real-time if your app is real-time.

This is one side effect of what I do with the IBR demo in another thread.

Couldn’t resist talking about this… sorry…