Here's the idea: I'm rendering to the stencil buffer, but rather than rendering across the entire screen and burning unnecessary fill rate, I only want to render the bounding boxes of the geometry in the scene that will receive shadows on its surfaces. This suits my application well, because it consists mostly of objects floating in empty space, much like popular 3D mesh editors such as Maya or 3ds Max; the technique might prove convenient for outer-space scenes as well. Anyhow, the idea is to project the three visible planes of each bounding box into raster space and render them with an alpha component and depth testing, so that the alpha components do not overlap. But the projection step could be avoided if I could simply say that every rendered pixel gets some fixed depth, say just outside the near plane, so that overlapping alpha blending is rejected by the depth test. So I'm asking: does this functionality exist, in this form or perhaps some other?
So, in short, I would like to render such that every drawn pixel gets the same predetermined value in the depth buffer, regardless of its actual depth.
If you build your modelview matrix as a camera-and-object composition, then any geometry whose vertices lie in a plane defined by the camera's "into the screen" vector and a fixed offset C will map to the same Z value.
However, I doubt you'll get better results this way than by just drawing a full-screen quad. The stencil test will cull most unneeded fragments, so the framebuffer write rate is lower than if you were drawing everything.
Anyway, that being said, it sounds as if you're doing the Microsoft DirectX trick of drawing one darkening quad over the entire screen after painting the full scene. That doesn't look convincing, in my opinion. Instead, render the scene once per light source and cull the fragments that fall inside that light's shadow volumes. That way, you get a much closer model of how light and shadow actually interact.
I'm afraid I don't follow your suggestion. It sounds as if it involves some form of projection, which I'm already doing successfully; the idea is to skip the projection step. The projection is actually redundant, because it is handled in hardware during the rasterization stage of the OpenGL pipeline.
As for performance, this is much faster than drawing a polygon that must fill the entire screen, which is about the worst-case scenario in the fill department.
It might not be as convincing, but I couldn't imagine redrawing the entire scene per light. Besides, personally I don't see much difference.
And since you mentioned DirectX… I wouldn't touch it with a ten-foot pole.
My regards for your input,
The solution to my dilemma is glDepthRange(). If you set both parameters to the same value, every fragment's depth is clamped to that value, achieving the effect I was looking for.