Simplifying distant objects

I’m still trying to wrap my head around this OpenGL thing, teaching myself as I go. One thing that’s been bugging me is the number of polygons some programs seem to be able to render in a small amount of time.

I get backface culling and clipping to the view frustum, but the thing that bothers me about the latter is that all the books I’ve read on the subject suggest there is some kind of far clip plane that drops all objects beyond it.

This makes perfect sense to me, but some real-time worlds seem to have an incredibly distant back plane. Is that normal, putting the far end of the viewing frustum waaay out there? Or do most programs not use one at all? I haven’t noticed objects snapping into existence in the distance in modern games.

One thing I’ve seen mentioned is multiple models of increasing detail. Basically mipmaps for models instead of textures. Does OpenGL have native support for this technique? (My books seem recent, up to 1.2, and I don’t see it, but…) Is this technique often used?

I put a lot of thought into the problem of simplifying a scene before sending it to the hardware. One approach that seems to solve this is collapsing a distant complex model into a very simple one - just one or two polygons covering the object as the camera sees it. At distance, you wouldn’t expect that to change much between frames. How much payoff could I really expect from doing that?

First of all I don’t do games, but I can answer most of the questions.

A very high zFar/zNear ratio kills your z-buffer precision and gives you awful z-buffer bleeding effects.
One way to cope with this is to keep the ratio small, which is best done by pushing zNear out.
To give the impression of depth you could add a good backdrop plane with the “view to infinity” on the texture. As long as no object is there it’s visually ok. Look for tutorials on sky domes and billboarding.
The z-buffer problem can also be solved by rendering your scene in multiple depth partitions; drawing from front to back also gives you less overdraw (more z tests fail early).

Simplifying models is a very common method. You can store multiple LODs of your model and choose one based on distance. But this gives you polygon popping effects, as the models have different resolutions and cannot be interpolated easily.
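The distance-based pick can be as small as this (thresholds and names are made up for illustration):

```c
#include <stddef.h>

/* Choose a LOD index from the camera distance. thresholds[] holds
 * the switch distances in ascending order; index 0 is full detail,
 * nlods-1 is the coarsest model. */
size_t pick_lod(double dist, const double *thresholds, size_t nlods)
{
    size_t lod = 0;
    while (lod + 1 < nlods && dist > thresholds[lod])
        ++lod;
    return lod;
}
```

In practice you want some hysteresis (switch to the coarse model a bit farther out than you switch back to the fine one) so an object hovering near a threshold doesn’t flicker between LODs.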

Another method, called Multi Resolution Meshes (MRM), from Intel (look on their Intel Architecture Labs site), does the simplification dynamically with one full-resolution model plus a list of vertices to drop for lower-detail versions. This allows dynamic adaptation to both size on screen and the speed of the graphics accelerator.
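The core trick can be sketched like this (my own simplified take, not Intel’s actual API): order the vertices so that each one collapses onto an earlier one, then chase any out-of-range triangle index down into the currently active set.

```c
/* Progressive-mesh style index remap: vertex i collapses onto
 * collapse_to[i], where collapse_to[i] < i. To render with only
 * nverts active vertices, walk each triangle index down until it
 * lands inside the active range. Triangles whose three remapped
 * indices are no longer distinct have degenerated and get skipped. */
unsigned remap_index(unsigned idx, const unsigned *collapse_to,
                     unsigned nverts)
{
    while (idx >= nverts)
        idx = collapse_to[idx];
    return idx;
}
```

Raising or lowering nverts per frame then smoothly adapts the model to its size on screen.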

Look for dynamic terrain models, too.

Another method to give the impression of distance is fog. Models don’t appear at the zFar plane all at once but fade into the scene.
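Linear fog is the easiest to reason about. The GL setup is glFogi(GL_FOG_MODE, GL_LINEAR) plus GL_FOG_START/GL_FOG_END; the standalone function below is my own illustration of the factor the pipeline computes.

```c
/* GL_LINEAR fog blend factor: 1.0 = no fog, 0.0 = fully fogged.
 * Set the fog end at or before zFar so objects are invisible
 * before they get clipped, instead of popping out of existence. */
double fog_factor(double z, double start, double end)
{
    double f = (end - z) / (end - start);
    return f < 0.0 ? 0.0 : (f > 1.0 ? 1.0 : f);
}
```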

And the replacement of complex models with a billboard is still a nice optimization.
The gain depends on the state changes required to render the billboard and the overhead of orienting the quad. Sort by texture if you use different animation phases.
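One cheap way to do the orienting (a common trick, sketched here with my own names): read the camera’s right and up axes straight out of the column-major modelview matrix and build the corners from them, so the quad always faces the viewer.

```c
/* Build the four corners of a screen-aligned billboard. In a
 * column-major modelview matrix the camera's world-space right and
 * up axes are the first two rows of the rotation part. */
void billboard_corners(const float m[16], const float center[3],
                       float halfsize, float out[4][3])
{
    const float r[3] = { m[0], m[4], m[8] };  /* camera right */
    const float u[3] = { m[1], m[5], m[9] };  /* camera up    */
    for (int i = 0; i < 3; ++i) {
        out[0][i] = center[i] + (-r[i] - u[i]) * halfsize;
        out[1][i] = center[i] + ( r[i] - u[i]) * halfsize;
        out[2][i] = center[i] + ( r[i] + u[i]) * halfsize;
        out[3][i] = center[i] + (-r[i] + u[i]) * halfsize;
    }
}
```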

[This message has been edited by Relic (edited 08-03-2000).]

>First of all I don’t do games,

just women, huh?

>but I can answer most of the questions.
No doubt about that! You’re pretty good at this stuff!

>Look for tutorials on sky domes and billboarding.
I know flipcode has one in their 3dfx tutorials!

Thanks for the helpful answer. It’s actually not a game so much as a walkthrough that I’m in the planning stages of, and yes, I did notice the model popping with the non-OpenGL design I currently have.

What exactly are the visual effects of z-buffer “bleeding”? Does this mean the possibility of a number underflow such that the wrong pixels get rendered when two polygons are too close? I do have that problem and thought I’d have to live with it to get a long field of view.

I did toy with the possibility of rendering the distance first in one frustum, then the near frustum, but the mechanics of dealing with polygons straddling the near/far boundary gave me a headache. I tried overlapping the frustums for a little while, then wrote it off for future experimentation. Would you happen to know of a page online where I can get an idea of a plan that works (just to avoid opening that can of worms again)? Fogging works great as a temporary fix for now, but it’s probably better suited to moody games than a virtual tour. (Who wants to take a tour on a foggy day? I’ve been getting away with it so far because Hamilton is a steel town and I say it’s pollution.)

Those MRMs look great. The page doesn’t make clear whether I’m allowed to use that kind of idea in my engine. I can already get an idea of how it’s going to work. Is it proprietary?

And finally the sky domes - that flipcode article was awesome. What I was doing was completely amateur! Ah well, you live and learn.

>> What exactly are the visual effects of z-buffer “bleeding”? Does this mean the possibility of a number underflow such that the wrong pixels get rendered when two polygons are too close? <<

For example, with a 16-bit depth buffer and your zFar/zNear at something like 100,000/1.0, there isn’t enough depth buffer precision to give a polygon at every whole z distance its own value, so some of them bleed through each other.
And to make the effect even worse, most of the precision sits near the front of the viewing frustum and very little at the far end, because depth is stored as a function of 1/z after the perspective divide.
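To put a number on how lopsided that distribution is (the helper function is my own): with zNear = 1 and zFar = 100000, more than half of the entire depth range is spent on eye-space z between 1 and 2.

```c
/* Window-space depth in [0,1] for a standard perspective
 * projection: zw = (1/n - 1/z) / (1/n - 1/f). */
double window_depth(double z, double n, double f)
{
    return (1.0 / n - 1.0 / z) / (1.0 / n - 1.0 / f);
}
```

window_depth(2.0, 1.0, 100000.0) is already above 0.5, and by z = 1000 more than 99.9% of the range is used up - everything from 1000 out to 100000 has to share what’s left.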

This is the reason something like the w-buffer came up, to get a more linear distribution. (Look in the MSDN Platform SDK under Direct3D for more info.)
I’m not sure if there is an OpenGL extension for it yet; at least I didn’t find one in SGI’s extension list.
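For comparison, a w-buffer stores depth linearly in eye-space z, so precision is spread evenly along the frustum (a sketch of the idea with my own names, not a real API):

```c
/* w-buffer style 16-bit depth: linear in eye-space z, so every
 * depth step covers the same distance regardless of how far away
 * the surface is. */
unsigned wbuffer16(double z, double zNear, double zFar)
{
    return (unsigned)((z - zNear) / (zFar - zNear) * 65535.0);
}
```

With zNear = 1 and zFar = 100000, two distant surfaces at 50000 and 60000 units that collapse onto one value in a perspective z-buffer get clearly distinct values here; the tradeoff is uniform, and therefore coarser, precision up close.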

I read a SIGGRAPH paper last year on inverting the z-depth yourself by sending 1/z values and changing the depth test. Just an idea.

Unfortunately I haven’t got a paper on rendering partitioned z-areas.

[This message has been edited by Relic (edited 08-06-2000).]