I’m still trying to wrap my head around this OpenGL thing, teaching myself as I go. One thing that’s been bugging me is the number of polygons some programs seem to be able to render in a small amount of time.
I get backface culling and clipping to the view frustum, but the thing that bothers me about the latter is that all the books I have read on the subject suggest there is some kind of far clip plane that drops everything beyond it.
This makes perfect sense to me, but some real-time worlds seem to have an incredibly distant back plane. Is that normal, pushing the far end of the viewing frustum waaay out there? Or do most programs not use one at all? I haven’t noticed objects snapping into existence in the distance in modern games.
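As I understand it, the far plane plus a per-object distance test is the cheap win here: skip a whole object before the hardware ever sees its polygons. A minimal sketch of what I mean, assuming eye-space coordinates and a made-up helper name:

```c
/* Hypothetical helper: decide whether an object can be skipped entirely
 * because its bounding sphere lies wholly beyond the far clip plane.
 * eye_z is the object's center in eye space (negative z points into the
 * scene, OpenGL convention); z_far is the far-plane distance you would
 * pass to gluPerspective or glFrustum. */
int beyond_far_plane(double eye_z, double radius, double z_far)
{
    /* Depth in front of the camera is -eye_z; cull the object only
     * when even its nearest point is farther away than z_far. */
    return (-eye_z - radius) > z_far;
}
```

So an object centered 1500 units out with a radius of 10 would be dropped against a 1000-unit far plane, while one at 900 units would survive.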
One thing I’ve seen mentioned is multiple models of increasing detail. Basically mipmaps for models instead of textures. Does OpenGL have native support for this technique? (My books seem recent, up to 1.2, and I don’t see it, but…) Is this technique often used?
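From what I can tell, OpenGL 1.2 has no built-in geometry LOD (mipmapping applies only to textures), so the application has to pick the model itself each frame. Something as simple as this is what I imagine, with the distance thresholds being made-up example values:

```c
/* Hypothetical distance-based LOD pick: return an index into an array
 * of progressively coarser versions of the same model. The cutoff
 * distances here are invented for illustration. */
int select_lod(double distance)
{
    if (distance < 50.0)  return 0;  /* full-detail mesh */
    if (distance < 200.0) return 1;  /* medium mesh */
    return 2;                        /* coarse mesh */
}
```

The obvious catch is the visible "pop" when an object crosses a threshold, which I assume is why the cutoffs get tuned per model.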
I put a lot of thought into the problem of simplifying the scene before sending it to the hardware. One thing that seems promising is collapsing a distant complex model into a very simple one: just one or two polygons covering the object's on-screen footprint as the camera sees it. At that distance, you wouldn’t expect the view of it to change much between frames. How much payoff could I really expect from doing that?