- When a polygon is completely submerged in OpenGL’s fog, does this simply give the programmer the opportunity not to send the polygon to OpenGL, or does OpenGL automatically gain efficiency by skipping it?
In other words, is there anything I have to do besides initializing the fog to gain efficiency?
- If a polygon is located off screen, it is pointless to render it. Does OpenGL automatically gain efficiency by not rendering it, or does the programmer have to detect that the polygon is off screen and choose not to render it?
In both cases you should do what Relic suggested in #2: cull geometry yourself using bounding boxes or another approach. For objects that are only partially out of range, you can rely on the far clipping plane.
If you are lazy, you can use display lists to draw your objects; from what I have observed, display lists in NVIDIA drivers include some frustum-culling optimizations.
I’ve observed this on a 500k-polygon model stored entirely in a display list. As I zoomed in, it rendered slower and slower because it occupied more of the screen, but once the object grew larger than the screen it rendered much faster again. So it appeared that the driver had split my object into smaller parts and performed frustum culling for me.
I see; thank you, Relic and k_szczech.
The implementation I’m trying to improve involves terrain: very large sheets of polygons that aren’t highly detailed up close, but which add up to a great many polygons overall. (As you know, I’m sure; just saying.)
The terrain is indeed a display list; let me run the test you mentioned and get back to you. (I happen to have an ATI card.)