And on those last two notes, to make maximum use of the GPU’s ability to “pre-reject” triangles and fragments early (a.k.a. ZCULL and EarlyZ):
[ol]
[li]Clear the depth+stencil buffer[/li]
[li]Don’t change the depth value in your fragment shader[/li]
[li]Don’t change the direction of the depth test while writing depth[/li]
[li]Don’t enable stencil writes when doing stencil testing[/li]
[li]Don’t render to a 2D texture array (??)[/li]
[li]Write the depth buffer with the same test direction as is used for testing[/li]
[li]Don’t render a lot of little features[/li]
[li]Don’t allocate too many depth buffers[/li]
[li]Don’t use 32F depth buffers[/li]
[li]Don’t reference gl_FragCoord.z in your fragment shader[/li]
[li]Don’t enable depth or stencil writes, or occlusion queries, while you also:
[list=1]
[*]use alpha test, or
[*]call discard, or
[*]use alpha-to-coverage, or
[*]use a SAMPLE_MASK != 0xFFFFFFFF
[/list][/li]
[li]If you can, try to render polygons in a roughly front-to-back order.[/li]
[/ol]
(Blatantly ripped from the NVIDIA GPU Programming Guide.)
Also, if your fragment shading is “expensive”, consider doing a “depth pre-pass” that writes only the depth buffer, then re-render for shading with a depth func of GL_EQUAL. That way you don’t pay to shade any fragments that end up hidden. The first pass can also run at “double speed” if you follow the rules above, so it’s not as expensive as you might think.
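The GL state sequence for that looks roughly like this. This is just a sketch of the two passes, not a complete program; bind_depth_only_shader(), bind_full_shading_shader(), and draw_scene_geometry() are hypothetical stand-ins for your own shader/draw code:

```c
/* Pass 1: lay down depth only. Color writes off, and the bound
   fragment shader should be trivial (no discard, no depth writes
   from the shader) so the early-Z "double speed" rules still hold. */
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);
glDepthMask(GL_TRUE);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
bind_depth_only_shader();   /* hypothetical helper */
draw_scene_geometry();      /* hypothetical helper */

/* Pass 2: shade only what's visible. EQUAL test, depth writes off;
   the only fragments that survive are the ones already nearest. */
glDepthFunc(GL_EQUAL);
glDepthMask(GL_FALSE);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
bind_full_shading_shader(); /* hypothetical helper */
draw_scene_geometry();
```

Note that both passes must submit exactly the same geometry with exactly the same transforms, or the GL_EQUAL test will drop fragments whose depth doesn’t match bit-for-bit.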