I was wondering if there is a way, with current hardware (say, my GeForce FX), to actually discard the computation of some pixels during fragment processing.
I'm not even talking about the "discard" instruction, because (from what I've heard) it doesn't actually prevent the fragment computation; it just cancels the result at the end.
Is there, however, a trick to skip fragment shader computation for some pixels?
For example, a first quick pass could "mark" some pixels (z-buffer, alpha, ...?) so they would not be rendered by a second pass.
See my recent post(s) here, but no: on a GeForce FX there's no early-out in the shaders. You have to use the standard mechanisms like depth testing or the alpha test, which may be performed before the shader runs (provided the shader doesn't alter anything relevant, e.g. writing gl_FragDepth rules out early depth testing).
Is there, however, a trick to skip fragment shader computation for some pixels?
For example, a first quick pass could "mark" some pixels (z-buffer, alpha, ...?) so they would not be rendered by a second pass.
You mean like the stencil buffer?
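A minimal sketch of the stencil approach, assuming a GL context with a stencil buffer; the draw* helpers are hypothetical placeholders for your own passes, and whether the stencil test actually runs before the shader on a given chip is up to the hardware/driver:

```c
/* Clear the stencil buffer so every pixel starts unmarked. */
glClearStencil(0);
glClear(GL_STENCIL_BUFFER_BIT);

/* Pass 1: tag the pixels that should be shaded with stencil value 1. */
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
drawMaskPass();          /* hypothetical cheap marking pass */

/* Pass 2: only pixels whose stencil value equals 1 pass the test. */
glStencilFunc(GL_EQUAL, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
drawExpensivePass();     /* hypothetical expensive shader pass */
glDisable(GL_STENCIL_TEST);
```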
Or you can use the depth buffer: in the marking pass, write a depth of 0.0 into the pixels you want to skip, so that a second pass with depthfunc = GL_LESS or GL_LEQUAL rejects them. Of course, writing the depth in the shader will cost you.
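The depth-mask idea could look something like this sketch, assuming a GL context and that the second pass's shader does not touch gl_FragDepth (so early depth testing stays enabled there); the draw* helpers are hypothetical:

```c
/* Pass 1: clear depth to the far plane, then run a cheap shader that
   writes gl_FragDepth = 0.0 for the pixels to be skipped later. */
glClearDepth(1.0);
glClear(GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_ALWAYS);
glDepthMask(GL_TRUE);
drawMaskPass();           /* hypothetical: writes 0.0 into masked pixels */

/* Pass 2: the expensive shader. Fragments over masked pixels fail the
   GL_LESS test against the stored 0.0 and can be culled before shading. */
glDepthFunc(GL_LESS);
glDepthMask(GL_FALSE);    /* keep the mask intact */
drawExpensivePass();      /* hypothetical expensive shader pass */
```

The cost mentioned above falls on pass 1: because its shader writes gl_FragDepth, that pass itself cannot use early depth testing.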