Is there a way to actually discard fragment computation?

Hello,

I was wondering if there is a way with current hardware (let's say, my GeForce FX) to actually skip the computation of some pixels during the fragment stage?
I'm not even talking about the "discard" instruction, because (from what I've heard) it doesn't actually prevent the fragment computation, it just cancels the result at the end.

Is there, however, a trick to skip the fragment shader computation for some pixels?
For example, a first quick pass could "mark" (z-buffer, alpha, …?) some pixels so they would not be rendered by a second pass.

Do you know if this is possible?

See my recent post(s) here, but no, with a GeForce FX there's no early out in the shaders. You have to use the normal stuff like depth testing/alpha testing, which may be performed before the shader (if the shader doesn't alter anything relevant; e.g. writing gl_FragDepth rules out early depth testing).
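Something like this for the expensive pass (just a sketch, untested; whether the rejection actually happens before the shader depends on the driver/hardware):

```c
/* Standard depth test setup for the expensive pass: fragments that fail
 * the test may be rejected before the fragment shader runs, but only if
 * that shader does not write gl_FragDepth. */
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
```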

> Is there, however, a trick to skip the fragment shader computation for some pixels?
> For example, a first quick pass could "mark" (z-buffer, alpha, …?) some pixels so they would not be rendered by a second pass.
You mean like the stencil buffer?
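Roughly like this, if I understand the mark-then-skip idea (untested sketch; `drawMarkerGeometry` and `drawFullScreenQuad` are hypothetical helpers, and the framebuffer needs a stencil attachment):

```c
/* Pass 1: cheap pass that writes 1 into the stencil buffer for the pixels
 * that should be processed later; color writes are disabled. */
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
drawMarkerGeometry();    /* hypothetical helper */

/* Pass 2: the expensive shader only runs where the stencil value equals 1. */
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glStencilFunc(GL_EQUAL, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
drawFullScreenQuad();    /* hypothetical helper */
```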

Or you can use the depth buffer by setting it to 0.0 in the shader, if you have depthfunc = GL_LESS or GL_LEQUAL. Of course, this will cost you.
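Roughly like this (untested sketch; the cost comes partly from the fact that writing gl_FragDepth in the marking shader rules out early depth testing for that pass):

```c
/* Hypothetical GLSL for the cheap marking pass: force the depth buffer
 * to 0.0 where the pixel should be masked. */
const char *markFragSrc =
    "void main() {\n"
    "    gl_FragDepth = 0.0; /* mask this pixel for later passes */\n"
    "}\n";

/* Second (expensive) pass: masked pixels fail the depth test,
 * since no fragment depth is less than 0.0. */
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);
```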

All right, I'll try with a depth test, hoping that my GeForce will actually perform the test before proceeding to the fragment computation.
