Optimization question

Having just learned how to use GLSL, I now have a little understanding of the graphics pipeline. If I understand it correctly, it seems that issuing just one glBegin/glEnd for an entire scene would render the fastest since the fragment shader would only have to compute each pixel once. Is this correct? I realize this probably is not practical in most cases but just wanted to see if I’m understanding the pipeline correctly.

I’ve been issuing a glBegin/glEnd for each polygon and thought that, as my scene becomes more complex, I would be better off minimizing those calls and sending as much geometry through as I could each time.

Thanks,
Reg

You still have some learning to do :slight_smile:

If you care about performance, do not use immediate mode (glBegin/glEnd): each GL call has to be checked by the driver, sent to the card more or less synchronously, etc. Use vertex arrays or vertex buffer objects instead; they are a much more compact way to specify the geometry to draw.
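Roughly, the difference looks like this. This is just a minimal sketch of old-style vertex arrays drawing a single triangle; the function and data names are made up for illustration:

#include <GL/gl.h>

static const GLfloat verts[] = {
     0.0f,  1.0f, 0.0f,
    -1.0f, -1.0f, 0.0f,
     1.0f, -1.0f, 0.0f,
};

void draw_immediate(void)
{
    /* One GL call per vertex: lots of per-call driver overhead. */
    glBegin(GL_TRIANGLES);
    glVertex3f( 0.0f,  1.0f, 0.0f);
    glVertex3f(-1.0f, -1.0f, 0.0f);
    glVertex3f( 1.0f, -1.0f, 0.0f);
    glEnd();
}

void draw_vertex_array(void)
{
    /* The whole batch is described up front and drawn with one call. */
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, verts);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glDisableClientState(GL_VERTEX_ARRAY);
}

With a VBO you would additionally put the vertex data into a buffer object on the card, but the idea is the same: one draw call for many vertices instead of one call per vertex.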

it seems that issuing just one glBegin/glEnd for an entire scene would render the fastest since the fragment shader would only have to compute each pixel once.

Wrong, glBegin/glEnd has nothing to do with the fragment shader. The fragment shader executes for each fragment of each polygon. If you draw N polygons on top of one another, you multiply the number of fragments to evaluate by N (unless some hardware optimization can shortcut it, for example early Z reject under some conditions).

The reason I thought this would be the case is that if you draw your first polygon, it will go down the pipeline and result in a pixel being shaded by the fragment shader. If you then draw another polygon that obscures the first, the fragment shader replaces the pixel with the new value. From what I understood, the part of the pipeline between the vertex and fragment shaders would throw out any polygons that would not end up being drawn so the fragment shader wouldn’t waste any time rendering them. It seemed that by sending as many polygons as you could, you would eliminate sending the hidden ones to the fragment shader. Is this not the case or am I missing something else?

Also, I’m aware of VBOs but haven’t started learning how to do those yet.

“From what I understood, the part of the pipeline between the vertex and fragment shaders would throw out any polygons that would not end up being drawn”

Unfortunately, most of the time this is not the case. Tricks such as early Z can happen if your polygons are drawn from front to back, without alpha testing, with a shader that does not touch the Z value of fragments, and if the video card is in a good mood (ok, I exaggerate a bit for the last one).

And anyway, such optimizations are completely unrelated to where you put your glBegin/glEnd.

Ok, I thought that the glEnd somehow signaled the pipeline to begin processing of the vertices which had been sent.

I looked back over the GLSL tutorial and the pipeline overview, and I may have read more into it than was there. Are all depth tests performed after the fragment shader? The tutorial listed back-face and view frustum culling as functions of the primitive assembly stage, and I think I read depth-based culling into that as well.

No. Like ZbuffeR said, it’s possible for an early Z test to be performed if you follow a small set of guidelines. In that case the depth test happens before the actual fragment shading, so the fragment shader for a pixel does not get executed if its depth test fails.
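As a rough sketch, those guidelines boil down to something like this on the application side. Early Z itself is a hardware/driver optimization and not something you switch on explicitly; the calls below are standard GL, but the overall setup is only illustrative:

#include <GL/gl.h>

void setup_depth_state(void)
{
    /* Ordinary depth testing; if you then draw roughly front to back,
     * hardware that supports early Z can reject occluded fragments
     * before the fragment shader runs. */
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LESS);
    glDepthMask(GL_TRUE);

    /* Alpha test (and a fragment shader that writes gl_FragDepth)
     * typically forces the depth test back to after shading. */
    glDisable(GL_ALPHA_TEST);
}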

I see. Thanks for the info.

Some more info here. See the “depth in depth” paper.