OQ, color+depth+stencil

Looking at the ARB and NV versions, the examples and documents say to optionally turn off the depth and color buffers.

Why?

It seems to me that for the OQ to work, you need the depth buffer in play so that, when you do the OQ, some fragments get rejected by the depth test.

Does this look good?

glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
// (maybe these disables are not necessary)
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);   // disable color writes
glDisable(GL_BLEND);                                   // disable blending
// bind a trivial fragment shader instead of the fancy one

RenderOtherStuff();

glBeginQueryARB(GL_SAMPLES_PASSED_ARB, query);
RenderMyThing();
glEndQueryARB(GL_SAMPLES_PASSED_ARB);

glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
RenderEntireScene();
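
For completeness, here is a sketch of how that query's result could be read back with ARB_occlusion_query (assuming query was created earlier with glGenQueriesARB; RenderMyThingForReal() is a hypothetical helper):

GLuint samples = 0;
GLint available = 0;

/* Poll until the result is ready, to avoid stalling the pipeline. */
do {
    glGetQueryObjectivARB(query, GL_QUERY_RESULT_AVAILABLE_ARB, &available);
} while (!available);
glGetQueryObjectuivARB(query, GL_QUERY_RESULT_ARB, &samples);

if (samples > 0) {
    /* At least one fragment passed the depth test, so the object is potentially visible. */
    RenderMyThingForReal();
}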

I disable color writing because I simply render nothing visible during my OQ first pass.
I also disable depth writing just because (as far as I remember) that's how it's done in the examples. But yes, I would also have thought the depth buffer could help the OQ. But it seems the OQ uses something else, as it has to know how many fragments pass the test for each entity you give it (it might be harder for it to track those per-entity passes and fails, since the depth test is just about pixels, if I'm not wrong). So maybe enabling depth writes would simply slow things down, since the values would have to go through both the 'OQ occlusion test' and the regular depth buffer.

For stencil, there's another topic that talks a bit about that. But I must admit I never really understood why it can help, nor whether it would interfere with any current use of the stencil buffer.

Yes, I think a depth buffer is needed to know how many fragments pass, which means glDepthRange and glDepthFunc affect the behavior.

Secondly, I have two calls to glClear. What if I don't want my window's buffers to be touched when I do the query?

Why doesn’t the API look like this?

glStuffThatWillOcclude(START);
RenderOtherStuff();
glStuffThatWillOcclude(END);


glBeginQueryARB(GL_SAMPLES_PASSED_ARB, query);
RenderMyThing();
glEndQueryARB(GL_SAMPLES_PASSED_ARB);

glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
RenderEntireScene();

Originally posted by V-man:
[b] Looking at the ARB and NV versions, the examples and documents say to optionally turn off the depth and color buffers.

Why?
[/b]
The examples turn off depth and color buffer writes (not testing) during rendering of the objects that are "counted" by the occlusion query, to increase performance. An additional reason might be to prevent modification of both buffers if the content of the color or depth buffer is used later during rendering (e.g. if a depth-only first pass is used as the geometry against which the occlusion is tested).
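
As an illustration of that (a sketch, not code from the thread; RenderBoundingBox() is a hypothetical proxy-geometry helper), the counted pass would keep the depth test enabled while masking off all writes:

/* Depth test stays enabled so occluded fragments are rejected and
   therefore not counted; only the writes are masked off for speed. */
glEnable(GL_DEPTH_TEST);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glDepthMask(GL_FALSE);

glBeginQueryARB(GL_SAMPLES_PASSED_ARB, query);
RenderBoundingBox();
glEndQueryARB(GL_SAMPLES_PASSED_ARB);

/* Restore the write masks for normal rendering. */
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDepthMask(GL_TRUE);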

You can reduce the number of cleared buffers in your example by rendering the occluders into the depth buffer only; that way you need to clear only the depth and stencil buffers.
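
A sketch of that reduced-clear scheme (RenderOccluders() and RenderEntireScene() are assumed helpers):

/* Color is never written during the occluder pass, so only depth
   (and stencil, if used) needs clearing here. */
glClear(GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
RenderOccluders();   /* depth-only pass */

/* ... issue the occlusion queries against this depth buffer ... */

glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glClear(GL_COLOR_BUFFER_BIT);   /* clear color just before the real pass */
RenderEntireScene();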

Originally posted by jide:

But it seems the OQ uses something else, as it has to know how many fragments pass the test for each entity you give it (it might be harder for it to track those per-entity passes and fails, since the depth test is just about pixels, if I'm not wrong).

There is no need for the GPU to care about entities for testing purposes. Imho there is only a counter that is incremented for each fragment that should count in the test, and some way to store its value at precisely defined times. The rest is the driver's work.
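
A loose mental model of that, purely illustrative and not actual driver or hardware code:

/* Hypothetical model: the hardware keeps one running counter of
   fragments that pass the depth test; a query merely snapshots it. */
static unsigned long long samples_passed_counter;
static unsigned long long begin_snapshot;

void begin_query_model(void)
{
    begin_snapshot = samples_passed_counter;
}

unsigned long long end_query_model(void)
{
    /* The per-query result is just the difference of two snapshots,
       so no per-entity bookkeeping is needed on the GPU side. */
    return samples_passed_counter - begin_snapshot;
}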

Originally posted by V-man:

Why doesn’t the API look like this?

The GPU needs to store the depth buffer content somewhere, so there would be hidden allocations behind the glStuffThatWillOcclude() calls. Additionally, it would be less universal than the current solution. You can emulate that behaviour yourself by using FBOs or pbuffers.
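
A rough sketch of the FBO route with EXT_framebuffer_object (fbo, depth_rb, width and height are illustrative names), so the window's own buffers are never modified:

GLuint fbo, depth_rb;

/* Off-screen, depth-only render target for the occlusion pass. */
glGenFramebuffersEXT(1, &fbo);
glGenRenderbuffersEXT(1, &depth_rb);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depth_rb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, width, height);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                             GL_RENDERBUFFER_EXT, depth_rb);
glDrawBuffer(GL_NONE);   /* no color attachment */
glReadBuffer(GL_NONE);

glClear(GL_DEPTH_BUFFER_BIT);
RenderOtherStuff();   /* occluders go into the off-screen depth buffer */

glBeginQueryARB(GL_SAMPLES_PASSED_ARB, query);
RenderMyThing();
glEndQueryARB(GL_SAMPLES_PASSED_ARB);

/* Back to the window; its color/depth/stencil were never touched. */
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);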

So, is it better to write to the depth buffer when doing the OQ first pass? I believed not, simply because after the OQ there are fewer objects to draw, so the depth test has less work to do.

Originally posted by Komat:

You can reduce the number of cleared buffers in your example by rendering the occluders into the depth buffer only; that way you need to clear only the depth and stencil buffers.

So that means rendering the major occluders before starting the OQ passes, if I understood correctly.
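
Putting the thread together, one frame might look roughly like this (a sketch only; RenderMajorOccluders(), RenderBoundingBox(), RenderObject() and num_objects are hypothetical, and queries[] was created with glGenQueriesARB):

glClear(GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

/* 1. Major occluders, depth writes only. */
glEnable(GL_DEPTH_TEST);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glDepthMask(GL_TRUE);
RenderMajorOccluders();

/* 2. Occlusion queries against that depth buffer, all writes off. */
glDepthMask(GL_FALSE);
for (int i = 0; i < num_objects; i++) {
    glBeginQueryARB(GL_SAMPLES_PASSED_ARB, queries[i]);
    RenderBoundingBox(i);
    glEndQueryARB(GL_SAMPLES_PASSED_ARB);
}

/* 3. Real pass: reuse the depth buffer, draw only what passed. */
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDepthMask(GL_TRUE);
glClear(GL_COLOR_BUFFER_BIT);
for (int i = 0; i < num_objects; i++) {
    GLuint samples;
    glGetQueryObjectuivARB(queries[i], GL_QUERY_RESULT_ARB, &samples);
    if (samples > 0)
        RenderObject(i);
}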