Inconsistent Occlusion Queries


I am trying to implement a little lens-flare effect. I use occlusion queries to check the occlusion of the sun. For that I first render a sprite into the sky with the depth function set to GL_ALWAYS, and immediately after that the same sprite with GL_LESS.

Color and depth writes are disabled.

In the next frame I retrieve the two query results and compute an occlusion factor.
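The factor computation is not shown in the post; a minimal sketch of how the two results could be combined might look like this (the function name, the clamping, and the zero check are my own choices, not from the original code):

```cpp
#include <algorithm>

// Sketch: derive a [0,1] occlusion factor from the two query results.
// samplesLess:   samples passed with depth func GL_LESS (occludable pass)
// samplesAlways: samples passed with depth func GL_ALWAYS (reference pass)
double occlusionFactor(unsigned samplesLess, unsigned samplesAlways)
{
    if (samplesAlways == 0)
        return 0.0; // sprite fully off-screen or clipped: treat as occluded
    double f = static_cast<double>(samplesLess) / samplesAlways;
    // Clamp, since (as seen later in this thread) a buggy driver can
    // report more GL_LESS samples than GL_ALWAYS samples.
    return std::min(1.0, std::max(0.0, f));
}
```

A factor of 1.0 then means the sprite is fully visible, 0.0 fully occluded.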

The problem is that even when I stand still and nothing moves at all, my lens flare flickers. Outputting the occlusion results shows that they vary a bit every frame, usually not much; depending on where the camera is located, the variation gets stronger.

Enabling color writes shows that the two quads are rendered perfectly. No z-fighting or far-plane clipping. Since they don’t write to the depth buffer, they also cannot disturb each other.

So my question is whether someone has encountered such a thing before. Any ideas what could be causing this?

I am using an ATI X1600.

By the way: the flickering also happens when the quads are not occluded at all, and both queries are inconsistent. When there’s no occlusion at all, I get occlusion factors between 0.9x and 1.x.



Do you mean that you get a different number of samples passed for those two quads, or what? When I used occlusion queries, I did it just as you said (though I actually didn’t change the depth function; I simply disabled the depth test and re-enabled it afterwards). Then I computed their ratio, and it was constant as long as the sprite wasn’t occluded at all.

Do you have some FSAA or CSAA enabled? Though that shouldn’t hurt; I was using queries with FSAA and everything worked great…

Could you please describe it more concretely? I mean: frame i: 100 samples, frame i+1: 102 samples, frame i+2: 98 samples… If it is like that, it’s quite strange.

When I disable the depth test, no fragments are rendered at all (which follows the spec, afaik), so I set the depth function to GL_ALWAYS instead.

FSAA and CSAA are not used. The only specialty is that I am actually rendering this to an FBO, not directly to the default framebuffer. This FBO currently uses a 16-bit depth buffer.

Yes, your description with frame i, i+1, etc. is exactly how it looks. I’ll try to narrow it down tomorrow, but right now I have no clue what could be the reason for this behaviour.


Thermal noise ? Temporal dithering ?

Originally posted by Jan:
When I disable the depth test, no fragments are rendered at all (which follows the spec, afaik), so I set the depth function to GL_ALWAYS instead.
Hmmm, not really :slight_smile: When you disable the depth test, it actually means the test is simply skipped in the list of per-fragment tests )) So quite the contrary: when you disable the depth test, all fragments are rendered ))

About the problem: ZBuffer is quite right… I may also offer cosmic rays, neutrinos, fluxes of slow protons, etc…

When you render something at the same position under the same conditions, the pipeline behaviour is the same; that is axiomatic.
So maybe you somehow move your sprite from frame to frame, or something else… I think that while narrowing your problem down you will find the mistake ))

Do you wait for the occlusion query to complete?

I check whether the query result is ready, and if so, I update everything. If not, I stop and try again next frame. I don’t issue new queries if the previous ones are not ready yet.
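The polling scheme described above can be sketched as follows. The real check would be `glGetQueryObjectiv(query, GL_QUERY_RESULT_AVAILABLE, &ready)`; here the GL calls are replaced by a simulated `available` flag and plain sample counts so the control flow is self-contained. All names are illustrative, not from the original app:

```cpp
// Sketch: consume query results only when available; issue a new query
// pair only after the previous one has been consumed.
struct QueryPoller
{
    bool inFlight = false;     // a query pair has been issued, result pending
    unsigned factorPct = 100;  // last computed occlusion factor, in percent

    // Called once per frame. 'available' stands in for a
    // GL_QUERY_RESULT_AVAILABLE check on the pending queries.
    void frame(bool available, unsigned passedLess, unsigned passedAlways)
    {
        if (inFlight) {
            if (!available)
                return;            // not ready: try again next frame,
                                   // and issue nothing new
            if (passedAlways > 0)  // consume the result
                factorPct = passedLess * 100u / passedAlways;
            inFlight = false;
        }
        inFlight = true;           // issue the next query pair
    }
};
```

This guarantees a query is never re-issued while its previous result is still in flight, at the cost of the factor lagging a frame or two behind.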

I tried it on an nVidia card and the results are perfect. When standing still, the values are constant. Also, the value for the query with the depth test set to GL_LESS is always <= the value of the reference query.

I checked: my sprite position is always the same. However, the results of the queries are not necessarily constant. The result of the query with depth func GL_LESS is sometimes even bigger than the result for the same sprite rendered with GL_ALWAYS.

Example values:
Query frame i:
Query frame i+1:

From my observations there are always exactly two sets of values. That means the result for frame i+n is always either equal to the result for frame i or to the one for frame i+1. However, there seems to be no pattern in how those values alternate.

@Jackis: I know you want to help me, and I don’t want to be rude, but I have been programming OpenGL for 7 years now, and I know that the same setup should always produce exactly the same result. That is why I am so confused! What confuses me even more is that it works perfectly on nVidia hardware. I mean, if my setup really changed and thereby caused this behaviour, then I should see something similar on nVidia too, no?

Could it simply be that ATI doesn’t like occlusion queries while rendering to an FBO? I’ll need to test it without an FBO…


OK, without FBOs it happens, too.

I wrote a test-app: here it is

Would be cool if some people could test it.

The white quad is queried against the scene (the colored quad). Depending on the occlusion, the quad’s color is changed.
So place the camera behind the colored quad and look at the white quad (let it be occluded a little).

Then take a look at the console, where the occlusion values are printed. The first value is the number of samples passed WITH depth test, the second WITHOUT depth test (as reference). On my card the values change every frame, even when the scene does not change at all.

I included the source, so you can verify that this test case is really simple.

Thanks for your help,


I’ve tested your program on my GeForce 8800 GTX. I even wrote a simple tool which checks the two integers in each line for equality: all the numbers are the same! And they stay the same for every frame of the program run (I was moving around quite a lot, so the camera was not static every frame).

What? Do you mean you always get exactly the same result, no matter where you stand? No matter whether the quad is visible at all? The same number of pixels passed in each frame?

Or do you mean that the two values are always equal to each other, but depending on the size of the quad they are different each frame?

In the latter case, maybe you were never on the “back” side of the colored quad, so the white one never got occluded.

Ah yeah, I didn’t go to the back side. About the equality: sure, the latter case.
When I move back, the values become different, but they never jitter!
I mean, if I stand still, the values don’t change at all.
By the way, the values begin to differ not exactly when the quad gets occluded, but a little bit before that. Do you render something a little bit bigger than this white quad? Ah, maybe you render a screen-aligned billboard, and your quad “indicator” is not the same as what you are querying.

Oh yes, you are right about the last thing: I increased the size of my quads, but forgot to also increase the rendered quad. Fixed that.

So, in the end, it works perfectly on your nVidia card. Anyone with an ATI card? Maybe even an X1600 (Mobility)?


Unfortunately, at work we have no ATI cards. Actually, we have only nVidia cards, and our products are focused on them. I had a Radeon at home, but I had to switch to nVidia in order to follow our company’s direction :slight_smile:

PS, off-topic: I remember that 3 years ago we played against you in a UEFA group stage, got away with a 2:2 draw against Alemannia, and didn’t make it out of the group :frowning: How is Alemannia doing now? Is it in the Bundesliga (I’m not familiar with the German championship)?

No jitter here, GeForce 6800 LE.

Works fine on my X1950 Pro.

No problem on mine.

Renderer: GeForce 6800 GT/AGP/SSE2/3DNOW!
Vendor: NVIDIA Corporation
Version: 2.0.3

I get a similar result in my own app (since I updated to the 100+ nVidia drivers): the occlusion results flicker between 0 and ~10000s depending on the frame, even though result_ready == true.

What I do is fade the lens flare out over time, ~0.5 sec or so, so visually the flare looks the same even if the occlusion numbers vary.
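The workaround above amounts to smoothing the displayed flare intensity instead of using the raw query result directly. A minimal sketch, assuming a linear ramp (the function name, the snap threshold, and the exact fade time are my own choices, not from the post):

```cpp
#include <cmath>

// Sketch: move the displayed flare intensity toward the latest measured
// occlusion factor over ~0.5 s, so per-frame jitter in the query results
// is smoothed away instead of showing up as flicker.
double fadeTowards(double current, double target, double dtSeconds)
{
    const double fadeTime = 0.5;         // a full 0->1 fade takes ~0.5 s
    double step = dtSeconds / fadeTime;  // fraction to move this frame
    if (std::fabs(target - current) <= step)
        return target;                   // close enough: snap to target
    return current + (target > current ? step : -step);
}
```

Called once per frame with the frame time, this keeps the flare visually stable even when the measured factor bounces between two values.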

@zed: Yeah, I feared I might have to work around it, though it seems to be a rare problem. I mean, for occlusion culling it’s still fine, but in this case it’s really obvious (not so much in my test app).

@Jackis: You ask at the wrong time :wink: Alemannia was in the top league (1. Bundesliga) last year (for the first time in more than 20 years), but now they are back in the 2nd :frowning: