Fast read from z-buffer


I’m sure there’s a simple (and fast) solution to this, but:

I’ve been trying to determine if a point light source is visible to the viewer. The simplest way to do this seemed to be to obtain the window coordinates of the light source and compare the depth buffer value after rendering all other objects with the z-value of the light source. I’ve been using glReadPixels(…GL_DEPTH_COMPONENT…) to find the z value of a point on the window, but unfortunately, this call kills the frame rate in my application.

Can anybody suggest another (fast) way of doing the light visibility test, or point out something that I am doing wrong?
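For reference, a minimal sketch of the depth-compare approach described above (this assumes a current GL context with the scene already rendered; `lightVisible` and the epsilon are illustrative names, not anything from the original post):

```c
#include <GL/gl.h>
#include <GL/glu.h>

/* Returns nonzero if nothing in the depth buffer occludes the light.
   lightPos is the light's world-space position. */
int lightVisible(const GLdouble lightPos[3])
{
    GLdouble model[16], proj[16];
    GLint view[4];
    GLdouble winX, winY, winZ;
    GLfloat depth;

    glGetDoublev(GL_MODELVIEW_MATRIX, model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);
    glGetIntegerv(GL_VIEWPORT, view);

    /* Project the light into window coordinates. */
    if (!gluProject(lightPos[0], lightPos[1], lightPos[2],
                    model, proj, view, &winX, &winY, &winZ))
        return 0;

    /* Read back the single depth value under the light --
       this is the call that stalls the pipeline. */
    glReadPixels((GLint)winX, (GLint)winY, 1, 1,
                 GL_DEPTH_COMPONENT, GL_FLOAT, &depth);

    /* Visible if the light's depth is at or in front of what was
       rendered (small epsilon guards against precision issues). */
    return winZ <= depth + 1e-4;
}
```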



The best way would be to test whether the line from the light to the viewer intersects any object. Depending on what your objects look like, you would need a different test for each shape, or you could write a general function that tests whether the line passes through a triangle and run it on every triangle of your objects, but that could be a little slow …
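The general line-vs-triangle test mentioned above can be done with the Möller–Trumbore algorithm; a sketch (the names and epsilon are illustrative):

```c
#include <math.h>

typedef struct { double x, y, z; } Vec3;

static Vec3 sub(Vec3 a, Vec3 b) { Vec3 r = { a.x-b.x, a.y-b.y, a.z-b.z }; return r; }
static Vec3 cross(Vec3 a, Vec3 b) {
    Vec3 r = { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
    return r;
}
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Returns 1 if the segment from orig to orig+dir (0 < t < 1) hits
   triangle (v0,v1,v2). For the light test, orig = viewer position
   and dir = light - viewer. */
int segmentHitsTriangle(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2)
{
    const double EPS = 1e-9;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p = cross(dir, e2);
    double det = dot(e1, p);
    if (fabs(det) < EPS) return 0;          /* parallel to triangle plane */
    double inv = 1.0 / det;
    Vec3 tv = sub(orig, v0);
    double u = dot(tv, p) * inv;
    if (u < 0.0 || u > 1.0) return 0;
    Vec3 q = cross(tv, e1);
    double v = dot(dir, q) * inv;
    if (v < 0.0 || u + v > 1.0) return 0;
    double t = dot(e2, q) * inv;
    return t > EPS && t < 1.0 - EPS;        /* strictly between endpoints */
}
```

An occluder test would loop this over every triangle and stop at the first hit.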

Yeah - I was thinking of bounding all my objects with AABBs, clipping the AABBs to the view frustum, testing which of the remaining AABBs the ray intersects, and then doing the full ray-triangle intersection test only for those objects.
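The coarse AABB rejection step above can use the standard "slab" method; a sketch under the assumption that the viewer-to-light segment is given by its two endpoints:

```c
#include <math.h>

typedef struct { double min[3], max[3]; } AABB;

/* Returns 1 if the segment from p0 to p1 intersects the box.
   Boxes that fail this cheap test can skip per-triangle work. */
int segmentHitsAABB(const double p0[3], const double p1[3], const AABB *box)
{
    double tmin = 0.0, tmax = 1.0;
    for (int i = 0; i < 3; i++) {
        double d = p1[i] - p0[i];
        if (fabs(d) < 1e-12) {
            /* Segment parallel to this slab: must already lie inside it. */
            if (p0[i] < box->min[i] || p0[i] > box->max[i]) return 0;
        } else {
            double t1 = (box->min[i] - p0[i]) / d;
            double t2 = (box->max[i] - p0[i]) / d;
            if (t1 > t2) { double tmp = t1; t1 = t2; t2 = tmp; }
            if (t1 > tmin) tmin = t1;
            if (t2 < tmax) tmax = t2;
            if (tmin > tmax) return 0;   /* slab intervals don't overlap */
        }
    }
    return 1;
}
```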

Still, it’s a good bit of work when compared with a simple z-buffer read and compare …
I am probably just too lazy



[This message has been edited by Dupre (edited 04-28-2000).]

I know that the depth buffer readback has been significantly accelerated in the yet-to-be-released Detonator 5.xx drivers from nvidia, but this is only useful if you’re using nvidia hardware.

Another approach that may be tolerable is:

render to a much smaller viewport (maybe 64x64 or smaller) where the view is centered on the point light source.
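A sketch of that small-viewport pass, assuming a current GL context; `drawScene`, the 64x64 size, and the field of view are placeholders, and the caller is expected to restore the full-size viewport afterwards:

```c
#include <GL/gl.h>
#include <GL/glu.h>

extern void drawScene(void);   /* placeholder: renders the occluders */

/* Render a tiny view aimed at the light, then read back a small
   depth region instead of a full-window one. */
void occlusionPass(const double eye[3], const double light[3])
{
    GLfloat depth[64 * 64];

    glViewport(0, 0, 64, 64);
    glScissor(0, 0, 64, 64);
    glEnable(GL_SCISSOR_TEST);
    glClear(GL_DEPTH_BUFFER_BIT);

    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    gluPerspective(5.0, 1.0, 0.1, 1000.0);   /* narrow FOV at the light */
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();
    gluLookAt(eye[0], eye[1], eye[2],
              light[0], light[1], light[2],
              0.0, 1.0, 0.0);

    drawScene();

    /* A 64x64 readback moves far fewer bytes than a full-window one. */
    glReadPixels(0, 0, 64, 64, GL_DEPTH_COMPONENT, GL_FLOAT, depth);

    glPopMatrix();
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glMatrixMode(GL_MODELVIEW);
    glDisable(GL_SCISSOR_TEST);
}
```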

It still may be more efficient to make this determination on the host rather than using the graphics hardware, but doing framebuffer readback is certainly fast to implement.

Rendering to a smaller window still has the problem of getting the info BACK from the buffer, which is notoriously slow. I remember ages ago, when standard VGA cards were funky, someone said “reading was provided only to maintain backwards compatibility. Kinda like adding reverse to F1 cars.” :wink: Anyway, I digress…

What you could do is see whether the feedback buffer returns the info. Does it do depth culling based on the z-buffer?
I suspect not; it likely only does the transformations and frustum clipping. But it might be worthwhile having a look. (That way you can render your scene, switch to feedback mode, render the light and see what you get back.)
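For what it’s worth, trying that experiment might look like this (a sketch assuming a current GL context; note that feedback happens before rasterization, so — as suspected — it only reports whether the point survives transform and frustum clipping, not z-buffer occlusion):

```c
#include <GL/gl.h>

/* Render the light as a single point in feedback mode and see
   whether anything comes back. */
int lightInFrustum(const float light[3])
{
    GLfloat fb[4];   /* token + x, y, z for one GL_3D vertex */
    GLint count;

    glFeedbackBuffer(4, GL_3D, fb);
    glRenderMode(GL_FEEDBACK);

    glBegin(GL_POINTS);
    glVertex3fv(light);
    glEnd();

    count = glRenderMode(GL_RENDER);  /* number of values written */
    return count > 0;                 /* clipped away => nothing returned */
}
```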


If you’ve ever tested this out, you’ll see that readback from a 64x64 window is substantially faster than, say, a 1024x768 window.

Your mileage will vary, but just assuming all readback will be prohibitively expensive is a bad idea too.


firstly: my assertion was that reading is slow, and that if you can avoid it (by doing a calculation instead), this would be faster.

secondly: this is a discussion about OPENGL, so saying reading from a 64x64 window is faster than from a 1024x768 window is very platform-dependent. I have an SGI Onyx2 RealityEngine.

thirdly: I never said to read the WHOLE screen; I agree reading 64x64 PIXELS is faster than reading 768k pixels, because you’re transferring fewer bytes.

fourthly: I don’t understand how reading 1 pixel from a 64x64 window will be any different from reading 1 pixel from a 128x128 window, unless you have implementation issues with how the memory is mapped.

finally: my assertion was that reading from video memory is notoriously slow. It’s been my experience under VGA cards that this was true; it also holds for my SGI. It is therefore best, possibly, to AVOID reading, hence a few little calcs might be easier. I never suggested reading the entire screen just to determine if a SINGLE point was visible. That would be silly.

oh, and another finally: relax. Don’t drink as much caffeine.