I’m trying to build an out-of-core renderer on a Linux cluster, where render-nodes request objects from data-nodes.
The data-nodes use occlusion culling to determine whether a requested model is visible; if it is, they send the data over the network.
To perform the occlusion culling, the render-nodes send their depth buffer (read with glReadPixels()) to the data-nodes every x frames. The data-nodes then write the buffer back via glDrawPixels().
For debugging purposes, I rendered the scaled depth-buffer values to a full-screen quad.
Now I’ve noticed that the depth values from the render-nodes are far more intense than the depth values I get by rendering directly on the data-nodes. I haven’t pinned down a factor yet, but I’d guess it’s at least half of the render-nodes’ intensity.
And I’m not referring to the values written back into the data-nodes; I mean values that come from directly rendering the same objects as on the render-nodes.
Because the data-nodes’ depth values are weaker, many occlusion tests fail.
Right now I’m using a single computer to emulate the other nodes, so different hardware seems to be out of the question here.
I’m using Kubuntu Linux (32- and 64-bit, both behave the same) with the nVidia 180.44 driver.