I’m wondering if someone can point out to me how best to approach the following issue:
- I have a geometry file (200K-500K) and an eye position file. (The files are read only once at the beginning of the run.)
- I read this into OpenGL and extract the pixel depth (pixel range from the eye position) and intensity (either by some lighting or texture model).
- Pass these range and intensity values to a function that does some signature analysis on them and produces some images.
- Pipe these signature images out to a projector (or file).
- Move to the next eye position and repeat steps 2 through 5.
My questions are: Can I do this in OpenGL?
I don’t need to display any images through OpenGL; I’m just using it to extract pixel range and intensity values. I need it to be fast (real-time, software-in-the-loop type of beast), so I guess skipping rendering to the screen might help. Am I correct? Do I need to worry about the lighting model if I don’t render to the screen? How about hardware acceleration, will it work? What’s the best way to approach this problem?
One thing you need to understand: OpenGL is for doing graphics (3D graphics). Now you want to extract information about your scene without rendering?
If you want pixel depth, you will need to render. You don’t need to swap buffers, but you will need to render somewhere. I’ve seen people use the stencil buffer for this: each time a pixel is rendered to, you increment the stencil value. Your limit may be 255 if you have an 8-bit stencil buffer.
// This should do it: always pass, increment on every fragment
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_ALWAYS, 0, ~0u);
glStencilOp(GL_KEEP, GL_INCR, GL_INCR);
Thanks for the reply. As a first take, I don’t want to display anything to the screen. Later on, the application will develop into a visualization tool.
I guess I can still render but not display anything, right? The reason I ask is that when I cover up my OpenGL window with another (or just minimize it), I get an increase in FPS. So if I don’t display to the screen, things might go faster, and that’s what I want.
Pixels in areas that are occluded by other windows are undefined, which means that nothing there will be valid. That includes the depth and stencil buffers.
Now, you don’t need to have anything visible: you can draw your stuff to a back buffer and never make it visible. But you do need a sufficiently sized window and viewport (depending on the resolution you want), and you have to make sure it’s not occluded, as the hardware is explicitly allowed to exclude occluded areas from its calculations. That also explains your speed increase.
Another issue with your particular project: you need to read hardware buffer contents back into application memory once per frame, right? The speed hit of doing that will make the cost of actually rendering marginal. Fill rates have recently gone through the roof, but you’re still lucky (and probably running an expensive workstation card) if you get more than 20 MB/s readback speed.