While I’m not new to OpenGL, this is the first time I’ve wanted to use OpenGL for something other than displaying data on the screen.
My goal is to generate a profile of 3D terrain-like data from a given position, and then read back the 3D positions of the visible pixels.
While I’m familiar with CUDA and that approach to GPGPU programming, it seemed to me that OpenGL already did the vast majority of the math for me in this case, so it made sense to use it.
So I create a two-channel float texture whose values range from 0 to 1 in both directions; each texel maps to a 2D coordinate in the elevation-map data, which gives me an XYZ point.
What I'd like to do, at a high level, is call glFinish() once all the data is arranged into triangle strips, and then somehow get back a 2D array of two-channel floats that I can manipulate on the host like any other array.
The "somehow" is the problem. There doesn't seem to be any easy way to do this, unlike the dead-simple cudaMemcpy call that would accomplish the same thing in CUDA.
I've been investigating pbuffers, framebuffer objects, and the like, but none of the classes I've found that claim to make those easy comes with a function to actually return the contents of the buffer, which leads me to believe I'm missing something fundamental.
This is all in C++, incidentally. Although I’m willing to throw some Cg into the mix if necessary.
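To make the goal concrete, here's roughly the shape of what I'm hoping exists. This is just a sketch: I'm assuming a framebuffer object with a GL_RG32F color attachment is already bound (which needs GL 3.0 / ARB_texture_rg), and that glReadPixels is the right call for the readback; width and height are placeholders for whatever resolution I rendered at.

```cpp
// Sketch: read a width x height, two-channel float framebuffer back to the host.
// Assumes an FBO with a GL_RG32F color attachment is currently bound for reading.
#include <GL/gl.h>
#include <vector>

std::vector<float> readBackRG(int width, int height) {
    std::vector<float> host(width * height * 2);  // 2 floats per pixel
    glReadBuffer(GL_COLOR_ATTACHMENT0);           // select the FBO's attachment
    glPixelStorei(GL_PACK_ALIGNMENT, 1);          // tightly packed rows
    glReadPixels(0, 0, width, height, GL_RG, GL_FLOAT, host.data());
    return host;                                  // row-major, bottom row first
}
```

If that's the right call, it would also moot my glFinish() question, since glReadPixels blocks until rendering into the read buffer has completed.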
That example is certainly useful, but it's not the whole story. I've heard OpenGL won't initialize without a render context, and sure enough, that example still contains a
glutCreateWindow("Simple Framebuffer Object");
call, which creates a render context.
I’m looking for an example which uses offscreen rendering without ever creating a window on-screen. Know any of those?
Assume I’m a newbie. I’m not, quite, but there are some annoying gaps in my knowledge which crop up at unexpected times.
You do actually need to create a window to get a rendering context before you can use OpenGL. Once you have the context, you can create your offscreen framebuffer and hide or discard the original window.
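A minimal sketch of that dance with GLUT follows. The window is created only to obtain a context and is immediately hidden, so nothing ever appears on screen; the FBO setup and rendering are elided since they're covered by the earlier example. (Platform-specific routes such as WGL/GLX pbuffers can avoid the window entirely, but this is the portable GLUT version.)

```cpp
// Sketch: obtain a GL rendering context without showing a window, via GLUT.
#include <GL/glut.h>

int main(int argc, char** argv) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE);
    glutCreateWindow("offscreen context");  // the context is created here
    glutHideWindow();                       // window is never displayed
    // ... create and bind an FBO, render the terrain, read back with
    // glReadPixels, then exit without ever entering glutMainLoop ...
    return 0;
}
```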