GPGPU - RenderTexture/pbuffer problems

I have been trying to figure out the RenderTexture code at http://www.gpgpu.org/developer/ and http://sourceforge.net/projects/gpgpu/. This RenderTexture is a pbuffer wrapper meant to work on Linux and Windows, but it doesn’t have a lot of sample code or enough documentation for someone who doesn’t fully understand this stuff…

Problem description:
My problem is that it seems like I should be able to place an rt->BeginCapture() / rt->EndCapture() pair (from RenderTexture) around a glBegin() / glEnd() pair to capture the results as 32-bit floats, because, as I understand it, as soon as the output goes to the framebuffer it loses precision and is clamped to [0.0, 1.0]. If I use glutSolidTorus instead of the glBegin/glEnd block, it works just fine (and I’ve been able to get glDrawPixels to work the way I expect)… However, if I use glBegin/glEnd with a fragment program, then just 1’s are written into the pbuffer. BTW, BeginCapture essentially calls wglMakeCurrent on WIN32.

For example, I am expecting to be able to capture the results of the following code into a texture via the pbuffer that wglMakeCurrent makes current. One texture would be the input, and a second texture would receive the output.

glBegin(GL_QUADS);
glTexCoord2f(0, 0); glVertex3f(-1, -1, -0.5f);
glTexCoord2f(maxS, 0); glVertex3f( 1, -1, -0.5f);
glTexCoord2f(maxS, maxT); glVertex3f( 1, 1, -0.5f);
glTexCoord2f(0, maxT); glVertex3f(-1, 1, -0.5f);
glEnd();
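
For reference, this is roughly how I expect the capture to wrap that draw call (a sketch only; rt is a RenderTexture I initialize elsewhere with a 32-bit float format, and the Cg fragment program has already been bound at this point):

rt->BeginCapture();   // makes the pbuffer context current (wglMakeCurrent on WIN32)
glBegin(GL_QUADS);
glTexCoord2f(0, 0); glVertex3f(-1, -1, -0.5f);
glTexCoord2f(maxS, 0); glVertex3f( 1, -1, -0.5f);
glTexCoord2f(maxS, maxT); glVertex3f( 1, 1, -0.5f);
glTexCoord2f(0, maxT); glVertex3f(-1, 1, -0.5f);
glEnd();
rt->EndCapture();     // results should now sit in the pbuffer as unclamped floats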

My questions:

  1. Is there something obvious I’m missing? e.g. “you can’t use pbuffers this way!”
  2. I don’t think there is a bug in my code, because glutSolidTorus works just fine. There could be a bug in the RenderTexture code, but I think that’s highly unlikely… yet obviously something doesn’t work. What kind of mistake would allow glutSolidTorus to work but not GL_QUADS?
  3. Is there maybe some other pbuffer code I’m supposed to be using?

Basically, what I’m trying to do is some calculations on a GPU using, for example, Cg. The results from Cg would be placed in a texture, so that the texture can be used either in the next round of calculations or sent back to the CPU as a final result. I need NPOTD textures, 32-bit floats, and no clamping. As I understand it, this necessitates pbuffers with system-specific (e.g. glx or wgl) calls. RenderTexture seems to fill this gap nicely, but either I don’t understand how to use it or there is a bug somewhere.
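
For concreteness, the ping-pong loop I have in mind looks roughly like this (a sketch only; rtA/rtB, width/height, numPasses, and drawFullscreenQuad() are placeholders for my own code, and Bind() is how I assume RenderTexture exposes the captured result as an input texture):

RenderTexture *src = rtA, *dst = rtB;    // two float pbuffers
for (int pass = 0; pass < numPasses; ++pass)
{
    dst->BeginCapture();                 // render into dst’s pbuffer
    src->Bind();                         // previous result as the input texture (assumed API)
    drawFullscreenQuad();                // the GL_QUADS block above, with the Cg program bound
    dst->EndCapture();
    std::swap(src, dst);                 // src now holds the latest result
}

// final readback to the CPU as unclamped 32-bit floats
std::vector<float> result(width * height * 4);
src->BeginCapture();                     // make the pbuffer context current so we read its buffer
glReadPixels(0, 0, width, height, GL_RGBA, GL_FLOAT, &result[0]);
src->EndCapture();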

Are there other standard utilities available for this?
Is there a tutorial somewhere (other than the NVIDIA SDK, which I already know about) that talks about all the ways to use pbuffers?

Sorry for the long post… I’m just trying to be sure that any information you might need to answer my question is provided.

I finally figured out what the problem was, and I thought I’d share the answer for anyone else who may have the same problem. Don’t ask how many hours I spent trying to figure this out.

It turns out that the problem had to do with exactly when BeginCapture was called. I have a block of code that deals with the Cg fragment program I want to run (binding the program and profile, setting the parameters), and apparently, if BeginCapture is called AFTER that Cg setup, I just get 1’s in the output. If BeginCapture is called BEFORE it, then it works great.
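
In code, the difference was roughly this (drawQuad() stands for the GL_QUADS block from my first post, and the Cg calls are just the usual bind/enable/parameter setup; the exact calls in my code may differ slightly):

// What I had (only 1’s end up in the pbuffer):
cgGLBindProgram(fragProgram);
cgGLEnableProfile(fragProfile);
// ... set the Cg parameters / input texture here ...
rt->BeginCapture();
drawQuad();
rt->EndCapture();

// What works: make the pbuffer context current FIRST, then do the Cg setup
rt->BeginCapture();
cgGLBindProgram(fragProgram);
cgGLEnableProfile(fragProfile);
// ... set the Cg parameters / input texture here ...
drawQuad();
rt->EndCapture();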

Who knew?

That seems to be the problem with these big giant libraries floating around: there are things hidden under the API that you can’t tell just by looking at it, and at least for this issue, I don’t think I ever saw any mention of how important the order of these calls is. Oh well, now I know to look for that kind of thing.

Were you sharing objects (wglShareLists) between your main rendering context and the RenderTexture rendering context? If you are, then the order in which you call those functions shouldn’t matter.
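
Something along these lines, assuming you have handles to both contexts (RenderTexture may or may not do this for you internally, so this is only a sketch):

// hMainRC = HGLRC of the main window, hPbufferRC = HGLRC of the RenderTexture pbuffer
if (!wglShareLists(hMainRC, hPbufferRC))
{
    // sharing failed: textures, display lists, and programs created in one
    // context will not be visible in the other
}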