A few questions about Render to Memory techniques

Hi,

The purpose here is to write some code that renders an OpenGL scene directly into a preallocated buffer somewhere in system RAM.
Right now I do something like this:

GLubyte MyBuffer[512*512*4];   /* RGBA, 512x512 */
DrawScene();
glReadPixels(0, 0, 512, 512, GL_RGBA, GL_UNSIGNED_BYTE, MyBuffer);

What I would actually like to do is:

GLubyte MyBuffer[512*512*4];
DrawScene(MyBuffer);   /* render straight into MyBuffer, no separate readback */

I was told that an FBO could be the solution, but I wasn’t able to find out how to render directly to memory instead of to an OpenGL texture.
Also, since my old video card doesn’t support the FBO extension, I am trying to understand how this can be done before deciding whether or not to buy a new card.
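
From what I have read, the FBO route looks roughly like the sketch below. I can’t test it on my card, so take it as my understanding rather than working code; DrawScene() is just a placeholder for my own drawing routine. What I don’t see is where system memory comes into the picture:

GLuint fbo, tex;

/* create a 512x512 RGBA texture to receive the rendering */
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

/* attach it to an FBO and render into it */
glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, tex, 0);

DrawScene();

/* the result is still in video memory: getting it into MyBuffer
   seems to require a readback anyway */
glReadPixels(0, 0, 512, 512, GL_RGBA, GL_UNSIGNED_BYTE, MyBuffer);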

So here are my questions:

  • Can an FBO render directly to a memory buffer in system RAM (and not to some buffer on the video card)?

  • I saw a few other methods there that sound like they also offer a way to do this. But since it’s an old post (1997), I suppose the techniques have evolved since then. Do you know whether any of them ended up meeting developers’ expectations, or do you know of other techniques I could use?

  • If I decide to use GPU features (e.g. CUDA) to render an OpenGL scene faster, but then want to copy the result into memory, will I suffer a serious performance penalty because of the memory bus speed? Is this a common approach, and do you know of any tutorials on doing it?

  • Ultimately, since my main goal is to get from DrawScene() to some memory target as quickly as possible: is rendering to memory a real performance improvement, or is the current way (draw + glReadPixels) already the best and fastest one?

  • And what I think is the trickiest of my questions: assuming it is possible to render directly into memory, could I also consider rendering directly to an open stream (like a network pipe or stdout)?

  • No. You have to transfer the texture down to system memory. The most effective way to do that is via a PBO; there are good tutorials on how to do it fast, and there is a rough sketch at the end of this post.

  • Same as the first question: there is no way to get the rendered image directly on the CPU, you have to read it back somehow. Think of the GPU as a kind of dedicated server.

  • CUDA is not designed to make an OpenGL scene render faster; it is for massively parallel GPGPU calculations. But CUDA does have very good readback/upload routines, and you can copy their approach to make your own readback as fast as possible.

For now (and, I hope, for the future as well), there is no way to render to anything other than GPU memory.
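
Here is a rough sketch of what the asynchronous PBO readback looks like. It assumes a 512x512 RGBA framebuffer, an existing MyBuffer of that size, and that the buffer-object entry points are loaded (GLEW or similar):

GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, 512 * 512 * 4, NULL, GL_STREAM_READ);

DrawScene();

/* with a PIXEL_PACK buffer bound, glReadPixels returns immediately;
   the copy into the PBO happens asynchronously on the GPU side */
glReadPixels(0, 0, 512, 512, GL_RGBA, GL_UNSIGNED_BYTE, 0);

/* ... do other CPU work here while the transfer completes ... */

/* map the PBO to get a pointer to the pixels in system memory */
GLubyte *pixels = (GLubyte *) glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
if (pixels) {
    memcpy(MyBuffer, pixels, 512 * 512 * 4);
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
}
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

The point of the PBO is not to avoid the transfer, only to overlap it with other work instead of stalling inside glReadPixels.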

Thanks a lot for your help.
Your answers weren’t what I expected, but at least now I can look in the right direction.

Unless you are very short on GPU memory, I don’t see why you would want to render directly to CPU RAM. Even if that were possible, CPU RAM has a much higher latency than GPU VRAM from the GPU’s point of view, so your rendering would be much slower. By rendering on the GPU and then doing a glReadPixels, the transfer is done only once, in bulk, over the PCI-E bus.

Actually, I don’t want to display the scene on the computer that renders it; instead I want to send the rendered “texture” directly to a remote computer that will use it for its own computations.

Since I don’t need to display the scene, I hoped I could save some CPU time by rendering it directly into memory and avoiding a massive byte copy. But what I actually don’t know is whether that would really save time or not… From what I know of the OpenGL pipeline, I am not sure it would improve performance.

The ideal situation would have been to render directly into some (e.g.) TCP stream, but I think I am talking science fiction now.
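
I suppose the realistic version is just a readback followed by a socket write, something like the sketch below (sockfd is assumed to be an already-connected TCP socket, and the 512x512 RGBA size is just an example):

/* read the frame back into system memory... */
GLubyte frame[512 * 512 * 4];
glReadPixels(0, 0, 512, 512, GL_RGBA, GL_UNSIGNED_BYTE, frame);

/* ...then push it down the already-connected TCP socket */
size_t total = 0;
while (total < sizeof(frame)) {
    ssize_t n = send(sockfd, frame + total, sizeof(frame) - total, 0);
    if (n <= 0)
        break;          /* error or connection closed */
    total += (size_t) n;
}

So the “render to a stream” part would really just be the plain readback plus a loop over send().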