render to buffer object

Hi

Would it be technically possible (in theory, not with present OpenGL) to render to a vertex buffer object so that every new fragment is written to the next position? I mean append it to a list, rather than writing it at the fragment's position in the framebuffer. That way I could filter in the fragment stage according to the fragment position, but would still capture every fragment that passes that stage.

regards

marco

This is not really possible using current hardware (shaders with a variable number of outputs). You can achieve similar results by rendering to one large buffer and then compacting the buffer (see Daniel Horn’s chapter in GPU Gems 2).
http://developer.nvidia.com/object/gpu_gems_2_home.html

In next generation hardware this kind of thing should be possible using geometry shaders and stream output:
http://download.microsoft.com/download/9…07_WinHEC05.ppt

Not sure I understand exactly what you’re trying to do, but it sounds like plain render to vertex buffer is what you need. ATI supports that in D3D at the moment, but not yet in OpenGL. Basically it allows you to render to a texture as usual (it will be in a linear non-swizzled format), then bind the memory that texture represents as a vertex buffer for rendering.

OK, I want to render a selection:

  1. Render the model
  2. Use a Stencil Mask as the Selection Mask
  3. Get ALL Vertices in a VBO, no overrendering

So I would like to render to a buffer object and save every fragment that passes the fragment test stages.

Check http://oss.sgi.com/projects/ogl-sample/registry/ARB/pixel_buffer_object.txt

There is an example on ‘Render to vertex array’.

Originally posted by Gong:
[b]Check http://oss.sgi.com/projects/ogl-sample/registry/ARB/pixel_buffer_object.txt

There is an example on ‘Render to vertex array’.[/b]
I know PBOs, but that isn’t what I need.

Originally posted by marco:
So I would like to render to a Buffer Object and save every Fragment which pass the fragment test stages.
So you want it tightly packed? Well, you can’t achieve that on current hardware, but if you clear the render target first, then for fragments that failed all vertices will be zero, and triangles rendered with them will thus have zero area and be discarded. So they will go away, but will of course still require processing in the vertex shader.

Originally posted by Humus:
[quote]Originally posted by marco:
So I would like to render to a Buffer Object and save every Fragment which pass the fragment test stages.
So you want it tightly packed? Well, you can’t achieve that on current hardware, but if you clear the render target first, then for fragments that failed all vertices will be zero, and triangles rendered with that will thus be zero area and discarded. So they will go away, but will of course still require processing in the vertex shader.
[/QUOTE]But what about overdraw? The only thing I can imagine is to render every vertex to a new pixel, which I think is what you mean. The only problem is the tests: how can I implement a z-buffer test in this case? Will the scheme I’m asking for be implemented in the next generation?

Well, it’s kinda hard to give suggestions as I don’t quite see the practical use you’re planning to make of this. What effect are you trying to implement?

What you want seems to be a bit out of what the hardware can handle. Perhaps something closer to what you want can be implemented in the DX10 timeframe with stream-out and other features that are being added.

Originally posted by Humus:
[b]Well, it’s kinda hard to give suggestions as I don’t quite see the practical use you’re planning to make of this. What effect are you trying to implement?

What you want seems to be a bit out of what the hardware can handle. Perhaps something closer to what you want can be implemented in DX10 timeframe with stream-out and other features that’s being added.[/b]
So, I use vertex shader textures and an ID buffer. My way to get the selected vertices is to render their IDs as points to the framebuffer. I use scissor, stencil and depth tests to get rid of the unwanted points. The drawback is the overdraw, because I use a big number of points. Emulating this on the CPU is a bit hard (I’m lazy :slight_smile: ) because I use a really complex vertex shader.

regards

marco

PS: is there hardware support for glMinMax under ATI?