Saving output from a vertex shader into a buffer object

Dear forum users,
I just wanted to use the vertex processing stage to do some processing and save some values into a vertex buffer object or texture.
In this case I don’t want to use the fragment shader functionality.
I just want to get the output so that I can later copy it back to the host or do further processing on the GPU.
(this would be shader model 3 GPU, nvidia 7000, I don’t have a GeForce 8000 yet)

I looked in the Orange Book and searched the internet for hours, but found nothing specific on this topic.
I’m sure that this is not a new question, but the answer is not evident to me.

Can anyone point me to some code examples on how to do this?
It would be much appreciated.

I guess you cannot do this without the NV transform feedback extension, which is currently available only on the GeForce 8800.

Why not use the fragment shader? Are you using an older GPU?

A standard Shader Model 3 solution is to use the fragment shader:
output the values into the framebuffer, read the pixels back into a buffer object bound as a pixel pack buffer, then bind that same object as an array buffer and render from it.
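The readback path described above can be sketched roughly as follows. This is a sketch only: it assumes a current GL context, a finished render pass whose framebuffer holds one RGBA float per processed vertex, and the ARB_pixel_buffer_object extension (GL 2.1 entry-point names); `width` and `height` are placeholders for your render-target size.

```c
/* Sketch: read rendered results into a buffer object, then draw from it. */
GLuint buf;
glGenBuffers(1, &buf);

/* 1. Read the framebuffer straight into the buffer object
 *    (bound as a pixel pack buffer, so the last glReadPixels
 *    argument is an offset into the buffer, not a host pointer). */
glBindBuffer(GL_PIXEL_PACK_BUFFER, buf);
glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4 * sizeof(GLfloat),
             NULL, GL_STREAM_COPY);
glReadPixels(0, 0, width, height, GL_RGBA, GL_FLOAT, 0);
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

/* 2. Re-bind the same object as vertex data and render from it. */
glBindBuffer(GL_ARRAY_BUFFER, buf);
glVertexPointer(4, GL_FLOAT, 0, 0);
glEnableClientState(GL_VERTEX_ARRAY);
glDrawArrays(GL_POINTS, 0, width * height);
```

The point of the trick is that the data never leaves the GPU: the pixel pack readback and the array-buffer bind refer to the same buffer object.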

If the fragment shader in Shader Model 3.0 is not flexible enough (indexed uniform variables or similar), it is possible to render points into multiple render targets instead. The limit is 16 floats per vertex (the fragment shader writes the varyings to gl_FragData[0…3]), and the vertex position has to be replaced by texture coordinates.
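A minimal GLSL sketch of that MRT pass-through fragment shader might look like this. The varying names (outPos, outNormal, and so on) are made up for illustration; it assumes four color attachments via ARB_draw_buffers.

```glsl
#version 110

varying vec4 outPos;     // result 1, computed in the vertex shader
varying vec4 outNormal;  // result 2
varying vec4 outExtra0;  // result 3
varying vec4 outExtra1;  // result 4

void main()
{
    // Up to 4 x vec4 = 16 floats written per point.
    gl_FragData[0] = outPos;
    gl_FragData[1] = outNormal;
    gl_FragData[2] = outExtra0;
    gl_FragData[3] = outExtra1;
}
```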

Well, I guess I’ll have to go with what’s available in Shader Model 3, since I want to do it on my current PC.

The reason that I wanted to use the vertex shader was because the processing I want to do is per mesh vertex only.
It is really straightforward to submit the vertex and face data to the vertex shader. My objective was to take advantage of that.
The result of that operation should be 3 new meshes, which I would like to save to some intermediate format (such as a VBO).
After that I would resubmit those vertices to a different vertex shader for further processing.

Actually, come to think of it, it makes more sense to use the fragment shader, since it has higher parallelism than the vertex shader (more pixel pipes than vertex pipes).

On the other hand, I could use both the vertex and fragment shaders. In that situation I would get even more processing power, but I can imagine that it would be complicated to split the tasks.

This topic kind of sits in the middle between graphics processing and GPGPU, or rather GPGPU for graphics.

Anyway, apparently the answer is to use the fragment shader with a texture holding the vertex data, and write the output of that processing to the framebuffer. This framebuffer should be bound to another texture. Right?

It really depends on what you’re planning to do with the results. If you process them in a second pass, then yes: attach a texture to the framebuffer and sample it in the subsequent pass.
If you want to render the vertex data, read the pixel data into a buffer object and do your draw calls.
There is an article in GPU Gems 2 that might be of interest to you (“Adaptive Tessellation of Subdivision Surfaces with Displacement Mapping”).

This topic was automatically closed 183 days after the last reply. New replies are no longer allowed.