Render-To-VertexBuffer

Hi!

I’m looking for a way to render to vertex buffer objects. The information I need is whether a vertex gets clipped during rendering or not. Is there a way to get, e.g., an index array that tells me which triangles are visible in the scene?

For example:

3 triangles:

#1 doesn’t get clipped
#2 gets half clipped
#3 gets fully clipped and discarded

the result would be 1,2

Regards
shiBBy2k9

Which version of OpenGL can you use? With a 3.x version that task is very simple to solve, even without R2VB.

What’s the big diff between RTVB and RTT anyways? The ‘R’ implies a render of some sort, while the ‘VB’ and ‘T’ figure in as interpretations of a result of some kind; in the former you sample a vertex stream or possibly a buffer, in the latter a texture?

But then Compute kicks in and the meaning of ‘R’ is blurred a bit. As it stands, only buffers serve as output targets; but then buffers can be reinterpreted as textures, say via PBO pack/unpack, thus upping the ante and sweetening the pot.

If it smells like a buffer, it’s a buffer; otherwise it’s a texture.

A time honored observation is that there are precious few absolutes in life; the path of least resistance depends a good deal on the application and constraints in question. Hard and fast rules are desirable because they simplify our lives; but they are seldom attainable - perhaps for good reason.

P.S. Not to oversimplify, but why don’t we call buffers and textures arrays?

The modern ways are R2VB (shader model 3+) and Transform Feedback (shader 4+).
R2VB computes stuff in the fragment shader; it is done by rendering to a texture attached to an FBO, then using a PBO to transfer the data to a VBO. On shader-4 cards, no data is copied during the PBO operation; the texture is just type-cast to that VBO (in a black-box way).
TF computes stuff in the vertex shader and geometry shader and outputs directly to a VBO. TF is available only on Shader 4+ cards, so no funky limitations and semantics get in the way.
Both approaches cannot easily do exactly what you want: “an index array that tells WHICH triangles…”. Such data merges are against the stream-processing idea. The closest thing you can get as computation output is “bool IsTriangleVisible[NUM_TRIANGLES]”.
You definitely cannot directly get such output:
"int NumVisibleTris=17; int VisibleTrianglesIndices[MAX_NUM_TRIS]; "

> Not to oversimplify, but why don’t we call buffers and textures arrays?

Because they’re not.

Textures are formatted memory. They are not expected or required to be stored linearly as an array. They are formatted specifically along certain requirements for the purposes of fast access within a shader.

Buffers are linear arrays of memory. They are expected and required to be stored contiguously as is. There is no intrinsic format on this memory save whatever you choose to use.

Render to Texture specifically means rendering to a formatted section of memory. It implies that your intent is to use this section of memory via texture access later on.

Render to Buffer specifically means rendering to an unformatted section of memory. It implies that your intent is not to use this section via texture access, outside of buffer textures.

Basically, it’s really simple. If your intent is to use the data as a texture in some process, use RTT. If your intent is to use the data via vertex attributes and such, use RTBO. On 3.x hardware, that means using transform feedback; this is faster than rendering to a buffer texture or some such.
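As a rough sketch of what the transform-feedback route might look like, here is an illustrative GL 3.x vertex shader whose single output is captured into a buffer (the varying name `visible`, the uniform `mvp`, and the clip test are all made up for this example, and would be registered host-side with `glTransformFeedbackVaryings` before linking):

```glsl
#version 150
// Sketch only: one captured output per vertex, no rasterization needed
// (the host can enable GL_RASTERIZER_DISCARD during the capture pass).
in vec3 position;
uniform mat4 mvp;
out float visible;   // 1.0 if this vertex lies inside the view frustum

void main() {
    vec4 clip = mvp * vec4(position, 1.0);
    bool inside = all(lessThanEqual(abs(clip.xyz), vec3(clip.w)));
    visible = inside ? 1.0 : 0.0;
    gl_Position = clip;
}
```

The captured buffer then holds one flag per vertex; collapsing those into per-triangle flags is a trivial pass on the CPU (or a geometry shader).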

> On shader4 cards, no data is copied during the PBO operation, the texture is just type-casted to that VBO (in a black-box way).

That seems unlikely. Unless you’re using a buffer texture, the system will need to reformat a buffer object into a texture object. This typically involves swizzling components around for better cache coherency.

If the conversion between texture and buffer object were so simple, buffer textures wouldn’t be limited to 1 dimension.

> Because they’re not.

… yet. Check out the structured buffers in Dx11.

Hardware can handle non-swizzled textures since SM4 and possibly a bit earlier, afaik.

> Hardware can handle non-swizzled textures since SM4 and possibly a bit earlier, afaik.

Hardware can handle non-power-of-two textures too. That doesn’t mean you should willy-nilly make every texture an NPOT. There are performance implications of doing these kinds of things.

A texture that gets lousy reading performance just because it was uploaded via PBO is not a good idea. PBOs exist to allow for asynchronous texture uploads and to keep memory pools GL server-side. They should not be used as ad-hoc backing stores of texture memory like that.

Yes, the swizzled textures are still present afaik, for the performance concerns you mention. IME, only textures that start out empty and are used as FBO targets get a non-swizzled format. There I was seeing some noticeable performance penalties on a GF8600. But still, I wasn’t thorough with that benchmark and just trusted explanations I read on the net, so I may be wrong - can’t really try to reproduce it quickly on a GTX 275.

To come back to my topic ^^ A computation like
“bool IsTriangleVisible[NUM_TRIANGLES]”
would be sufficient as well.

I made up an idea:
I put my mesh into the vertex shader; the index of the triangle is stored in some parameter (e.g. a color or a shader uniform).

–>

Vertex-Shader:
Set gl_ClipVertex to the REAL position of the vertex
Set gl_Position to an encoded array index (*)

–>

Pixel-Shader: now only the visible vertices should be passed (am I right?).
Set gl_FragColor = vec4(1.0,1.0,1.0,1.0)

The resulting texture should be equivalent to (though larger than) a boolean array, with the index transformation mentioned at (*).

So IsTriangleVisible(index) = (Texture(EncodeIndex(index)) == White)
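The (*) encoding could be sketched like this, assuming the render target is a W×H texture and each triangle index maps to one texel whose NDC centre becomes gl_Position. Everything here (W, H, the function names) is made up for illustration:

```c
/* Hypothetical index encoding for a W x H render target. */
enum { W = 256, H = 256 };

typedef struct { int x, y; }     Texel;
typedef struct { float x, y; }   Ndc;

/* Triangle index -> texel where its "visible" flag is written. */
static Texel encode_index(int index)
{
    Texel t = { index % W, index / W };
    return t;
}

/* Texel centre in normalized device coordinates, i.e. the value
 * the vertex shader would write into gl_Position.xy. */
static Ndc texel_to_ndc(Texel t)
{
    Ndc p = { (t.x + 0.5f) / W * 2.0f - 1.0f,
              (t.y + 0.5f) / H * 2.0f - 1.0f };
    return p;
}

/* Texel read back from the texture -> triangle index. */
static int decode_texel(Texel t)
{
    return t.y * W + t.x;
}
```

The lookup IsTriangleVisible(index) then reads the texel at encode_index(index) and compares it with white. One caveat worth checking: hardware clipping operates on gl_Position, not gl_ClipVertex (which only feeds user clip planes), so whether this scheme clips against the REAL position as intended is exactly the open question.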

Do you know if that works?
Regards shiBBy2k9