Transform Feedback in Fragment Shader

Hi there,

I currently have a vertex shader and a fragment shader setup to skin a skeletal model then store data to a gBuffer.

I’m wondering whether it’s possible to have my fragment shader still output to my gBuffer textures while also writing to a transform feedback buffer, so that I don’t have to re-skin my models in subsequent passes.

Alternatively, I could just have a vertex shader that skins the models into a transform feedback buffer and do that beforehand. But it seems like it would be more efficient to skip that extra step.

Also, I’m targeting OpenGL 3.0 or higher, if that affects things.

Thanks in advance for your help!

See GL_RASTERIZER_DISCARD, and just don’t enable it. With rasterization left on, transform feedback capture and fragment output to your gBuffer can happen in the same draw call.
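As a rough sketch of what that looks like on the host side (this assumes a live GL 3.0+ context; the names `skinnedProgram`, `gBufferFBO`, `tfBuffer`, and `indexCount` are placeholders, not from your code):

```c
/* Declare which varyings to capture BEFORE linking the program. */
const char *varyings[] = { "skinnedPosition", "skinnedNormal" };
glTransformFeedbackVaryings(skinnedProgram, 2, varyings,
                            GL_INTERLEAVED_ATTRIBS);
glLinkProgram(skinnedProgram);

/* Per frame: bind the gBuffer FBO as usual, bind the capture buffer,
 * and draw. GL_RASTERIZER_DISCARD is deliberately left disabled, so the
 * fragment stage still runs and fills the gBuffer while the vertex
 * outputs are captured. */
glBindFramebuffer(GL_FRAMEBUFFER, gBufferFBO);
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, tfBuffer);
glBeginTransformFeedback(GL_TRIANGLES);
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0);
glEndTransformFeedback();
```

One thing to keep in mind: transform feedback writes one output record per vertex the vertex shader processes, so with glDrawElements the captured buffer is the de-indexed triangle list; shared vertices get written once per index that references them. Size the capture buffer accordingly.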

However, be sure that you really net a perf gain by saving off the skinned model. For how many frames will you be able to reuse it when your skeletal model is animating? Does this help you reduce the worst-case performance of your app, or just add complexity?

Also, you may find that your full-up vertex shader has considerably more going on than the simple vertex shader needed to perform the minimal processing to transform and write out the vertex attributes (e.g. computing and preparing all of the inputs needed for fragment work; things that you don’t want to save off via transform feedback for one reason or another).

So I ended up just making a “skinning” shader that stores my skeletal meshes in the same format as my “static” meshes. On one hand, I think this is convenient because I now use the same shader for all my meshes when creating my gBuffer, which hopefully means less slowdown due to communication between the CPU and the GPU. But the thing that has me second-guessing this design is having to manage the buffer that the skinned data outputs to: allocating enough space, potentially reallocating, and managing freed-up space. I think it could be worth it, but it adds a level of complexity I’m not sure I’m willing to accept yet.
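For reference, the dedicated skinning pass I mean looks roughly like the sketch below (GLSL 1.30 to match GL 3.0; the attribute/uniform names and the bone-palette size are illustrative, not from my actual code). Since this pass produces no fragment output, GL_RASTERIZER_DISCARD can be enabled while it runs:

```glsl
#version 130

uniform mat4 uBones[64];          // palette size is an assumption

in vec3  aPosition;
in vec3  aNormal;
in vec4  aBoneWeights;
in ivec4 aBoneIndices;            // needs glVertexAttribIPointer on the host

out vec3 tfPosition;              // captured via transform feedback,
out vec3 tfNormal;                // matching the static-mesh vertex layout

void main()
{
    // Standard linear blend skinning: weighted sum of bone matrices.
    mat4 skin = aBoneWeights.x * uBones[aBoneIndices.x]
              + aBoneWeights.y * uBones[aBoneIndices.y]
              + aBoneWeights.z * uBones[aBoneIndices.z]
              + aBoneWeights.w * uBones[aBoneIndices.w];

    tfPosition = vec3(skin * vec4(aPosition, 1.0));
    tfNormal   = normalize(mat3(skin) * aNormal);

    // Unused when GL_RASTERIZER_DISCARD is enabled for this pass.
    gl_Position = vec4(tfPosition, 1.0);
}
```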

I haven’t compared performance yet because I’m wary of opening up the can of worms of managing that buffer. The idea behind this wasn’t to reuse animation frames, but rather to skin the model once per frame, then reuse that skinned data in all of my shadow-mapping passes.
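Reusing the captured data in the shadow passes is then just binding it as an ordinary VBO. A minimal sketch, again assuming a live GL context and placeholder names (`tfBuffer`, `shadowProgram`, `capturedVertexCount`) and an interleaved position+normal layout:

```c
/* Skinning pre-pass, once per frame: no fragment output needed. */
glEnable(GL_RASTERIZER_DISCARD);
/* ... bind tfBuffer, glBeginTransformFeedback, draw, glEndTransformFeedback ... */
glDisable(GL_RASTERIZER_DISCARD);

/* Shadow passes: the captured buffer is just vertex data now. The
 * positions are already skinned, so the depth-only shader only needs
 * the light's view-projection matrix. */
glBindBuffer(GL_ARRAY_BUFFER, tfBuffer);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE,
                      sizeof(float) * 6, (void *)0); /* skip the normal */
glEnableVertexAttribArray(0);
glUseProgram(shadowProgram);
/* Non-indexed draw: the transform feedback output is a de-indexed
 * triangle list, one record per processed vertex. */
glDrawArrays(GL_TRIANGLES, 0, capturedVertexCount);
```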