Vertex Vs Geometry Shaders

I’m just messing about with 2D stuff at the moment, but I want to do some point sprites in the geometry shader. I want to use a sprite’s position, size, scale, and rotation in the geometry shader to generate the textured triangle pair needed to display the sprite.

Now, my dilemma is that the ortho-like matrix in my vertex shader transforms my sprite vertex positions into normalized device coordinates (-1 to 1) before the geometry shader runs. This means my geometry shader would have to generate everything using those values…

Now, I could just defer the projection matrix multiplication until the geometry shader, but then my vertex shader isn’t doing anything…

So, does anyone have any thoughts or insight on what I can/should do? Are geometry shaders slower at doing the same work than vertex shaders? What is the point of vertex shaders if all the same work can be done in the geometry shader? Why didn’t they just expand or replace vertex shaders with geometry shaders?

It just seems like if I do the projection transformation in the vertex shader, I will have to apply further transformations to my sprite data in the geometry shader to match, or apply further transformations to the sprite data in the vertex shader (though the exact details escape me) and pass the results to the geometry shader… when I could just do the projection after everything else in the geometry shader and save the trouble. What am I forgetting?
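For what it’s worth, the “projection last” approach can be sketched roughly like this. This is a minimal, illustrative example, not a drop-in implementation: the attribute names (`aCenter`, `aSize`, `aRotation`), interface block, and `uProjection` uniform are all assumptions about how the sprite data might be fed in. The vertex shader just passes the untransformed sprite center through, and the geometry shader builds the rotated quad and applies the projection at the very end:

```glsl
// Pass-through vertex shader: forward sprite attributes untransformed.
// (Attribute names are illustrative, not from the original post.)
#version 330 core
layout(location = 0) in vec2 aCenter;   // sprite center, world space
layout(location = 1) in vec2 aSize;     // sprite width/height
layout(location = 2) in float aRotation; // rotation in radians
out VS_OUT { vec2 size; float rotation; } vs_out;
void main() {
    gl_Position = vec4(aCenter, 0.0, 1.0); // projected later, in the GS
    vs_out.size = aSize;
    vs_out.rotation = aRotation;
}
```

```glsl
// Geometry shader: expand each point into a rotated, textured quad,
// then apply the ortho projection as the final step.
#version 330 core
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;
in VS_OUT { vec2 size; float rotation; } gs_in[];
out vec2 vTexCoord;
uniform mat4 uProjection; // ortho-like matrix, assumed passed as a uniform
void main() {
    vec2 halfSize = gs_in[0].size * 0.5;
    float c = cos(gs_in[0].rotation), s = sin(gs_in[0].rotation);
    mat2 rot = mat2(c, s, -s, c);            // 2D rotation matrix
    vec2 center = gl_in[0].gl_Position.xy;   // still in world space here
    vec2 corners[4] = vec2[4](vec2(-1,-1), vec2(1,-1), vec2(-1,1), vec2(1,1));
    vec2 uvs[4]     = vec2[4](vec2(0,0),   vec2(1,0),  vec2(0,1),  vec2(1,1));
    for (int i = 0; i < 4; ++i) {
        vec2 p = center + rot * (corners[i] * halfSize); // rotate, then translate
        gl_Position = uProjection * vec4(p, 0.0, 1.0);   // project last
        vTexCoord = uvs[i];
        EmitVertex();
    }
    EndPrimitive();
}
```

The point is that nothing forces the projection to happen in the vertex stage; the geometry shader can work entirely in world space and multiply by the matrix only when emitting vertices.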

The vertex shader produces output that GPUs can (and do) cache; the geometry shader will fetch vertex results from that cache.
Generally, load-balancing between the two stages is a non-issue if you’ll be rendering fewer than ~300k primitives, in my experience. So go for whatever is easy to implement now; you can later (very easily) optimize it for your usage scenario by manual tuning.

Ok then!

Took your advice and went ahead and put it all in the geometry shader. It’s all working just fine; my vertex shader is just kinda empty now… :stuck_out_tongue: