Let’s say I want to duplicate an object, scale it, and position it somewhere else. I guess I could do the lighting calculation in the geometry shader. That would probably have the same problem as doing lighting in the vertex shader.

If I want to do per-pixel lighting, I guess I have to pass some MORE varyings to the fragment shader? If I want to pass the normal, light_vector, and half_vector to the FS, do I need

``````
varying vec3 normal1;
varying vec3 normal2;
varying vec3 light_vector1;
varying vec3 light_vector2;
``````

But that doesn’t make sense.

There is this example, which writes to gl_Position twice:

``````
int i;
for (i = 0; i < gl_VerticesIn; i++) {
    gl_Position = gl_PositionIn[i];
    EmitVertex();
}
EndPrimitive();

// New piece of geometry! We just swizzle the x and y terms.
for (i = 0; i < gl_VerticesIn; i++) {
    gl_Position = gl_PositionIn[i];
    gl_Position.xy = gl_Position.yx;
    EmitVertex();
}
EndPrimitive();

``````

Do I have to write to the same varying twice?

Here,

``````
gl_Position = gl_PositionIn[i];
gl_Position.xy = gl_Position.yx;
``````

can/should actually be written simply as

``````
gl_Position = gl_PositionIn[i].yxzw;
``````

Simply have your GShader write to gl_Normal and e.g. gl_TexCoord0. Or hope that varyings without semantics will be properly resolved (in case you forget to use/compute some varying).

I guess this will be your code:

``````
#define light_vector   gl_TexCoord0
#define light_vectorIn gl_TexCoord0In

uniform mat4 mvp1; // MVP matrix for first object
uniform mat4 mvp2; // MVP matrix for second object
uniform mat3 rotmat1; // normal-matrix for first object
uniform mat3 rotmat2;

int i;
for (i = 0; i < 3; i++) {
    gl_Position  = mvp1 * gl_PositionIn[i];
    gl_Normal    = rotmat1 * gl_NormalIn[i];
    light_vector = light_vectorIn[i]; // this will need to be actually computed, not just copied
    EmitVertex();
}
EndPrimitive();

// Second copy of the geometry, transformed by the second object's matrices.
for (i = 0; i < 3; i++) {
    gl_Position  = mvp2 * gl_PositionIn[i];
    gl_Normal    = rotmat2 * gl_NormalIn[i];
    light_vector = light_vectorIn[i]; // also needs to be computed
    EmitVertex();
}
EndPrimitive();

``````

Note that if you do the transformations in the vertex shader, you’ll be destroying data (or, well, making it harder and less precise to compute for the second copy of the object). So you offload the transformations to the geometry shader.

[btw I’ve never compiled even a hello-world geom shader yet]

gl_Normal doesn’t exist.
I think I need to do
varying in bla bla bla (coming from VS)
varying out bla bla bla (going out to FS)
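A minimal sketch of those declarations (assuming GLSL 1.20 with GL_EXT_geometry_shader4; the variable names here are made up): the vertex shader’s varyings arrive in the GS as arrays, one element per input vertex, and the GS writes its own out varyings, which the FS then reads under the same names.

```glsl
#version 120
#extension GL_EXT_geometry_shader4 : enable

// Inputs from the vertex shader: one array element per vertex.
varying in vec3 v_normal[];
varying in vec3 v_light_vector[];

// Outputs to the fragment shader.
varying out vec3 g_normal;
varying out vec3 g_light_vector;

void main()
{
    // Pass the triangle through, copying the per-vertex varyings.
    for (int i = 0; i < gl_VerticesIn; i++) {
        gl_Position    = gl_PositionIn[i];
        g_normal       = v_normal[i];
        g_light_vector = v_light_vector[i];
        EmitVertex();
    }
    EndPrimitive();
}
```

The vertex shader just declares `varying vec3 v_normal;` etc., and the fragment shader declares `varying vec3 g_normal;` etc., matching the GS output names.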

Some of the computation can be done in the VS, and I guess some of it I’ll have to move to the GS.
I don’t understand what the point of the vertex shader is now that we have the GS.
Some official guidelines® would be nice.

Maybe because of the computed-vertices cache :). I wonder whether, when the geometry shader is disabled or null, GPUs directly send all varyings to the rasterizer. Otherwise, I fear the driver assigns a default/auto-generated GShader that wastes cycles copying data around. I guess a quick, simple benchmark can answer that… I can’t do it yet ^^’

Yes, I was thinking that caching is impossible in the GS because we can create arbitrary triangles. I can understand why they want to have a hull shader and a topology shader, since these would essentially be an improved form of the GS; at the same time, they don’t want to make the GS obsolete, since it is too new, so the GS will continue to exist for a few generations.

Now that I think about it, I guess I am forced to use

``````
#define light_vector gl_TexCoord0
``````

in both the vertex and geometry shaders. I can’t have my own varyings. I searched the web, but information is scarce.

I thought we are discouraged from using the built-in stuff (gl_TexCoord0, gl_ModelViewMatrix, gl_Vertex).

Am I wrong or what?

> Let’s say I want to duplicate an object, scale it, and position it somewhere else. I guess I could do the lighting calculation in the geometry shader. That would probably have the same problem as doing lighting in the vertex shader.

I would ask why you would want to do this in the geometry shader. You’re talking about what is essentially an optimization. So the first thing you should do is benchmark it to see if it is actually faster (which I doubt).

It will take some time to get it done, and then I’ll see. MRT rendering to 3 or 6 cubemap faces at the same time seems interesting, among other things.
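For reference, single-pass rendering into all six cubemap faces works by attaching the whole cubemap as a layered FBO attachment and routing each emitted primitive with gl_Layer in the geometry shader. A rough sketch (again GLSL 1.20 + GL_EXT_geometry_shader4; the uniform name `face_matrix` is made up):

```glsl
#version 120
#extension GL_EXT_geometry_shader4 : enable

// One view-projection matrix per cubemap face (hypothetical uniform).
uniform mat4 face_matrix[6];

void main()
{
    // Emit the incoming triangle once per cubemap face; gl_Layer
    // routes each copy to the matching layer of the cubemap FBO.
    for (int face = 0; face < 6; face++) {
        for (int i = 0; i < gl_VerticesIn; i++) {
            gl_Layer    = face;
            gl_Position = face_matrix[face] * gl_PositionIn[i];
            EmitVertex();
        }
        EndPrimitive();
    }
}
```

Note the 6× amplification: every triangle is transformed and rasterized once per face, so this trades six draw calls for more GS work.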
