I’d like the vertex shader to calculate something for me. The number of output vertices equals the number of input vertices, but the number of input vertices varies from draw to draw; it is not constant. I’d need a variable-size array in the shader, but there is none. Still, maybe there is some trick?
What do you mean by a “variable array?” Are you trying to have a single execution of a vertex shader operate on multiple vertices at once? If so, that’s essentially not possible (not without platform-specific extensions).
However, if all you want is for a vertex shader to know how many vertices will be processed by it, then you can tell it. You obviously know what parameters you’re going to call glDraw* with, so you know how many vertices will be sent to it. So you can just make a uniform with that number in it.
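A minimal sketch of that approach, assuming a uniform name of my own choosing (u_vertexCount); gl_VertexID is available in GL 3.0+ shaders:

```glsl
// vertex shader
uniform int u_vertexCount;  // total vertex count for this draw call

void main() {
    // gl_VertexID runs from 0 to u_vertexCount - 1,
    // so the shader knows both its index and the total
    // ...
}
```

On the application side, you would set it right before drawing, e.g. glUniform1i(glGetUniformLocation(program, "u_vertexCount"), count) followed by glDrawArrays(GL_TRIANGLES, 0, count).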
I had this idea in mind:
uniform mat4 matrix;
in vec3 in_vertex;

vec4 calced_vertices[];  // unsized array -- not legal GLSL
int i = 0;               // would have to persist across invocations

void main() {
    calced_vertices[i] = matrix * vec4(in_vertex, 1.0);
    gl_Position = calced_vertices[i++];
}
The variable i would be reset to zero before each rendering. After rendering, I would then read the calced_vertices back out of the shader program. However, I must choose a definite size for the calced_vertices array; I cannot leave it unsized as I did above. So I would need to declare an array of some constant size.
A variable array, on the other hand, can be of any size; it grows as necessary. This is my dilemma: I would need a variable array. Basically, I would like to retrieve the projected vertices after each rendering. Of course, I could do the projection in my application as well, but why do the same work twice? I don’t know the number of vertices to project in advance.
Maybe I could write the projected vertices into a buffer object or a texture?
I have an idea: I could create an FBO, attach a 1D texture to it, and render the projected vertices into it. Would that work?
Yes, that would work. Simpler still, you could use transform feedback to capture the results of the vertex shader directly. It sounds like all you want to do is transform vertices and store them off.
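A sketch of a transform-feedback capture, assuming a vertex shader with an out variable I’ve called out_vertex (the variable name, buffer name, and the use of GL_POINTS are my own choices; everything below is core in GL 3.0):

```c
/* after attaching shaders, before linking: declare what to capture */
const char *varyings[] = { "out_vertex" };
glTransformFeedbackVaryings(program, 1, varyings, GL_INTERLEAVED_ATTRIBS);
glLinkProgram(program);

/* a buffer big enough to hold one vec4 per vertex */
GLuint tbo;
glGenBuffers(1, &tbo);
glBindBuffer(GL_TRANSFORM_FEEDBACK_BUFFER, tbo);
glBufferData(GL_TRANSFORM_FEEDBACK_BUFFER, count * 4 * sizeof(GLfloat),
             NULL, GL_STATIC_READ);

/* capture the transformed vertices; skip rasterization entirely */
glEnable(GL_RASTERIZER_DISCARD);
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, tbo);
glBeginTransformFeedback(GL_POINTS);
glDrawArrays(GL_POINTS, 0, count);
glEndTransformFeedback();
glDisable(GL_RASTERIZER_DISCARD);

/* read the projected vertices back into client memory */
glGetBufferSubData(GL_TRANSFORM_FEEDBACK_BUFFER, 0,
                   count * 4 * sizeof(GLfloat), results);
```

Because rasterization is discarded, this does the projection once on the GPU and hands you the results, with no fragment work at all.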
Isn’t transform feedback an NVIDIA extension? It’s not ARB. How widely is it supported?
Isn’t transform feedback an NVIDIA extension?
No. GL 3.0 and above provide transform feedback as a core feature. And according to the OpenGL Extensions Viewer database, the EXT version of transform feedback is supported all the way back to the GeForce FX and Radeon 9500+ series.
If the calculation is not related to graphics, you can use a texture to pass the values in, do the calculation in the fragment/pixel shader, write the results into the framebuffer, and then read them back.
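A minimal sketch of that render-to-texture scheme (the sampler and output names are my own; texelFetch requires GLSL 1.30+):

```glsl
// fragment shader: one output texel computed per input texel
uniform sampler2D u_input;  // input values packed into a float texture

out vec4 out_result;        // written to the attached color buffer

void main() {
    // fetch the texel that corresponds to this fragment, unfiltered
    vec4 v = texelFetch(u_input, ivec2(gl_FragCoord.xy), 0);
    out_result = v * v;     // whatever per-element calculation you need
}
```

Drawing a full-screen quad into an FBO then runs the calculation once per texel, and glReadPixels (or glGetTexImage) retrieves the results.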
Thanks to all who replied! Can you now point me to some schemes for encoding GLSL types into textures? I’d like some ideas on how to put a float or an int into a texture.
I’d like some ideas on how to put a float or an int into a texture.
To put a float in a texture, you make a floating-point texture. To put integers in a texture, you make an integer texture. It’s all a matter of the image format that you use.
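For example, a sketch of the two uploads; the sized formats below are core in GL 3.0:

```c
/* one 32-bit float per texel */
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, width, height, 0,
             GL_RED, GL_FLOAT, float_data);

/* one signed 32-bit integer per texel; integer textures must be
   sampled with an isampler2D in the shader, and the transfer
   format must be GL_RED_INTEGER rather than GL_RED */
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32I, width, height, 0,
             GL_RED_INTEGER, GL_INT, int_data);
```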
Great! So, you can put, say, a normal map into a floating point texture?
So, you can put, say, a normal map into a floating point texture?
Yes. But why you would want to, I don’t know. I’d use an RG8_SNORM myself, and generate the 3rd component in the shader.