Multiple colors for the same vertices

Hello,

I have a large set of triangles (hundreds of thousands).
My goal is to paint the elements by their normals, meaning every “face” of an object gets a different color,
so for a cube we’d have 6 different colors, one per face.

The thing is, I want it to be efficient, which is why I use indexed drawing (glDrawElements),
but as far as I understand, indexed drawing forces one color per vertex.

Is there any way to make this work?
Maybe by changing the draw calls? How would you recommend doing it?

Whenever you need multiple pieces of information per vertex (such as normals, colors and/or texture coordinates), you’ll need to perform what’s called a vertex split. Essentially, for each distinct combination of attributes a vertex has to carry, you duplicate that vertex so every copy can hold its own information.

For instance, a cube will give you:

  • 6 faces
  • with 4 vertices per face
  • which means you’ll have 24 vertices for that object.

So that’s 3 vertices per logical vertex (3 × 8 = 24), since each corner belongs to 3 faces. This way each copy carries its own information, which isn’t shared with its neighbours; a minimal sketch follows below.
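Here is a minimal host-side sketch of such a split for a cube (the struct and function names are purely illustrative, not a fixed API):

#include <cstdint>
#include <vector>

struct Vertex {
    float position[3];
    float normal[3];   // the face normal, duplicated onto each corner
};

// The 8 logical corner positions of a unit cube.
static const float corners[8][3] = {
    {-1,-1,-1}, {+1,-1,-1}, {+1,+1,-1}, {-1,+1,-1},
    {-1,-1,+1}, {+1,-1,+1}, {+1,+1,+1}, {-1,+1,+1},
};

// Each face: its outward normal and the 4 corners it uses, counter-clockwise.
struct Face { float normal[3]; int corner[4]; };
static const Face faces[6] = {
    {{ 0, 0,+1}, {4, 5, 6, 7}},   // +Z
    {{ 0, 0,-1}, {0, 3, 2, 1}},   // -Z
    {{+1, 0, 0}, {1, 2, 6, 5}},   // +X
    {{-1, 0, 0}, {0, 4, 7, 3}},   // -X
    {{ 0,+1, 0}, {3, 7, 6, 2}},   // +Y
    {{ 0,-1, 0}, {0, 1, 5, 4}},   // -Y
};

// Vertex split: every face gets its own copies of its 4 corners, giving
// 6 * 4 = 24 vertices and 6 * 2 = 12 triangles, still drawn with glDrawElements.
void buildSplitCube(std::vector<Vertex>& vertices, std::vector<std::uint32_t>& indices)
{
    for (const Face& f : faces) {
        std::uint32_t base = static_cast<std::uint32_t>(vertices.size());
        for (int c : f.corner) {
            Vertex v;
            for (int k = 0; k < 3; ++k) {
                v.position[k] = corners[c][k];
                v.normal[k]   = f.normal[k];
            }
            vertices.push_back(v);
        }
        indices.insert(indices.end(), {base, base + 1, base + 2,
                                       base, base + 2, base + 3});
    }
}

The price of the split is memory and vertex-cache reuse: each shared corner is now stored and processed once per face rather than once per mesh.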

A good way to introduce this into your code/system is to have a build step with the option to split the vertices or not, so you can choose how to set up your data.

For data which is per-face rather than per-vertex (e.g. flat-shaded normals or colours), you don’t need one OpenGL vertex for each combination of vertex position and face normal.

If a fragment shader input has the flat qualifier, the value from the last vertex of the primitive (the provoking vertex) will be used for every fragment, rather than interpolating between the primitive’s vertices. The values from the other vertices are ignored.

So for a flat-shaded cube, you only need 8 vertices. But you need to construct the index array so that each vertex is the last vertex in at most one cube face.
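For the cube, here is one workable layout, as a sketch (the vertex numbering and winding are my own choices; the key point is that both triangles of each face end on a vertex dedicated to that face, and the fragment shader declares its normal input as flat in vec3):

#include <cstdint>

// Positions of the 8 cube corners.
static const float cubePositions[8][3] = {
    {-1,-1,-1}, {+1,-1,-1}, {+1,+1,-1}, {-1,+1,-1},
    {-1,-1,+1}, {+1,-1,+1}, {+1,+1,+1}, {-1,+1,+1},
};

// Per-vertex normals: vertex N carries the normal of the one face it provokes.
// Vertices 3 and 4 never act as provoking vertex, so their normals are unused.
static const float cubeNormals[8][3] = {
    { 0, 0,-1},   // 0: provokes -Z
    {+1, 0, 0},   // 1: provokes +X
    { 0,+1, 0},   // 2: provokes +Y
    { 0, 0, 0},   // 3: unused
    { 0, 0, 0},   // 4: unused
    { 0,-1, 0},   // 5: provokes -Y
    { 0, 0,+1},   // 6: provokes +Z
    {-1, 0, 0},   // 7: provokes -X
};

// 12 counter-clockwise triangles; each triangle's last index is its face's
// dedicated provoking vertex, so a flat-qualified input picks up that face's normal.
static const std::uint16_t cubeIndices[36] = {
    4,5,6,  7,4,6,   // +Z, provoking vertex 6
    3,2,0,  2,1,0,   // -Z, provoking vertex 0
    2,6,1,  6,5,1,   // +X, provoking vertex 1
    0,4,7,  3,0,7,   // -X, provoking vertex 7
    3,7,2,  7,6,2,   // +Y, provoking vertex 2
    0,1,5,  4,0,5,   // -Y, provoking vertex 5
};

// Drawn exactly as before: glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_SHORT, nullptr);

By default OpenGL takes the provoking vertex to be the last vertex of the primitive; glProvokingVertex lets you switch it to the first one instead.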

For a mesh made from quads, you might only need as many OpenGL vertices as you have vertex positions. This is definitely achievable for a regular grid (including a cylinder or torus), but for arbitrary meshes the process of allocating provoking vertices to faces is non-trivial. I wouldn’t be surprised if finding an optimal allocation is NP-hard.

For a triangle mesh, the number of triangles is asymptotically twice the number of vertex positions, so you’re likely to need twice as many vertices as vertex positions.

Finally: for the specific issue of flat shading, the vertex data doesn’t need to include normals at all. You can calculate them either in a geometry shader or in the fragment shader. To calculate them in the fragment shader, store the vertex position in the appropriate space (object space, world space, or eye space, but not clip space or NDC) as a vertex shader output. In the fragment shader, you can use

in vec3 position;
...
    vec3 normal = normalize(cross(dFdx(position), dFdy(position)));

dFdx and dFdy give you a pair of vectors which are tangent to the primitive in the same space as position. Their cross product gives you the normal.
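For reference, here is a minimal sketch of the complete shader pair wired up this way, with the GLSL embedded as C++ string literals; the uniform and attribute names (uModelView, uProjection, aPosition) and the simple head-on light are placeholders, not anything mandated:

// Vertex shader: pass the eye-space position through to the fragment shader.
static const char* vertexSrc = R"glsl(
#version 330 core
uniform mat4 uModelView;
uniform mat4 uProjection;
in vec3 aPosition;
out vec3 position;                 // eye-space position (not clip space / NDC)
void main() {
    vec4 eyePos = uModelView * vec4(aPosition, 1.0);
    position = eyePos.xyz;
    gl_Position = uProjection * eyePos;
}
)glsl";

// Fragment shader: reconstruct the face normal from screen-space derivatives.
static const char* fragmentSrc = R"glsl(
#version 330 core
in vec3 position;
out vec4 fragColor;
void main() {
    // dFdx/dFdy of the eye-space position are tangent to the primitive;
    // their cross product is the (eye-space) face normal.
    vec3 normal = normalize(cross(dFdx(position), dFdy(position)));
    float diffuse = max(dot(normal, vec3(0.0, 0.0, 1.0)), 0.0);  // head-on light
    fragColor = vec4(vec3(diffuse), 1.0);
}
)glsl";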
