Hello everyone!
This is my first post, so I'm sure I'll get something wrong.
I looked at some Wavefront .obj files and noticed that the number of vertex normals (when present) does not always match the number of vertices. Why is this? Shouldn't the correspondence be bijective, with every vertex normal matching exactly one vertex?
Could be face normals. Or a normal palette.
From this: Wavefront .obj file, it looks like it could be the latter. See Vertex normal indices.
Thank you, Dark, for your reply.
Vertices are uniquely identified by the “v” declarator, such as:
v 1.658534 0.685535 -1.790268
…and vertex normals are uniquely identified by the “vn” declarator, such as:
vn 0.353553 0.853553 0.382683
This excludes face normals or other attributes.
In some Wavefront .obj data files, freely distributed with OpenGL coding examples, I found this:
galleon.obj (Spanish Galleon - Copyright 1992 Viewpoint Animation Engineering Inc.)
2372 vertices (v)
3377 vertex normals (vn)
Cow.obj (Cow - Copyright 1992 Viewpoint Animation Engineering Inc.)
2903 vertices (v)
3267 vertex normals (vn)
Those are just a few cases in which the number of vertices in the model's vertex array does not match the number of vertex normals in its vertex normal array … and I'm confused by this discrepancy.
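The discrepancy is easy to observe directly: just count the `v` and `vn` declarations in a file. A minimal sketch (the sample .obj text below is made up for illustration, not taken from the galleon or cow models):

```python
# Count "v" and "vn" declarations in Wavefront .obj text.
# Sample data is hypothetical: 3 positions but 4 normals.
obj_text = """\
v 0.0 0.0 0.0
v 1.0 0.0 0.0
v 0.0 1.0 0.0
vn 0.0 0.0 1.0
vn 0.0 0.0 -1.0
vn 1.0 0.0 0.0
vn 0.0 1.0 0.0
f 1//1 2//2 3//3
"""

def count_declarations(text):
    """Return how many 'v' and 'vn' lines the .obj text contains."""
    counts = {"v": 0, "vn": 0}
    for line in text.splitlines():
        tag = line.split(maxsplit=1)[0] if line.strip() else ""
        if tag in counts:
            counts[tag] += 1
    return counts

print(count_declarations(obj_text))  # {'v': 3, 'vn': 4}
```

Running something like this over galleon.obj or Cow.obj reproduces the mismatched counts quoted above.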
It occurs for a vertex which belongs to faces separated by a sharp edge: the same vertex position needs a different normal on each side of the edge, so one "v" entry ends up paired with several "vn" entries.
Similarly, a vertex which lies on a texture seam will have multiple sets of texture coordinates for a single vertex position.
So in OpenGL, you'll have to duplicate those vertices when such cases arise.
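The duplication step can be sketched as follows. This is a minimal illustration, not a full .obj loader: it assumes the face corners have already been parsed into (position index, normal index) pairs, and it builds one OpenGL vertex per distinct pair, since OpenGL's indexed drawing uses a single index for all attributes:

```python
# Sketch: collapse OBJ-style (v, vn) index pairs into a single OpenGL-style
# index buffer, duplicating a position wherever it is paired with more than
# one normal (e.g. at a sharp edge). All index data below is made up.

def expand_indices(face_corners):
    """face_corners: list of (v_index, vn_index) pairs from 'f' lines.
    Returns (unique_pairs, indices): one GL vertex per distinct pair."""
    unique_pairs = []   # each entry becomes one interleaved GL vertex
    remap = {}          # (v, vn) pair -> its new single GL index
    indices = []
    for pair in face_corners:
        if pair not in remap:
            remap[pair] = len(unique_pairs)
            unique_pairs.append(pair)
        indices.append(remap[pair])
    return unique_pairs, indices

# Hypothetical sharp corner: position 0 is used with three different
# normals (0, 1 and 2), so it becomes three distinct GL vertices.
corners = [(0, 0), (1, 0), (0, 1), (2, 1), (0, 2), (3, 2)]
verts, idx = expand_indices(corners)
print(len(verts))  # 6 GL vertices from 4 positions
```

The same remapping handles texture seams: just extend the key to a (v, vt, vn) triple.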
If you use the built-in vertex pulling (recommended), yes.
However, you can implement custom vertex pulling in your shader if you want. With that, you can structure your data any way you want.
You can easily envision why the first can be more efficient than the second. Everything’s pipelined to push the right values to the shader at the right times, whereas with the latter the GPU has to wait until some shader calcs occur before it even knows what to pull in.
Interesting. Will have a look at that. Thank you.
Thank you, GClements; what you wrote makes perfect sense.