Hello, I have vertex data and face indices which I’ve read from a file. I’d like to get the object texture-mapped, but texture coordinates are not available in the file. I know that texture coordinates range from 0 to 1. The number of vertices is more than a million. Shall I start from the minimum (x, y), find the maximum (x, y), and then map the texture coordinates proportionately? Is this the right way?
The only “right” way is for the file to have texture coordinates.
Generating texture coordinates from vertex positions (whether manually or via glTexGen) will produce a result, but there’s no reason to believe that it will be the correct result, or even useful. In particular, for a closed surface, a parallel projection will result in each point on the texture being mapped to at least two points on the surface (one on a front face, one on a back face). Similarly, there will be points where the scale factor is infinite (i.e. where the surface is tangential to the projection direction).
But given how straightforward it is to implement your suggestion, maybe you should just try it to get an idea of the issues involved.
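As a rough illustration of the suggestion in the question, here is a minimal Python sketch (the function name `planar_uvs` is made up for this example). It normalises each vertex’s (x, y) against the XY bounding box. Note the caveats above: on a closed mesh, front and back faces receive the same texel, and regions where the surface is tangential to the projection axis are heavily stretched.

```python
def planar_uvs(vertices):
    """vertices: list of (x, y, z) tuples; returns a list of (u, v) in [0, 1]."""
    xs = [v[0] for v in vertices]
    ys = [v[1] for v in vertices]
    min_x, max_x = min(xs), max(xs)
    min_y, max_y = min(ys), max(ys)
    # Guard against a degenerate (flat) extent to avoid division by zero.
    span_x = (max_x - min_x) or 1.0
    span_y = (max_y - min_y) or 1.0
    return [((v[0] - min_x) / span_x, (v[1] - min_y) / span_y)
            for v in vertices]

# Example: the corners of a unit cube project onto the full [0, 1] square.
cube = [(x, y, z) for x in (0.0, 1.0) for y in (0.0, 1.0) for z in (0.0, 1.0)]
print(planar_uvs(cube)[:2])  # the two corners at (0, 0, z) both map to (0.0, 0.0)
```

The final line demonstrates the problem directly: distinct vertices that differ only in z collapse to the same texture coordinate.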
If the texture is a “texture” rather than a “map” (where specific points on the texture need to be mapped to specific points on the object), you may be able to get reasonable results by figuring out how to map the texture to an object with the same topology (formally, an object which is homeomorphic to your object) then adjusting the mapping to e.g. minimise scale variation or maximise conformality.
If you aren’t trying to map a specific texture to the object, but simply parameterise its surface so that a suitable texture can subsequently be created, then the above approach will work. That’s basically what the “unwrap” feature in a modelling package does. The main complication is that you typically need to choose texture seams so that the mapping to the plane is injective.
Otherwise, there probably isn’t any alternative to manual intervention in generating the texture mapping (i.e. loading the model into a program such as Maya or Blender and using that to generate texture coordinates).
Thank you, GClements. But I have one question: other software packages, like Maya etc., must generate texture coordinates from vertex coordinates, mustn’t they? What’s the underlying principle of generating texture coordinates in those cases?
Vertex coordinates are a factor in automatic unwrapping, which is part of the process of generating texture coordinates. The other parts of the process are marking texture seams (you can’t create an injective mapping from a closed surface to a surface with a boundary) and pinning vertices (forcing specific vertices to have specific texture coordinates, which is the primary means by which the unwrapping process is influenced), both of which require user input.
Having done those, unwrapping is essentially an optimisation problem: choosing texture coordinates so as to minimise distortion. Ideally, unwrapping should preserve the lengths of edges and the angles between edges. But it’s impossible to preserve these quantities exactly unless the surface is “developable”, and most surfaces aren’t. Typically, preservation of angles (conformality) is prioritised over preservation of edge lengths, as this results in local similarity (any small region of the texture visually resembles a view of the corresponding portion of the object).
For a simple example of why you cannot preserve angles exactly, consider trying to unwrap one corner of a cube, where 3 square faces meet at a vertex. The angles between the edges are all 90 degrees, so they sum to 270 degrees. But however you map that region to a plane, the angles around an interior vertex will always sum to 360 degrees; i.e. you can’t create a mapping where the angles are all 90 degrees.
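The cube-corner arithmetic can be put in numbers. The 360-minus-sum quantity is what the literature calls the angular defect; a non-zero defect means no flat, angle-preserving unwrapping of that neighbourhood exists:

```python
# Three square faces meet at a cube corner, each contributing one
# right angle to the surface around that vertex.
face_angles = [90.0, 90.0, 90.0]
angle_sum = sum(face_angles)   # 270 degrees on the surface
defect = 360.0 - angle_sum     # 90 degrees must be "created" in the plane
print(angle_sum, defect)       # 270.0 90.0
```

A developable surface (cylinder, cone away from the apex) has zero defect everywhere, which is exactly why it can be unrolled without distortion.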
There isn’t a single ‘underlying principle’ to assigning texture coordinates to a 3D object. Packages like Maya and modo give the user many choices as to how to do that. A given model coming out of Maya could have infinitely many different sets of texture coordinates assigned to its vertices. Images can be mapped onto meshes as geometric projections or with wrapping. Textures can also be algorithmically generated, where there is no image involved. If you’re thinking about getting into 3D graphics, you might consider learning a 3D package. That way you could generate the models needed for your OpenGL sims. I believe Blender is available at no charge.