I’m going crazy trying to export meshes with texture coordinates from XSI… What happens is that the texture projections (coordinates) are correctly exported as source arrays, but the polygon clusters (or mesh subsets, if you prefer) do not refer to those sources… only the POSITION and NORMAL sources are referenced…
I’ve tried all sorts of things in XSI: freezing the texture projections, freezing the operator stack, using every mapping technique available… nothing works.
The only time I have successfully exported UVs from XSI is when I had first imported a file from another modeling tool with FBX export/import…
Is there a “hidden manipulation” I might have missed in XSI? Or is it an exporter problem?
I hope I’m not posting in the wrong place. If you’re using the .NET SDK and the SemanticLayer and want to import the texture coordinates, you’ll have to use the CSLShape_35 class instead of CSLBaseShape/CSLShape. That gives you access to the texture coordinate arrays. Again, this applies when using the .xsi SDK.
In fact I’m not using any SDK: I only use the exported XML file to import models/animations into my engine. And according to the COLLADA specification, every source that is used must be referenced with an “input” tag in the “polygons” definition… but with an XSI-exported file this is not always the case…
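To make the point concrete, here is a minimal sketch of the kind of check an importer can do. The element names and semantics follow the COLLADA 1.4 schema; the fragment itself and the source ids are made up for illustration, and the COLLADA namespace is omitted for brevity.

```python
import xml.etree.ElementTree as ET

# Minimal COLLADA-style <polygons> fragment (namespace omitted; ids invented).
# A correct export should reference every used source, including TEXCOORD.
fragment = """
<polygons material="mat0" count="1">
  <input semantic="VERTEX"   source="#mesh-vertices" offset="0"/>
  <input semantic="NORMAL"   source="#mesh-normals"  offset="1"/>
  <input semantic="TEXCOORD" source="#mesh-uvs"      offset="2" set="0"/>
  <p>0 0 0 1 1 1 2 2 2</p>
</polygons>
"""

def input_semantics(polygons_xml):
    """Return the semantics referenced by the <input> elements, in order."""
    root = ET.fromstring(polygons_xml)
    return [inp.get("semantic") for inp in root.findall("input")]

semantics = input_semantics(fragment)
if "TEXCOORD" not in semantics:
    print("warning: texture coordinates are not referenced by this cluster")
```

In the problematic XSI exports described above, the TEXCOORD source array exists in the file, but the corresponding “input” line is simply missing from the cluster, so a check like this would trigger the warning.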
And to clarify a little bit: I’m writing a generic COLLADA importer, not an XSI-specific one.
Hey, just wondering if you’ve ever had problems using the COLLADA viewer to view texture-mapped models exported from XSI. For some of my models the UV coordinates appear upside down in their viewer (probably a left-handed/right-handed convention issue), and the normal maps do not seem connected to anything, while the diffuse maps still are. Is the COLLADA XSI import/export team aware of this issue? Or is it just not covered by the COLLADA file format specification?
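For what it’s worth, when UVs look vertically flipped between two tools, the usual cause is a differing texture-space origin (top-left vs. bottom-left), and the usual workaround is to invert V on import. This is a generic sketch of that fix, not something specific to the XSI exporter or the COLLADA viewer:

```python
# Invert the V coordinate of each UV pair; assumes V is normalized to [0, 1].
def flip_v(uvs):
    """uvs: list of (u, v) pairs."""
    return [(u, 1.0 - v) for u, v in uvs]

print(flip_v([(0.25, 0.0), (0.5, 1.0)]))
```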
Sorry if this answer comes a little late. You need to have a texture that actually uses the texture coordinates in order for them to be exported as a source in the mesh. That’s the way the COLLADA format works for now.
Thank you, Jean Claude, for the info. I’m currently writing an animation exporter for XSI, and I was wondering if you (or anyone) could give me some general information about how skeletal animation works in XSI.
Envelope, that’s the object being deformed. In other words, that’s the skin.
Skeleton, those are the objects deforming the envelope, aka bones. Often confused with IK chains, but it is a completely orthogonal concept.
Base pose, this is the transformation for the envelope and the skeleton at the moment of envelope assignment.
To deform each vertex of the envelope, you transform it into skeleton space using the inverse base-pose transformation, then into global space using the current skeleton transformation, and then blend it, using the envelope weights, with the contributions of the rest of the skeleton nodes.
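The per-vertex step above is standard linear blend skinning, and can be sketched roughly like this. This is an illustration of the math, not the exporter’s actual code; all names are invented, matrices are 4×4 row-major nested lists, and only translations are used to keep the example short.

```python
# Hypothetical sketch of linear blend skinning as described above.

def mat_mul(a, b):
    """Multiply two 4x4 row-major matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transform(m, v):
    """Apply a 4x4 matrix to a 3D point (implicit w = 1)."""
    x, y, z = v
    return tuple(m[i][0]*x + m[i][1]*y + m[i][2]*z + m[i][3] for i in range(3))

def translation(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def skin_vertex(v, bones):
    """bones: list of (weight, inverse_base_pose, current_transform)."""
    out = (0.0, 0.0, 0.0)
    for weight, inv_base, current in bones:
        # envelope space -> bone space (inverse base pose) -> global space
        p = transform(mat_mul(current, inv_base), v)
        out = tuple(o + weight * c for o, c in zip(out, p))
    return out

# One bone whose base pose is the identity and that has moved +2 on x:
# a fully weighted vertex follows it exactly.
result = skin_vertex((1.0, 0.0, 0.0),
                     [(1.0, translation(0, 0, 0), translation(2, 0, 0))])
print(result)
```

With several bones, the weights should sum to 1 per vertex so the blended position stays well defined.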
In COLLADA we export the inverse bind pose (aka base pose) of the skeleton nodes, but we don’t have a similar concept for the envelope; instead there is a bind shape. In our case, we transform the original shape by the base-pose transform and output it as the bind shape (but this does not work yet in the current exporter, so it will only work in the cases where your base pose is the identity matrix).
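The bind-shape step amounts to baking the envelope’s base-pose transform into the vertex positions once, at export time. A hedged sketch of that idea (names and data invented for illustration):

```python
# Compute the bind shape by applying the envelope's base-pose transform
# (4x4 row-major matrix) to each original vertex.

def apply_base_pose(vertices, base_pose):
    """vertices: list of (x, y, z) points; returns the transformed list."""
    out = []
    for x, y, z in vertices:
        out.append(tuple(base_pose[i][0]*x + base_pose[i][1]*y +
                         base_pose[i][2]*z + base_pose[i][3]
                         for i in range(3)))
    return out

original = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
identity = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
bind_shape = apply_base_pose(original, identity)
# With an identity base pose the bind shape equals the original shape,
# which is the only case the current exporter handles correctly.
print(bind_shape)
```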