SERIOUS DEFECT: morph targets, second class citizens


Reading the 1.5 specification, it suggests that <morph> targets are limited to the contents of <vertices>, excluding the <polylist>-like elements from contributing to the logical vertex streams.

If that is the case, then why even allow <input> elements with per-vertex semantics inside these elements? Allowing them is very useful, because otherwise models might have to be broken up into chunks, and <geometry> elements would not even resemble recognizable artifacts from the real world.
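For reference, a 1.5 morph controller looks roughly like the sketch below: each target is a whole <geometry> referenced by ID through an IDREF_array, which is why the restriction above matters (the ids here are made up for illustration):

```xml
<controller id="faceMorph">
  <morph source="#baseFace" method="NORMALIZED">
    <!-- each morph target is an entire <geometry>, referenced by id -->
    <source id="faceMorph-targets">
      <IDREF_array id="faceMorph-targets-array" count="2">smileFace frownFace</IDREF_array>
      <technique_common>
        <accessor source="#faceMorph-targets-array" count="2" stride="1">
          <param name="IDREF" type="IDREF"/>
        </accessor>
      </technique_common>
    </source>
    <source id="faceMorph-weights">
      <float_array id="faceMorph-weights-array" count="2">0 0</float_array>
      <technique_common>
        <accessor source="#faceMorph-weights-array" count="2" stride="1">
          <param name="MORPH_WEIGHT" type="float"/>
        </accessor>
      </technique_common>
    </source>
    <targets>
      <input semantic="MORPH_TARGET" source="#faceMorph-targets"/>
      <input semantic="MORPH_WEIGHT" source="#faceMorph-weights"/>
    </targets>
  </morph>
</controller>
```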

So what’s a solution? Something like adding a “symbol” or “sid” attribute to the <polylist>-like elements would make it possible to easily match these elements up between morph target meshes. Without such a mechanism, an implementation can only match each occurrence of each element by its order of appearance.
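To be clear, the “sid” attribute on <polylist> here is hypothetical — it is not in the 1.5 schema. The idea would look something like this, with the sid (rather than document order) pairing up the primitives of the base mesh and its targets:

```xml
<!-- HYPOTHETICAL: "sid" on <polylist> is a proposed extension, not valid 1.5 -->
<geometry id="baseFace">
  <mesh>
    <polylist sid="skin"  material="skinMtl"  count="…">…</polylist>
    <polylist sid="teeth" material="teethMtl" count="…">…</polylist>
  </mesh>
</geometry>
<geometry id="smileFace">
  <mesh>
    <polylist sid="skin"  count="…">…</polylist>  <!-- matched by sid, not order -->
    <polylist sid="teeth" count="…">…</polylist>
  </mesh>
</geometry>
```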

It’s reasonable for the morphs to all have the same number of positions; without POSITION data there is nothing to look at. But the other attributes depend on the materials/effects. It’s not true that they cannot be morphed unless they are completely symmetric mesh-wide; they can be, as long as there is agreement between the corresponding elements. The spec does not require <polylist>-like elements to include a <vcount> or <p> element, so in that respect they are strictly analogous to the <vertices> elements, except that <vertices> does not contain primitives.

Piling as much as possible into a <geometry> is one way to be able to change its materials easily with <bind_material> on a particular instance. So this ability to extend a mesh to the entirety of what is a logical “object” is useful until a future specification includes a new <object> element, or at least permits <bind_material> inside of <instance_node> so that a node and its descendants can act as de facto objects in lieu of alternatives.
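The material-binding half of this is standard 1.5: a single all-encompassing <geometry> can have its materials swapped per instance, which is what makes the piling-up strategy workable (the ids and symbols below are invented for illustration):

```xml
<node>
  <instance_geometry url="#wholeObject">
    <bind_material>
      <technique_common>
        <!-- rebind the mesh's material symbols for this instance only -->
        <instance_material symbol="skinMtl"  target="#redSkinEffect"/>
        <instance_material symbol="teethMtl" target="#ivoryEffect"/>
      </technique_common>
    </bind_material>
  </instance_geometry>
</node>
```

The spec allows <bind_material> under <instance_geometry> and <instance_controller>, but not under <instance_node> — which is exactly the gap the paragraph above complains about.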


EDITED: While I think the above is clearly true, it occurs to me that in practice, to morph efficiently, the morphs should be localized. That implies <geometry> elements that represent piecewise portions of a model (and many, many of them), and it’s almost lamentable that every morph target must be its own geometry, since this clutters <library_geometries> and the scene graph in general.

A more radical idea is to use those same symbol or sid attributes to facilitate local morphing of geometry. It only makes sense: since <geometry> uses symbols to apply materials locally, it should also be able to facilitate local morphing. It would be contradictory to pursue two different design philosophies.

COLLADA is very incomplete, and its feature set is poor. I hope someone somewhere is working on this. Not only is the feature set poor, but it’s also full of treacherous and disappointing holes. None of this would be so disruptive if it weren’t the ONLY public format in a position to fulfill its stated role :frowning:


EDITED: While it would be nice to be consistent, it occurred to me that morphs are probably happy with just positions and lighting-normal-like components, and most use cases will settle for deriving the latter from the position data, so it’s probably not that big of a deal.

(Although it’s slightly annoying to calculate lighting data for elements that are not intended to be lit, it’s a relatively small price to pay.)


Morphs should allow blending between any partially or fully defined meshes IMO. In VFX, for example, they use a lot of fluid simulations, which of course produce meshes with changing topology. How would they export this in COLLADA? Answer: Alembic… This is at least one big reason nobody uses COLLADA in VFX. The other is the XML schema: it takes a lot of space, a lot of memory, a lot of processing. It’s just not suited for the high end.


I am addressing space, memory, and processing in an overhaul of COLLADA-DOM. It will first be tentatively released as 2.5, and later, after everything new comes online, as ColladaDOM 3. Anyway, COLLADA is a processing step. It doesn’t need to be high-end for any real reason, though the old COLLADA-DOM is just very liberated, let’s say, to the point that it is a bit unseemly. It does need to scale. But COLLADA basically represents art/artifacts, and you don’t use COLLADA with the “runtime”. You use it with software that prepares the one-and-only common format for the needs of the application that’s to make use of the art/artifacts, and that’s an offline step, or something akin to working with editing software.

The real hit on COLLADA, I think, is that the specification is not ambitious or well informed enough to be applicable to nontrivial use cases. But there needs to be free/open software before going another round with the specification. I also think it needs vendor extensions, not strictly limited to the <extra> elements, to form the basis for experimenting with COLLADA’s format. Personally I am content with non-“nontrivial” use cases, because in real-time 3-D at least, there isn’t enough focus on fundamentals, so sticking with COLLADA forces you to either A) come up with a better alternative, or B) accept that real-time 3-D is very primitive, and that your work should reflect that. And this is probably healthy, because while you are thinking about things that are important, everyone else will be thinking about all of the usual frivolously stupid things!