Can someone please point out how vertex animation can be set up in 1.5?
Note that this is true per-vertex XYZ animation, not skinning or morphing: each vertex moves independently each frame, as in raw facial motion-capture data. It can be a lot of data, but that's what disk drives are for. It doesn't seem like it should be very hard to specify.
Vertex animation was not added to 1.4, nor later to 1.5, because there was no design consensus within the working group.
You can animate mesh data however you like using an effect written in any of the supported shader languages (GLSL, GLES, Cg, CgFX, DX, depending on the COLLADA version). With this approach, it's a matter of connecting the parameters of the vertex program that does the computation to the animation channel outputs. The vertex position data would live in the key frames, and the channel targets would be the appropriate shader parameters.
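Roughly like this (a hypothetical sketch: the effect id, the `vertex_offset` param, and the sampler wiring are assumed, and the technique/pass binding is omitted):

```xml
<!-- Effect whose vertex program reads a per-frame parameter -->
<effect id="face-effect">
  <profile_GLSL>
    <newparam sid="vertex_offset">
      <float3>0.0 0.0 0.0</float3>
    </newparam>
    <!-- technique / pass / shader binding omitted -->
  </profile_GLSL>
</effect>

<animation id="offset-anim">
  <source id="offset-anim-input"> <!-- key times --> </source>
  <source id="offset-anim-output"> <!-- float3 key values --> </source>
  <sampler id="offset-anim-sampler">
    <input semantic="INPUT" source="#offset-anim-input"/>
    <input semantic="OUTPUT" source="#offset-anim-output"/>
  </sampler>
  <!-- Channel targets the effect parameter by its SID path -->
  <channel source="#offset-anim-sampler" target="face-effect/vertex_offset"/>
</animation>
```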
Thanks for the reply. However, the goal is to specify the changing geometry of the mesh over time. Rendering is an independent issue: there is no requirement that it be set up at all, and when it is, it generally won't use Cg/GLES/etc. methods.
Instead the requirement is for the animation’s “channel” to be able to target a specific vertex in the output of the geometry/mesh/source/float_array, the affiliated accessor, or affiliated vertices elements.
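For instance, something along these lines (hypothetical: COLLADA's array-index target syntax exists for transforms, but nothing in 1.4/1.5 sanctions targeting geometry sources this way):

```xml
<geometry id="face">
  <mesh>
    <source id="face-positions">
      <float_array id="face-positions-array" count="9">
        0.0 0.0 0.0   1.0 0.0 0.0   0.0 1.0 0.0
      </float_array>
      <technique_common>
        <accessor source="#face-positions-array" count="3" stride="3">
          <param name="X" type="float"/>
          <param name="Y" type="float"/>
          <param name="Z" type="float"/>
        </accessor>
      </technique_common>
    </source>
    <!-- vertices, polygons, etc. -->
  </mesh>
</geometry>

<animation>
  <!-- sources and sampler as usual -->
  <!-- Hypothetical: address element 3 of the array (vertex 1's X) -->
  <channel source="#some-sampler" target="face-positions-array(3)"/>
</animation>
```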
Alternatively, multiple mesh snapshots could be supplied along with some method to select among them based on the current time.
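A sketch of the snapshot idea (entirely hypothetical: the per-frame geometries and the "snapshot" target are invented; no such selection mechanism exists in 1.4/1.5):

```xml
<!-- One <geometry> per snapshot of the mesh -->
<geometry id="face-frame-000"> <mesh> <!-- frame 0 data --> </mesh> </geometry>
<geometry id="face-frame-001"> <mesh> <!-- frame 1 data --> </mesh> </geometry>
<geometry id="face-frame-002"> <mesh> <!-- frame 2 data --> </mesh> </geometry>

<animation>
  <source id="times"> <!-- key times --> </source>
  <source id="frame-index"> <!-- 0, 1, 2, ... --> </source>
  <sampler id="frame-sampler">
    <input semantic="INPUT" source="#times"/>
    <input semantic="OUTPUT" source="#frame-index"/>
  </sampler>
  <!-- Invented target: select which snapshot is current -->
  <channel source="#frame-sampler" target="face/snapshot"/>
</animation>
```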
I think these approaches amount to pretty much the same thing as there will be blocks of vertex position data stored either as <mesh> elements or <animation> elements (i.e. as key frames).
Since the “output of the mesh”, i.e. the result of assembling a <mesh> into an application object, is not represented in the COLLADA document, it isn't possible to target that result.
This is where the <vertices> element becomes important, as it implicitly represents the identity of each vertex (by POSITION) in the mesh. You can target through this element to the input sources with the assurance that they contain all of the mesh's vertex attributes in an implicit order (counting from zero). You can also add other <input> streams to the mesh's <vertices>, such as TEXCOORD, or one with some new semantic (e.g. INDEX), to supply the attributes your application needs to animate beyond POSITION.
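For example (the source ids are assumed, and the INDEX semantic is a proposed extension rather than part of the spec):

```xml
<vertices id="face-verts">
  <input semantic="POSITION" source="#face-positions"/>
  <input semantic="TEXCOORD" source="#face-uvs"/>
  <!-- Proposed new semantic: an explicit per-vertex index stream -->
  <input semantic="INDEX" source="#face-indices"/>
</vertices>
```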