User / Custom Data ideas

I’m starting this topic to hopefully brainstorm some ideas about how to export user defined data from the various packages into collada and how collada should specify where that data comes from.

That means

  • notes and comments attached to objects/nodes/meshes, etc.
  • custom attributes that may or may not have animation
  • blind data

Since it’s a brainstorm (or ブレスト as the Japanese call it) it will probably wander all over.

One problem is probably going to be that the different packages work very differently in this area. I’m currently mostly familiar with Maya, so I’m not sure what the other packages can do. Please correct me if I’m wrong. My understanding is that in both Maya and XSI the user can create any arbitrary attribute (float, int, bool, enum, color, vector) and attach it to any node. You can do this from the UI; no special coding is necessary. That attribute can then generally be animated over time and can be connected to, or drive, any other node in the system, either directly or through expressions.

I’m not a Maya expert, so in my last exporter, when I got the list of attributes of, say, a DagNode, that list had something like 180 entries. (A DagNode is a transform matrix in Maya under which a mesh might be displayed, for example.) Since I didn’t want to export all 180 entries, my lame solution at the time was to make all my user attributes start with “WW_” so I could easily figure out which attributes were mine.

I suppose another solution would have been to give the exporter some kind of config file that said which attributes I needed exported.
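For example, a hypothetical exporter config (all element and attribute names here are made up for illustration) might look like:

```xml
<!-- Hypothetical exporter filter config: only attributes matching
     an <include> rule get written out; everything else is skipped. -->
<exportFilter>
  <include prefix="WW_" />
  <include name="drawOverride.overrideTexturing" />
</exportFilter>
```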

That points out a problem for Collada: maybe some team needs all 180 of those standard attributes, for which no equivalent exists in another package.

How would Collada solve this? Should it always export all 180, and we’ll just assume that in the grand scheme of things there won’t be so much data that it matters?

Maybe under every node in Collada there should be an optional &lt;extra&gt; element containing a list of extra-data elements. For example:

<node name="robot">
    <extradata name="maya.drawOverride.overrideTexturing" type="bool">true</extradata>
    <extradata name="maya.renderInfo.identification" type="short">2</extradata>
</node>

That wouldn’t be a generic solution, but it would allow me to use Collada even when something wasn’t in the standard, and hopefully, when and if someone comes up with a specific way to make that data standard across packages, it would be a small change to my code to get the data from its new standard spot in Collada.
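As a sketch of how that might sit inside an &lt;extra&gt; wrapper with a package-specific technique (this is not the actual 1.0 schema, just one possible shape):

```xml
<node name="robot">
  <extra>
    <technique profile="MAYA">
      <param name="drawOverride.overrideTexturing" type="bool">true</param>
      <param name="renderInfo.identification" type="short">2</param>
    </technique>
  </extra>
</node>
```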

As for blind data,

I’m guessing that at a basic level, since it’s private data in the first place, all you can do in the exporter is enumerate over the list of blind data and export it as CDATA chunks, using the ID info specific to the particular 3D package.

Not knowing Max as well, I’m not sure if Max even supports blind data (vs. plugin data). The difference is who’s in charge of the data. With blind data, the system (the 3D software) is aware of the data and knows how to save/load it. Data that belongs to a plugin has the problem that the plugin is responsible for loading/saving it, so a generic exporter would have no way to get access to that data or even know it exists.

Anyway, assuming the package in question does have generic blind data, on top of just exporting it as CDATA, the next step up would be to define (using the array, source, etc. stuff you already made) certain types and some way to identify them.

By that I mean, for example, if there were a way to define bool, int, float, vector, etc. (basically a structure for the data), then if you somehow knew that some particular blind data was, say, per-vertex data, you could make it cross-package compatible. You could store that it’s per-vertex and what its structure is, so that when it’s loaded into a different package it would also get attached per vertex.
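A rough sketch of what per-vertex blind data might look like using that array/source-style machinery (the element names and the profile are invented for illustration):

```xml
<source id="mygameinfostuff">
  <!-- one int per vertex of the owning mesh -->
  <array id="area_types" type="int" count="4">2 2 1 3</array>
  <technique profile="BLIND_DATA">
    <accessor source="#area_types" count="4" stride="1">
      <param name="area_type" type="int" />
    </accessor>
    <mapping scope="per_vertex" />
  </technique>
</source>
```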

I don’t know if anyone would support that off the bat. It could be something like adding another type of blind data (meta blind data) that told Collada that this other blind data is something Collada understands. In other words, there’s a piece of blind data labeled “mygameinfostuff”, which of course Collada has no idea about, so by default it would just save out a CDATA chunk. But if Collada saw a “collada_data_def_001” blind data, it could read the contents of that, which might say something like


(pick a format for this meta blind data: binary, XML)

The data_spec would point to another meta blind data that gave the format of “mygameinfostuff”. That’s just so you can share formats across multiple pieces of blind data.
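Purely as an illustration (every name here is hypothetical), the contents of the “collada_data_def_001” chunk might declare something like:

```xml
<!-- "This blind data is something Collada understands: its format
     lives in mygameinfospec, and it applies per vertex." -->
<blind_data_def>
  <target name="mygameinfostuff" />
  <data_spec ref="mygameinfospec" />
  <mapping scope="per_vertex" />
</blind_data_def>
```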

The “mygameinfospec” would also have some kind of Collada format, something like a structure def, maybe in XML:

     <vector3 name="flow_force" />
     <int name="area_type" />
     <bool name="impassible" />

The point would be that if someone wanted to make their blind data go across tools, they’d somehow get this meta blind data into the tool so Collada would know what to do with it.

Or it could be that the blind data has a header that Collada expects to see, and if it sees the header, it can parse the blind data.

Just throwing out ideas.

Again, I see this as a usability issue. If there is something I can’t do with Collada, then I’m forced to roll my own. The defined formats might be overkill. Even if Collada doesn’t have cross-tool compatibility for everything, if it did export everything from every package, I’d still be able to use it for my projects, since most projects generally only use one tool anyway.

I agree with the “usability” issue.

COLLADA has potential, IMHO a lot of potential, and I know that it is a major undertaking to evolve it in such a way that any gaming studio can use it without having to write their own importers/exporters, but that is the goal of COLLADA.

If we want COLLADA to be one of the tools for next-generation development (of off-line computer graphics projects and for real-time games) that actually makes the developer community’s life easier, then it needs to aim as high as possible.

Thanks to everyone who posts in this forum and helps the format to grow.

A game industry fan,


The primary mechanism that COLLADA currently uses to carry data is the <technique> element. Much of the user’s data is contained (indirectly) within techniques. A technique is a context that embodies a well known description of data that is meaningful to the parent element’s scope.

As a result, the user’s data is directly useable in COLLADA and not so much out-of-band data or meta data… so far. We expect to continue this trend.

The &lt;extra&gt; element is truly out-of-band information as far as the COLLADA schema is concerned. In the 1.0 schema it is allowed as a child element of the &lt;geometry&gt; element only, as we have not yet identified places in the schema where &lt;technique&gt; is insufficient or undesirable.

I don’t believe it is a problem for COLLADA. COLLADA is a multi-representational schema, again due to the design of the &lt;technique&gt; element. It would be a nice feature for some exporter tools to enable user-defined filtering of the data. This is essentially the same thing as making database queries. COLLADA is a database schema; something like Oracle is a database engine that provides the capabilities you are talking about.

If everyone wants to put out-of-band data under every <node> element then we can do that. We have been steering users to leverage techniques to describe data as predominantly in-band. It also requires us to all agree on data descriptions during the design phase.

Yes very true and there is some value in allowing users to experiment with COLLADA within the scope of the <extra> element. Such experiments or advanced usages are well contained and therefore more portable.

CDATA is not used in the COLLADA schema, although it can be within the <extra> element.

As for your ideas on “the next step”… this is something that the &lt;technique&gt; element enables you to do within the scopes where it is allowed, for example within the &lt;source&gt; element. The data model within &lt;source&gt; appears simple at present: homogeneous arrays and parameters.

However the <param> element can be a user defined type, one that doesn’t have to be a scalar type. The definition of that type can be supplied in XML Schema language and thereby support structure types that are understood by the XML parser. How well this works to extend COLLADA within a technique (profile) has not been fully explored yet.
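For instance, a user-defined structure type could be supplied in XML Schema roughly like this (a sketch; the type and element names are assumptions, and how a &lt;param&gt; would reference such a type is exactly the part that has not been fully explored):

```xml
<xs:complexType name="GameInfoType">
  <xs:sequence>
    <xs:element name="flow_force" type="xs:string" /> <!-- e.g. "0.0 1.0 0.0" -->
    <xs:element name="area_type"  type="xs:int" />
    <xs:element name="impassible" type="xs:boolean" />
  </xs:sequence>
</xs:complexType>
```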