Usefulness

Great work. I’m glad someone is taking the initiative to make this happen.

I’m not sure how other developers work, but I’m used to specifying lots of game-related and conversion-related information inside my 3D editor (Maya, Max, LightWave, Softimage).

My impression is that the current COLLADA standard doesn’t address these types of issues.

For example, I don’t force my artists to collapse models into one model before exporting, even though I might want them to be one model at runtime. Forcing the artists to collapse them means the models are no longer editable. They could keep two versions of the files, but that means the possibility of not remembering which file is newest. It also means a manual collapsing step every time they export.

Instead, I have them mark where in the hierarchy a model starts and, optionally, where it ends; everything in between in that part of the hierarchy gets turned into one model by my tools.
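
Purely to make the idea concrete (the element and attribute names below are my own invention, not anything from the COLLADA spec), such markers could ride along as per-node extra data something like this:

  <node id="house">
    <extra>
      <param name="collapse_start" type="bool">true</param>
    </extra>
    <node id="roof"/>                <!-- merged into "house" -->
    <node id="walls">
      <extra>
        <param name="collapse_end" type="bool">true</param>
      </extra>
    </node>
  </node>

The conversion tool, not the artist, would then do the actual collapsing at export time.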

Another example: I don’t have them remove stuff they don’t need. Instead, I have them mark what they do need, and the rest gets ignored. They often need lights, constraint objects, etc. in their scene. Asking them to remove that stuff every time is asking for trouble.

While it would be possible to write my own COLLADA exporter to add this data, it seems like the point should be to let me avoid having to write my own. Otherwise nothing is gained and I might as well use my own format, since I’m going to have to write the back end anyway to use the data.

A standard that defined how to export all this extra data (comments, notes, blind data, sets, and everything else attached to 3D data in a typical Max or Maya scene) would mean that I could actually use COLLADA as my middle format.

Not just a way, but a defined standard that says which pieces of extra data in Max/Maya/XSI correspond to which extra pieces in the COLLADA file. Comments, blind data, user attributes, etc. all need to get exported from all packages in the same way, or at least as close as possible.

I guess, at least for me, I need COLLADA to not only define a file format but to specify, at least to some extent, how things get exported from the various packages. Otherwise I’ll just end up having to write my own.

For example, exporting everything, even hidden and frozen items. My artists often freeze or hide stuff that they still want exported. My exporters export everything the artist marks for export. To use COLLADA I’d need the exporters to either do the same (not feasible, since it would most likely be game specific) or else to export EVERYTHING (including hidden and frozen objects) and let me choose how to use the info.

Also,

I know animation is not covered yet, but for me COLLADA is not very useful without it. I know that’s harsh, but it’s true. Why would I make my workflow support COLLADA only to have to make some hack for animation and then rewrite it when COLLADA has animation added? I only say that to emphasise the importance of getting the animation spec finished.

But with that in mind, there is often data I need there as well. I need not just the animation on models, but the animation on user parameters, materials, etc. My artists, for example, make a list of animations and frame ranges (walk is frames 1 to 20, run is frames 25 to 40). They even go so far as to set, for a particular animation, which bones need data. So, for example, to blend a wave-the-arm animation with a running animation, I need to know not to blend the leg positions (and, for compression, not even to save them).
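
As a concrete (and entirely made-up) illustration, that clip list and per-clip bone mask could be carried as extra data roughly like this; none of these element names come from the COLLADA spec:

  <extra>
    <clip name="walk" start_frame="1"  end_frame="20"/>
    <clip name="run"  start_frame="25" end_frame="40"/>
    <clip name="wave" start_frame="45" end_frame="60">
      <!-- only these bones carry data, so the engine knows not to
           blend (or even store) the leg positions -->
      <bone name="spine"/>
      <bone name="r_shoulder"/>
      <bone name="r_elbow"/>
    </clip>
  </extra>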

Again, I don’t need that data saved out in exactly that form. What I need is for COLLADA to specify how Maya, Max, XSI, etc. export the extra data, so that on reading the COLLADA file into my own tools I can dig that info out and figure out what I really want to do with all the data.

Note that this includes animation of user types. If I make a float field in Maya called “translumosity” and set animation on it, I need that field definition and its data to make it down into the COLLADA file.
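
Again purely as a sketch (invented syntax, not anything from the spec), the custom attribute and a sampled animation channel targeting it might look like:

  <node id="lamp01">
    <param name="translumosity" type="float">0.5</param>  <!-- user-defined field -->
  </node>
  <channel target="lamp01/translumosity">
    <input  semantic="TIME"  values="0.0 0.5 1.0"/>
    <output semantic="FLOAT" values="0.5 0.8 0.2"/>
  </channel>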

Comments?

Thanks a lot for this feedback. This is exactly the kind of input we need from the community to make COLLADA evolve in the right direction.

While it would be possible to write my own COLLADA exporter to add this data, it seems like the point should be to let me avoid having to write my own. Otherwise nothing is gained and I might as well use my own format, since I’m going to have to write the back end anyway to use the data.

I do not understand why you think adding a feature to COLLADA would require as much work as creating your own format, exporter, and loader. Since we are using XML, we rely on open-source tools to parse the file ourselves, and we are constantly changing the format while testing our evolutions of COLLADA without having to start from scratch each time.

Not just a way, but a defined standard that says which pieces of extra data in Max/Maya/XSI correspond to which extra pieces in the COLLADA file. Comments, blind data, user attributes, etc. all need to get exported from all packages in the same way, or at least as close as possible.

That’s right. We want to have this done for our next release. The major issue is to get everybody to agree on how to export/import such data the same way.
What would be very useful for us is if you could spend some time putting together a straw-man proposal, and maybe a prototype exporter/importer, for how you would like the extra data to be available in COLLADA. You do not have to think about how every tool would implement it, just about how you would like it.

Note that this includes animation of user types. If I make a float field in Maya called “translumosity” and set animation on it, I need that field definition and its data to make it down into the COLLADA file.

COLLADA 1.0 does not address animation. This is something we are working on for the next release. This is a good time to make proposals regarding the animation system within COLLADA as well.

Thanks for your help.

My point is that unless COLLADA exports everything I need, I’d have to modify it to do so, and every time a new version comes out I’d have to re-insert my mods into every different exporter. Rather than do that, I’d be more likely to just write my own and not have to worry about it.

If, on the other hand, COLLADA did export everything I needed, including all that extra data, then maybe there would be no reason to write my own.

I will point out that I know some companies that would say a problem with XML and text-based formats is that they are too slow for levels with a million, or 100 million, polygons. I see their point. One suggestion there would be to allow storing the vertex data, color data, weight data, UV data, and other large datasets as CDATA in well-defined formats, or optionally as filenames of binary data.
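
To make that suggestion concrete, here is one hypothetical shape it could take (the format and href attributes are invented for illustration, not COLLADA syntax): the same array element either carries its values inline or points at a binary file in a well-defined layout.

  <!-- inline, human-readable -->
  <array id="verts" type="float" count="9">
    0.0 1.0 0.0   1.0 0.0 0.0   -1.0 0.0 0.0
  </array>

  <!-- the same data stored externally as raw little-endian floats -->
  <array id="verts" type="float" count="9"
         format="float32-le" href="level01_verts.bin"/>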

As for the animation data, I would personally want to see the data baked on export. That’s not to say COLLADA shouldn’t also include function curves for those teams that want them, but I’m of the camp that I’d prefer to let the 3D tool bake the data and then curve-fit it later myself. That means if COLLADA only exported function curves I’d be out of luck and would again have to write my own.

Just as a little background information: we wrote our own XML exporter in 1999 and we’re still using it. We didn’t bother with a schema for validation because our conversion tools would detect weird data anyway; we kept things as simple as possible.

We’ve been very happy with XML as our intermediate format, we’ve redesigned the format a few times over the years to fit changing requirements and to reduce the file size. The files are still big, often in the 60-110MB range, and several of these are joined by our conversion tools to build a level.

Still, conversion times are not limited by the current XML file sizes, other conversion tasks take up much more time. But if the next generation consoles bring 10x to 50x the triangle count we may have to change our pipeline to keep conversion times down.

We wrote our own, very simple, XML parser to keep parsing quick and we designed the XML format such that we can do it all in one pass.

I agree with greggman here in that the benefit COLLADA could give us is:

  1. We don’t have to write exporters / importers anymore
  2. We gain flexibility because it is supported by several tools

But, if I have to modify the exporters/importers anyway to add special features to the exporter, I may as well design my own data format and implement my own exporter/importer for whatever tools we use.

As for animation we currently support exporting both the actual keys and baked animation data. Exporting just keys is problematic in a few situations, e.g. when animators apply IK since IK’ed bones do not get keys. Because of this we almost always use baked animation data and have our tools do their thing (curve fitting, compression, etc.). Baked data is nice too because you can use any weird plug-in to modify the animation and you’ll still get the proper result.

We also use note tracks along with animation data to mark foot-steps and other fun triggers in animations.

Roar

Thanks for those comments.
This is very important as we are currently designing animation support in COLLADA.

I just found this site, and it seems very cool. I haven’t read the spec yet, but I have to ask something before I forget. On a cursory scan this doesn’t appear to be part of it…

Has any thought been given to specifying blocks of numerical data by referencing a chunk in a data file (à la FX Composer)? That could help out a lot with file sizes if that becomes an issue.

Ok… I hope that wasn’t completely pointless :), off to read the spec…

EDIT: Ok, this question was already asked, my bad… reading away…

Adruab

Yes it has :wink:

COLLADA uses URIs to refer to external resources, and that enables fairly fine-grained referencing of data, including document fragments (binary chunks even) and SQL queries.
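
For illustration only (the URIs and the href usage here are hypothetical, not quoted from the spec), a URI-based reference can point at a whole file, a fragment of another document, or even a query:

  <source href="file:///assets/level01_verts.bin"/>            <!-- whole binary file -->
  <source href="geometry.dae#torso-mesh"/>                     <!-- fragment of another document -->
  <source href="http://assets.example.com/query?mesh=torso"/>  <!-- server-side query -->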

Can you clarify this statement?

Are you saying that any COLLADA standards-conforming library out there will load externally referenced binary formats and do SQL queries? From the spec I do not get that impression. In HTML you can reference a link, as in <a href=“http://somesite.com/someC++madeapp.exe”>, but that does not mean HTML supports C++.

My impression is that your statement is misleading. I saw no standard for binary formats or SQL queries in the spec; therefore COLLADA does not support these. Please clarify if I’m wrong.

Sure, I said we are using URI technology to reference external resources. This is an enabling technology that satisfies the arguments being made here in regard to accessing external binaries.

No, I didn’t say that. I am saying COLLADA implementors can create tools that have those capabilities because we have chosen XML and URI technologies for the COLLADA schema. As COLLADA is in its early stages as a project, all of the partners can expound on the specification and implement features that they need in their tool chains that conform to the specification.

I think your example mixes the functionality of the HTML schema with some implementation of a web browser that may or may not understand how to handle a URL to an executable binary. HTML is not the web browser, nor is COLLADA a run-time library or application. I think if you can separate those two things in your mind, my statements will be more understandable.

I’m not here to say “You are wrong”. I’m here to design an XML schema that will enable vendors to implement a normalized tool chain for the benefit of the video game industry and related media industries such as motion pictures. Your contributions to that effort are encouraged and appreciated.

The collaborative aspect of COLLADA extends beyond schema design, reference implementation, and importers and exporters. The specification itself is open to more authors than just myself and my peers at SCEA. You are welcome to submit clarifications to the specification and amendments that you think need to be covered. My capacity in that regard is Editor in Chief. :wink:

I agree with the goals I posted in another thread: basically, that COLLADA should first and foremost be about getting data from any 3D tool into a game.

It only approaches that goal if, by using a COLLADA-format reading library, I can load any COLLADA file and access the data inside in a consistent manner.

I don’t understand the statements that COLLADA can support external URIs to binary data and SQL queries if you happen to implement them in your own pipeline. Is there a difference between that statement and saying I can reference any file in Maya in my pipeline by putting the comment “file=myexternalfile.myformat” as an attached attribute on a node in Maya and then adjusting my pipeline to look at those comments?

Which goals are you actually trying to achieve? The goal of being useful (i.e., being able to load any COLLADA file and reference specific data in a consistent way) and the goal of being flexible (i.e., being able to insert the data any way you want) are not in agreement.

Thank you Gregg… and I appreciate your attention in this area of COLLADA development. We have several engineers at various companies around the world implementing exporters and importers and viewers, but not nearly as many take interest in the technical documents.

I look forward to your contributions to the specification!

Hey all,

Just spent the morning going through all the documentation I could find on COLLADA. It’s a nice design, quite similar in a couple of ways to our current format but with lots of design improvements. We’ve been discussing moving to an XML-based format ourselves, so it’s very interesting to encounter plans for a universal format.

Reading through Gregg’s comments did spark some concerns about the actual usefulness of DCC-package developers creating export and import tools.

A large part of content creation is indeed defining what and how you export your content. “Baking” or “flattening” the complex scene structures and animation set-ups of Maya is simply a necessary optimization step.

Just as Gregg described, our own Maya set-up is full of flags like “collapse this hierarchy to one mesh”, “bake the animation of these bones”, “convert the hierarchical animation of these objects to a single skinned mesh”, etc.

These things are often quite specific to how the engine is constructed; our engine is much quicker dealing with very large skinned meshes than it is with 50 objects that each have their own transformations (most engines are; state changes are bad).
Of course we can start defining these “baking” preferences, but I expect it’ll be a lot harder to agree on such workflow-related matters than it is to agree on the specification of a mesh description.

In practical terms I fear it might come down to taking one look at the source code of the DCC dev’s exporter/importer and then rewriting it to suit the game dev’s own personal little quirks and habits. Each revised version of the COLLADA specs would then require a possible revision of the dev’s code, or run the risk of getting out of sync with the COLLADA standard and losing the described benefits.
To make this process as easy as possible, there is not only a dependency on the COLLADA format but also on agreement on how the DCC devs set up their source code and libraries. Otherwise, maintaining a plug-in for Max and Maya will be a potential nightmare.

We wrote an importer for our own data format once. We used two tools: an in-house CSG-based editor and Maya. At a certain moment we migrated to Maya for all content and we needed to import the editor-created data. The export format of a CSG editor is of course just a big polymesh; there is no CSG information in there anymore, so all that stuff was lost in Maya. And vice versa: importing a Maya polymesh into an editor is equally useless.
The loss of construction history when importing is even more crippling when dealing with animation, though. “Baked” bone animation has no relevance to the original data in Maya, which was most likely created with a very complex animation rig.
An importer, in my experience, is often limited to being a one-way content-migration tool where you have to accept the loss of construction history: a way of shifting around static content (like simple mesh structures), or a last-resort tool to resurrect content whose original source file was corrupted or lost.

Jan-Bart van Beek | Guerrilla Games | Killzone | Lead Artist

Assuming COLLADA is a “middle” format, all I need to continue my current pipeline and still use COLLADA is for COLLADA to export 100% of the data in Maya. That way I can read through the notes, blind data, plugin data, and custom attributes that I currently use, and use that data to decide how to collapse things into something for my game engine.

So, in other words, we don’t need to agree on all “baking” preferences as long as we can get consistent data from all packages. The typical pipeline would be:

DCC Tool (DCC-provided exporter) -> COLLADA File -> Custom Conversion Tool -> Game-Ready File

In your custom conversion tool, you’d read options that you had either embedded in the original DCC scene or stored in a separate file, and then, for example, collapse multiple objects into one object.

My concern with baking is mostly for animation. Specifically: (1) function curves do not cover many cases of animation, so if all I get is function curves my artists will be limited in what they can do with COLLADA; (2) every package computes stuff differently. Exporting baked animation data means I let the package do its voodoo math and I get the result, and that result is guaranteed to reproduce the same image in my pipeline. If instead COLLADA just lets each package store its fcurves and does not provide baked data, then I’ll end up having to make my pipeline check each file for the original DCC tool and attempt to do its calculations the same as that specific tool, as in:


// dispatch on whichever DCC tool wrote the file
if (tool == TOOL_3DSMAX_5_0)
{
    CalcStuffLike3DSMax5_0();
}
else if (tool == TOOL_3DSMAX_5_1)
{
    CalcStuffLike3DSMax5_1();
}
else if (tool == TOOL_MAYA_5_0)
{
    CalcStuffLikeMaya5_0();
}

…and so on. Asking for baked data means I never have to worry about either of these cases.
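
To illustrate the difference (with invented element names, not spec syntax): a baked channel is just tool-evaluated samples at a fixed rate, while an fcurve channel carries keys plus interpolation rules that the reader must re-evaluate exactly as the DCC tool did.

  <!-- baked: one evaluated value per frame; no tool-specific math needed -->
  <channel target="arm/rotateX" type="baked" frame_rate="30">
    0.0 1.2 2.5 3.9 5.1 5.8
  </channel>

  <!-- fcurve: keys and tangents; the reader must reproduce the DCC
       tool's interpolation to get identical results -->
  <channel target="arm/rotateX" type="fcurve" interpolation="bezier">
    <key time="0.0" value="0.0" in_tan="0.0" out_tan="1.1"/>
    <key time="5.0" value="5.8" in_tan="0.9" out_tan="0.0"/>
  </channel>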

As for construction history and the like: COLLADA is pushing to be a source format as well as a common format, meaning that each DCC vendor is supposed to define their own custom profile to store all the same data they currently store in their own native files, as well as storing as much as possible in the common profile. The custom area means you could load the file back into that DCC tool and get the exact same thing back (with construction history, etc.).

So far no DCC vendor has taken this goal seriously, but unfortunately it is REQUIRED in order for COLLADA to be useful. Why? Specifically because, as you said, every team uses different features. There is no way COLLADA’s common bits will cover 100% of every DCC tool’s data (where’s the Paint Effects part, or the metaballs part, or the data from custom plugins?). But teams do use that data. COLLADA becomes useful by providing a way to get most data (the common part) from all packages while still providing a way to get all the unique data that each team needs. Only by being a source format will the unique data make it out.
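
As a sketch of the idea (the profile names and contents are invented for illustration, not taken from the spec), common data would live in the shared section and each vendor would tuck its tool-specific data into its own profile block so the file can round-trip:

  <geometry id="torso">
    <mesh>
      <!-- the shared, tool-agnostic representation -->
    </mesh>
    <extra>
      <technique profile="MAYA">
        <!-- construction history, Paint Effects strokes, plugin data,
             etc., in whatever form Maya needs to reload them exactly -->
        <construction_history/>
      </technique>
    </extra>
  </geometry>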

Hopefully the DCC vendors will understand this and we’ll see that reflected in their exporters.