Which is the best 3D model file format to learn and use?

Hi everybody! I just started learning Blender, and I want to design a video game (Teheran) in Blender and load the models with GLUT. So please tell me: which 3D model format should I export from Blender? I'm looking for one that is easy to load and widely used.
Thank you!

For beginners, I would suggest Wavefront OBJ. It is supported by most 3D authoring packages, and you can find many sample models on the web.

OBJ files are plain text, so they are relatively easy to debug, although you must write a parser to read them. Even so, that may be easier than parsing binary formats such as .3ds (not recommended), which can be pretty complex. For your OBJ parser you can decide for yourself what data you need, and implement only those features.

You can probably find OBJ parsing code samples on the web as well.
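To give a sense of how little is needed, here is a minimal sketch of an OBJ parser. It reads only `v` position lines and triangular `f` faces, skips normals, texcoords, and materials entirely, and the struct names are just for illustration:

```cpp
// Minimal OBJ parser sketch: handles "v x y z" and triangular "f ..." lines.
// Normals, texcoords, materials, comments, etc. are simply skipped.
#include <sstream>
#include <string>
#include <vector>

struct Vec3 { float x, y, z; };

struct Mesh {
    std::vector<Vec3> positions;
    std::vector<unsigned> indices; // 0-based after parsing
};

Mesh parseObj(std::istream& in) {
    Mesh mesh;
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream ls(line);
        std::string tag;
        ls >> tag;
        if (tag == "v") {
            Vec3 v;
            ls >> v.x >> v.y >> v.z;
            mesh.positions.push_back(v);
        } else if (tag == "f") {
            // Triangles only; a real parser would also triangulate quads.
            for (int i = 0; i < 3; ++i) {
                std::string tok;
                ls >> tok;
                // Take everything before the first '/' (e.g. "3/1/2" -> 3).
                unsigned idx = std::stoul(tok.substr(0, tok.find('/')));
                mesh.indices.push_back(idx - 1); // OBJ indices are 1-based
            }
        }
    }
    return mesh;
}
```

A real-world parser also needs to handle quads, negative (relative) indices, and `mtllib`/`usemtl` lines, but the skeleton stays this simple.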

I hate the OBJ format. I’ll let you decide on what you want to use.


Can you provide some clues why? I think it has a good balance between features and complexity.

OBJ, because it is very simple to load, use, and debug. Almost all 3D packages support the OBJ format. It has all the basic stuff you need: position, normal, texcoord, and material information. If you need something more complex, or something with animation, create your own format. I wouldn't recommend COLLADA, FBX, etc.

Thanks, everyone! I have asked this question in another forum as well. Many friends suggested using the OBJ format, so I decided to start learning it.

Have you looked into the Open Asset Import Library? The loader is fine cross-platform (Linux, Windows, etc.), but the viewer is Windows-specific. If you are importing 3D models into your own GLUT code, it is a good cross-platform choice.

OBJ, because it is very simple to load, use, and debug.

Except that it’s not, because OBJ uses multiple index streams, which neither OpenGL nor D3D allows. So every time you load a mesh, you have to rebuild it with a single index stream, which is a big pain and not something most new users know how to do.
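For anyone hitting this, the usual fix is to treat each unique combination of indices at a face corner as one output vertex and reuse it through a map. A rough sketch, assuming positions plus texcoords only (the names are illustrative):

```cpp
// Collapse OBJ-style multi-indexing into the single index buffer OpenGL
// expects: each unique (positionIdx, texcoordIdx) pair becomes one output
// vertex; repeated pairs are reused via the remap table.
#include <map>
#include <utility>
#include <vector>

struct Vertex { float px, py, pz, u, v; };

void rebuildSingleIndex(const std::vector<float>& positions,  // xyz triples
                        const std::vector<float>& texcoords,  // uv pairs
                        const std::vector<std::pair<unsigned, unsigned>>& corners, // per-corner (posIdx, uvIdx)
                        std::vector<Vertex>& outVerts,
                        std::vector<unsigned>& outIndices) {
    std::map<std::pair<unsigned, unsigned>, unsigned> remap;
    for (const auto& c : corners) {
        auto it = remap.find(c);
        if (it == remap.end()) {
            // First time we see this index combination: emit a new vertex.
            Vertex v = { positions[3 * c.first],
                         positions[3 * c.first + 1],
                         positions[3 * c.first + 2],
                         texcoords[2 * c.second],
                         texcoords[2 * c.second + 1] };
            unsigned newIdx = static_cast<unsigned>(outVerts.size());
            outVerts.push_back(v);
            it = remap.insert({ c, newIdx }).first;
        }
        outIndices.push_back(it->second);
    }
}
```

The same idea extends to normals and any other attribute: just widen the map key to a tuple of all the indices a corner carries.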

Writing an OBJ importer is quite time-consuming. It is text-based, which is the biggest problem. Furthermore, there are differences between exporters: I found one that would write an 'o' line at both the start and the end of the object definition.

I found that dumping indices and vertices to my own binary file is far easier. Eventually it went through some evolution, of course.
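This dump-your-own-binary approach can be as simple as a small header of counts followed by the raw arrays, so loading is just a pair of reads with no text parsing. A sketch (the layout and field names here are made up for illustration; a real format would also want a magic number and a version field):

```cpp
// Toy binary mesh format: [vertexCount][indexCount][vertices...][indices...]
#include <cstdio>
#include <vector>

struct Vertex { float px, py, pz, nx, ny, nz, u, v; };

bool writeMesh(const char* path,
               const std::vector<Vertex>& verts,
               const std::vector<unsigned>& indices) {
    FILE* f = std::fopen(path, "wb");
    if (!f) return false;
    unsigned counts[2] = { static_cast<unsigned>(verts.size()),
                           static_cast<unsigned>(indices.size()) };
    std::fwrite(counts, sizeof(counts), 1, f);
    std::fwrite(verts.data(), sizeof(Vertex), verts.size(), f);
    std::fwrite(indices.data(), sizeof(unsigned), indices.size(), f);
    std::fclose(f);
    return true;
}

bool readMesh(const char* path,
              std::vector<Vertex>& verts,
              std::vector<unsigned>& indices) {
    FILE* f = std::fopen(path, "rb");
    if (!f) return false;
    unsigned counts[2];
    if (std::fread(counts, sizeof(counts), 1, f) != 1) { std::fclose(f); return false; }
    verts.resize(counts[0]);
    indices.resize(counts[1]);
    bool ok = std::fread(verts.data(), sizeof(Vertex), verts.size(), f) == verts.size()
           && std::fread(indices.data(), sizeof(unsigned), indices.size(), f) == indices.size();
    std::fclose(f);
    return ok;
}
```

Note that raw struct dumps tie the file to one compiler's padding and the machine's endianness, which is usually acceptable for a personal tool pipeline but not for a distribution format.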

The biggest downside to obj is probably what Alfonse said.

I think Dinesh's problem is (was?) to find a format that is both easy to read with his own code and easy to export from his 3D modeling software.

There aren't many that match. The old Quake formats (MD2, MD3) were simple enough, but it's probably hard to find an exporter for Blender 2.60. There's an exporter for the more recent MD5, but it's text-based and slightly more complex.

There's also the Inter-Quake Model format (which comes with a Blender 2.60 exporter); I've never used it, so I don't know whether it's complex or not.


The same thing happens with the DAE (COLLADA) format, and it's incredibly annoying. I wrote a DAE importer for my project as a means to get some complex geometry into it instead of boring procedural geometry, and the multi-indexing is a pain.

I either end up with a bloated mesh, or, if I collapse the data, it can take forever with bigger meshes.

I'm definitely planning to write an exporter myself for the 3D package I'm using and dump the data into some mock-up binary format.

I either end up with a bloated mesh, or, if I collapse the data, it can take forever with bigger meshes.

I know Blender's COLLADA exporter is pretty terrible; it doesn't even do folding within a single attribute. This means that for every vertex of every triangle, there is a separate color and texture-coordinate attribute, even if every color is the same.

The converter I wrote is reasonably fast, even with 10,000-vertex meshes. Besides, how often do you run your converter?

I'm definitely planning to write an exporter myself for the 3D package I'm using and dump the data into some mock-up binary format.

I don't imagine it would take any less time to do this from the exporter than from a command-line tool. If performance is a problem, putting the conversion code in the exporter isn't going to make it take less time, especially if the exporter is written in a scripting language.

Currently the way I'm using the converter is obviously "wrong", since I'm converting on the fly on each execution. The reason is that I'm currently testing with heavily changing 3D data from the 3D package. Sometimes the number of vertices to load is quite large, but I'm afraid I don't have a complete solution for the tool flow (yet).

For example, some heavy meshes (100k+ vertices) can take a couple of minutes to collapse. I'm sure my collapsing algorithm isn't the most optimized thing in existence, though. If I run the converter in debug mode, it's quite hilarious: because I use std::vector (preallocated, of course) and the Visual C++ debug runtime does out-of-bounds checks and other things that make the CRT really slow, the same model can take over an hour to collapse.
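If the collapse step compares each incoming vertex against every vertex emitted so far, it is O(n²), which would explain minutes on a 100k-vertex mesh. A hash map keyed on the vertex's raw bytes brings it down to roughly linear. A sketch, assuming an 8-float vertex layout (names are illustrative; byte-exact keys also mean -0.0f and 0.0f count as different vertices):

```cpp
// O(n) vertex collapse: key a hash map on the raw bytes of each vertex so
// exact duplicates are found in constant time instead of by linear search.
#include <string>
#include <unordered_map>
#include <vector>

struct Vertex { float data[8]; }; // e.g. position + normal + uv

void collapse(const std::vector<Vertex>& in,
              std::vector<Vertex>& outVerts,
              std::vector<unsigned>& outIndices) {
    std::unordered_map<std::string, unsigned> seen;
    seen.reserve(in.size());
    for (const Vertex& v : in) {
        // Use the vertex's bytes as the hash key (exact-match dedup only).
        std::string key(reinterpret_cast<const char*>(v.data), sizeof v.data);
        auto it = seen.find(key);
        if (it == seen.end()) {
            it = seen.emplace(key, static_cast<unsigned>(outVerts.size())).first;
            outVerts.push_back(v);
        }
        outIndices.push_back(it->second);
    }
}
```

This does the same work as a naive search-then-append collapse, just with a constant-time lookup, so it should behave much better in debug builds too.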

Loading and just using the bloated data works at reasonable speeds; it just bothers me somewhat.

I was hoping I could fetch the internal data from the 3D package through a better indexing mechanism; perhaps it would serve the full vertex data in one go, which should make the process somewhat faster. But I haven't really checked the available scripting interface, so I have no real idea whether this would improve performance.

The main reason I haven’t done this yet is because I did not wish to tie myself to a specific 3D package until I had a working framework, so a more general 3D file format support was reasonable.

I like the .3ds format; it takes the pain out of writing your own parser.