How are quad wireframes implemented in a 3D modeling program?

Hi, I am trying to make a 3D modeling program. I have been trying to display mesh wireframes as quad wireframes by eliminating the diagonal edge between the two triangles of each triangulated face, but I think I am doing it wrong. I am drawing the edges by:

Reusing the VBOs and using GL_PRIMITIVE_RESTART when I encounter a triangulated face, with the face indices as the EBO.

Is this how Autodesk Maya does it? Or do they use a geometry shader? From what I’ve researched, geometry shaders take a toll on the frame rate: with a large number of faces and wireframe edges, the program becomes laggy.

I have been using Maya since its 2.5 release, when I believe it was still legacy OpenGL, which makes me think the drawing routines for quads and triangles are different. In which OpenGL version were geometry shaders introduced?

As an example of the modeling display I want to recreate, look at the FIGURO web app.

Can you render quad wireframes of a mesh without a geometry shader, or simply with a fragment shader?

I’ve been stuck on this problem for 5 months now, so please help.

I’d assume that they keep the original faces and just build separate element (index) arrays for GL_TRIANGLES and GL_LINES.
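A minimal sketch of that idea in C++ (the `Quad` struct and helper names are my own, not from any particular modeler): from the same list of quad faces, build one index array for `glDrawElements(GL_TRIANGLES, …)` and a separate one for `glDrawElements(GL_LINES, …)` that contains only the four outer edges, so the triangulation diagonal is never drawn.

```cpp
#include <cstdint>
#include <vector>

// A quad face as four vertex indices in winding order.
struct Quad { uint32_t v[4]; };

// Indices for glDrawElements(GL_TRIANGLES, ...): two triangles per quad.
std::vector<uint32_t> buildTriangleIndices(const std::vector<Quad>& quads) {
    std::vector<uint32_t> out;
    for (const Quad& q : quads) {
        out.insert(out.end(), {q.v[0], q.v[1], q.v[2],   // first triangle
                               q.v[0], q.v[2], q.v[3]}); // second triangle
    }
    return out;
}

// Indices for glDrawElements(GL_LINES, ...): the four outer edges only,
// so the diagonal introduced by triangulation is never part of the data.
std::vector<uint32_t> buildLineIndices(const std::vector<Quad>& quads) {
    std::vector<uint32_t> out;
    for (const Quad& q : quads) {
        for (int i = 0; i < 4; ++i) {
            out.push_back(q.v[i]);
            out.push_back(q.v[(i + 1) % 4]);
        }
    }
    return out;
}
```

Both arrays can live in the same element buffer (the line indices at an offset) or in two EBOs; either way the vertex VBO is shared.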

Geometry shaders are from ARB_geometry_shader4, which was added to core in OpenGL 3.2.

Provided that you have the indices for the original quads, yes. If you’re trying to “undo” triangulation, that’s a hard problem. It’s not necessarily possible to recover the original quads because there can be more than one quad mesh which produces a given set of triangles.

In any case, I wouldn’t use a geometry shader to convert triangles to lines because that’s going to duplicate lines. The topology typically isn’t changing constantly, so you’re probably better off processing the mesh in the application on import.
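To illustrate the duplication point: an interior edge is shared by two faces, so emitting four edges per face draws most lines twice. A common fix when processing on import is to key each undirected edge on its sorted index pair and keep only the first occurrence. A sketch (my own helper, assuming faces are stored as index lists):

```cpp
#include <algorithm>
#include <cstdint>
#include <set>
#include <utility>
#include <vector>

// Collect each undirected edge exactly once, keyed on (min, max) so an
// edge shared by two neighbouring faces is not emitted twice.
std::vector<uint32_t> buildUniqueEdgeIndices(
        const std::vector<std::vector<uint32_t>>& faces) {
    std::set<std::pair<uint32_t, uint32_t>> seen;
    std::vector<uint32_t> out;
    for (const auto& f : faces) {
        for (size_t i = 0; i < f.size(); ++i) {
            uint32_t a = f[i], b = f[(i + 1) % f.size()];
            std::pair<uint32_t, uint32_t> key = std::minmax(a, b);
            if (seen.insert(key).second) { // first time we see this edge
                out.push_back(a);
                out.push_back(b);
            }
        }
    }
    return out;
}
```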

Okay, thanks. Just to make sure: Autodesk Maya doesn’t use geometry shaders to display meshes as quads, right?

Now, why is it that most OBJ importer libraries don’t preserve vertex indices and face indices? If I am going to make quad lines out of a mesh, I need to preserve the face indices, assuming the mesh is all quads. Libraries like Assimp add new vertices, and thus new face indices, to the original mesh. Does that mean I have to create my own OBJ importer for this to work?

Thanks again.

To add complexity to the question: how do they display a subdivided quad mesh with bent lines? I mean lines with curvature.

I am thinking of adding points in between the lines, but how is it that those lines, as in Modo, look so smooth when the mesh is subdivided?

I guess creating a 3D modeling app is not an easy task.

I have no idea how they do it. But it doesn’t make much sense to use a geometry shader for this.

Probably because OBJ allows a separate index for the position, normal and texture coordinates, whereas OpenGL (and DirectX) don’t have that level of indirection (you can implement that in a vertex shader, but dependent fetches have a performance cost so you wouldn’t normally do that). So usually the importer maps each unique combination of indices to a single index. This typically results in vertices being numbered in the order in which they appear in the “f” lines in the OBJ file. Using the position index as the vertex index would only be possible for a file which only has positions or where each position index is always used with a specific normal index and a specific texture coordinate index (i.e. no sharp edges or texture seams).
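The remapping described above can be sketched like this (a hypothetical importer helper, not Assimp’s actual code): each unique `v/vt/vn` combination gets one OpenGL vertex index, handed out in the order the combinations appear in the file’s `f` lines.

```cpp
#include <array>
#include <cstdint>
#include <map>
#include <vector>

// One "v/vt/vn" reference from an OBJ "f" line (already converted to
// 0-based here; -1 can stand for "index absent").
using ObjRef = std::array<int, 3>;

// Map each unique (position, texcoord, normal) combination to a single
// vertex index. Vertices end up numbered in "f"-line order, which is why
// the importer's indices don't match the file's position indices.
struct Remapper {
    std::map<ObjRef, uint32_t> cache;
    std::vector<ObjRef> vertices;   // one entry per emitted vertex

    uint32_t resolve(const ObjRef& ref) {
        auto it = cache.find(ref);
        if (it != cache.end()) return it->second; // seen before: reuse
        uint32_t idx = static_cast<uint32_t>(vertices.size());
        cache.emplace(ref, idx);
        vertices.push_back(ref);    // later: copy pos/uv/normal data here
        return idx;
    }
};
```

Note that the same position index used with two different normal indices (a sharp edge) produces two distinct vertices, which is exactly the case where position indices can’t serve as vertex indices.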

Possibly. First, check whether Maya is exporting as quads or triangles. If it’s using triangles, check whether there’s an option to change that. OBJ itself isn’t restricted to triangles; faces can have any number of vertices. If the OBJ file contains quads, check that you aren’t triangulating on import (aiProcess_Triangulate flag). Keeping the mesh as quads is going to be more robust than triangulating then trying to undo that.
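If you do end up reading the `f` lines yourself, keeping n-gons intact is straightforward; a minimal sketch (position index only, no handling of OBJ’s negative relative indices):

```cpp
#include <sstream>
#include <string>
#include <vector>

// Parse one OBJ "f" line, keeping the face as an n-gon instead of
// triangulating. Only the position index is extracted; OBJ indices are
// 1-based, so convert to 0-based. Tokens may be "v", "v/vt", "v//vn"
// or "v/vt/vn" -- the position index is always the part before any '/'.
std::vector<int> parseFaceLine(const std::string& line) {
    std::istringstream in(line);
    std::string tok;
    in >> tok; // skip the leading "f"
    std::vector<int> face;
    while (in >> tok)
        face.push_back(std::stoi(tok.substr(0, tok.find('/'))) - 1);
    return face;
}
```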

If the grid lines are curved, it’s probably some kind of algebraic surface (Bézier, NURBS) rather than a mesh. You can subdivide the lines in the application based upon the equation for the surface, or use tessellation shaders (OpenGL 4.0 or ARB_tessellation_shader), or evaluators (glEvalMesh etc) in legacy OpenGL. All of these require knowing the equation for the surface, and that information is lost if you export it as a polygon mesh (OBJ does support NURBS surfaces via the surf command).
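Subdividing in the application amounts to evaluating the curve equation at several parameter values and drawing the result as a line strip. A sketch with a quadratic Bézier (the simplest curved case; real NURBS evaluation is more involved):

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// Evaluate a quadratic Bezier curve at parameter t in [0, 1]:
// B(t) = (1-t)^2 * p0 + 2(1-t)t * p1 + t^2 * p2
Vec3 bezier2(const Vec3& p0, const Vec3& p1, const Vec3& p2, float t) {
    float u = 1.0f - t;
    float a = u * u, b = 2.0f * u * t, c = t * t;
    return { a * p0.x + b * p1.x + c * p2.x,
             a * p0.y + b * p1.y + c * p2.y,
             a * p0.z + b * p1.z + c * p2.z };
}

// Tessellate the curve into `segments` straight pieces, suitable for
// drawing with GL_LINE_STRIP. More segments = smoother on screen.
std::vector<Vec3> tessellate(const Vec3& p0, const Vec3& p1, const Vec3& p2,
                             int segments) {
    std::vector<Vec3> pts;
    for (int i = 0; i <= segments; ++i)
        pts.push_back(bezier2(p0, p1, p2, float(i) / segments));
    return pts;
}
```

Tessellation shaders do essentially the same evaluation on the GPU, which is why the segment count can then adapt to the zoom level without re-uploading geometry.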

That depends upon which features you want. Something on the scale of Maya or Blender is far beyond what an individual can achieve. When a code base gets that large, simply keeping up with changes to the hardware, OS, libraries, toolchain etc becomes a full-time job.