Improving the SDK

ooo, goody. Rob’s here =)

The standard starting progression in almost every tutorial set I’ve seen is empty window > triangle > colored triangle, and then they tend to diverge into different effects depending on the tutorial set. I figure those three tutorials are must-haves. After that, it’s (like so much in this project right now) up for debate.

Regarding the math header, one problem is that stack-allocated objects cannot be returned from functions (a pointer to a local would dangle), so a returned object has to be heap-allocated. This means that returning a value through a parameter:


GLMvec2f in = { 0, 0 }, out;
foo(&in, &out);

can be quite a bit faster than using a return value:


GLMvec2f in = { 0, 0 }, *out;
out = foo(&in);
free(out);  // since the object is malloced.

On a different note, using structs can be conceptually cleaner than arrays, especially if they provide accessors, i.e. out.X, out.Y.

Regarding DocBook: that would be great, indeed! I’ve been looking for a way to generate OpenGL docs for OpenTK/Tao for a long time - if I may ask, Mike, how do you generate the PyOpenGL docs?

Edit:

GLFW is a C++ library, so I think we should prefer GLUT over it, if possible.

GLFW is a C library, not C++. It has excellent documentation, is extremely intuitive (unlike, ugh, GLUT), plus it has one of the cleanest code-bases I have ever seen.

I think we should strive to make the first SDK release something quite simple and minimal. We could write tutorials for basic OpenGL use, for example how to use VBOs and FBOs, and go from there to more advanced stuff in future SDK releases. The goal is to get something working and up and running in a relatively short period of time (a few months).

Also, we should strive to reuse existing GL libraries as much as possible, but nonetheless include only native OpenGL libraries that are written in plain C. GLUT would be a good candidate, while SDL would not, because it does not have “the look and feel” of OpenGL and contains a lot of stuff we probably don’t need.

Despite it being uglier, I think passing pointers, as in “void glmVecAdd2f(GLMvec2f out, const GLMvec2f v1, const GLMvec2f v2)”, is the better solution. That way, we can pass the computed vectors directly to OpenGL without worrying about struct alignment issues (every compiler on the market now would probably pack the structs fine, but I’d rather be safe than confuse a poor newbie using a weird compiler).
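
For example (a rough sketch: the GLMvec2f array typedef and the glmVecSet2f/glmVecAdd2f declarations are the ones proposed below, and location stands in for a real uniform location):


GLMvec2f a, b, sum;
glmVecSet2f(a, 1.0f, 1.0f);
glmVecSet2f(b, 2.0f, 2.0f);
glmVecAdd2f(sum, a, b);          // result written through the out parameter
glUniform2fv(location, 1, sum);  // the array decays to GLfloat*, no cast needed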

EDIT:
Upon closer inspection, GLFW is, in fact, C. I looked in the FAQ for language stuff and read the beginning of the “Why another toolkit” section, and had one of those “I hate my language” moments on the last sentence of the first paragraph there. Negatives are always weird in English when combined with conjunctions.

So okay. Should we go for this, approach number 1:


typedef GLfloat GLMvec2f[2];
typedef GLfloat GLMmat4f[16];

void glmVecSet2f(GLMvec2f out, GLfloat x, GLfloat y);
void glmVecAdd2f(GLMvec2f out, const GLMvec2f v1, const GLMvec2f v2);

GLMvec2f a, b, c;
glmVecSet2f(a, 1.0f, 1.0f);
glmVecSet2f(b, 2.0f, 2.0f);
glmVecAdd2f(c, a, b);

GLMmat4f mat;
glmMatIdentity4f(mat);
glLoadMatrixf(mat);

Or this, approach number 2:


typedef struct { GLfloat x, y; } GLMvec2f;
typedef struct { GLfloat m[16]; } GLMmat4f;

GLMvec2f *glmVecSet2f(GLMvec2f *out, GLfloat x, GLfloat y);
GLMvec2f *glmVecAdd2f(GLMvec2f *out, const GLMvec2f *v1, const GLMvec2f *v2);

GLMvec2f a, b, c;
glmVecAdd2f(&c, glmVecSet2f(&a, 1.0f, 1.0f), glmVecSet2f(&b, 2.0f, 2.0f));

a.x = 1.0f;
a.y = 2.0f;

GLMmat4f mat;
glmMatIdentity4f(&mat);
glLoadMatrixf(mat.m);

As a side note, the DirectX math library uses approach number 2.

There are pros and cons. The struct approach allows functions to be chained, while the array approach allows data to be passed directly to OpenGL functions.

Edit: The use of structs may also be more streamlined if we are going to build other libraries as well (which may need complex data structures expressed as structs).

Edit: My personal choice would be the struct way.

Pack alignment is probably a non-issue (plus, most compilers provide extensions to control it, if necessary). On the other hand, structs have the advantage of being cleaner in normal use:


vector.X = -vector.Y;
vs
vector[0] = -vector[1];

The latter actually looks like an array of vectors, not the first and second elements of a vector!

Besides, you can always use the union trick to alias X, Y to [0], [1] in a struct - but you cannot do this the other way round in an array.

Structs are probably best. And as long as we can keep them tightly packed, we can just use a cast to pass a struct as an OpenGL vector.
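
Something like this (just a sketch, assuming the struct typedef from approach number 2 and a placeholder uniform location):


GLMvec2f v = { 1.0f, 2.0f };
// a tightly packed struct of two GLfloats can be reinterpreted as GLfloat[2]
glUniform2fv(location, 1, (const GLfloat *)&v);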

And another possible approach (to avoid the cast):


typedef struct GLMvec2f_t
{
    union
    {
        GLfloat Data[2];
        struct { GLfloat X, Y; };  // X aliases Data[0], Y aliases Data[1]
    };
} GLMvec2f;

GLMvec2f vector;

glUniform2fv(location, 1, vector.Data);

Edit: We should also avoid anonymous structs, as these might cause name collisions in large projects…

This is a good idea, although I would use a lowercase naming convention.

Stephen: We’re not going to be using immediate mode (it’s deprecated), so a cast wouldn’t happen very often. Either option should work fine.

This is a good idea, although I would use a lowercase naming convention.

Ah, slip of the finger (been coding too much C# at work) :wink:

Stephen: We’re not going to be using immediate mode (it’s deprecated), so a cast wouldn’t happen very often. Either option should work fine.

Of course, glVertex was only an example. Updated the example to use uniforms.

Should we then decide to use unions for matrices as well? And which memory layout should we use for them, column-major or row-major?

I have updated the glmath.h to reflect what we have discussed.

DocBook to HTML was originally done with a huge pre-processing step that turned the DocBook into one big DocBook book and then used the DocBook-XSL transformation to turn it into XHTML pages. That worked, and had lots of formatting and other options (nice looking output). It took > 2hrs for each production run, however, so I wound up letting it get out of date.

I’m just finishing up reworking that to use straight lxml.etree in Python to load the individual pages and dump them. Code for the transformation is here:

https://code.launchpad.net/~mcfletch/pyopengl/directdocs

That can produce the docs in 30 seconds or so, which is fast enough that you can play with the format.

You could strip 90% of the code, as it’s doing Python introspection to annotate the docs, the basic transformation of docbook-to-xhtml is done in the .kid template (it’s pretty trivial).

Beyond that it’s just a few xpath queries to find the various elements and use those to create cross-references and the like. The code is a bit rough (was really just intended for internal use), but it should get you started… assuming we get 3.x docbook some time :slight_smile: .

Whichever layout OpenGL uses, so we can just pass the matrices to the GL as uniforms.

EDIT: Just looked at the spec, it’s column-major.
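
So, for example (a sketch, assuming the struct-based GLMmat4f from approach number 2 and a placeholder uniform location; with column-major storage, column c, row r lives at m[c*4 + r]):


GLMmat4f mat;
glmMatIdentity4f(&mat);
// column-major storage matches the GL convention, so no transpose is needed
glUniformMatrix4fv(location, 1, GL_FALSE, mat.m);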

We have been discussing the license terms with Paladin and have come to the conclusion that zlib may be the best choice. Does anyone have recommendations?

Edit: What kind of build system should we use?

Thanks, Mike. I’m skimming through the code - this helps a lot!

The final plan for the C# bindings is to provide inline documentation to OpenGL functions (just hover the mouse over the function and see what it does). Still a long way off, but who says you can’t dream? :slight_smile:

Another thing we should discuss is which compilers we are going to support. Obviously, we’ll need to support MSVC, GCC, and [the one Apple is using], in both x86 and x86_64 configurations. It would also be nice to support MinGW and Intel’s compiler, but what about more obscure ones like Digital Mars, Borland C, and MinGW x64?

Edit: Another good choice would be the MIT/X11 license, which is supposed to be compatible with just about everything (including closed-source, BSD and GPL licenses).

Regarding the build system… that’s a big can of worms.

Is it reasonable to expect users to install extra software just for building? If yes, CMake (http://www.cmake.org/) is the cleanest system I’ve ever used.

If not, maybe we should go with the native options (e.g. MSBuild for Windows, make/autoconf for Linux, and Xcode for MacOS) - but how the heck do you keep these in sync?

If it runs in GCC x86 and x86_64, it should run in MinGW. There’s very little difference (aside from things like the size of long on 64-bit).
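
(A minimal illustration of that point: 64-bit Linux/Mac GCC uses the LP64 model while 64-bit Windows compilers use LLP64, so anything that assumes sizeof(long) == sizeof(void*) breaks.)


#include <stdio.h>

int main(void)
{
    // LP64 (Linux/Mac x86_64): prints 8; LLP64 (Win64, e.g. MinGW x64): prints 4
    printf("sizeof(long) = %u\n", (unsigned)sizeof(long));
    return 0;
}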

Apple uses a custom version of GCC 4.X (4.1, I think).

As for the build system, I’m a fan of CMake. It can generate project files for quite a few IDEs, as well as makefiles for GNU make and nmake. We can pre-generate those project files for packaged distributions and keep only the CMake files in SVN.

If it runs in GCC x86 and x86_64, it should run in MinGW. There’s very little difference (aside from things like the size of long on 64-bit).

Problem is that should is a long way from does :slight_smile: (compilers have bugs, too). We’ll need to test anyway, especially optimized builds.

Apple uses a custom version of GCC 4.X (4.1, I think).

Thanks, didn’t know that.

As for the build system, I’m a fan of CMake. It can generate project files for quite a few IDEs. We can pre-generate those project files for packaged distributions and keep only the CMake files in SVN.

This sounds ideal!

Why such a choice for the math library? Do you want it to be cross-language?

The math library draft looks nice to me, as it is intuitive and recalls the OpenGL style.

Honestly, I’d prefer GLSL style for math.

I think we should strive to make the first SDK release something quite simple and minimal. We could write tutorials for basic OpenGL use, for example how to use VBOs and FBOs, and go from there to more advanced stuff in future SDK releases.

I was about to say the same. Put too much weight on the project and you’ll never get it off the ground, and the ARB will be laughing that we promised so much and delivered nothing (no offense) :wink:
Keep your heads cool, guys.

CMake is at www.cmake.org, so anyone who hasn’t used it can review it.

What about integrating the helper libraries? Should we dynamically link? Statically link? Just include the source files in the file list for each tutorial?