GLM (OpenGL Mathematics) identity?

Is there a function to load the identity matrix in the GLM library? I can’t figure it out. I’m trying to use core GL3 functionality so I’m trying to build an orthographic projection matrix but I should set the matrix to identity first.

I guess this will work

mProjectionMatrix = glm::mat4();

It would be nice if the glm website had a way to search it :\

glm::mat4() and glm::mat4(1.0) build identity matrices.
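
For reference, a minimal sketch of the identity-then-ortho setup the original post describes, assuming a GLM version where glm::ortho is provided by <glm/gtc/matrix_transform.hpp>; the 800x600 viewport numbers are just placeholders:

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    // Start from the identity, then overwrite it with an orthographic projection.
    glm::mat4 mProjectionMatrix = glm::mat4(1.0f);
    mProjectionMatrix = glm::ortho(0.0f, 800.0f, 0.0f, 600.0f, -1.0f, 1.0f);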

Try tvmet I think it is much better.

Try tvmet I think it is much better.

Off topic, but please stop recommending this library. At least, not without explaining why you think so.

First, its documentation is pretty lacking. Considering how relatively unorthodox the library is (GLM at least has the GLSL spec to state how it works), it needs much better documentation. He reformatted the Doxygen output into what has to be the single worst documentation format I’ve ever seen (and it’s not like the default Doxygen is great or anything). Looking at the function list makes me want to claw my eyes out; it’s impossible to quickly tell where one function begins and another ends. It’s hard to find anything unless you know what you’re looking for, and it’s hard to read anything even when you find a function that does what you need.

Second, it’s not very convenient to work with. GLM is much simpler for doing the kinds of stuff that graphics work needs. glm::mat4 is a lot shorter than tvmet::Matrix<float, 4, 4>. Yes, you can wrap that in a typedef, but there’s no need with GLM.

However fast it might be, that speed isn’t particularly useful for graphics work. If you’re serious about CPU performance, you will need to use SSE instructions directly, which tvmet does not do. And if you’re not that interested in CPU performance, you should use the most convenient library for the job, which is likely to be GLM.

I second that.

Alfonse, how can you state generally that an arbitrary compiler is not going to use SSE instructions? GCC on 64-bit platforms uses them by default. I get very good perf with tvmet; it eliminates the creation of temporary objects completely, which is critical to me (see the sketch at the end of this post). So what if glm is simpler; perf is also important. Does glm use SSE?

Further, what if you compile for a non-x86 platform (such as a mobile phone or a multimedia tablet)? There SSE is generally not available, and the only thing you can count on is the elimination of temporaries. So tvmet is generally a safer bet for perf, and it is very portable. So what if the docs aren’t that good; the library produces fast code and works very well. Also, it is header-only, so there is no need to compile/link it, which is nice for beginners.

Lastly, the Boost project itself recommends tvmet. So if you use Boost, tvmet is the library to use along with it.
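
For readers wondering what “eliminates temporaries” means in practice: tvmet is built on expression templates. Here is a minimal sketch of the idea, using hypothetical Vec4/AddExpr types rather than tvmet’s real API:

    #include <cstddef>

    struct Vec4;

    // Proxy returned by operator+: it holds references to its operands and
    // only computes element sums on demand, so no Vec4 temporary is built.
    // It must be consumed within the same expression (it holds references).
    struct AddExpr {
        const Vec4& l;
        const Vec4& r;
        float operator[](std::size_t i) const;   // defined after Vec4
    };

    struct Vec4 {
        float v[4];
        float  operator[](std::size_t i) const { return v[i]; }
        float& operator[](std::size_t i)       { return v[i]; }

        Vec4& operator=(const AddExpr& e) {      // one fused loop, no temporaries
            for (std::size_t i = 0; i < 4; ++i) v[i] = e[i];
            return *this;
        }
    };

    inline float AddExpr::operator[](std::size_t i) const { return l[i] + r[i]; }

    inline AddExpr operator+(const Vec4& a, const Vec4& b) { return AddExpr{a, b}; }

    // Usage:
    //     Vec4 a{}, b{{1, 2, 3, 4}}, c{{5, 6, 7, 8}};
    //     a = b + c;   // no Vec4 temporary is materialised for b + c

A real expression-template library generalises this so nested expressions like a = b + c + d are also evaluated in a single pass with no intermediate vectors.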

Visual C++ enables SSE instructions by default on 64-bit platforms; however, I guess Alfonse had in mind hand-written SSE code, which can provide more performance than compiler optimizations when written carefully.

GLM has a good part of its features implemented with SSE code using intrinsic-like methods, but so far this code is not exposed by the API. It is expected in a future release, actually.
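
To make the distinction concrete, “hand-written SSE” means calling the compiler’s SSE intrinsics directly rather than relying on auto-vectorisation. A generic sketch (not GLM’s internal code; the function name is made up):

    #include <xmmintrin.h>

    // c = a + b for two 4-float vectors; all pointers assumed 16-byte aligned.
    void vec4_add(const float* a, const float* b, float* c)
    {
        __m128 va = _mm_load_ps(a);              // load 4 floats into one SSE register
        __m128 vb = _mm_load_ps(b);
        _mm_store_ps(c, _mm_add_ps(va, vb));     // one instruction adds all 4 lanes
    }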

However, with a practical mindset and following the ever-relevant 80-20 rule, 20% of the code consumes 80% of the time. Being obsessed with performance everywhere is, to me, a waste of time. To really reach interesting performance in that 20% of the code, chances are that a dedicated, global approach will be required, which prevents general-purpose libraries like these from bringing much to the table there.

From my point of view tvmet has some interesting design, and I especially like the STL-looking part of it, because it’s a known convention. Let’s remember that the STL wasn’t designed for performance, and std::list is quite a demonstration of that. However, I think tvmet falls into the classic “custom convention out of nowhere” pattern, a pattern unfortunately shared by many math libraries, which turns out a bit messy considering that part of its conventions are based on the STL.

The documentation is not that great, and it’s quite hard to browse the code as it uses a lot of preprocessor code to generate more code. I do like this use of the preprocessor, but it calls for great documentation! I did an experiment: I tried to find how to invert a matrix, quite a common operation in graphics I believe, but I was unsuccessful. I didn’t spend more than a minute on it… but should I have to spend more than a minute on it? The documentation of GLM is pretty terrible as well, but because GLM follows an existing convention, any user could resolve this issue by thinking: “How does it work in GLSL? Erm, there is the function ‘inverse’, let’s try that”, and it works.

Anyway, as long as a library gets the job done in a way that fits us, it’s a good choice.
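
Concretely, the GLSL-convention route described above looks like this, assuming a GLM version that implements GLSL’s inverse() among its core matrix functions (the wrapper name invertView is just an example):

    #include <glm/glm.hpp>

    // Same name and semantics as GLSL's inverse().
    glm::mat4 invertView(const glm::mat4& view)
    {
        return glm::inverse(view);
    }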

There is no built-in inverse in tvmet AFAIK; things like that are not its purpose. Its purpose is ease of work with matrices, and performance. If you want a matrix inverse, the way to do it with tvmet is like this:

Go to http://www.euclideanspace.com/, find the formula, copy-paste it into tvmet, and there you have it. The idea is that there are different algorithms for matrix inversion, some better than others in different situations, and at least the simple determinant formulas should be familiar to everyone. Here, for example, is a 4x4 matrix inversion function:

//////////////////////////////////////////////////////////////////////////////
// Classical adjugate / determinant inverse: mb is filled with adj(ma), then
// divided by det(ma). Needs a separate determinanta(); one is sketched after
// this code.
#include <tvmet/Matrix.h>
using tvmet::Matrix;

template <class T>
void inversea(Matrix<T, 4, 4> const& ma, Matrix<T, 4, 4>& mb)
{
mb(0, 0) =
ma(1, 2) * ma(2, 3) * ma(3, 1) - ma(1, 3) * ma(2, 2) * ma(3, 1) +
ma(1, 3) * ma(2, 1) * ma(3, 2) - ma(1, 1) * ma(2, 3) * ma(3, 2) -
ma(1, 2) * ma(2, 1) * ma(3, 3) + ma(1, 1) * ma(2, 2) * ma(3, 3);
mb(0, 1) =
ma(0, 3) * ma(2, 2) * ma(3, 1) - ma(0, 2) * ma(2, 3) * ma(3, 1) -
ma(0, 3) * ma(2, 1) * ma(3, 2) + ma(0, 1) * ma(2, 3) * ma(3, 2) +
ma(0, 2) * ma(2, 1) * ma(3, 3) - ma(0, 1) * ma(2, 2) * ma(3, 3);
mb(0, 2) =
ma(0, 2) * ma(1, 3) * ma(3, 1) - ma(0, 3) * ma(1, 2) * ma(3, 1) +
ma(0, 3) * ma(1, 1) * ma(3, 2) - ma(0, 1) * ma(1, 3) * ma(3, 2) -
ma(0, 2) * ma(1, 1) * ma(3, 3) + ma(0, 1) * ma(1, 2) * ma(3, 3);
mb(0, 3) =
ma(0, 3) * ma(1, 2) * ma(2, 1) - ma(0, 2) * ma(1, 3) * ma(2, 1) -
ma(0, 3) * ma(1, 1) * ma(2, 2) + ma(0, 1) * ma(1, 3) * ma(2, 2) +
ma(0, 2) * ma(1, 1) * ma(2, 3) - ma(0, 1) * ma(1, 2) * ma(2, 3);
mb(1, 0) =
ma(1, 3) * ma(2, 2) * ma(3, 0) - ma(1, 2) * ma(2, 3) * ma(3, 0) -
ma(1, 3) * ma(2, 0) * ma(3, 2) + ma(1, 0) * ma(2, 3) * ma(3, 2) +
ma(1, 2) * ma(2, 0) * ma(3, 3) - ma(1, 0) * ma(2, 2) * ma(3, 3);
mb(1, 1) =
ma(0, 2) * ma(2, 3) * ma(3, 0) - ma(0, 3) * ma(2, 2) * ma(3, 0) +
ma(0, 3) * ma(2, 0) * ma(3, 2) - ma(0, 0) * ma(2, 3) * ma(3, 2) -
ma(0, 2) * ma(2, 0) * ma(3, 3) + ma(0, 0) * ma(2, 2) * ma(3, 3);
mb(1, 2) =
ma(0, 3) * ma(1, 2) * ma(3, 0) - ma(0, 2) * ma(1, 3) * ma(3, 0) -
ma(0, 3) * ma(1, 0) * ma(3, 2) + ma(0, 0) * ma(1, 3) * ma(3, 2) +
ma(0, 2) * ma(1, 0) * ma(3, 3) - ma(0, 0) * ma(1, 2) * ma(3, 3);
mb(1, 3) =
ma(0, 2) * ma(1, 3) * ma(2, 0) - ma(0, 3) * ma(1, 2) * ma(2, 0) +
ma(0, 3) * ma(1, 0) * ma(2, 2) - ma(0, 0) * ma(1, 3) * ma(2, 2) -
ma(0, 2) * ma(1, 0) * ma(2, 3) + ma(0, 0) * ma(1, 2) * ma(2, 3);
mb(2, 0) =
ma(1, 1) * ma(2, 3) * ma(3, 0) - ma(1, 3) * ma(2, 1) * ma(3, 0) +
ma(1, 3) * ma(2, 0) * ma(3, 1) - ma(1, 0) * ma(2, 3) * ma(3, 1) -
ma(1, 1) * ma(2, 0) * ma(3, 3) + ma(1, 0) * ma(2, 1) * ma(3, 3);
mb(2, 1) =
ma(0, 3) * ma(2, 1) * ma(3, 0) - ma(0, 1) * ma(2, 3) * ma(3, 0) -
ma(0, 3) * ma(2, 0) * ma(3, 1) + ma(0, 0) * ma(2, 3) * ma(3, 1) +
ma(0, 1) * ma(2, 0) * ma(3, 3) - ma(0, 0) * ma(2, 1) * ma(3, 3);
mb(2, 2) =
ma(0, 1) * ma(1, 3) * ma(3, 0) - ma(0, 3) * ma(1, 1) * ma(3, 0) +
ma(0, 3) * ma(1, 0) * ma(3, 1) - ma(0, 0) * ma(1, 3) * ma(3, 1) -
ma(0, 1) * ma(1, 0) * ma(3, 3) + ma(0, 0) * ma(1, 1) * ma(3, 3);
mb(2, 3) =
ma(0, 3) * ma(1, 1) * ma(2, 0) - ma(0, 1) * ma(1, 3) * ma(2, 0) -
ma(0, 3) * ma(1, 0) * ma(2, 1) + ma(0, 0) * ma(1, 3) * ma(2, 1) +
ma(0, 1) * ma(1, 0) * ma(2, 3) - ma(0, 0) * ma(1, 1) * ma(2, 3);
mb(3, 0) =
ma(1, 2) * ma(2, 1) * ma(3, 0) - ma(1, 1) * ma(2, 2) * ma(3, 0) -
ma(1, 2) * ma(2, 0) * ma(3, 1) + ma(1, 0) * ma(2, 2) * ma(3, 1) +
ma(1, 1) * ma(2, 0) * ma(3, 2) - ma(1, 0) * ma(2, 1) * ma(3, 2);
mb(3, 1) =
ma(0, 1) * ma(2, 2) * ma(3, 0) - ma(0, 2) * ma(2, 1) * ma(3, 0) +
ma(0, 2) * ma(2, 0) * ma(3, 1) - ma(0, 0) * ma(2, 2) * ma(3, 1) -
ma(0, 1) * ma(2, 0) * ma(3, 2) + ma(0, 0) * ma(2, 1) * ma(3, 2);
mb(3, 2) =
ma(0, 2) * ma(1, 1) * ma(3, 0) - ma(0, 1) * ma(1, 2) * ma(3, 0) -
ma(0, 2) * ma(1, 0) * ma(3, 1) + ma(0, 0) * ma(1, 2) * ma(3, 1) +
ma(0, 1) * ma(1, 0) * ma(3, 2) - ma(0, 0) * ma(1, 1) * ma(3, 2);
mb(3, 3) =
ma(0, 1) * ma(1, 2) * ma(2, 0) - ma(0, 2) * ma(1, 1) * ma(2, 0) +
ma(0, 2) * ma(1, 0) * ma(2, 1) - ma(0, 0) * ma(1, 2) * ma(2, 1) -
ma(0, 1) * ma(1, 0) * ma(2, 2) + ma(0, 0) * ma(1, 1) * ma(2, 2);

mb /= determinanta(ma);
}

You need to write the determinanta() function separately.
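
For completeness, here is one possible determinanta(), a sketch rather than tvmet code: a straightforward cofactor expansion along the first row, using the same Matrix<T, 4, 4> and (row, col) access as inversea() above, and consistent with the adjugate it computes.

    // det(ma) by Laplace expansion along row 0.
    template <class T>
    T determinanta(Matrix<T, 4, 4> const& ma)
    {
        return
            ma(0, 0) * (ma(1, 1) * (ma(2, 2) * ma(3, 3) - ma(2, 3) * ma(3, 2)) -
                        ma(1, 2) * (ma(2, 1) * ma(3, 3) - ma(2, 3) * ma(3, 1)) +
                        ma(1, 3) * (ma(2, 1) * ma(3, 2) - ma(2, 2) * ma(3, 1))) -
            ma(0, 1) * (ma(1, 0) * (ma(2, 2) * ma(3, 3) - ma(2, 3) * ma(3, 2)) -
                        ma(1, 2) * (ma(2, 0) * ma(3, 3) - ma(2, 3) * ma(3, 0)) +
                        ma(1, 3) * (ma(2, 0) * ma(3, 2) - ma(2, 2) * ma(3, 0))) +
            ma(0, 2) * (ma(1, 0) * (ma(2, 1) * ma(3, 3) - ma(2, 3) * ma(3, 1)) -
                        ma(1, 1) * (ma(2, 0) * ma(3, 3) - ma(2, 3) * ma(3, 0)) +
                        ma(1, 3) * (ma(2, 0) * ma(3, 1) - ma(2, 1) * ma(3, 0))) -
            ma(0, 3) * (ma(1, 0) * (ma(2, 1) * ma(3, 2) - ma(2, 2) * ma(3, 1)) -
                        ma(1, 1) * (ma(2, 0) * ma(3, 2) - ma(2, 2) * ma(3, 0)) +
                        ma(1, 2) * (ma(2, 0) * ma(3, 1) - ma(2, 1) * ma(3, 0)));
    }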

You can write similar custom inversions for 3x3 and 2x2 matrices if you want (see the 2x2 sketch below). They are probably much faster than a generic function would be.
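
For example, the 2x2 case in the same style; a sketch, with the caller responsible for checking that the determinant is non-zero:

    template <class T>
    void inverse2(Matrix<T, 2, 2> const& ma, Matrix<T, 2, 2>& mb)
    {
        T det = ma(0, 0) * ma(1, 1) - ma(0, 1) * ma(1, 0);
        mb(0, 0) =  ma(1, 1);    // adjugate of a 2x2: swap the diagonal,
        mb(0, 1) = -ma(0, 1);    // negate the off-diagonal
        mb(1, 0) = -ma(1, 0);
        mb(1, 1) =  ma(0, 0);
        mb /= det;
    }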

That’s quite an inefficient way to program a matrix inverse, but again, sometimes it doesn’t matter.

Anyway, I think this is quite a demonstration; I’m afraid I have to agree with Alfonse on that. I wonder what purpose tvmet was designed for, but certainly not graphics, and I am pretty sure we could find many alternatives that would be more feature-complete.

It may be inefficient, but it’s also just a demonstration (and not a part of tvmet), as you noted. I have stressed that you can implement an arbitrary inversion algorithm with tvmet; it does not provide its own, however.

What inversion algorithm would you suggest? Or for that matter, what algorithm have you implemented in glm? For small dimensions, as you have mentioned, it does not matter much what you use. As for what tvmet was designed for, from its homepage: fast matrix calculations for matrices of small dimensions, which is exactly what is needed in graphics.

I mean, what if the user of your glm library wants to invert an orthogonal matrix using a real inversion algorithm rather than just transposing it? Whatever algorithm you have implemented cannot beat a transpose. It is up to the user to be smart about inversion.
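
The special case being discussed, sketched with GLM names since glm::transpose follows the GLSL convention (the wrapper name invertRotation is just an example): for an orthogonal matrix R, such as a pure rotation, the inverse is simply the transpose, so a general inverse is wasted work.

    #include <glm/glm.hpp>

    // Valid only when r is orthogonal, i.e. r * transpose(r) == identity.
    glm::mat3 invertRotation(const glm::mat3& r)
    {
        return glm::transpose(r);
    }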

Honestly folks, if you’re worried that much about the efficiency of software matrix calculations in your program, you’re either:

- doing something badly wrong, or
- doing some academic work on matrix library efficiency, or
- a religious evangelist, or
- micro-optimizing something that isn’t a bottleneck.

Me, I use a home-grown matrix library based on the D3D matrix functions (together with my own GLMATRIX type based on D3DMATRIX). Why? Simple reason is so that I can easily port code between OpenGL and D3D where required, and also because they’re what I’m used to and what I like. No optimizations, pure C, and I’ve never noticed any performance problems. It’s really not worth sweating the details on this one.

Alfonse, how can you state generally that an arbitrary compiler is not going to use SSE instructions? GCC on 64-bit platforms uses them by default.

I’m talking about vectorized SSE instructions. That is, being aware that 4x4 matrices can be stored in four SSE registers, doing matrix multiplies with SSE vector opcodes, etc.

That generally can’t happen without intrinsics or hand-written assembly.
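
As an illustration of what “vectorised” means here (a generic sketch, not any particular library’s code): keep a column-major 4x4 matrix as four __m128 columns and compute m * v with four broadcast-multiply-accumulate steps instead of 16 scalar multiplies.

    #include <xmmintrin.h>

    // m: 16 floats, column-major, 16-byte aligned; v and out: 4 floats each.
    void mat4_mul_vec4(const float* m, const float* v, float* out)
    {
        __m128 c0 = _mm_load_ps(m + 0);
        __m128 c1 = _mm_load_ps(m + 4);
        __m128 c2 = _mm_load_ps(m + 8);
        __m128 c3 = _mm_load_ps(m + 12);

        __m128 r = _mm_mul_ps(c0, _mm_set1_ps(v[0]));           // v.x * column 0
        r = _mm_add_ps(r, _mm_mul_ps(c1, _mm_set1_ps(v[1])));   // + v.y * column 1
        r = _mm_add_ps(r, _mm_mul_ps(c2, _mm_set1_ps(v[2])));   // + v.z * column 2
        r = _mm_add_ps(r, _mm_mul_ps(c3, _mm_set1_ps(v[3])));   // + v.w * column 3
        _mm_store_ps(out, r);
    }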

So what if glm is simpler; perf is also important.

Again, if performance is important to you, then you need to be using a library that uses SSE intrinsics. And if it’s not important to you, then ease of use is important. In which case GLM is much better.

However good tvmet’s performance is, it isn’t as good as it could be. So the performance conscious programmer will just write their own that uses SSE intrinsics. And tvmet has the worst documentation in the world, so it’s horrible to actually use, especially for a new programmer.

Tvmet is fit for neither the performance-conscious user nor the user who needs a simple, feature-rich library.

Lastly, the Boost project itself recommends tvmet. So if you use Boost, tvmet is the library to use along with it.

I love Boost, but that doesn’t mean I blindly listen to what “they” say in all things. Using Boost does not mean I have to use everything else that they suggest; it doesn’t take away my free will or my ability to detect horrible documentation. Especially since Boost goes out of their way in many libraries to provide really good documentation.

Also, I’m pretty sure Boost works just fine with GLM.

There is no built-in inverse in tvmet AFAIK; things like that are not its purpose. Its purpose is ease of work with matrices, and performance.

Think about it. It would be easier to invert a matrix if I had an inverse function. So either “ease of work with matrices” is not tvmet’s purpose, or tvmet is not fulfilling that purpose very well.

I mean, what if the user of your glm library wants to invert an orthogonal matrix using a real inversion algorithm rather than just transposing it?

Then they write one. This isn’t rocket science here. If a library doesn’t provide exactly what you need, you write it yourself.

GLM provides more features than tvmet. The provided inverse function may not be appropriate in all cases, but it is appropriate in many. A function not being appropriate in all cases does not mean that the user should have to write it themselves in all cases.

Libraries should provide general utility. Corner cases will have to be dealt with by the user. But the existence of corner cases does not excuse a library maker from skimping on functionality.

It is up to the user to be smart about inversion.

That’s not promoting ease of use, is it?

Again, if performance is important to you, then you need to be using a library that uses SSE intrinsics. And if it’s not important to you, then ease of use is important. In which case GLM is much better.

However good tvmet’s performance is, it isn’t as good as it could be. So the performance conscious programmer will just write their own that uses SSE intrinsics. And tvmet has the worst documentation in the world, so it’s horrible to actually use, especially for a new programmer.

Tvmet is fit for neither the performance-conscious user nor the user who needs a simple, feature-rich library.

Again, what if there are no SSE instructions available? In such a situation I would not be surprised if tvmet beat glm. Furthermore, intrinsics: I believe that’s an M$ thing for now, not really portable. And as far as ease of use is concerned: I don’t really care. The fact is, however, that you don’t have to compile/link tvmet; it is a header-only library. Also, the library apparently does eliminate temporaries, which is very important to me, as I wrote.

We won’t get any further in this discussion unless someone produces some benchmarks.
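
In that spirit, here is a minimal timing harness one could use for such a benchmark; Mat4 stands in for whichever library’s 4x4 type is being measured, benchMultiply is just an example name, and std::chrono assumes a C++11 compiler:

    #include <chrono>

    // Returns milliseconds for `iterations` 4x4 multiplies; the result matrix
    // is written to `out` so the work is observable.
    template <typename Mat4>
    double benchMultiply(const Mat4& a, const Mat4& b, int iterations, Mat4& out)
    {
        auto t0 = std::chrono::steady_clock::now();
        Mat4 c = a;
        for (int i = 0; i < iterations; ++i)
        {
            Mat4 t = c * b;   // explicit temporary: with expression-template
            c = t;            // libraries, c = c * b would alias c on both sides
        }
        auto t1 = std::chrono::steady_clock::now();
        out = c;
        return std::chrono::duration<double, std::milli>(t1 - t0).count();
    }

Print a component of out afterwards so the compiler cannot discard the loop.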

I’ve tried to compile this glm library, and there’s a dummy file it links. After running it on my Linux system I get:

glm: /home/___/glm-0.9.0.6/glm/core/…/./core/type_mat2x3.inl:35: glm::detail::tvec3<T>& glm::detail::tmat2x3<T>::operator [with T = float]: Assertion `i < this->row_size()’ failed.
Aborted

In truth, I don’t know if the assert is raised on purpose. But if it is not, it’s a bug.

Again, what if there are no SSE instructions available?

This is why people who have actual needs for CPU vector math performance write their own vector math libraries. There would be tests for different supported CPUs, and the most optimal code would be chosen for each. Whether SSE, 3DNow, the intrinsics supported on console CPUs, or x87.

In such a situation I would not be surprised if tvmet beat glm.

Nobody’s disputing that. We’re disputing whether it matters.

Most people do not need high CPU vector math performance. More people need a matrix inverse function than high CPU vector math performance. These people should use the most convenient library. And GLM beats tvmet hands down in that department.

Using a higher performance library only matters if that is an actual performance bottleneck. Otherwise, all the time spent using it, making up for its usability deficiencies, could have been spent on something actually useful.

For example, I use Boost.Spirit for my parsing needs when XML and Lua don’t cover my bases. I don’t use it because it is a fast parser. I use it because it is convenient, fairly easy to use, works well, and somewhat well-documented. For my needs, the fact that it is a fast parser is unimportant; it’s a nice bonus, but it’s nothing I’m thinking about (especially when sitting through a 10-minute parser compilation…).

If tvmet were as good as GLM in all of the areas where GLM is good, then the extra performance would be a nice bonus. But as it stands, GLM is the better library from a usability perspective. And since most people do not need the fastest possible performance from CPU vector math, it is a non-issue for them. GLM is the better library for their needs.

Furthermore, intrinsics: I believe that’s an M$ thing for now, not really portable.

GCC has SSE intrinsics.

And of course intrinsics aren’t portable. They’re not supposed to be. Low-level code is often not portable. Its purpose is to be optimal for a specific configuration of hardware, which requires testing on a specific configuration of compiler.

It’s like coding for the global optimizer in Visual Studio and expecting the GCC optimizer to optimize in the same way. Of course it can’t.

People who have a real need for performance pick a single compiler and optimize their code for that platform.

The fact is, however, that you don’t have to compile/link tvmet; it is a header-only library.

So is GLM.

Furthermore, if you can’t compile a library (assuming that it comes with a decent build system), you aren’t much of a programmer, so you wouldn’t be able to do much with it anyway. Documentation quality is a far more important metric for the usability of a library than whether it is header-only.

Also, the library apparently does eliminate temporaries, which is very important to me, as I wrote.

Why is that “very important” to you? What are your programming needs where the number of matrix math temporaries are an actual issue for you?

Nobody’s disputing that. We’re disputing whether it matters.

Most people do not need high CPU vector math performance. More people need a matrix inverse function than high CPU vector math performance. These people should use the most convenient library. And GLM beats tvmet hands down in that department.

Using a higher performance library only matters if that is an actual performance bottleneck. Otherwise, all the time spent using it, making up for its usability deficiencies, could have been spent on something actually useful.

For example, I use Boost.Spirit for my parsing needs when XML and Lua don’t cover my bases. I don’t use it because it is a fast parser. I use it because it is convenient, fairly easy to use, works well, and somewhat well-documented. For my needs, the fact that it is a fast parser is unimportant; it’s a nice bonus, but it’s nothing I’m thinking about (especially when sitting through a 10-minute parser compilation…).

If tvmet were as good as GLM in all of the areas where GLM is good, then the extra performance would be a nice bonus. But as it stands, GLM is the better library from a usability perspective. And since most people do not need the fastest possible performance from CPU vector math, it is a non-issue for them. GLM is the better library for their needs.

I’ve checked glm’s inverse and it’s laughable. It assumes the argument is an “affine” matrix; well, what if it is not? Is this the general inverse one wants? And, of course, there are a ton of temporaries generated in there. Also, how is the inverse-transpose matrix calculated? Using the same old determinant method I presented in my previous post, which was then criticized. Then there are these gems:


    // (excerpted from GLM's "fast" math functions; #include <cmath> and the
    // second template header restored so the snippet is self-contained)
    #include <cmath>

    template <typename T>
    inline T fastPow(const T x, int y)
    {
        T f = T(1);
        for(int i = 0; i < y; ++i)
            f *= x;
        return f;
    }

    template <typename T>
    inline T fastLog(const T x)
    {
        return std::log(x);
    }

One look at this and I see this library is no good for me. The exponentiation in particular can easily be made much faster (see the sketch below). Also, there is the issue of portability: the assert I managed to raise on my Linux system. For my purposes, the docs are good enough, but I agree matrix operation speed is not the highest priority in most cases; and anyway, if glm does not work on Linux and tvmet does, I have no choice but to use tvmet or some other library, or something I’d write myself.
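
For reference, the faster integer exponentiation alluded to above is exponentiation by squaring: O(log y) multiplies instead of the O(y) loop in fastPow. A sketch, not GLM code (the name powBySquaring is made up):

    template <typename T>
    T powBySquaring(T x, unsigned int y)
    {
        T result = T(1);
        while (y != 0)
        {
            if (y & 1u)          // odd exponent: fold one factor of x into the result
                result *= x;
            x *= x;              // square the base
            y >>= 1;             // halve the exponent
        }
        return result;
    }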

About the temporaries: ever since reading an article in one of the Game Gems books (I’ve forgotten which volume and which article), and in particular after playing a little with the profiler, I am paranoid about them and avoid them in the rendering loop if possible, and I am hard pressed for perf in my current app.

EDIT:
Anyway, we’re discussing perf but no one bothers to bench. If there are any glm zealots here, maybe you can do some benches.

Another question, (I’ll start a new topic if you suggest it)

Is there a way to make integer vec or mats in glm?

[quote="mhagain"]
Honestly folks, if you’re worried that much about the efficiency of software matrix calculations in your program, you’re either:

- doing something badly wrong, or
- doing some academic work on matrix library efficiency, or
- a religious evangelist, or
- micro-optimizing something that isn’t a bottleneck.

Me, I use a home-grown matrix library based on the D3D matrix functions (together with my own GLMATRIX type based on D3DMATRIX). Why? Simple reason is so that I can easily port code between OpenGL and D3D where required, and also because they’re what I’m used to and what I like. No optimizations, pure C, and I’ve never noticed any performance problems. It’s really not worth sweating the details on this one.
[/quote]

I vote “a religious evangelist”, so I am not going to reply further.

GLM has integer vectors but doesn’t have integer matrices, following what GLSL defines. Do you have a specific need for this?
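
To answer the integer-vector part concretely, a minimal sketch; the GLSL-style integer vectors are core GLM types, so <glm/glm.hpp> is enough, and texelCoord is just an example name:

    #include <glm/glm.hpp>

    glm::ivec3 texelCoord(int x, int y, int layer)
    {
        return glm::ivec3(x, y, layer);   // ivec2 and ivec4 work the same way
    }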