Adding to what GClements said, the view matrix is basically the same as a world matrix except it's for the camera and, maybe more importantly, it's inverted. I cover this in my Matrices video at about 46:15.

The way 3D graphics works is all math, so there is no such thing as a "camera". If you understand how the projection matrix works, it merely projects whatever is down the primary axis (-Z in OpenGL) onto a 2D surface so that it can be drawn on a 2D monitor. That's cool, but it doesn't allow you, or the camera, to move through the 3D world. The world matrix allows you to place things in the 3D world; as long as they happen to be in front of the "camera"/projection, they can be seen.

That's where the view matrix comes in. It moves the entire 3D world to simulate a camera. Except in order to do this, it has to do everything backwards. To make the camera appear to move 3 units forward, it moves the entire world 3 units backward. If it moves the 3D world 7 units left, it will have the effect of making the viewer think the camera has moved 7 units right. If it rotates the entire world to the right, the viewer will see that as the camera rotating left. Everything must be done backwards from the viewer's perspective, because there is no camera that can move; there is only the ability to move the entire game universe around the projection point to make you think the "camera" is moving through the scene.

The bottom line is that everything done to the view matrix has to be backwards.

I actually got caught up in this just last week, and I had to laugh at myself for forgetting when I realized what was happening. I was working on something and told the camera/view matrix to "go up" and it went down. I fed a translation matrix an up vector, multiplied to combine it with the view matrix, and the result looked as if the camera were moving downward. The funny thing is that I understand how this works and it still caught me off guard; it was about 10 minutes later when I suddenly realized why it does that. You can feed it the opposite data to do the job, or you can do it the way you expect and then invert the resulting matrix. So, if I had fed it an up vector wanting to move the camera up, then inverted that matrix, and then combined it with the view matrix through multiplication, the result would have been the camera moving up. Instead, I just negated the up vector and turned it into a down vector, which is fine as long as you understand why that's what you have to do.

One caution: a matrix inversion is not just a scale matrix with -1 on the X, Y, and Z axes - that's a reflection through the origin, which also flips the winding of your geometry. For a rigid transform (rotation plus translation), the inverse transposes the rotation part and negates the translation after rotating it; in the general case you let something like glm::inverse() do the work.

Anyway, the forward vector out of your view matrix probably is "camera forward", but to alter it you need to do the opposite of what you want, or build a matrix for the change you want and then invert that matrix.

Also, the view matrix - just like an object's world matrix - is subject to order-of-multiplication problems. One thing that annoys me about GLM is that I can't seem to just build a translation or rotation matrix; it wants to apply the rotation to an existing matrix for me, even though it doesn't know how I want the multiplication ordered.

With matrices:

A = B*C

is not at all the same as

A = C*B

With rotations, one will rotate around the local axis and the other will rotate around the global axis. Or to restate that more clearly: one will rotate the object in place and the other will cause it to orbit. Honestly, I don't memorize which is which; if it does the opposite of what I want, I reverse it and then it works. But if you are using some sort of "rotation" function that does the math for you, you have no control over what order it multiplies things in. As far as I can tell, GLM doesn't let you do the math yourself by just handing you a rotation matrix, which is why I'm considering writing my own math library. Nonetheless, I got around this fairly simply, like so:

```
// glm::mat4(1.0f) is explicitly the identity matrix; a default-constructed
// glm::mat4() is left uninitialized in recent GLM versions.
Triangle.Transform(glm::rotate(glm::mat4(1.0f), glm::radians(1.0f), glm::vec3(0.0f, 1.0f, 0.0f)));
```

In this example, it's object-oriented and I'm calling the game object's Transform method. That looks like this:

```
void HandCodedObjectClass::Transform(const glm::mat4 &TransformationMatrix)
{
    WorldMatrix *= TransformationMatrix;
}
```

So, all it's doing is multiplying the object's world matrix by the incoming transformation matrix. The "*=" dictates the order of multiplication (WorldMatrix = WorldMatrix * TransformationMatrix), so be careful with that; sometimes you have to write it out longhand so that you can set the order yourself. This would be a rotation rather than an orbit, because everything is being done relative to the local axis (note that quaternion multiplication is opposite of this).

But back to the original code: notice that I'm feeding it glm::mat4(1.0f), the identity matrix. This is because I don't want it to rotate my matrix for me. In this particular case, letting it do the multiplication would probably be more efficient, because I've fixed the order here to the same thing it would have used. But there might be a case where I want it to orbit instead of rotate (then I need a different method than my Transform method here). I probably should just get rid of that Transform method; it's one line of code. Anyway, I'm building a rotation matrix by feeding it an identity matrix as the input and then specifying my rotation. That's a way around the fact that I can't seem to find a way to get GLM to just give me a rotation matrix on its own. In this case, I'm forcing it through extra unneeded steps in order to hand me a rotation matrix, rather than having it carry out the rotation for me where I can't control it.

But if you're in this situation where a rotation function chooses the order of multiplication for you and you want to rotate instead of orbit, an old trick is to translate the object to the origin while keeping track of its original position, rotate the object at the origin, and finally translate it back to its original position, now rotated.

So, all of this is for an object’s world matrix. With a view matrix, remember that anything you do to change it must be inverted. In fact, I would have to test this, but getting values out of a view matrix might require the view matrix to be inverted.

Sure enough, when I look at my code, I have:

```
ConversionMatrix = glm::inverse(View);
CameraPos = (glm::vec3) ConversionMatrix[3]; //Get the camera position from the view matrix.
```

So, I'm inverting the view matrix before I pull the position out of it, and I would need to invert values going back into it. (ConversionMatrix[0] through ConversionMatrix[2] are the camera's right, up, and back vectors, in that order; with OpenGL's conventions the camera looks down -Z, so column 2 points behind the camera, not forward.)

I don't memorize a lot of this stuff; when I need it, I usually try it one way and if it doesn't work I switch it. That's another reason why I sometimes get confused about what it's doing and catch myself doing it wrong. The important thing is understanding why it works the way it does; then, 10 minutes after the fact, you'll suddenly realize why your code's not working.

Edit: I went to delete that game object's Transform method and realized why I'm doing it that way. It's the object-oriented idea that the object's world matrix belongs to the object and should not be accessed directly from outside it. If I'm going to do it this way, I should probably create Rotate, Orbit, and Translate methods that eliminate the need to even build a rotation or translation matrix and hide the math entirely. Or I could expose the object's world matrix publicly and operate on it directly, which may break the principles of OOP but actually makes a fair amount of sense, since the ability to both read that matrix and write to it is needed. In C#, we would have used properties (getters and setters) for this sort of thing to get the best of both worlds. Not sure which way I'll go with it, but I should probably commit to one or the other rather than staying halfway in between like it is now: either expose the matrix, or get rid of the Transform method and create Rotate, Orbit, and Translate methods that completely encapsulate the math.