Hi,

Hope this is the right place to ask this kind of question.

I’m doing an exercise where I built a cube out of several quads. I then set up a projection matrix (following the exercise description) that looks like this:

```
float n = 1.0, f = 10.0; // near and far clipping planes
float a = (f + n) / (f - n);
float b = 2 * f * n / (f - n);
GLfloat projectionMatrix[4][4] = {
    {1.0, 0.0,  0.0, 0.0},
    {0.0, 1.0,  0.0, 0.0},
    {0.0, 0.0,   -a,  -b},
    {0.0, 0.0, -1.0, 0.0}};
```

I added a modelview matrix that rotates the cube and translates it along the z-axis:

```
GLfloat modelviewMatrix[4][4] = {
    { cos(time), 0.0, sin(time),  0.0},
    { 0.0,       1.0, 0.0,        0.0},
    {-sin(time), 0.0, cos(time), -4.0},
    { 0.0,       0.0, 0.0,        1.0}};
```

I send both the projection and the modelview matrices to the vertex shader, where I multiply them with the vertex positions. This worked perfectly fine until I added GLM. Now I’m supposed to change the modelview matrix to a glm::mat4 and compute the translation and rotation using glm::translate and glm::rotate. The problem is that as soon as I even use glm::translate, my cube disappears, never to be seen again.

Obviously I’m doing something wrong. To narrow it down, I tried just translating at first:

```
glm::mat4 modelviewMatrix = glm::translate(
    glm::mat4(1.0f),
    glm::vec3(0.0f, 0.0f, -4.0f));
```

I guess this is not equivalent to the translation part of the previous matrix? By the way, the projection matrix has to stay as it is.

Thankful for any hints!

Cheers!

P.S. I changed the glUniformMatrix4fv call to use glm::value_ptr(modelviewMatrix) instead of modelviewMatrix[0].