Projection matrix doesn't work

GameGuru, … use glm::lookAt() for the view and apply whatever transformations you like in the model matrix.
The view matrix is an abstraction.


Got it working like so:

glm::mat4 projection = glm::perspective(glm::radians(120.0f), window_width / (float)window_height, 0.1f, 100.0f);

glm::mat4 view = glm::lookAt
(
    glm::vec3(0, 0, 1), // camera position
    glm::vec3(0, 0, 0), // target
    glm::vec3(0, 1, 0)  // up direction
);

glm::mat4 model_view = glm::rotate(view, glm::radians(-55.0f), glm::vec3(1.0f, 0.0f, 0.0f));

glm::mat4 model_view_projection = projection * model_view;

I replaced glm::translate with glm::lookAt. My understanding is that my camera is now at a Z of 1, while the bounds are still -0.1 to -100 (so it’s as if the camera is on the outside of the screen, pointed at it). It’s looking right at the origin, where my vertices are centered, and is thus able to see everything.

My question is: will I be able to do all the translations simply using this new view matrix, or will I even have to bother with glm::translate?

Also, why didn’t y’all tell me about this function :stuck_out_tongue_winking_eye:

Any combination of rotate/translate transformations can be converted to a lookAt transformation. And vice versa (although the reverse has some redundancy; ultimately you only need 6 scalar parameters to uniquely specify such a transformation). Which is preferable depends upon what information you start with. If you have the lookAt parameters (eye position, target position, up direction) immediately available, then lookAt makes it straightforward to calculate the view part of the model-view transformation. You would typically still use rotate/translate for the model part.

In practice, lookAt is most useful if the camera is attached to some form of vehicle whose position is obtained by integrating its velocity, and the desired orientation is “whatever it needs to be to end up looking directly at some other point”. If you want more direct control of the camera (e.g. a “VR” style model where the user decides where to look), you’d normally generate the view transformation via translate/rotate.
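To make that concrete, here is a minimal sketch of both routes producing the same view matrix for the camera used earlier in this thread (the equivalence is this simple only for an unrotated camera; a rotated one needs the inverse rotation as well):

// Route 1: from lookAt parameters.
glm::mat4 view_a = glm::lookAt(glm::vec3(0, 0, 1),  // eye position
                               glm::vec3(0, 0, 0),  // target position
                               glm::vec3(0, 1, 0)); // up direction

// Route 2: from translate. The view matrix is the inverse of the camera's
// world transform, so moving the camera to +1 on Z means translating the
// world by -1 on Z.
glm::mat4 view_b = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, -1.0f));

// view_a and view_b contain identical values here.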


I see, this gives me a much better understanding of how this all works. My project will eventually be a playable video game with a moving player that can look and go anywhere, so it looks like it would be better to use translate/rotate based on what you said. Thanks.

My understanding is that my camera is now at Z of 1

GameGuru, lookAt() asks for a camera position and a target (model) position. You supplied them, and then wonder whether you really did.
The beauty of lookAt() is that its name and parameters are very intuitive: the parameters are your world coordinates. You can obtain the same ‘appearance’ of the model by rotating it 45 degrees OR by rotating the camera -45 degrees in an orbit around the model. Rotating the camera will also influence other models, though … the take-home: do transformations where they belong. The model matrix is part of the chain of matrix operations … if you happen to have made the model 100 times larger than you need, that is where you scale it down. You’ll come to change the model matrix (translating the model around your world), and thus the view, often, and the projection only once, but you’ll have to multiply them all together before every draw call if any of them changed.
Look up references on composing the view matrix from first principles … you’ll need that once you get to lighting and such (transforming normals).
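As a rough sketch of that chain (names like cameraPos, targetPos and modelPos are illustrative placeholders, not from the code in this thread):

// Set up once (or on window resize).
glm::mat4 projection = glm::perspective(glm::radians(60.0f),
                                        window_width / (float)window_height,
                                        0.1f, 100.0f);

// Per frame: rebuild whatever changed, then multiply before the draw call.
glm::mat4 view = glm::lookAt(cameraPos, targetPos, glm::vec3(0, 1, 0));
glm::mat4 model = glm::translate(glm::mat4(1.0f), modelPos);
model = glm::scale(model, glm::vec3(0.01f)); // e.g. a model built 100x too large

glm::mat4 mvp = projection * view * model;
glUniformMatrix4fv(MVPLoc, 1, GL_FALSE, glm::value_ptr(mvp));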


Okay, now I have multiple cubes, but there is still an odd warping going on.

const GLchar *vertexShaderSource = R"(
#version 330 core

layout (location = 0) in vec3 pos;
layout (location = 1) in vec2 coord;
layout (location = 2) in vec2 offset;

out vec2 TexCoord;

uniform mat4 MVP; 

void main() 
{
    gl_Position = MVP * vec4(pos.x + offset.x, pos.y + offset.y, pos.z, 1.0);
    TexCoord = coord;
}
)";

const GLchar *fragmentShaderSource = R"(
#version 330 core

in vec2 TexCoord;
out vec4 Color;

uniform sampler2D ourTexture;

void main() 
{
    Color = texture(ourTexture, TexCoord);
}
)";

int render(GLFWwindow *window, GLuint window_width, GLuint window_height)
{
    float vertices[] = 
    {
        // positions      // texture coords

        //front
        0.2f, 0.2f, 0.0f, 1.0f, 1.0f,    // top right
        0.2f, -0.2f, 0.0f, 1.0f, 0.0f,   // bottom right
        -0.2f, -0.2f, 0.0f, 0.0f, 0.0f,  // bottom left
        -0.2f, 0.2f, 0.0f, 0.0f, 1.0f,   // top left 
        //back
        0.2f, 0.2f, -0.4f, 1.0f, 1.0f,   // top right
        0.2f, -0.2f, -0.4f, 1.0f, 0.0f,  // bottom right
        -0.2f, -0.2f, -0.4f, 0.0f, 0.0f, // bottom left
        -0.2f, 0.2f, -0.4f, 0.0f, 1.0f,  // top left 
        //right 
        0.2f, 0.2f, 0.0f, 1.0f, 1.0f,    // top right
        0.2f, -0.2f, 0.0f, 1.0f, 0.0f,   // bottom right
        0.2f, -0.2f, -0.4f, 0.0f, 0.0f,  // bottom left
        0.2f, 0.2f, -0.4f, 0.0f, 1.0f,   // top left 
        //left 
        -0.2f, 0.2f, 0.0f, 1.0f, 1.0f,   // top right
        -0.2f, -0.2f, 0.0f, 1.0f, 0.0f,  // bottom right
        -0.2f, -0.2f, -0.4f, 0.0f, 0.0f, // bottom left
        -0.2f, 0.2f, -0.4f, 0.0f, 1.0f,  // top left
        //top 
        0.2f, 0.2f, 0.0f, 1.0f, 1.0f,    // top right
        0.2f, 0.2f, -0.4f, 1.0f, 0.0f,   // bottom right
        -0.2f, 0.2f, -0.4f, 0.0f, 0.0f,  // bottom left
        -0.2f, 0.2f, 0.0f, 0.0f, 1.0f,   // top left 
        //bottom
        0.2f, -0.2f, 0.0f, 1.0f, 1.0f,   // top right
        0.2f, -0.2f, -0.4f, 1.0f, 0.0f,  // bottom right
        -0.2f, -0.2f, -0.4f, 0.0f, 0.0f, // bottom left
        -0.2f, -0.2f, 0.0f, 0.0f, 1.0f   // top left
    };

    unsigned int indices[] = 
    {
        0,  1,  2,  0,  2,  3,   //front
        4,  5,  6,  4,  6,  7,   //back
        8,  9,  10, 8,  10, 11,  //right
        12, 13, 14, 12, 14, 15,  //left
        16, 17, 18, 16, 18, 19,  //top
        20, 21, 22, 20, 22, 23   //bottom
    };

    glm::vec2 translations[4];

    for(int i = 0; i < 4; i++) 
    {
        translations[i] = glm::vec2(0.4f * i, 0.4f * i);
    }

    // Vertex Array, Vertex Buffer and Element Buffer
    GLuint VAO, VBO, EBO, iVBO;
    glGenVertexArrays(1, &VAO);
    glGenBuffers(1, &VBO);
    glGenBuffers(1, &EBO);
    glGenBuffers(1, &iVBO);

    glBindVertexArray(VAO);
    glEnable(GL_DEPTH_TEST);

    glBindBuffer(GL_ARRAY_BUFFER, VBO);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(float), (void*)0);

    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, EBO);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

    glEnableVertexAttribArray(1);
    glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 5 * sizeof(float), (void*)(3 * sizeof(float)));

    glBindBuffer(GL_ARRAY_BUFFER, iVBO);
    glBufferData(GL_ARRAY_BUFFER, sizeof(translations), &translations[0], GL_STATIC_DRAW);

    glEnableVertexAttribArray(2);
    glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(float), (void*)0);
    glVertexAttribDivisor(2, 1);

    shader(window_width, window_height);

    // Load Image
    int width, height, channels;
    unsigned char *image = SOIL_load_image("grass.png", &width, &height, &channels, SOIL_LOAD_RGB);

    // Texture
    unsigned int texture;
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);

    // Settings
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);

    // Texture Image
    if (image) 
    {
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, image);
        glGenerateMipmap(GL_TEXTURE_2D);
    }

    else
    {
        out << "Failed to load texture" << end;
    }

    SOIL_free_image_data(image);

    // Render Loop
    while (!glfwWindowShouldClose(window))
    {
        processInput(window);

        glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        //glBindTexture(GL_TEXTURE_2D, texture);
        //glUseProgram(shaderProgram);

        glBindVertexArray(VAO);
        glDrawElementsInstanced(GL_TRIANGLES, 36, GL_UNSIGNED_INT, 0, 4); 

        glfwSwapBuffers(window);
        glfwPollEvents();
    }

    glDeleteVertexArrays(1, &VAO);
    glDeleteBuffers(1, &VBO);
    glDeleteBuffers(1, &EBO);
    glDeleteBuffers(1, &iVBO);

    glfwTerminate();
    return 0;
}

void shader(GLuint window_width, GLuint window_height)
{
    GLuint vertexShader = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vertexShader, 1, &vertexShaderSource, NULL);
    glCompileShader(vertexShader);

    GLuint fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fragmentShader, 1, &fragmentShaderSource, NULL);
    glCompileShader(fragmentShader);

    shaderProgram = glCreateProgram();
    glAttachShader(shaderProgram, vertexShader);
    glAttachShader(shaderProgram, fragmentShader);
    glLinkProgram(shaderProgram);
    glUseProgram(shaderProgram);

    glm::mat4 lookAt = glm::lookAt
    (
        glm::vec3(0, 0, 1), // Camera is at (x,y,z), in World Space,
        glm::vec3(0, 0, 0), // and looks at the origin.
        glm::vec3(0, 1, 0)  // Head is up (set to 0,-1,0 to look upside-down).
    );
    
    glm::mat4 projection = glm::perspective(glm::radians(120.0f), window_width / (float)window_height, 0.1f, 100.0f);
    glm::mat4 view_projection = glm::translate(projection, glm::vec3(0.0f, 0.0f, -1.0f));
    glm::mat4 model_view_projection = glm::rotate(view_projection, glm::radians(0.0f), glm::vec3(1.0f, 0.0f, 0.0f));

    // retrieve the matrix uniform locations
    GLuint MVPLoc = glGetUniformLocation(shaderProgram, "MVP");

    // pass it to the shader
    glUniformMatrix4fv(MVPLoc, 1, GL_FALSE, glm::value_ptr(model_view_projection));

    glDeleteShader(vertexShader);
    glDeleteShader(fragmentShader);
}

They all need to look like the one in the middle: square, since the rest of each cube should be directly behind its front face.

A quick look suggests a wild confusion about the matrix situation.
You should think of:
model first,
view second,
projection last:
mvp = project*view*model
The confusion may lie in the way things are done in old and ‘new’ OpenGL, in the order in which the matrix tinkering is done, and in the fact that the order may appear reversed in the Microsoft way too. It’s not that the concept is that hard to understand, but the differing implementations behind the scenes add to the confusion.
This is how I could do a rotation around the model’s center on a model somewhere in the ‘world’:
mat4 translate_to_center
mat4 rotate
mat4 translate_back
mat4 model_matrix = translate_back * rotate * translate_to_center
… points are processed coming in at the end (at the first, i.e. rightmost, matrix).

You can use each transform independently, but in the proper order … or multiply them together.
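In GLM that chain could look like this minimal sketch (center and angle are illustrative placeholders):

// Rotate a model around its own center, with the model sitting somewhere
// in the world at position 'center'.
glm::mat4 to_origin = glm::translate(glm::mat4(1.0f), -center);
glm::mat4 rotation  = glm::rotate(glm::mat4(1.0f), angle, glm::vec3(0, 1, 0));
glm::mat4 back      = glm::translate(glm::mat4(1.0f), center);

// The rightmost matrix is applied to the incoming points first.
glm::mat4 model_matrix = back * rotation * to_origin;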

Thanks for the info, but this is happening without any rotation. The angle is 0; there is only a translation of the vertices along the Z axis by 1.

The matrix-model example is not trying to mimic your situation in particular. It’s an attempt to give you the gist of the order of setting up a chain of matrices.

This is what you are looking for:
mvp = project*view*model

I have no clue of what this will do:

glm::mat4 projection = glm::perspective(glm::radians(120.0f), window_width / (float)window_height, 0.1f, 100.0f);
glm::mat4 view_projection = glm::translate(projection, glm::vec3(0.0f, 0.0f, -1.0f));
glm::mat4 model_view_projection = glm::rotate(view_projection, glm::radians(0.0f), glm::vec3(1.0f, 0.0f, 0.0f));

I’ve shown you the matrix-model example,
you have set up a view matrix with lookAt(),
and you have set up the projection matrix.

If it’s new to you that they can and shall be multiplied together, then you should have followed my earlier suggestion of looking up some basic information on the matrix topic.
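Rewritten in the recommended order, the quoted snippet would look something like this sketch (same camera, but the translation lives in an explicit view matrix instead of being folded into the projection):

glm::mat4 projection = glm::perspective(glm::radians(120.0f),
                                        window_width / (float)window_height,
                                        0.1f, 100.0f);
glm::mat4 view = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, -1.0f));
glm::mat4 model = glm::mat4(1.0f); // no per-model transform yet
glm::mat4 mvp = projection * view * model;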

It does the same exact thing; the matrices are passed into the others, so the multiplication is done along the way, saving you from having to do it at the end. I have tried both ways and the results are indeed the same.

ok,

This is not supposed to do anything, since the angle is 0:
glm::mat4 model_view_projection = glm::rotate(view_projection, glm::radians(0.0f), glm::vec3(1.0f, 0.0f, 0.0f));

If what you’ve set up is proper, then the error could be a sort of odd distortion from the ‘camera’ being way too close to the model and the 120-degree view angle being too wide (it will match a wide-angle lens, including its distortions).

Since the model matrix is ‘empty’ you can leave it out.

view = lookAt()
… I’ve never seen a view_projection matrix of the form
= matrix(mat4 view, … projection params …)
That’s doable for other kinds of transform, so you cannot have done it the way you say, or you’ve not observed the proper order. If you look at the projection parameters, you’ll notice that they are not attached to any specific coordinate system. That specific coordinate system is what you produce in view, and that is what the projection works on. Hence the order I present to you.

If the setup below produces the image you present, then you’ll have to go with it and try changing the angle to, say, 60 and moving the camera a little back.
mat4 vp = project*view
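Concretely, an adjustment along those lines might look like this (values illustrative):

glm::mat4 projection = glm::perspective(glm::radians(60.0f), // narrower FOV
                                        window_width / (float)window_height,
                                        0.1f, 100.0f);
glm::mat4 view = glm::lookAt(glm::vec3(0, 0, 3),  // camera moved back
                             glm::vec3(0, 0, 0),
                             glm::vec3(0, 1, 0));
glm::mat4 vp = projection * view; // projection applied last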

I had a glimpse at your vertex shader. The vertex attribute ‘offset’ could work, but it’s … an “alternative” way of doing things. There’s no problem in delegating that to its proper place in the model transform. You only have to observe that

m = translate*rotate
has an outcome different than
m = rotate*translate
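To see the difference concretely (a sketch; the 90-degree rotation and the offsets are arbitrary):

glm::mat4 T = glm::translate(glm::mat4(1.0f), glm::vec3(2.0f, 0.0f, 0.0f));
glm::mat4 R = glm::rotate(glm::mat4(1.0f), glm::radians(90.0f), glm::vec3(0, 1, 0));
glm::vec4 p(1.0f, 0.0f, 0.0f, 1.0f);

// translate*rotate: rotate around the origin first, then move.
glm::vec4 a = T * R * p; // approximately (2, 0, -1, 1)
// rotate*translate: move first, then swing the moved point around the origin.
glm::vec4 b = R * T * p; // approximately (0, 0, -3, 1)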

Another problem is that the ‘center’ parameter in lookAt() is the position of the model. If the offset is not part of that position, then you build the wrong view matrix. Finally, if you add a model-rotate-around-its-center externally, it will not produce the expected result.

Lowering the FOV and moving the camera back reduces the distortion but does not eliminate it. I don’t understand why those sides of the cube are visible when they should be directly behind.

I have already tried leaving out the model matrix. My lookAt matrix is not being used; it’s just there in case I want to use it again (I should’ve commented it out). As stated, I have already tried the manual multiplication approach, with all 3 matrices and with just 2 (projection and view); the result is the same.

Then you’re not following my advice. lookAt() is a convenience … the alternative way is to erect the matrix from scratch: you have the -z axis going from the camera position toward what you want to look at. Erect the two other axes from this. Add them (normalized) as the first 3 columns of the matrix, with the 4th component = 0. Find the position of the camera in relation to what you’re looking at and add it to the 4th column, fourth component = 1.
I believe that’s what lookAt() does.
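A sketch of that construction (one caveat: the axes-as-columns matrix described above is the camera’s world transform; glm::lookAt() returns its inverse, so the axes end up as rows and the translation is negated):

glm::mat4 makeLookAt(glm::vec3 eye, glm::vec3 center, glm::vec3 up)
{
    glm::vec3 f = glm::normalize(center - eye);      // camera's -z axis
    glm::vec3 s = glm::normalize(glm::cross(f, up)); // camera's +x axis
    glm::vec3 u = glm::cross(s, f);                  // camera's +y axis

    glm::mat4 view(1.0f);
    // Inverse rotation: camera axes written as rows instead of columns.
    view[0][0] = s.x;  view[1][0] = s.y;  view[2][0] = s.z;
    view[0][1] = u.x;  view[1][1] = u.y;  view[2][1] = u.z;
    view[0][2] = -f.x; view[1][2] = -f.y; view[2][2] = -f.z;
    // Inverse translation, expressed along the camera's axes.
    view[3][0] = -glm::dot(s, eye);
    view[3][1] = -glm::dot(u, eye);
    view[3][2] = glm::dot(f, eye);
    return view;
}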
To spell it out:
mat4 view = lookAt();
view is the space in which the projection picks its cone and starts its divide on objects that differ in their z values. (Your cubes do not differ in z.) PROVIDED that you multiply this matrix with the projection matrix:
whatever = project*view
Contrary to what you say, it’s obvious that you do not follow this advice. The projection is the last transformation you do. Here you translate the projection … it’s contrary to any advice:
glm::mat4 view_projection = glm::translate(projection, glm::vec3(0.0f, 0.0f, -1.0f));

I won’t preclude that you can end up with something usable the way you do it, but it’s not a path I can advise you on.

No it isn’t; how could you possibly know what I tried? I already tried this multiple times to no avail. I’ve tried every combination already. Projection times lookAt yields the same exact results, with the distortion.

Have you tried to sketch the setup on a piece of paper? Using the approximate values of your setup, you should end up with a sketch where you can evaluate whether your camera’s image (with the camera replacing you) seems reasonable. Would you expect the outermost cube to be in your image?

Could you try to change
glDrawElementsInstanced()
to just
glDrawElements()
I’ve never tried instanced rendering, but I strongly doubt that you can do that kind of drawing the way you do it here.

… edit …
I see that you are indeed attempting instanced rendering, sorry.
I’m out.

I’m not really following that, or how it would help; all the tutorials and documents show what I’m already doing. I had glDrawElements before, when I only had one cube, and indeed the problem persists with that as well.

As far as I know, indexed drawing is a beast all of its own:
glDrawElementsBaseVertex()
The ‘Elements’ part indicates indexed drawing (with indices); ‘BaseVertex’ means you can pick an offset into the vertex data from which the draw call will start using its allotted indices.
You can then call it four times, once for each cube, as sketched below. That’ll enable you to shift uniform values (typically a uniform color, a model matrix, etc.) in between calls … which enables each cube to be an individual entity in the scene.
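A sketch of that per-cube approach (the offsets match the translations array from the code above; MVPLoc, projection and view are assumed to be in scope, and the instanced offset attribute would be dropped from the shader):

// One draw call per cube, updating the MVP uniform in between.
for (int i = 0; i < 4; i++)
{
    glm::mat4 model = glm::translate(glm::mat4(1.0f),
                                     glm::vec3(0.4f * i, 0.4f * i, 0.0f));
    glm::mat4 mvp = projection * view * model;
    glUniformMatrix4fv(MVPLoc, 1, GL_FALSE, glm::value_ptr(mvp));

    // basevertex 0 here; a non-zero value would offset every index,
    // letting several meshes share one vertex buffer.
    glDrawElementsBaseVertex(GL_TRIANGLES, 36, GL_UNSIGNED_INT, 0, 0);
}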
Did you investigate whether the setup is reasonable?
The easiest thing to test right now is moving the camera point and the point it looks at. And it’s always good to draw 3 lines for the axes … a visual clue to see if things are where they should be.
Do you call glGetError() anywhere? If not, it might be dying to tell you something you would like to know. Unlike @Dark_Photon, I don’t have such code to hand out to you right off the top of my head.
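Something along these lines, perhaps (a sketch, assuming &lt;iostream&gt; is included; call it after any suspect GL call):

void dumpGLErrors(const char *where)
{
    // glGetError() returns one queued error per call, so drain the queue.
    for (GLenum err = glGetError(); err != GL_NO_ERROR; err = glGetError())
    {
        std::cerr << "GL error 0x" << std::hex << err << std::dec
                  << " at " << where << std::endl;
    }
}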

The indexed drawing works fine; the problem is confirmed to be due to the projection matrix. With that matrix alone multiplied by the position vector in the shader, I get the distortion if the cube is not at the origin. The other two matrices aren’t the issue.

glUniformMatrix4fv(MVPLoc, 1, GL_FALSE, glm::value_ptr(model_view_projection));

For this to work, it may need the same conditions as the draw call:
glUseProgram()
glBindVertexArray(VAO);

I’m not sure that yours does.
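That is, something like this sketch before the upload:

glUseProgram(shaderProgram); // uniform uploads target the currently bound program
glUniformMatrix4fv(MVPLoc, 1, GL_FALSE, glm::value_ptr(model_view_projection));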

Did you test glGetError()?
Did you test other combinations of camera position & look-at point? The center arg in lookAt is not the coordinate center, but the center of the object you want to look at. Remember to add the offset manually if you look at a specific cube.