Repetitions when rendering distance to eyepoint

I am getting a result that simply does not make sense to me given the code.

I use GLFW to create a simple interactive display where I render two cubes of different sizes and a floor (I am using this as a testing ground for other calculations I want to develop). I can zoom in/out and move the camera at will. When I press ‘c’ I want to save the distance from the camera position to the cubes and floor; after that I just want to return to exploring the scene.

The shader I use to ‘capture’ this distance is very basic (the shader used to render to the screen is not shown here).

Vertex:

#version 330 core
layout (location = 0) in vec3 aPos;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
out vec4 pos;

void main() {
    pos = view * model * vec4(aPos, 1.0);
    gl_Position = projection * pos;
}

Fragment:

#version 330 core
in vec4 pos;
out float depth;

void main() {
    depth = length(pos.xyz);
}
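For reference, the value this fragment shader writes is the Euclidean eye-space distance, not the window-space depth. A quick CPU cross-check in pure NumPy (with made-up camera and vertex values, and a hand-rolled `lookat` following the `glm.lookAt` convention, just to illustrate the math):

```python
import numpy as np

def lookat(eye, target, up):
    """Build a right-handed view matrix (same convention as glm.lookAt)."""
    f = target - eye; f = f / np.linalg.norm(f)
    s = np.cross(f, up); s = s / np.linalg.norm(s)
    u = np.cross(s, f)
    m = np.identity(4)
    m[0, :3], m[1, :3], m[2, :3] = s, u, -f
    m[:3, 3] = -m[:3, :3] @ eye          # translation moves the eye to the origin
    return m

eye = np.array([0.0, 1.0, 3.0])           # arbitrary camera position
view = lookat(eye, np.array([0.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))
model = np.identity(4)                    # identity model matrix, as for the floor

vertex = np.array([0.5, 0.0, -0.75, 1.0]) # arbitrary object-space vertex
pos = view @ model @ vertex               # what the vertex shader passes on
depth = np.linalg.norm(pos[:3])           # what the fragment shader outputs

# The view transform is rigid, so this equals the world-space camera-to-vertex distance:
world = model @ vertex
assert np.isclose(depth, np.linalg.norm(world[:3] - eye))
```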

The framebuffer to capture this information is created before the rendering loop,

    # create depth buffer
    fbo = glGenFramebuffers(1)
    glBindFramebuffer(GL_FRAMEBUFFER, fbo)

    # COLOR DEPTH 
    color_depth_tex = glGenTextures(1)
    glBindTexture(GL_TEXTURE_2D, color_depth_tex)
    glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, SCR_WIDTH, SCR_HEIGHT, 0, GL_RED, GL_FLOAT, None)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST)
    glBindTexture(GL_TEXTURE_2D, 0)
    glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, color_depth_tex, 0)

    # DEPTH 
    depth_tex = glGenTextures(1)
    glBindTexture(GL_TEXTURE_2D, depth_tex)
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, SCR_WIDTH, SCR_HEIGHT, 0, GL_DEPTH_COMPONENT, GL_FLOAT, None)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST)
    glBindTexture(GL_TEXTURE_2D, 0)
    glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, depth_tex, 0)

The rendering loop is,

    shader= shaderDisplay
    while (not glfwWindowShouldClose(window)):

        # input
        processInput(window)

        # define basic viewing parameters
        view = camera.GetViewMatrix()
        projection = glm.perspective(glm.radians(camera.Zoom), SCR_WIDTH / SCR_HEIGHT, 0.1, 100.0)     

        if camera.capture:
            # bind the framebuffer
            glBindFramebuffer(GL_FRAMEBUFFER, fbo)
            glClearDepth(depth_tex)

            # set shader to shader depth
            shader = shaderDepth
            
        # clear colors and buffers
        glClearColor(0.1, 0.1, 0.1, 1.0)
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT)
        glEnable(GL_DEPTH_TEST)

        # set uniforms on shader
        shader.use()
        shader.setMat4("view", view)
        shader.setMat4("projection", projection)

        # RENDER OBJECTS       
        # draw floor
        glBindVertexArray(planeVAO)
        glBindTexture(GL_TEXTURE_2D, floorTexture)
        shader.setMat4("model", glm.mat4(1.0))
        glDrawArrays(GL_TRIANGLES, 0, 6)
        glBindVertexArray(0)
       
        # cube 1
        glBindVertexArray(cubeVAO)
        glActiveTexture(GL_TEXTURE0)
        glBindTexture(GL_TEXTURE_2D, cubeTexture)
        model = glm.mat4(1.0)
        model = glm.translate(model, glm.vec3(0.0, 0.0, -0.75))
        shader.setMat4("model", model)
        glDrawArrays(GL_TRIANGLES, 0, 36)
        
        # cube 2
        scale = 0.35
        model = glm.mat4(1.0)
        model = glm.scale(model, glm.vec3(scale, scale, scale))
        model = glm.translate(model, glm.vec3(0.0, 0, 2.0))
        shader.setMat4("model", model)
        glDrawArrays(GL_TRIANGLES, 0, 36)
        glBindVertexArray(0)
        
        # end of capture
        if camera.capture:
            # save outputs            
            depth = glReadPixelsf(0, 0, SCR_HEIGHT, SCR_WIDTH, GL_RED, GL_FLOAT)
            np.save('depth', np.flipud(depth))

            # unbind framebuffer back to default
            glBindFramebuffer(GL_FRAMEBUFFER,0)

            # reset capture flag
            camera.capture = False

            # reset shader
            shader= shaderDisplay

        glfwSwapBuffers(window)
        glfwPollEvents()

When I capture the following scene

the output I get is the following,

[attached image: the captured depth output]

What I cannot understand is why I get repeated instances of my cube instead of a single one (given the vertex and fragment shaders, and given that I am not rendering again once I capture the scene). Also, possibly related to the first issue: why does the result appear as bands?

It also appears, for some weird reason, that changes in the vertex shader have no effect on the output: if I remove the view or the model matrix I get the same result. I thought perhaps the program was using an incorrect shader, but I have checked and that is not the case.

See the following post for a solution and question related to this.

I’m sure this is where the bug is:

Just swap SCR_HEIGHT and SCR_WIDTH.

I wrote to you about this before:

In the thread How to capture an image drawn on a different texture attachment?.

Thanks @nimelord

Yes, I can swap these, but… is that because of a bug?

The prototype for glReadPixels is

void glReadPixels(GLint x,
                  GLint y,
                  GLsizei width,
                  GLsizei height,
                  GLenum format,
                  GLenum type,
                  void *data);

I.e. width comes before height.

Note that PyOpenGL coerces the returned array to the specified dimensions: it reads width*height pixels into a 1D array and then coerces that into a 2D [height, width] array. If you swap the order of the parameters, the resulting array will have its dimensions transposed and the data will be misaligned. Note also that OpenGL itself clips the read rectangle to the dimensions of the framebuffer, so if you request a 480×640 rectangle from a 640×480 framebuffer you’ll only get 480×480 pixels of data back.
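This coercion can be demonstrated without a GL context by simulating the readback as a flat buffer and reshaping it both ways (the tiny 4×3 “framebuffer” here is made up):

```python
import numpy as np

width, height = 4, 3                                 # a tiny pretend framebuffer
flat = np.arange(width * height, dtype=np.float32)   # pixels in the order glReadPixels returns them

# Correct: glReadPixels(0, 0, width, height, ...) is coerced to [height, width]
ok = flat.reshape(height, width)

# Swapped arguments: the same flat data forced into [width, height]; the values
# no longer line up with their pixel positions (rows "wrap" every `height` entries)
bad = flat.reshape(width, height)
assert not np.array_equal(ok, bad.T)                 # not even a clean transpose of the image

# And since OpenGL clips the requested rectangle to the framebuffer, asking for a
# height x width region of a width x height buffer returns min() in each axis:
clipped = (min(height, width), min(width, height))   # e.g. 480x480 from a 640x480 buffer
```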

Thank you @GClements

I suspected this might have to do with PyOpenGL… if I understand you correctly, it is just a matter of switching these dimensions around (when using PyOpenGL, that is).

  • So where is your glViewport() setting? (See also mentions of viewport here and here)
  • glClearDepth() does not take a GL texture handle. Unless you know what you’re doing, leave this set to 1.0 and don’t change it.
  • Try just clearing the COLOR and DEPTH buffers, rendering nothing else, and calling swap buffers. Does that work?

Thank you @Dark_Photon

Yes… I was originally confused about glClearDepth(), thinking that one had to clear the texture rather than it just being the value used to clear the depth buffer.
I did explicitly set the viewport within the rendering loop (I keep the width and height fixed) and left out the stencil buffer when clearing (as you suggested).
I get a similar result as before.
M.