Point cloud rendering artifacts

Hi,
I am trying to display a point cloud using PyOpenGL and pygame.
My code reads the .ply file and displays the point cloud, but it produces weird artifacts in certain regions of the cloud.

This is what I get. The face, for example, looks weird.

This is what I expect:

This is my current code:

import numpy as np
import open3d as o3d
import pygame as pg
from pygame.locals import DOUBLEBUF, OPENGL
from OpenGL.GL import *
from OpenGL.GLU import *

if __name__ == "__main__":
    # initialize window
    pg.init()
    pg.display.set_caption('Test')
    display = (1680, 1050)
    pg.display.set_mode(display, DOUBLEBUF | OPENGL)

    # read data
    pointcloud = o3d.io.read_point_cloud("pointcloud.ply")
    points = np.asarray(pointcloud.points, dtype=np.float32)
    colors = np.asarray(pointcloud.colors, dtype=np.float32)

    # vertexBuffer
    vbo = glGenBuffers(1)
    glBindBuffer(GL_ARRAY_BUFFER, vbo)
    glBufferData(GL_ARRAY_BUFFER, points, GL_STATIC_DRAW)

    # colorBuffer
    cba = glGenBuffers(1)
    glBindBuffer(GL_ARRAY_BUFFER, cba)
    glBufferData(GL_ARRAY_BUFFER, colors, GL_STATIC_DRAW)

    glEnableClientState(GL_VERTEX_ARRAY)
    glEnableClientState(GL_COLOR_ARRAY)

    gluPerspective(60, display[0] / display[1], 0.1, 2500)

    glClearColor(1.0, 1.0, 1.0, 1.0)  # background color white (components are in [0, 1])
    glTranslate(-200, -500, -1800)    # move the cloud into view
    while True:
        for event in pg.event.get():
            if event.type == pg.QUIT:
                pg.quit()
                quit()

        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
        glBindBuffer(GL_ARRAY_BUFFER, vbo)
        glVertexPointer(3, GL_FLOAT, 3*4, None)
        glBindBuffer(GL_ARRAY_BUFFER, cba)
        glColorPointer(3, GL_FLOAT, 3*4, None)
        glDrawArrays(GL_POINTS, 0, len(points))

        pg.display.flip()
        pg.time.wait(30)

What is causing these artifacts, and how can I fix them?

Try increasing the near distance from 0.1 and see if that fixes it.

I tried different values, up to this one:

gluPerspective(60, display[0] / display[1], 1500, 2000)

But the result is always exactly the same.

Is depth testing actually enabled?
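If it isn't, a single call during setup is all that's missing (the loop already clears GL_DEPTH_BUFFER_BIT every frame), e.g.:

glEnable(GL_DEPTH_TEST)  # discard fragments that are behind already-drawn ones
glDepthFunc(GL_LESS)     # the default comparison, shown here for clarity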

Alright, the combination of a higher near distance and enabling depth testing fixed it, thanks! 🙂
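For anyone finding this later, these were the two changes:

gluPerspective(60, display[0] / display[1], 1500, 2000)  # near plane pushed out
glEnable(GL_DEPTH_TEST)                                  # depth testing enabled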

I understand why depth testing fixes this problem, but I don’t understand why the near distance needs to be increased.
I’ve read that usually the near plane should be close to the camera position.
Could you maybe give a short explanation please?

The near distance determines the depth resolution. To a fairly close approximation, the range of eye-space Z values beyond N times the near distance gets 1/N of the available depth values. E.g. half the depth values are used for -Zeye values between Znear and 2*Znear and half for values greater than 2*Znear. 90% of the depth values are used for -Zeye values between Znear and 10*Znear and 10% for values greater than 10*Znear.
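To see where the 1/N figure comes from (a rough sketch, assuming Zfar is much larger than Znear): the window-space depth that a gluPerspective projection produces for a point at -Zeye = d is

depth(d) = Zfar*(d - Znear) / (d*(Zfar - Znear)) ≈ 1 - Znear/d

so a point at d = N*Znear lands at depth 1 - 1/N, i.e. everything beyond N times the near distance is squeezed into the last 1/N of the [0, 1] depth range.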

If your points are roughly 1000 units from the viewpoint but your near distance is 0.1, only 1/10000 of the depth values are used for points that far away, while 9999/10000 are used for the region between 0 and 1000. For a 24-bit depth buffer, that leaves only around 1677 distinct depth values for -Zeye values beyond 1000. For a 16-bit depth buffer, it’s only 6 or 7 distinct depth values. The net result is that the front and back surfaces of the model may end up having exactly the same depth value, in which case you get whichever point is drawn first (for a depth test of GL_LESS) or last (for GL_LEQUAL), rather than whichever is in front.
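Here is a small runnable sketch of the effect (window_depth is just a helper implementing the exact mapping above; the 16-bit case is relevant because SDL, and hence pygame, only requests a 16-bit depth buffer by default):

def window_depth(d, near, far):
    # Exact window-space depth in [0, 1] for a point at eye-space
    # distance d in front of the camera, under gluPerspective(near, far).
    return far * (d - near) / (d * (far - near))

# Two surfaces 2 units apart, roughly 1000 units away, with the
# original planes (near=0.1, far=2500):
delta = window_depth(1002, 0.1, 2500) - window_depth(1000, 0.1, 2500)
print(delta * 2**16)  # ~0.01: both quantize to the SAME 16-bit depth value
print(delta * 2**24)  # ~3: only a few steps apart even with 24 bits

# The same 2-unit separation at a comparable distance inside the
# new frustum (near=1500, far=2000):
delta = window_depth(1602, 1500, 2000) - window_depth(1600, 1500, 2000)
print(delta * 2**16)  # ~300: easily distinguishable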

In short: the near distance should be as small as it needs to be, and no smaller.

Okay, got that. Thank you for your help! 🙂
