glDrawElements(GL_TRIANGLES) gives a blank screen but glDrawElements(GL_POINTS) works

I am drawing a 3D point cloud in OpenGL 4.5, running on Ubuntu Linux 23.04; the code is C++.

I read in a file with the point cloud data in *.ply format. Please see my ply files here:

Code and PointCloud files.

I am very new to OpenGL and trying to debug this. I only learned about RenderDoc today; I captured a log but can't yet figure out how to find anomalies related to the error I describe below. If it helps, here is the RenderDoc log capture: RenderDoc Log Capture

I am using glDrawElements() to draw the point cloud. The problem is that while I can successfully draw the point cloud with glDrawElements(GL_POINTS, …) (right-hand side of the screenshot below)…

…I cannot draw anything (the screen is blank) when I use glDrawElements(GL_TRIANGLES, …) or glDrawElements(GL_TRIANGLE_STRIP, …).

I define one Vertex Array Object (VAO) and Vertex Buffer Object (VBO), which I set up with position (xyzw) data first, then color (rgba) data, then point size.

Here is the code to set up the VAO and VBO.

void setup_VAO_VBO(void)
{
   // configure global opengl state
   // -----------------------------
   glEnable(GL_DEPTH_TEST); // needed for glDepthFunc() below to have any effect
   // glEnable(GL_MULTISAMPLE); // enabled by default on some drivers, but not all, so always enable to make sure
   glDepthFunc(GL_LESS); // Accept a fragment if it is closer to the camera than the former one

   //glEnable(GL_BLEND); // to implement a varying alpha value in the r,g,b,a vertex colour attribute
   //glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

   //glEnable(GL_CULL_FACE);
   //glPolygonMode(GL_FRONT_AND_BACK, GL_LINE); // When drawing faces: draw wireframe rather than fill triangles with colour

   // create a Vertex Array Object and set it as the current one.
   // Do this once your window is created (= after the OpenGL context creation) and before any other OpenGL call.
   glGenVertexArrays(1, &VAO);
   glBindVertexArray(VAO);

   glGenBuffers(1, &VBO1);
   glBindBuffer(GL_ARRAY_BUFFER, VBO1);
   // Allocate memory for both vertex_buffer_data1 and vertex_buffer_data2
   glBufferData(GL_ARRAY_BUFFER,
                vertex_buffer_data1.size() * sizeof(decltype(vertex_buffer_data1)::value_type)
                + vertex_buffer_data2.size() * sizeof(decltype(vertex_buffer_data2)::value_type),
                nullptr,
                GL_STATIC_DRAW); // mem is allocated here for the buffer size specified
   glBufferSubData(GL_ARRAY_BUFFER,
                   0, // offset
                   sizeof(decltype(vertex_buffer_data1)::value_type) * vertex_buffer_data1.size(),
                   &vertex_buffer_data1[0]);
   glBufferSubData(GL_ARRAY_BUFFER,
                   sizeof(decltype(vertex_buffer_data1)::value_type) * vertex_buffer_data1.size(), // offset
                   sizeof(decltype(vertex_buffer_data2)::value_type) * vertex_buffer_data2.size(),
                   &vertex_buffer_data2[0]);

	// Create an Element Buffer Object that will store the indices array:
   glGenBuffers(1, &EBO);
   glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, EBO);
	// Transfer the data from indices to EBO
   glBufferData(GL_ELEMENT_ARRAY_BUFFER,
                indices_buffer_data.size() * sizeof(unsigned int),
                &indices_buffer_data[0],
                GL_STATIC_DRAW);

   // Attribute binding
   glVertexAttribPointer(0,            // attribute - must match the layout in the shader.
                         VERTEX_SIZE,  // size = 4 for xyzw position
                         GL_FLOAT,     // vertex data type
                         GL_FALSE,     // normalized?
                         0,            // stride
                         (void *)0);   // array buffer offset
   glEnableVertexAttribArray(0);

   //std::cout << "sizeof(decltype(color_buffer_data)::value_type) = " << sizeof(decltype(color_buffer_data)::value_type) << std::endl;
   glGenBuffers(1, &VBO2); // color buffer
   glBindBuffer(GL_ARRAY_BUFFER, VBO2);
   glBufferData(GL_ARRAY_BUFFER, sizeof(decltype(color_buffer_data)::value_type) * color_buffer_data.size(), &color_buffer_data[0], GL_STATIC_DRAW); // mem is allocated here for the buffer size specified   // VBO2 is used for a color attribute per vertex
   glVertexAttribPointer(1,           // attribute - must match the layout in the shader.
                         4,           // size = 3 for rgb, = 4 for rgba
                         GL_FLOAT,    // type
                         GL_FALSE,    // normalized?
                         0,           // stride
                         (void *)0);  // array buffer offset

   glGenBuffers(1, &VBO3); // point size buffer
   glBindBuffer(GL_ARRAY_BUFFER, VBO3);
   // VBO3 is used for a point size attribute per vertex
   glBufferData(GL_ARRAY_BUFFER,
                sizeof(decltype(pointsize_buffer_data)::value_type) * pointsize_buffer_data.size(),
                &pointsize_buffer_data[0],
                GL_STATIC_DRAW); // mem is allocated here for the buffer size specified
   glVertexAttribPointer(2,            // attribute - must match the "layout" in the shader.
                         1,            // size = 1 float (depicting a point size)
                         GL_FLOAT,     // type
                         GL_FALSE,     // normalized?
                         0,            // stride
                         (void *)0);   // array buffer offset
   glEnableVertexAttribArray(2);

   glEnable(GL_PROGRAM_POINT_SIZE); // so gl_PointSize written in the vertex shader takes effect
}


The draw part of the code looks like this:


  // Loop until the user closes the window
  while (!glfwWindowShouldClose(window1))
  {
     //Note the Vertex Array Object (VAO) is bound in the initialise() function and never unbound, so
     //I don't bind it again in this while loop. Could that be a source of the error? Am I missing something?

     //some_code_to_update_color_buffer_data(); //do some processing and update
                                                //'std::vector<glm::vec4> color_buffer_data'
                                                //which holds r,g,b,a values per vertex
     // Update the (vertex) color buffer on the GPU.
     glBindBuffer(GL_ARRAY_BUFFER, VBO1); // positions, colors and point sizes share this buffer
     glBufferSubData(GL_ARRAY_BUFFER,
           vertex_buffer_data1.size() * sizeof(decltype(vertex_buffer_data1)::value_type), // offset
           color_buffer_data.size() * sizeof(decltype(color_buffer_data)::value_type),     // size
           &color_buffer_data[0]);

     glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // Clear the screen     
     glDrawElements(GL_POINTS, (GLsizei)indices_buffer_data.size(), GL_UNSIGNED_INT, NULL);

     //do some processing and update 'std::vector<float> pointsize_buffer_data'
     //which holds a float per vertex.
     // Update the (vertex) point sizes on the GPU.
     glBufferSubData(GL_ARRAY_BUFFER,
           (vertex_buffer_data1.size() * sizeof(decltype(vertex_buffer_data1)::value_type))
           + (color_buffer_data.size() * sizeof(decltype(color_buffer_data)::value_type)),     // offset
           pointsize_buffer_data.size() * sizeof(decltype(pointsize_buffer_data)::value_type), // size
           &pointsize_buffer_data[0]);

     //do some processing and update 'std::vector<glm::vec4> vertex_buffer_data1'
     //which holds x,y,z,w values
     // Update the (vertex) positions on the GPU.
     glBufferSubData(GL_ARRAY_BUFFER,
           0, // offset
           vertex_buffer_data1.size() * sizeof(decltype(vertex_buffer_data1)::value_type), // size
           &vertex_buffer_data1[0]);

     glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // Clear the screen           

     if (drawTriangleFaces == false) //a flag which the user can toggle from keyboard input
     {
           //this draw call works for GL_POINTS
           glDrawElements(GL_POINTS, (GLsizei)indices_buffer_data.size(),
                          GL_UNSIGNED_INT, nullptr);
     }
     else
     {//*************this doesn't draw anything - screen is blank*************
        //this draw call does not work for GL_TRIANGLES nor GL_TRIANGLE_STRIP
        glDrawElements(GL_TRIANGLES, (GLsizei)indices_buffer_data.size(),
                       GL_UNSIGNED_INT, nullptr);
     }

     glfwSwapBuffers(window1);
     glfwPollEvents();
  }



This is my vertex and Fragment shader code (I have them in one file).

#shader vertex
#version 450 core

in vec4 vertexPosition_modelspace;
in vec4 vertColor;
in float vertexPointSize;

uniform mat4 mvpMatrix;

out vec4 fragColor;

void main()
{
  gl_Position = mvpMatrix * vertexPosition_modelspace;

  gl_PointSize = vertexPointSize;

  fragColor = vertColor;
}


#shader fragment
#version 450 core

in vec4 fragColor;

out vec4 color;

void main()
{
  // shape the points to be round
  vec2 round = 2.0 * gl_PointCoord - 1.0;
  if (dot(round, round) > 1.0)
    discard;

  color = fragColor;
}


I need to draw the point cloud both with and without the mesh (triangle faces).

Any suggestions?


  • The triangle faces are read in from file. I have checked that they are read in correctly: I can view the triangle faces (mesh) if I load the point cloud into an app like MeshLab.

I am using a few libraries: VCG Lib to read in the point cloud, OpenGL to render, and OpenCV to do some image processing.

I am new to graphics and OpenGL. I’m using it to demonstrate a thesis. Any help appreciated. Thanks.

I am beginning to suspect that using a buffer to vary the point size is somehow not compatible with glDrawElements(GL_TRIANGLES, …).

If anyone knows for sure, please advise, especially if there is some way to vary the point size (e.g. so points closer to the camera appear bigger) from the CPU side (rather than in the shader) while also using glDrawElements(GL_TRIANGLES, …).

Note that keyboard input sets a flag which switches the draw call between glDrawElements(GL_TRIANGLES, …) and glDrawElements(GL_POINTS, …). As shared above, the glDrawElements(GL_POINTS, …) draw call works fine. I am using this conditional draw setup to compare point-based rendering to mesh-based rendering (after doing some image processing).

Change the fragment shader. gl_PointCoord is undefined when drawing primitives other than points; it’s quite likely that the discard statement is being executed for all fragments.

Other than that, rendering a point cloud as triangles isn’t likely to produce acceptable results, although it shouldn’t produce a blank screen. There are libraries which will try to guess the topology from a point cloud (surface reconstruction algorithm), and they work reasonably well provided that the sample spacing is small compared to the feature size of the surface. Search for “point cloud to mesh” or “mesh from point cloud”.
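One way to keep the round-point look for GL_POINTS while still rendering triangles correctly is to guard the discard with a uniform set from the application before each draw call. A sketch of the fragment shader (`drawingPoints` is a hypothetical uniform name, not from the original code):

```glsl
#version 450 core

in vec4 fragColor;

out vec4 color;

uniform bool drawingPoints; // true only for glDrawElements(GL_POINTS, ...)

void main()
{
  if (drawingPoints) {
    // gl_PointCoord is only defined when rasterizing point primitives,
    // so the round-point test must be skipped for triangles.
    vec2 p = 2.0 * gl_PointCoord - 1.0;
    if (dot(p, p) > 1.0)
      discard;
  }
  color = fragColor;
}
```

On the C++ side, the flag would be set with something like `glUniform1i(glGetUniformLocation(program, "drawingPoints"), drawTriangleFaces ? 0 : 1);` before the draw call.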

Thank you for that. Saved me lots of time.

I removed all reference to gl_PointSize (in code, in shaders) and it worked - glDrawElements(GL_TRIANGLES, …) drew the mesh as expected.

I didn’t find any reference to this in the online documentation though. Did I perhaps miss it somehow?

The online reference page for gl_PointCoord says:

Ah, noted. I was referring to glDrawElements() - that was a new addition to the code.

Only realised after a little head scratching that my problem was coming from point-primitive commands that are not compatible with glDrawElements(GL_TRIANGLES, …). But I would not have thought to check the fragment shader, so thanks again for that.