How to draw indexed vertices with primitive restart

We have the following code trying to draw many polygons. The point data is stored as xyxyxy…

  glGenVertexArrays(1, &vao);  // Create the container for all vbo objects
  glBindVertexArray(vao);

  // push the points up to the graphics card as one vbo of interleaved x,y pairs
  glGenBuffers(1, &vbo);
  glBindBuffer(GL_ARRAY_BUFFER, vbo);
  glBufferData(GL_ARRAY_BUFFER, numPoints * (2 * sizeof(float)),
               bml->getPoints(), GL_STATIC_DRAW);
  // Describe how the information is fed to the shader
  glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, (void*)0);  // xy into attribute 0
  glEnableVertexAttribArray(0);  // the attribute array must be enabled to be read

  // Create a buffer object for indices of lines
  uint32_t numSegments = bml->getNumSegments();
  constexpr uint32_t endIndex = 0xFFFFFFFF;
  const uint32_t numIndices = numPoints + numSegments;
  uint32_t* lineIndices = new uint32_t[numIndices];
  for (uint32_t i = 0, j = 0, c = 0; i < numSegments; i++) {
    for (uint32_t k = 0; k < bml->getSegment(i).numPoints; k++)
      lineIndices[c++] = j++;
    lineIndices[c++] = endIndex;
  }
// lineIndices = 0 1 2 3 ... 19 0xFFFFFFFF 20 21 22...  0xFFFFFFFF ....

  glGenBuffers(1, &lbo);
  glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, lbo);
  glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(GLuint) * numIndices,
               lineIndices, GL_STATIC_DRAW);
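
The draw itself (not shown above) is meant to use primitive restart; roughly it looks like this (a sketch, not the exact code):

  // Rough sketch of the intended draw path: one glDrawElements call, with
  // 0xFFFFFFFF acting as the "start a new line strip" marker.
  glEnable(GL_PRIMITIVE_RESTART);
  glPrimitiveRestartIndex(endIndex);   // endIndex = 0xFFFFFFFF, as used above
  glBindVertexArray(vao);              // the element buffer binding is part of the VAO state
  glDrawElements(GL_LINE_STRIP, numIndices, GL_UNSIGNED_INT, (void*)0);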

I am a rank beginner on shaders, so the first question is whether this shader correctly matches the code. There is no matching input for the indices, but the indices do not seem to be used in the vertex shader anyway.

#version 330 core
layout (location = 0) in vec3 aPos;   // the position variable has attribute location 0

uniform mat4 projection;
uniform vec4 solidColor;

out vec4 ourColor; // output a color to the fragment shader

// TODO: add a uniform bool to select between a per-vertex color and solidColor
void main() {
	gl_Position = projection * vec4(aPos, 1.0);
	ourColor = solidColor; // pass the uniform color through to the fragment shader
}

Second question: when nothing shows on the screen, debugging is hard. Is there any way to use old immediate-mode commands to display something, just to test out the projection? How would I turn off the shader, or specify a default shader, so I can test the matrix by itself? Something like:

glBegin(GL_QUADS);
  glColor3f(1, 0, 0);  // red
  glVertex2f(0, 0);
  glVertex2f(200, 0);
  glVertex2f(200, 200);
  glVertex2f(0, 200);
glEnd();

You don’t actually need to, because immediate mode works with shaders.

It’s a common mistake to think that it doesn’t, but both immediate mode and shaders can and will work together just fine, and it can be a useful combination for rapid prototyping or testing.

In your case the changes required would be very minimal. The first is to change your “#version 330 core” directive to just “#version 330” - this may not actually be necessary, but strictly speaking immediate mode is not available in core profiles, so better to be safe.

The second change is to use glVertexAttrib2f(0, x, y) instead of glVertex2f(x, y) - this is the immediate-mode equivalent of setting the generic vertex attribute at location 0, which is exactly what your shader reads. In theory generic attribute 0 should alias to position on most if not all drivers anyway, but when it’s just as easy to specify it explicitly, doing so removes some margin for doubt.

Otherwise, drop your VAO and buffer object setup, and that should be all that you require - you can now use the very same shader and same uniforms but with immediate mode.
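
Something like this (a rough, untested sketch, assuming a compatibility context with the same shader program and uniforms still bound; the color comes from your solidColor uniform rather than glColor):

// glVertexAttrib2f(0, x, y) stands in for glVertex2f(x, y); z defaults to 0, w to 1,
// and attribute 0 provokes the vertex, so it must come last per vertex (here it is the
// only attribute, so each call emits one vertex).
glBegin(GL_QUADS);
glVertexAttrib2f(0, 0, 0);
glVertexAttrib2f(0, 200, 0);
glVertexAttrib2f(0, 200, 200);
glVertexAttrib2f(0, 0, 200);
glEnd();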

To actually disable the shader and revert to fixed-function vertex and fragment processing, use glUseProgram(0) - do that and you can go back to the old matrix stack and glTexEnv/etc calls.
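
For example (again just a sketch; windowWidth and windowHeight are placeholders for whatever your real window size is):

// Fixed-function sanity check: no program bound, old matrix stack, pixel-style ortho.
glUseProgram(0);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, windowWidth, windowHeight, 0, -1, 1);   // top-left origin, like the canvas
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

glColor3f(1, 0, 0);   // red
glBegin(GL_QUADS);
glVertex2f(0, 0);
glVertex2f(200, 0);
glVertex2f(200, 200);
glVertex2f(0, 200);
glEnd();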

The problem is the code isn’t drawing anything, making it hard to debug.

We have x values from -179 to 179 and y values from 20 to 70.

We want to project those onto the screen, which is 1024x1024.

The viewport is glViewport(0,0,vpW,vpH);

We set the color to (1, 0, 0) and set the transform:

  Shader* shader = Shader::useShader(GLWin::COMMON_SHADER);
  shader->setVec4("solidColor", style->getFgColor());
  shader->setMat4("projection", transform * *parentCanvas->getProjection());

The entire matrix should be M * V * P, but we don’t see that anywhere.

What we can see is the matrix in the Canvas class that does the viewport.

{    value = {{{x = 0.00104166672, r = 0.00104166672, s = 0.00104166672}, {y = 0, g = 0, t = 0}, {z = 0, b = 0, p = 0}, {
            w = 0, a = 0, q = 0}}, {{x = 0, r = 0, s = 0}, {y = -0.00133333332, g = -0.00133333332, t = -0.00133333332}, {z = 0,
            b = 0, p = 0}, {w = 0, a = 0, q = 0}}, {{x = 0, r = 0, s = 0}, {y = 0, g = 0, t = 0}, {z = -1, b = -1, p = -1}, {w = 0,
            a = 0, q = 0}}, {{x = -1, r = -1, s = -1}, {y = 1, g = 1, t = 1}, {z = 0, b = 0, p = 0}, {w = 1, a = 1, q = 1}}}}

The debugger makes that hard to read.

     0.00104166672    0                0    0
     0               -0.00133333332    0    0
     0                0               -1    0
    -1                1                0    1
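
Next time a small helper would be easier than transcribing the debugger output by hand (a minimal sketch; note that glm stores matrices column-major, so element access is m[column][row]):

#include <cstdio>
#include <glm/glm.hpp>

// Print a glm::mat4 one row per line (glm is column-major, hence m[col][row]).
void printMat4(const glm::mat4& m) {
  for (int row = 0; row < 4; ++row)
    printf("%14.8f %14.8f %14.8f %14.8f\n",
           m[0][row], m[1][row], m[2][row], m[3][row]);
}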

We were trying:

    transform = glm::translate(transform, glm::vec3(-bounds.xMin, 0, 0));
    transform = glm::scale(transform, glm::vec3(20, 20, 1));

    transform * *parentCanvas->getProjection()

Can you explain what the transform should be in order to make the image fill the viewport, and, more importantly, how to view the matrices and the transformation so we can debug this kind of situation next time?

Where did you get all of this code then?

Especially when learning, start with something that works and add things to it until it “breaks”. Then you’ll have some idea what the cause is and you can investigate that specifically.

Alternatively, cut your program back until it works. However, if you’re new at this and learning the domain, you run some risk that you’ll never get it working because something is fundamentally wrong with your approach at the lowest level.

So I’d recommend putting this code aside for a minute. Grab a simple OpenGL program that does draw something in the window with a GLSL shader program. And gradually “enhance” it with your additions above. When it breaks, you’ll know what you did to cause the problem. You can research that. And if you fail to find a reasonable explanation, that’ll give you a targeted question to ask here on these forums.

We finally figured out the coordinate problems, but I’m still looking for techniques to debug when something goes wrong. I wrote all the code with a team of students. The canvas class is designed to draw pages in web-style integer coordinates (origin at the top left). It defines a viewport and uses glm::ortho as follows:

projection = glm::ortho(0, w, h, 0);

What we wanted to do was to transform into longitude (-180…180) and latitude (in this case 20…90).

glm::mat4 t(1.0);
t = glm::translate(t, glm::vec3(180, 0, 0));
t = glm::scale(t, glm::vec3(2, 2, 1));
shader->setMat4("projection", *parentCanvas->getProjection() * t);

When it failed, I didn’t know whether the order was wrong or whether I should have pre-multiplied, and what I wanted was a way to see what was happening to the points.
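
In hindsight, pushing a known point through the exact matrix the shader receives would have shown the problem immediately (a minimal sketch; after the full transform the point should land inside the -1…1 clip box):

#include <cstdio>
#include <glm/glm.hpp>

// Sketch: apply the same matrix the shader's "projection" uniform gets to one known point.
// If the result is outside -1..1 in x or y, nothing will appear on screen.
void checkPoint(const glm::mat4& shaderProjection, float lon, float lat) {
  glm::vec4 clip = shaderProjection * glm::vec4(lon, lat, 0.0f, 1.0f);
  printf("(%g, %g) -> clip (%g, %g, %g, %g)\n", lon, lat, clip.x, clip.y, clip.z, clip.w);
}

// e.g. checkPoint(*parentCanvas->getProjection() * t, -179.0f, 20.0f);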

In the end, we set the projection ourselves:

double ySize = parent->getHeight();
double xSize = parent->getWidth();
double shiftX = -bounds.xMin * xSize / (bounds.xMax - bounds.xMin);
double shiftY = ySize + (bounds.yMin * ySize / (bounds.yMax - bounds.yMin));
double scaleX = xSize / (bounds.xMax - bounds.xMin);
double scaleY = -ySize / (bounds.yMax - bounds.yMin);
transform = glm::mat4(1.0f);
transform = glm::translate(transform, glm::vec3(shiftX, shiftY, 0));
transform = glm::scale(transform, glm::vec3(scaleX, scaleY, 1));  // keep z scale at 1 so the matrix stays invertible
shader->setMat4("projection", *parentCanvas->getProjection() * transform);

But I don’t understand why the projection in the Canvas, the output of glm::ortho, does not already account for the pixel sizes. Here we are doing that scaling ourselves.

Incidentally, we have an interesting performance twist: the entire GIS file is designed to be loaded in a single read, so we can load 39 MB in something like 24 ms. We are evaluating algorithms to tessellate rapidly, and might precompute the tessellations and store the indices of the polygons to cut, if that can be done without too much space. The help with primitive restart was great, thanks!

A related issue, which we are still struggling with, is taking a mouse click, finding the corresponding point on the map, and centering on that spot.

I see glm::unProject and will try, but it would help to understand how the projection is being built.
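
From the documentation, something like this should map a click back to map coordinates (a sketch, not yet tried; the names are placeholders, and glm::unProject expects window coordinates with a bottom-left origin, so the mouse y has to be flipped):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Sketch (untested): mouse click -> map (lon/lat) coordinates.
// canvasProjection and transform are the same matrices whose product is
// uploaded as the "projection" uniform; vpW/vpH match glViewport(0, 0, vpW, vpH).
glm::vec2 clickToMap(double mouseX, double mouseY,
                     const glm::mat4& canvasProjection, const glm::mat4& transform,
                     int vpW, int vpH) {
  glm::vec3 win(mouseX, vpH - mouseY, 0.0);   // flip y: window origin is bottom-left
  glm::vec4 viewport(0, 0, vpW, vpH);
  glm::vec3 obj = glm::unProject(win, transform, canvasProjection, viewport);
  return glm::vec2(obj.x, obj.y);             // longitude, latitude
}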

That canvas projection transform is somewhat contorted (which is probably why you’re tripping over it).

First, in general a PROJECTION transform has absolutely nothing to do with pixels – at all. It selects the portion of WORLD-SPACE that gets mapped into the CLIP-SPACE clipping box. We’re not even talking about pixels here.

After the PROJECTION transform is applied to your coordinates, and clipping is performed, and the perspective divide is performed, then your visible scene is in the NDC cube (-1…1). Still no mention of pixels here.

Once we have that, then the VIEWPORT transform is applied to it. This is what scales and shifts your NDC cube coordinates into pixel coordinates. Now we have pixels.
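
In code terms, the viewport transform amounts to this (a sketch of what the hardware does, assuming glViewport(vx, vy, vw, vh) and the default glDepthRange(0, 1)):

#include <glm/glm.hpp>

// NDC in -1..1 becomes window coordinates in pixels.
glm::vec3 ndcToWindow(glm::vec3 ndc, float vx, float vy, float vw, float vh) {
  return glm::vec3(vx + (ndc.x * 0.5f + 0.5f) * vw,
                   vy + (ndc.y * 0.5f + 0.5f) * vh,   // window origin is bottom-left
                   ndc.z * 0.5f + 0.5f);
}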

Now let’s rewind…

This glm::ortho(0, w, h, 0), if that’s really what’s used, presumes that your WORLD-SPACE coordinates lie in the range X=0…w and Y=h…0. So if w and h are supposedly in “pixels”, then this whole thing presumes that your input WORLD-SPACE coordinates are in pixels. That’s a bit weird. If they’re not, it forces you to do contorted math to push your WORLD-SPACE into pixels just to use this PROJECTION transform. Moreover, it makes assumptions about what the VIEWPORT transform is actually set to in order for the result to map properly to real screen-space pixels. Also kinda weird.

In short, that canvas glm::ortho setting makes the assumption that your app is computing pixel coordinates directly and feeding that into the pipe. If you’re not, it’s just in the way.

Better approach: get rid of this oddball PROJECTION matrix setting and use the PROJECTION matrix and VIEWPORT transform that make more sense for your problem.
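
For instance, something along these lines (a sketch only; the bounds and viewport names are placeholders for whatever your app uses):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
// plus your GL loader header for glViewport

// Sketch of the more direct setup: the PROJECTION maps map-space (lon/lat)
// straight into the clip box, and the VIEWPORT maps NDC to pixels, so there is
// no pixel math in application code.
glm::mat4 mapProjection(float lonMin, float lonMax, float latMin, float latMax,
                        int vpW, int vpH) {
  glViewport(0, 0, vpW, vpH);          // NDC -> pixels
  return glm::ortho(lonMin, lonMax,    // left, right (world units)
                    latMin, latMax,    // bottom, top (world units)
                    -1.0f, 1.0f);      // near, far
}

// e.g. shader->setMat4("projection", mapProjection(-180, 180, 20, 70, vpW, vpH));
// with no extra translate/scale needed afterwards.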