Help with Vector drawing algorithm

I am working on a little vector drawing app - and by “vector” I don’t really mean vector in the OpenGL sense, but in the sense that the app displays vector data (i.e., even though the points are 3D, there are no quads, triangles, or other closed primitives - only points and lines).

What I am trying to figure out is the most efficient way of displaying both points and lines.

The data is coming from a proprietary file format that is super simple - it is nothing but a list of points, and each point has a color and a flag which specifies whether to draw a line from the previous pt. That’s it. Super Simple.

In this application, there will be a user option to turn on “points” so that not only are the lines drawn, but points are also drawn at each vertex that is connected to a visible line.

I have this working perfectly - using immediate mode and a very simple algorithm - here is the pseudo code:

glEnable(GL_POINT_SMOOTH);  // I like them round!
for (i = 0; i < frame.points.length; i++) {
    pt = frame.points[i];
    if (pt.blanking) {
        moveto(pt.x, pt.y, pt.z);   // move without drawing
    } else {
        lineto(pt.x, pt.y, pt.z);   // draw a line from the previous point
        if (showPoints) {
            glVertex(pt.x, pt.y, pt.z);   // draw a point at this vertex
        }
    }
}
*Note: moveto() and lineto() are macro-like wrappers that bundle a glBegin(mode); glVertex(); glEnd(); sequence together. My little app is implemented in Max/MSP Jitter - which, for those not familiar, is a super simple way to get an OpenGL renderer up and running.

The problem is that this seems like a very inefficient way of showing points - it doubles the number of vertices. It doesn’t just feel inefficient: I notice a sharp drop in performance when showing points with this technique. Drawing an image with, say, 5000 vertices and not showing points (just lines), I get buttery smooth fps (> 60) - it has no problem drawing at whatever rate my drawing loop is set to. However, whenever I turn on showPoints, the fps drops in half, sometimes even down to < 5 fps if there are a large number of points.

Here is a screen shot of a sample file being rendered with points.

Why is there not a mode like GL_POINTS_AND_LINES?

Any help would be much appreciated…

You’re basically drawing stuff in the slowest possible way. If you’re totally invested in using glBegin/End, then your best bet is this algorithm:

1: glBegin(GL_LINES)
2: Loop over the lines and use glVertex/glColor to draw them.
3: glEnd()
4: If you're drawing points:
  A: glBegin(GL_POINTS)
  B: Loop over all the lines, and use glVertex/glColor to draw the appropriate points for them.
  C: glEnd()

That will at least get past the massive quantity of glBegin work you’re doing.
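For illustration, here is what that two-glBegin structure looks like in C. This is a sketch, not Jitter code: the gl*_ functions are no-op stand-ins for the real GL calls (instrumented with counters so you can see that only two begin/end pairs are issued no matter how many vertices there are), and the Point fields are assumptions matching the pseudocode above.

```c
#include <assert.h>

/* No-op stand-ins for the real GL calls (hence the trailing underscore),
   with counters so the number of begin/end pairs is visible. */
static int beginCalls = 0, vertexCalls = 0;
enum { GL_LINES_, GL_POINTS_ };
static void glBegin_(int mode)  { (void)mode; beginCalls++; }
static void glEnd_(void)        { }
static void glVertex3f_(float x, float y, float z) { (void)x; (void)y; (void)z; vertexCalls++; }

typedef struct { float x, y, z; int blanking; } Point;

/* All lines inside one glBegin(GL_LINES) pair, then (optionally) all
   points inside one glBegin(GL_POINTS) pair: two begin/end pairs total,
   instead of one per vertex. */
static void drawFrame(const Point *pts, int n, int showPoints) {
    glBegin_(GL_LINES_);
    for (int i = 1; i < n; i++) {
        if (!pts[i].blanking) {                 /* visible segment pts[i-1] -> pts[i] */
            glVertex3f_(pts[i-1].x, pts[i-1].y, pts[i-1].z);
            glVertex3f_(pts[i].x,   pts[i].y,   pts[i].z);
        }
    }
    glEnd_();

    if (showPoints) {
        glBegin_(GL_POINTS_);
        for (int i = 0; i < n; i++) {
            int endsSegment   = (i > 0)     && !pts[i].blanking;
            int startsSegment = (i + 1 < n) && !pts[i+1].blanking;
            if (endsSegment || startsSegment)   /* vertex touches a visible line */
                glVertex3f_(pts[i].x, pts[i].y, pts[i].z);
        }
        glEnd_();
    }
}
```

With 5000 vertices this issues two glBegin/glEnd pairs instead of thousands, which is where most of the cost of the per-vertex wrapper approach goes.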

If you want more performance, you should be looking into storing your vertex data in buffer objects. The utility of this depends on how frequently you modify your data, and how good your buffer streaming is.

Why is there not a mode like GL_POINTS_AND_LINES?

How would OpenGL know which vertex is a point and which one is a line?

I figured I was doing things the slowest way possible - but I usually live by the rule of get it to work - then optimize, optimize, optimize. It is working now. It is just quite slow when I turn on show points.

If you’re totally invested in using glBegin/End, then your best bet is this algorithm:

1: glBegin(GL_LINES)
2: Loop over the lines and use glVertex/glColor to draw them.
3: glEnd()
4: If you’re drawing points:
A: glBegin(GL_POINTS)
B: Loop over all the lines, and use glVertex/glColor to draw the appropriate points for them.

It’s not quite that simple, because any arbitrary point (or points) in the total set can be a blanking point, at which line drawing stops - lines are not drawn from blanking points. So the algorithm has to be broken up somehow to allow for these arbitrary discontinuities.

So having a glBegin/glEnd at every point is not efficient, but it makes the algorithm quite a bit simpler.

That will at least get past the massive quantity of glBegin work you’re doing.

Being quite new to OpenGL, I have no intuition for how glBegin/glEnd groups affect performance. I am not seeing a big performance issue unless I turn on the point-drawing code.

How would OpenGL know which vertex is a point and which one is a line?

A very simple (and I would argue intuitive) algorithm would be to draw any vertex that has a line connected to it. Again, in my data set, there are points (sometimes many in a row) which do not have lines attached to them.

If you want more performance, you should be looking into storing your vertex data in buffer objects. The utility of this depends on how frequently you modify your data, and how good your buffer streaming is.

The data is certainly changing - the rate at which the data is changing is controllable by the user, i.e., each data set represents a “frame” of a vector animation. The number of points per frame is limited (due to physical constraints of the system that is being simulated) to less than about 5000 points per frame - and usually much much less than that… less than 1000 or so.

I have started reading about VBOs; however, one of the intimidating issues I have come across is that there seems to be no “easy” way of drawing points. It seems you have to implement them somehow in a shader - and though I have seen examples of several ways to do this, none of them seem at all straightforward to someone just diving into OpenGL. There also seems to be no generic way of drawing points via the VBO route - though, again, I am still learning about all of this.

Also, to go down the VBO path, it seems I will have to break up my gigantic set of points into multiple arrays, with each array being a set of points that have lines between them. So the algorithm becomes a bit more complex: I iterate over all the points to break the set up into multiple continuous runs, save these arrays, assign them to a VBO list - and then figure out how to also draw points at each vertex. Definitely doable… but I started simple first.

Yes, it is that simple.

You haven’t explained your algorithm very well, and your code is missing critical information for me to divine it. Specifically, I don’t know exactly what “moveto” and “lineto” do in terms of OpenGL calls. You gave a general gist, and the names are suggestive of certain behavior, but I simply can’t say for certain exactly how your code is supposed to work.

So I’m going to explain what I think your algorithm is, then I’ll explain how to turn this into an algorithm that works with exactly two glBegin calls. Without any changes to your data structures. I will also make a distinction between a “vertex” (a position you send to OpenGL with glVertex) and a “point” (rendering a position using GL_POINTS).

It seems to me that you are drawing sequences of line strips, and you define these line strips by a list of vertices. A particular vertex in that list can represent the start of a new line strip; you seem to call these “blanking” vertices. A blanking vertex only connects to the next vertex in the sequence, not to the previous one, while a non-blanking vertex connects to both the previous and the next.

When rendering points along with lines, you simply draw a point on the non-blanking vertices.

On the assumption that this is your algorithm, then the algorithm is quite trivial. Instead of rendering with GL_LINE_STRIP, render with GL_LINES, and send each vertex once for every visible segment it touches. An interior vertex of a strip is sent with two glVertex calls; yes, you send the same vertex twice, because it represents both the end of the previous line and the start of the next (you should call glColor before the glVertex calls, in order to match your current code). A vertex that touches only one segment - a blanking vertex, the vertex just before one, or the first or last vertex of the sequence - is sent once, and a blanking vertex immediately followed by another blanking vertex is not sent at all.

That’s the “Loop over the lines and use glVertex/glColor to draw them” part. To draw the points, loop over all the vertices and draw a point for every vertex that touches a visible segment.
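One way to make the counting explicit, including the end-of-a-strip cases: a vertex is sent once for each visible segment it touches. A small helper, with struct and field names assumed from the earlier pseudocode:

```c
#include <assert.h>

typedef struct { float x, y, z; int blanking; } Point;

/* How many times vertex i must be sent when encoding the whole path as
   GL_LINES: once per visible segment it touches.  Interior vertices of a
   strip return 2, strip endpoints return 1, and a vertex surrounded by
   blanking returns 0 (no point is drawn there either). */
static int emitCount(const Point *pts, int n, int i) {
    int segFromPrev = (i > 0)     && !pts[i].blanking;    /* pts[i-1] -> pts[i] visible */
    int segToNext   = (i + 1 < n) && !pts[i+1].blanking;  /* pts[i] -> pts[i+1] visible */
    return segFromPrev + segToNext;
}
```

The same test (emitCount(...) > 0) decides whether a point should be drawn at that vertex.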

Alternatively, just use a data structure that actually stores the lines and points, rather than trying to interpret a point field as some sequence of lines based on whether it is a “blanking” point or not. If you need to present a different interface based on this notion of “blanking” points, then you should translate those “blanking” points into sets of points and lines.

And that is as expected. Turning on point rendering effectively doubles the number of glBegin calls you have.

OpenGL ultimately has to serve many needs, not those of any one user. That’s why it doesn’t do things in a way that is convenient for you. It does things in a general way, and each user matches their data structures and algorithms to that.

There are many ways in which you could do this. Any remotely-efficient solution will forego glBegin/glEnd in favour of vertex arrays (glDrawArrays or glDrawElements). You will need at least two draw calls: one for the lines, one for the points. The points are straightforward: either a single glDrawArrays() call for the entire vertex array (if all points are drawn) or a single glDrawElements() call (if some of the points are omitted).

For the lines, the solutions include:

- glDrawElements(GL_LINES), with interior points (those which have a line segment on both sides) sent twice (they don’t have to be duplicated within the vertex array; only the index needs to be duplicated within the element array). Requires OpenGL 1.1.
- glMultiDrawElements(GL_LINE_STRIP). Requires OpenGL 1.4.
- glDrawElements(GL_LINE_STRIP) with primitive restart (glEnable(GL_PRIMITIVE_RESTART) and glPrimitiveRestartIndex()). Requires OpenGL 3.1.
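A sketch of the first option (glDrawElements(GL_LINES) plus a separate point pass), assuming the point/blanking layout described earlier. Only the index arrays are built on the CPU; the vertex array itself is used as-is:

```c
#include <assert.h>

typedef struct { float x, y, z; int blanking; } Point;

/* Element array for the lines: two indices per visible segment.  Shared
   interior vertices are duplicated only in the index array, not in the
   vertex array.  Returns the number of indices written. */
static int buildLineIndices(const Point *pts, int n, unsigned int *idx) {
    int k = 0;
    for (int i = 1; i < n; i++) {
        if (!pts[i].blanking) {       /* visible segment pts[i-1] -> pts[i] */
            idx[k++] = (unsigned int)(i - 1);
            idx[k++] = (unsigned int)i;
        }
    }
    return k;
}

/* Element array for the points: one index per vertex that touches at
   least one visible segment. */
static int buildPointIndices(const Point *pts, int n, unsigned int *idx) {
    int k = 0;
    for (int i = 0; i < n; i++) {
        int touchesLine = ((i > 0)     && !pts[i].blanking) ||
                          ((i + 1 < n) && !pts[i+1].blanking);
        if (touchesLine)
            idx[k++] = (unsigned int)i;
    }
    return k;
}

/* Drawing then collapses to two calls (after glVertexPointer etc.):
   glDrawElements(GL_LINES,  lineCount,  GL_UNSIGNED_INT, lineIdx);
   glDrawElements(GL_POINTS, pointCount, GL_UNSIGNED_INT, pointIdx);  */
```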

Modern hardware is optimised for a relatively small number of operations or “batches”, each of which draws many primitives. For the legacy API, each glBegin/glEnd pair constitutes an operation. For the modern API, each glDraw* call constitutes an operation. The setup time for each operation is quite substantial compared to the rate at which the hardware can render primitives once the operation is underway (the rendering can be parallelised across the thousands of arithmetic units found on modern GPUs; the setup can’t).

You can retain the structure of your existing algorithm, but rather than calling glBegin/glVertex/glEnd as you proceed, you would just append indices to the element arrays. Once all that is done, you execute a small number of OpenGL calls to draw everything.

Also, you state that the vertex data changes each frame. But is it only the vertex positions which change, or also the topology (which points are connected by line segments)? If the topology remains fixed from one frame to the next, you don’t need to rebuild the element arrays each frame; you can just update the vertex array and re-use the existing element arrays.

glBegin/glEnd versus client-side vertex arrays versus VBOs are orthogonal to the use of the fixed-function pipeline versus shaders.

You can use VBOs with the fixed-function pipeline or glBegin/glEnd with shaders.

Even client-side arrays will be much faster than glBegin/glEnd. VBOs will have the advantage of not sending the vertex data twice (once for points, once for lines), but given the relatively small amount of data (5000 vertices) and the fact that it changes each frame, the difference between client-side arrays and VBOs will be less significant than the switch from glBegin/glEnd to vertex arrays.

You can use the vertex data as-is. You just need to construct an element array for the points and another for the lines or line strips.

All very good info… lots to digest.

One last question - which is not really related to the drawing algorithm, but is related to what I am trying to accomplish… Is it possible to get 2d point vertices - i.e. the x & y values after going through the clip and projection matrices? I’ve got a set of 3D points - I can display them using OpenGL. I am allowing the user to rotate the world around using the mouse… now I need a way to take these points after being projected onto the 2D plane and pass them to my thing that can only display 2D points. Is this possible, or should I do all of this projection math separately for the 2D output?

One thing to point out is that the term “blanking” vertex or point is not something I made up. Most hardware vector display devices - be it a CRT or, in this case, a laser projector - draw lines by moving a physical “thing” from one point to the next (with CRTs that would be electrons; with lasers, photons), and sometimes this thing needs to be moved to a point without drawing a line. This is called “blanking”. In laser projectors, x & y coordinates are drawn using mirrors attached to galvanometers; sometimes you want to move the mirrors to a specific point without the laser being on - this is “blanking”. Also, getting from point a to point b may be physically stressful for these galvos, so to reduce this stress, intermediate points are created along a path to the destination point, which serve as an aid to guide the galvos there. The physics behind determining what is stressful (i.e., does moving from point a to point b exceed a certain angular velocity of the mirror?) is quite interesting - but not at all related to OpenGL.

The real world sure is a lot different than the simulated world!

The legacy OpenGL API has feedback mode (glRenderMode(GL_FEEDBACK)), while the modern API has transform-feedback mode (glBeginTransformFeedback etc).

But you might be better off ignoring those and doing it yourself. Retrieving data from the implementation may well be slower than simply performing the calculations in the application. Also, I wouldn’t be surprised if glRenderMode() is implemented as a software fallback with modern hardware.
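Doing it yourself amounts to redoing the fixed-function transform on the CPU: multiply each point by the combined modelview-projection matrix, divide by w, and apply the viewport mapping. A sketch (assuming, as OpenGL does, a column-major 4x4 matrix):

```c
#include <assert.h>
#include <math.h>

typedef struct { float x, y; } Point2D;

/* Mimics what the pipeline does to produce window coordinates:
   clip = MVP * (x, y, z, 1); NDC = clip.xy / clip.w; then map NDC from
   [-1, 1] into the viewport rectangle.  Note that, like OpenGL, this
   puts y = 0 at the bottom of the viewport. */
static Point2D projectPoint(const float mvp[16], float x, float y, float z,
                            float vpX, float vpY, float vpW, float vpH) {
    /* column-major multiply; only the x, y and w rows are needed */
    float cx = mvp[0]*x + mvp[4]*y + mvp[8]*z  + mvp[12];
    float cy = mvp[1]*x + mvp[5]*y + mvp[9]*z  + mvp[13];
    float cw = mvp[3]*x + mvp[7]*y + mvp[11]*z + mvp[15];
    Point2D p;
    p.x = vpX + (cx / cw * 0.5f + 0.5f) * vpW;
    p.y = vpY + (cy / cw * 0.5f + 0.5f) * vpH;
    return p;
}
```

The matrices are the same modelview and projection you hand to the renderer, multiplied together once per frame.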

Another solution has since occurred to me, which more closely mimics the behaviour of a laser projector: Draw the entire set of vertices as a single line strip, and just change the colour (specifically, the alpha component) to simulate the laser being on or off. This requires blending to be enabled so that the alpha component is used (glEnable(GL_BLEND) and glBlendFunc()), and you also need glShadeModel(GL_FLAT) so that the line colour for each segment is taken from one of its vertices (specifically, the end vertex) rather than being interpolated between the colours of the two vertices.

This allows the lines to be drawn using glDrawArrays, so there is no need to construct an element (index) array.
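A sketch of the colour-array side of this, assuming per-point r/g/b plus the blanking flag. With GL_FLAT shading a line segment takes its colour from its end vertex, so giving every blanking vertex alpha 0 turns off exactly the segment leading into it:

```c
#include <assert.h>

typedef struct { float x, y, z, r, g, b; int blanking; } Point;

/* Fill an interleaved RGBA array (4 floats per vertex): alpha 0 on a
   blanking vertex makes the segment that ends at it fully transparent. */
static void fillColors(const Point *pts, int n, float *rgba) {
    for (int i = 0; i < n; i++) {
        rgba[i*4 + 0] = pts[i].r;
        rgba[i*4 + 1] = pts[i].g;
        rgba[i*4 + 2] = pts[i].b;
        rgba[i*4 + 3] = pts[i].blanking ? 0.0f : 1.0f;
    }
}

/* State and draw call, roughly:
   glShadeModel(GL_FLAT);                          // colour from end vertex
   glEnable(GL_BLEND);
   glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
   glColorPointer(4, GL_FLOAT, 0, rgba);
   glDrawArrays(GL_LINE_STRIP, 0, n);              // one call, no indices */
```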

Whether drawing “off” line segments as transparent is more efficient than simply not drawing them depends upon the proportion of the path (both in terms of the number of vertices and the combined length of the segments) which is off. If the path is mostly “on”, then avoiding the need to provide an element array would outweigh the time spent drawing some invisible lines.

If desired, “off” lines need not be completely invisible, but could be drawn fainter and/or in a different colour.