Update VBO from thread (Qt)

Hi everyone,

I have a render class in my project where I display a 2D grid, and on a different thread I receive 2240*4096 "colors" every second.

My goal is to fill a "VBO_COLORS" buffer from the processing thread in order to change the colors of my grid.
I also need to update this VBO every second.

Do you have any suggestions or a recommended method? Thank you

Receiving 2240x4096 colours on a different thread and processing them on the CPU is quite intensive, but the way you would do this is: create the VBO with glNamedBufferStorage and make sure you pass GL_DYNAMIC_STORAGE_BIT, or, if you are using glNamedBufferData or equivalent, GL_DYNAMIC_DRAW. To update this buffer there are two methods. The first is to map the buffer using glMapNamedBuffer / glMapNamedBufferRange: this gives you a pointer to memory that is mapped to the GPU, so you fill in the data as usual (memcpy/memmove, or just write the values there directly), and when you are ready to hand it to the GPU, call glUnmapNamedBuffer. The other method is glNamedBufferSubData. Now, I am warning you: having 2240x4096 colours, updating them into a VBO, and processing them on the CPU is very intensive. You would be better off using a 2240x4096 texture, and perhaps, instead of having a separate thread, you could use a shader to handle that processing. Just a suggestion, though; I don't know the specific requirements.
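For reference, here is a minimal sketch of that approach, assuming a GL 4.5 context with direct state access; the buffer handle, the RGBA-float layout, and newColors are illustrative names, not from your project:

// Allocate immutable storage once; GL_DYNAMIC_STORAGE_BIT allows SubData updates,
// GL_MAP_WRITE_BIT is required to map an immutable store for writing.
GLuint vboColors;
glCreateBuffers(1, &vboColors);
glNamedBufferStorage(vboColors, 2240 * 4096 * 4 * sizeof(float), nullptr,
                     GL_DYNAMIC_STORAGE_BIT | GL_MAP_WRITE_BIT);

// Each update: map, write the new colours, unmap to hand the data to the GPU.
void* ptr = glMapNamedBuffer(vboColors, GL_WRITE_ONLY);
if (ptr)
{
    memcpy(ptr, newColors.data(), newColors.size() * sizeof(float));
    glUnmapNamedBuffer(vboColors);
}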


Thank you for your help, TooGood.

  1. You're right: while using glMapBuffer I noticed that my app wasn't able to run.
     So here's what I did:
  • Processing thread: stores my 2240*4096 colors in a std::vector, which is passed to a shared concurrentQueue => every second, when the std::vector reaches its maximum size, the concurrentQueue receives the colors, now available to the OpenGL class
  • OpenGL class: a timer (1 s) calls the method updateVBOColours => every second we check the concurrentQueue to get the new colors into VBO_COLORS
  2. You mentioned glMapBuffer and glBufferSubData, but as I understand it you should only use them if you don't have that much data to update. In my case I thought it was better to reduce the updates to one batch per second, so I don't use glBufferSubData. glMapBuffer would be a good idea (using the pointer as you said), but the version I'm using doesn't give me access to this function, and to be fair I don't trust the way I used glMapBuffer. So here's what I did:

    glBindBuffer(GL_ARRAY_BUFFER, VBO_COLORS);
    glBufferData(GL_ARRAY_BUFFER, MyStdVectorStoredInConcurrentQueue.size() * sizeof(float), MyStdVector.data(), GL_DYNAMIC_DRAW);

  3. I'm interested in using the texture and the shader correctly, but at the moment I have zero knowledge of them. Also, I receive some data to process on my other thread to get the colours, which is why I thought it would be a good idea to transfer my colours into a VBO and hand that to a shader to associate the colours with the right cells faster, isn't it? Concerning the texture, I should look for some explanations on how to set it up properly, or at least a "pattern".

Thanks again

The way glBufferData works is essentially like malloc or the new keyword: it deallocates and then allocates memory, which is why you shouldn't use glBufferData for per-update uploads. I checked docs.gl, and glMapBuffer is available since OpenGL 2.0; glBufferSubData has also been available since OpenGL 2.0. These functions don't reallocate memory, they act on the existing storage, which is why they're quicker. If using these is still slow, we can look into parallelizing the workload more, either on the CPU or the GPU. Also, you mentioned you were using glMapBuffer wrong; I would be happy to help if you showed me what you were doing, and maybe I can give advice.
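To make that concrete, here is a sketch with the non-DSA equivalents (which is what your code uses); maxSizeInBytes and colors are placeholders:

// At startup: allocate once, with no initial data.
glBindBuffer(GL_ARRAY_BUFFER, VBO_COLORS);
glBufferData(GL_ARRAY_BUFFER, maxSizeInBytes, nullptr, GL_DYNAMIC_DRAW);

// Every update: overwrite the existing storage; no reallocation happens here.
glBindBuffer(GL_ARRAY_BUFFER, VBO_COLORS);
glBufferSubData(GL_ARRAY_BUFFER, 0, colors.size() * sizeof(float), colors.data());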


I also forgot to explain

as I understand it you should only use them if you don't have that much data to update

This is completely false; in fact it's quite the opposite. If you have quite a large vertex buffer, let's say 100 MB for example, every time you call glBufferData you are allocating memory and then filling it with data, whilst with mapping and sub-data you are just filling the memory with data, no allocations required. Think of it like calling new and delete every time you want to update memory in C++: why would you call new and delete when you can directly edit the existing memory with different values?

You are correct that glBufferData creates a fresh buffer, whereas glBufferSubData replaces the contents of an existing buffer.
However, in the case of very frequent updates to a buffer, the performance characteristics of these operations become tricky: the GPU is using the buffer contents for rendering after all, so if you replace them that introduces the potential of having to synchronize with the GPU to ensure it is not trying to access the buffer contents you are about to replace. Short version: it's complicated; for a lot of detail about this, see the wiki page on Buffer Object Streaming.
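For example, one technique from that page, "orphaning", avoids the stall by re-specifying the store before each rewrite, so the driver can hand you fresh memory while the GPU finishes reading the old contents. A sketch, reusing the buffer names from earlier in the thread:

glBindBuffer(GL_ARRAY_BUFFER, VBO_COLORS);
// Same size and usage as the original allocation, but a null pointer:
// this "orphans" the old store rather than waiting for the GPU to release it.
glBufferData(GL_ARRAY_BUFFER, sizeInBytes, nullptr, GL_DYNAMIC_DRAW);
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeInBytes, colors.data());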


Ok, I got it now! I see why glBufferData is not suited to a real-time app.

So, if I'm not lost, before starting on how to implement the texture, here's how I use glMapBuffer:
(Remark: glMapBuffer was missing with Qt Creator 4.8 and my OpenGL context, so I had to find a way to use it.)

  1. Declaration in helloglwidget.h:

#include <QOpenGLWidget>
#include <QOpenGLFunctions>

typedef void* (*PFNGLMAPBUFFERPROC) (GLenum target, GLenum access);
PFNGLMAPBUFFERPROC glMapBuffer;
  2. Initialization in helloglwidget.cpp:

void helloglwidget::initializeGL()
{
    initializeOpenGLFunctions();
    glMapBuffer = reinterpret_cast<PFNGLMAPBUFFERPROC>(QOpenGLContext::currentContext()->getProcAddress("glMapBuffer"));
    /* ... others ... */
    const size_t max = 2240 * 4096 * sizeof(float);
    glGenBuffers(1, &VBO_COLORS);
    glBindBuffer(GL_ARRAY_BUFFER, VBO_COLORS);
    glBufferData(GL_ARRAY_BUFFER, max, nullptr, GL_DYNAMIC_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}
  3. Usage in the update method (connected to a 1-second timer) in helloglwidget.cpp:

void helloglwidget::updateVBOColors()
{
    std::vector<float> colors;
    while (colorsqueue.try_dequeue(colors))
    {
        glBindBuffer(GL_ARRAY_BUFFER, VBO_COLORS);
        void* ptr = glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY);
        if (ptr)
        {
            memcpy(ptr, colors.data(), colors.size() * sizeof(float));
            glUnmapBuffer(GL_ARRAY_BUFFER);
        }
    }
    update(); // schedules a repaint, which calls paintGL
}

If you all think it's okay: do you know how I can implement a 2D polar grid with a texture? Many thanks

the GPU is using the buffer contents for rendering after all, so if you replace them that introduces the potential of having to synchronize with the GPU to ensure it is not trying to access the buffer contents you are about to replace.

Right, but instead of synchronizing with the GPU, you have introduced more workload. I had a synchronization problem myself, and the fix is as simple as introducing something called double/triple buffering, which I am sure you are aware of. It is a very simple solution to implement, and the article you linked even talks about it:

This solution is fairly simple. You simply create two or more buffer objects of the same length. While you are using one buffer object, you can be modifying another.
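A minimal sketch of that idea, with illustrative names; two equally sized VBOs whose roles swap every update:

GLuint vbo[2];     // both allocated up front with the same size
int writeIndex = 0;

// Write into the buffer the GPU is NOT currently drawing from.
glBindBuffer(GL_ARRAY_BUFFER, vbo[writeIndex]);
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeInBytes, colors.data());

// Draw from the other one, then swap roles for the next update.
glBindBuffer(GL_ARRAY_BUFFER, vbo[1 - writeIndex]);
// ... set attribute pointers and draw ...
writeIndex = 1 - writeIndex;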

Yes, this is the correct usage of mapping. If I was being slightly pedantic I would perhaps move the data into the mapping instead of copying it, but that's not really important.

do you know how I can implement a polar grid 2D with the texture ? Many thanks

I am not very familiar with the polar coordinate system, which is what I think you are talking about (I am still in sixth form, or high school for Americans), but from what I understand it is a point on a plane described by a distance from an arbitrary origin and an angle from an arbitrary direction.

Now, from my understanding, what I would do is this: the texture has 2D coordinates, each cell in the grid will have a coordinate, and that can be converted from Cartesian coordinates to polar. This article talks about this: Polar coordinate system - Wikipedia.
If needed you can also convert the 2D coordinates to world space before you do this conversion; all you need is the inverse of the view and projection matrices. And since the grid is in 2D, you can assume a depth of 1.0 for all points.
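With Qt's math types that inverse transform could look roughly like this (a sketch; pMatrix/vMatrix are the projection and view matrices the code later in this thread uses, and ndcX/ndcY are assumed to be in -1..1):

// Undo projection * view to take a normalized-device-coordinate point to world space.
QMatrix4x4 invPV = (pMatrix * vMatrix).inverted();
QVector4D ndc(ndcX, ndcY, 1.0f, 1.0f); // depth assumed 1.0 since the grid is 2D
QVector4D world = invPV * ndc;
world /= world.w();                    // perspective divide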

Of course, you can do this processing in a compute shader and directly edit values in a texture, which will be much quicker than using the CPU to do this conversion.
https://www.khronos.org/opengl/wiki/Compute_Shader
This article was really good at explaining compute shaders to me, and there are also many videos online showing how to use them.

Again, I am not sure if I am missing any details, but if you learn how to use compute shaders, I am sure the logic of what you were doing before can be applied there. I am also a little confused as to why you need such a large grid, but if it's necessary, this is something that can benefit from GPU parallelization.
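If you do try the compute route, the skeleton is fairly small. A sketch, assuming a GL 4.3+ context; radarTexture is a placeholder for an RGBA32F texture you have created, and the shader body is where the per-cell processing would go:

// Build the compute program (error checking omitted for brevity).
const char* src = R"(
    #version 430
    layout(local_size_x = 16, local_size_y = 16) in;
    layout(rgba32f, binding = 0) uniform image2D radarImage;
    void main() {
        ivec2 p = ivec2(gl_GlobalInvocationID.xy);
        imageStore(radarImage, p, vec4(0.0, 1.0, 0.0, 1.0)); // your processing here
    }
)";
GLuint cs = glCreateShader(GL_COMPUTE_SHADER);
glShaderSource(cs, 1, &src, nullptr);
glCompileShader(cs);
GLuint prog = glCreateProgram();
glAttachShader(prog, cs);
glLinkProgram(prog);

// One invocation per texel of the 2240x4096 grid, in 16x16 work groups.
glUseProgram(prog);
glBindImageTexture(0, radarTexture, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA32F);
glDispatchCompute((2240 + 15) / 16, (4096 + 15) / 16, 1);
glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);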

if I was being slightly pedantic I would perhaps move the data into the mapping instead of copying it

I agree with you, but I want to optimize the program at the end (I wanted to use std::move) :cowboy_hat_face:

I am not very familiar with the polar coordinate system. Now, from my understanding, what I would do is this: the texture has 2D coordinates, each cell in the grid will have a coordinate, and that can be converted from Cartesian coordinates to polar.

The polar coordinate system lets you express a point (x, y) as an angle and a distance:

  • Radius r: the distance between the origin and your point A
  • Angle θ: the angle between the reference axis and point A

In fact, we use these coordinates when we need to analyze the motion of an object, to solve certain equations, and a lot in aviation systems (like a radar that determines the position of a plane as (radius, θ) in the sky). Why do we use it? To stay accurate :wink:
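In code the two directions are just a couple of lines; a quick sketch:

#include <cmath>

// Cartesian (x, y) -> polar (r, theta); theta is in radians from the reference axis.
void toPolar(double x, double y, double& r, double& theta)
{
    r     = std::sqrt(x * x + y * y); // distance from the origin
    theta = std::atan2(y, x);         // atan2 handles all four quadrants
}

// Polar (r, theta) -> Cartesian (x, y), exactly what the grid code below does.
void toCartesian(double r, double theta, double& x, double& y)
{
    x = r * std::cos(theta);
    y = r * std::sin(theta);
}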

If needed you can also convert the 2D coordinates to world space before you do this conversion; all you need is the inverse of the view and projection matrices. And since the grid is in 2D, you can assume a depth of 1.0 for all points.

I did something similar to display my grid. When the program starts, a method is called during initialization to calculate the Cartesian coordinates of each cell of my grid. Once done, we simply display these coordinates, which makes a circle. Here's how it works:

  1. calculateRadarData(): method called in initializeGL() where we calculate the Cartesian coordinates of the cells
  2. initializeGL(): storage of the Cartesian data in VBO_GRID
  3. paintGL(): display of VBO_GRID
void HelloGLWidget::calculateRadarData()
{
    radarData.resize(AZIMUTS, std::vector<QVector3D>(CELLS_PER_AZIMUTS));
    for (int azimut = 0; azimut < AZIMUTS; azimut++)
    { //For each azimut (angle) of my circle
        double theta = (azimut*2*M_PI)/AZIMUTS; //Conversion azimut to radians
        for(int cell = 0; cell<CELLS_PER_AZIMUTS; ++cell)
        { //for each cell of each azimut
            double D = CELL_DUR * (START_RG+cell) * c/2.0; //get the distance between the center and the cell
            double x = D*cos(theta); //get the x
            double y = D*sin(theta); //get the y
            radarData[azimut][cell] = QVector3D(x,y,-2); //storage in radarData with a depth of -2 (why not haha)
        }
    }
}
void HelloGLWidget::initializeGL()
{
    initializeOpenGLFunctions();
    glMapBuffer = reinterpret_cast<PFNGLMAPBUFFERPROC>(QOpenGLContext::currentContext()->getProcAddress("glMapBuffer"));

    glEnable(GL_DEPTH_TEST);
    glClearColor(0.0f,0.0f,0.0f,1.0f);

    shaderProgram.addShaderFromSourceFile(QGLShader::Vertex, ":/vertexShader.vsh");
    shaderProgram.addShaderFromSourceFile(QGLShader::Fragment, ":/fragmentShader.fsh");
    shaderProgram.link();

    glEnable(GL_MULTISAMPLE);
    glEnable(GL_LINE_SMOOTH);
    glHint(GL_LINE_SMOOTH_HINT,GL_NICEST);

    calculateRadarData(); //Get the cartesian coordinates of cells

    /* ....VBO_COLORS part ....*/

    /*VBO POLAR 2D GRID*/
    glGenBuffers(1,&VBO_GRID);
    glBindBuffer(GL_ARRAY_BUFFER,VBO_GRID);

    /*Move of the cartesian coordinates into the VBO_GRID*/
    std::vector<GLfloat> vertexData;
    for(const auto& azimut : radarData)
    {
        for(const auto& point : azimut)
        {
            vertexData.push_back(point.x());
            vertexData.push_back(point.y());
            vertexData.push_back(point.z());
        }
    }
    glBufferData(GL_ARRAY_BUFFER, vertexData.size()*sizeof (GLfloat), vertexData.data(), GL_STATIC_DRAW);

}
void HelloGLWidget::paintGL()
{
   glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    QMatrix4x4 mMatrix;
    QMatrix4x4 vMatrix;

    shaderProgram.bind();
    shaderProgram.setUniformValue("mvpMatrix", pMatrix * vMatrix * mMatrix);
    shaderProgram.setUniformValue("color", QColor(Qt::blue));

    glBindBuffer(GL_ARRAY_BUFFER,VBO_GRID);
    int vertexLocation = shaderProgram.attributeLocation("vertex");
    shaderProgram.enableAttributeArray(vertexLocation);
    glVertexAttribPointer(vertexLocation,3,GL_FLOAT, GL_FALSE,0,0);

    glLineWidth(0.01f);
    glDrawArrays(GL_LINE_LOOP,0,AZIMUTS*CELLS_PER_AZIMUTS);

    shaderProgram.disableAttributeArray(vertexLocation);

    shaderProgram.release();

}

To be honest, I don't know how to use shaders to compute the whole 2D grid, make a texture from it, and still let me "access" the colors of the cells that should be updated from VBO_COLORS. I will check the link you've sent and show you what I can do.

Again, I am not sure if I am missing any details, but if you learn how to use compute shaders, I am sure the logic of what you were doing before can be applied there. I am also a little confused as to why you need such a large grid, but if it's necessary, this is something that can benefit from GPU parallelization.

I need to quickly create a kind of radar view. I receive some data that I decode for display. But as you've seen, I'm an absolute beginner in OpenGL (even if I've spent just a few hours on it this month), and I have nobody around me to help me with OpenGL.
If you're interested in the project, I can share it :wink:

Now this is what I will try to do:

  1. Create a shader where I determine the 2D polar grid (to make it faster)
  2. Create a texture?
  3. Associate the colors in VBO_COLORS with each cell

I would be happy to hear any other advice. Many thanks again!

I agree with you, but I want to optimize the program at the end (I wanted to use std::move) :cowboy_hat_face:

Seems reasonable. You can also use memmove instead of memcpy, which will do the same thing.

In fact, we use these coordinates when we need to analyze the motion of an object, to solve certain equations, and a lot in aviation systems (like a radar that determines the position of a plane as (radius, θ) in the sky). Why do we use it? To stay accurate :wink:

I see, that makes sense.

I need to quickly create a kind of radar view. I receive some data that I decode for display. But as you've seen, I'm an absolute beginner in OpenGL (even if I've spent just a few hours on it this month), and I have nobody around me to help me with OpenGL.
If you're interested in the project, I can share it :wink:

This project seems interesting and I would be happy to help. The way you are creating a circle seems to work fine but could be expensive; there is a simpler way I can show you. All you need is a quad whose normalized vertices run from (-1.0, -1.0) to (1.0, 1.0). You can either set the vertices yourself or just transform them to that range in the fragment shader. Once you have drawn a quad onto the screen, make sure the fragment shader can access the vertex position it is currently shading:

//here remember that VertexPosition should be in range -1.0 to 1.0, so if your vertex
//positions are in range -0.5 to 0.5 or anything else, remember to convert them.
float InvertedDistance = 1.0 - length(VertexPosition);
//we subtract the length from 1.0 to invert the distance, so that as you go further
//out from the center the value becomes smaller.
float Alpha = 0.0;
if (InvertedDistance > 0.0) //we are inside the circle
    Alpha = 1.0;

FragColor.rgb = color; //whatever color you want/texture
FragColor.a = Alpha;

Now that we have our main circle for the radar, we can move onto the actual grid. Assuming we have some points that we would like to display on our radar, we need to transform them to Cartesian coordinates, then transform them into the range -1.0 to 1.0. So, for an object at coordinates (x, y), using the method you mentioned previously, we can convert those coordinates to Cartesian:

double D = CELL_DUR * (START_RG+cell) * c/2.0; //get the distance between the center and the cell
double x = D*cos(theta); //get the x
double y = D*sin(theta); //get the y
radarData[azimut][cell] = QVector3D(x,y,-2);

Now we should convert these coordinates to the range 0.0 to 1.0 by dividing by the maximum distance between two points, either set arbitrarily or calculated. Then we can simply multiply by 2.0 and subtract 1.0 to get the range -1.0 to 1.0. And that's it, we have a coordinate where we can draw the point on our radar! Each pixel represents a sort of cell, so if you wanted to render that point onto the circle, simply make sure both the object and the point have the same transform, then use that coordinate when rendering.
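Numerically the remap is tiny; a sketch, where maxDistance is the arbitrary or computed maximum:

float normalized = distance / maxDistance;   // 0.0 .. 1.0
float local      = normalized * 2.0f - 1.0f; // -1.0 .. 1.0, the quad's local space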

After realising what you were actually trying to do, I quickly realized a compute shader is not useful here, so hopefully I have given you a better direction to work towards. If you need any help with the shaders, or if I have missed anything, let me know.

That's a good perspective you've got on the project, but I don't think it's the right solution to display the cells.

There are 2 points I didn’t mention :

  • VBO_COLORS receives an intensity for each color, not directly RGB colors as you thought; that was, for me, the best way to make it faster => That means we have to control the intensity of a standard color (yellow, why not) for each cell. Once the cells are displayed, that will create a proper image.
  • The cells in polar coordinates are not perfect squares. In fact: the closer you are to the radar, the longer and thinner the cell; the further away from the radar, the wider and shorter the cell => So I don't think that trying to display each cell as a single pixel will work. (I cannot include links, but if you search "radar cells" on Google Images you can see the distortion.)

I started to implement your last idea, but knowing these points now, do you still think we should proceed this way?

Many thanks

  • The cells in polar coordinates are not perfect squares. In fact: the closer you are to the radar, the longer and thinner the cell; the further away from the radar, the wider and shorter the cell => So I don't think that trying to display each cell as a single pixel will work. (I cannot include links, but if you search "radar cells" on Google Images you can see the distortion.)

Ahh, I see. When I first thought of a radar, I thought you were going for this:

but now I realise what you really want is this:

If this was me, I would just use a texture to display the radar, but I am guessing you want the number of cells to be customizable. I have noticed a pattern with these radars, though: they are really just made up of inner circles, and then, starting from the innermost circle, lines are drawn outward once every 10 degrees (for this image), and that produces the cell distortion you are looking for. I definitely think a compute shader wouldn't be wise for this; I would go down the route of a fragment shader/geometry shader.

We can produce the first inner circle with the same method highlighted previously; as for the rest, they are outlines of circles. We can produce these outlines similarly to the method before, but I am going to introduce something called a step function:

// -1 -> 1 local space, adjusted for aspect ratio
vec2 uv = VertexPosition * 2.0 - 1.0;
float AspectRatio = Resolution.x / Resolution.y; //resolution of the quad
uv.x *= AspectRatio; //make sure the circle isn't distorted

float InvertedDistance = 1.0 - length(uv);
vec3 color = vec3(step(0.0, InvertedDistance));

//the step function is the same as what we did before:
//if (InvertedDistance > 0.0)
//    Alpha = 1.0;

color *= vec3(1.0 - step(0.01, InvertedDistance)); //here we reverse the step function

fragColor = vec4(color, 1.0);
fragColor.rgb *= vec3(0.0, 1.0, 0.0); //colour

Now we simply draw straight lines outward from the centre circle; you control the number of cells by the number of circles and the angle between each line. In the image it's 10 degrees, but as long as it is a factor of 360 you should get good-looking cells on the radar.
We can present coordinates on the circle the same way we would have done previously; just make sure to use the outer circle as your reference when plotting points on it.

I hope this is more helpful on how to implement the radar you're looking for; it should be a lot more efficient than drawing each individual cell/circle, as most of the work would be done in a fragment shader.


One thing I forgot to mention is how to draw the lines. There are a number of ways, but the simplest is that OpenGL has a primitive for it: just like glDrawArrays(GL_TRIANGLES, …), there is also glDrawArrays(GL_LINES, …).
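A sketch of building those spokes on the CPU and drawing them all at once; vboLines, innerRadius, and outerRadius are illustrative names:

// Two 2D vertices per spoke: one on the inner circle, one on the outer.
std::vector<GLfloat> lines;
for (int deg = 0; deg < 360; deg += 10) // one spoke every 10 degrees
{
    double theta = deg * M_PI / 180.0;
    lines.push_back(innerRadius * cos(theta)); lines.push_back(innerRadius * sin(theta));
    lines.push_back(outerRadius * cos(theta)); lines.push_back(outerRadius * sin(theta));
}
glBindBuffer(GL_ARRAY_BUFFER, vboLines);
glBufferData(GL_ARRAY_BUFFER, lines.size() * sizeof(GLfloat), lines.data(), GL_STATIC_DRAW);
// ... set up a vec2 position attribute, then one call draws every spoke:
glDrawArrays(GL_LINES, 0, lines.size() / 2); // 2 floats per vertex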

Yes!!! That's exactly the kind of grid I'm trying to create. However, it is not as simple as it looks. The way you did it creates a perfect grid, but you have no control over the texture of each cell. On my side, I have to manipulate the cells (the number, of course, but also the color).

If I have missed something, do not hesitate to let me know, because maybe you were right.

I've seen a piece of software that displays the data the way I want to:

If we take a closer look, we can see what I meant by "updating the colors of each cell":

It doesn't look like each cell is associated with one pixel, due to the distortion of the cells as we said, and we don't create circles from straight strokes.
As a beginner, I would say it's a lot of small quads generated; on each quad we associate the right color received, and that makes our image.

What do you think? Should we go back to the first version I did when I created this post?

I think if you need this much control over each cell and that's really important, perhaps consider CPU parallelization rather than moving to the GPU, because it seems there is a lot of logic involved. This is relatively simple in C++17 and above: there is a function called std::for_each() which is essentially a for loop, but it can run in parallel. Consider the following example:

 for (int azimut = 0; azimut < AZIMUTS; azimut++)
    { //For each azimut (angle) of my circle
        double theta = (azimut*2*M_PI)/AZIMUTS; //Conversion azimut to radians
        for(int cell = 0; cell<CELLS_PER_AZIMUTS; ++cell)
        { //for each cell of each azimut
            double D = CELL_DUR * (START_RG+cell) * c/2.0; //get the distance between the center and the cell
            double x = D*cos(theta); //get the x
            double y = D*sin(theta); //get the y
            radarData[azimut][cell] = QVector3D(x,y,-2); 
        }
    }

Instead of a plain for loop, I would cache the azimut indices into a vector (CachedAngles below); then you can do the following:

#include <algorithm>
#include <execution>

std::for_each(std::execution::par_unseq,
    CachedAngles.begin(), CachedAngles.end(), [&](int azimut)
{
    double theta = (azimut * 2 * M_PI) / AZIMUTS; //conversion azimut to radians
    for (int cell = 0; cell < CELLS_PER_AZIMUTS; ++cell)
    {
        double D = CELL_DUR * (START_RG + cell) * c / 2.0; //distance between the center and the cell
        double x = D * cos(theta); //get the x
        double y = D * sin(theta); //get the y
        radarData[azimut][cell] = QVector3D(x, y, -2);
    }
});

Since there isn't a risk of data races, as each iteration only writes to its own azimut, which is separate from the others, we do not need things such as a mutex or lock_guard.

You can pretty much do this with most for loops, as it will launch threads and then stop them, so anywhere there is a lot of iteration, consider using one of these to speed up the work.

As a beginner, I would say it's a lot of small quads generated; on each quad we associate the right color received, and that makes our image. What do you think? Should we go back to the first version I did when I created this post?

This is a good idea; however, make sure that you batch all your quads together and then issue one draw call for them. So basically accumulate all the quad data and then draw, instead of collecting, drawing, collecting, drawing... you get the gist. If you do this and combine it with CPU parallelization, I am sure that will increase your performance a lot.
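A sketch of that batching; appendCellQuad is a placeholder for whatever pushes the six corner vertices (two triangles) of one cell:

std::vector<GLfloat> batch;
for (int azimut = 0; azimut < AZIMUTS; ++azimut)
    for (int cell = 0; cell < CELLS_PER_AZIMUTS; ++cell)
        appendCellQuad(batch, azimut, cell); // accumulate, don't draw yet

// One upload and ONE draw call for every cell in the grid.
glBindBuffer(GL_ARRAY_BUFFER, VBO_GRID);
glBufferSubData(GL_ARRAY_BUFFER, 0, batch.size() * sizeof(GLfloat), batch.data());
glDrawArrays(GL_TRIANGLES, 0, batch.size() / 3); // 3 floats per vertex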

However, I would consider whether there is a real need to have all these cells and update each individual color, rather than drawing what you need directly onto the radar; then it's simply math that tells you where to plot the object you are trying to draw. If it is necessary for what you are doing, then I believe the solution you proposed is fine; just make sure you are batching and utilizing CPU multiprocessing.

Here we are! Finally, as you said, I batch all the quads together (each triangle is stored in the right order), then make just one draw call. Concerning my VBO_COLORS, which has all the colors in the right order too, I simply have to pass them.

However, I have another issue (it never stops, every day a new step). I cannot see the results because of the artefacts I get:


I see two possibilities:

  1. Moiré effects => turn on anti-aliasing.
  2. Overlapping triangles => maybe switch to a texture (I don't know).

To avoid the Moiré effects, we can enable multisampling. Sadly, whatever sample count I set, my context doesn't take the value, even though the computer I'm using can handle 16 samples; it doesn't accept any number.
Here’s how to do it :

QSurfaceFormat format;
format.setSamples(4);    // Number of samples used for multisampling
setFormat(format);       // Associate the format with the widget (must run before the widget is first shown)

glEnable(GL_MULTISAMPLE); // added in initializeGL()

For those wondering how to find out how many samples your hardware can handle, simply add this to your initializeGL():

GLint maxSamples;
glGetIntegerv(GL_MAX_SAMPLES,&maxSamples);
qDebug()<<"Max Samples:"<<maxSamples;

For now, here's what I'm going to try:

  1. Check the coordinates (to make sure about the overlapping)
  2. Set up a texture

If you have any ideas, I'd love to hear them.
Maybe I will open another topic for this display issue.
Thanks again

I agree there could be some overlapping triangles, but it could also be a precision issue. What you could try is to set up a texture/renderbuffer with 32-bit components, turn on multisampling as well, and render to your own framebuffer instead of the default one. Then you can just blit the custom framebuffer to the default framebuffer, or render the texture to a quad. This should address these issues if precision/multisampling is what's required.
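A sketch of that setup; the sample count and GL_RGBA32F are illustrative, and error/completeness checks are omitted:

// Multisampled FBO with a 32-bit-float color attachment.
GLuint fbo, colorBuf;
glGenFramebuffers(1, &fbo);
glGenRenderbuffers(1, &colorBuf);
glBindRenderbuffer(GL_RENDERBUFFER, colorBuf);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_RGBA32F, width, height);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, colorBuf);

// Render the radar into fbo, then resolve into QOpenGLWidget's framebuffer.
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, defaultFramebufferObject());
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);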

You could also add the coordinate check, just to be sure.

However, I have another issue (it never stops, every day a new step)

That's programming, haha. I experience issues myself every day; it's normal and part of the process. I am not the most experienced programmer, but as I code more and debug more, I've found that what would once have taken me weeks to debug now only takes a couple of hours (which is still horrible, but better than before).