OpenGL: periodically plotting points to a window, or plotting to a window without using glutDisplayFunc(your_func)

I want to plot complex data points to a window for display. I know how to do it all in one go:

void display() {
    glClear(GL_COLOR_BUFFER_BIT);
    glColor3f(0.0, 0.0, 0.0);
    point graph[M*N];
    glBegin(GL_POINTS);
    for (int i = 0; i < M*N; i++) {
        graph[i].x = output[i].x;
        graph[i].y = output[i].y;
        glBegin(GL_POINTS);
        glVertex2f(graph[i].x, graph[i].y);
    }
    glEnd();
    glFlush();
}
glutDisplayFunc(display);

and then display it with glutDisplayFunc… Is there a way to call this multiple times, i.e. fill the array with vertices, write to the window, and do it again and again with as many batches of vertices as you want? I tried making a single large array of the points, but it is sizeof(struct{float x; float y;}) x^10 elements and seems to cause problems on the GPU and CPU. Is there a way to perform this operation as I've shown, but periodically, because in between plotting to the window I want to compute the data on the GPU? Alternatively, are there other functions besides glutDisplayFunc(void(*func)(void)) to display pixels in the window?

Call glutPostRedisplay from a timer or idle callback.
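For example, a minimal sketch of both options (the 16 ms interval and the callback names here are arbitrary placeholders):

#include <GL/glut.h>

// Idle variant: request a redraw whenever GLUT has nothing else to do.
void idle() {
    glutPostRedisplay();
}

// Timer variant: request a redraw roughly every 16 ms, then re-arm the timer.
void on_timer(int value) {
    glutPostRedisplay();
    glutTimerFunc(16, on_timer, value);
}

// In main(), after glutCreateWindow(...), register one of them:
//   glutIdleFunc(idle);
// or
//   glutTimerFunc(16, on_timer, 0);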

Thank you, I will look into doing that

glutInit(&argc, argv);
glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
glutInitWindowSize(700, 700);
glutInitWindowPosition(0, 0);
glutCreateWindow("Points");
//glutDisplayFunc(display);
//my_init();
//glutMainLoop();
for (int i = 0; i < 10000; i++) {
    // run the kernel
    kernel<<<blocks_per_grid, threads_per_block>>>(d_input, d_eyes, d_constants, d_map);

    // execute the fft on the kernel computed data
    checkCudaErrors(cufftExecC2C(plan, reinterpret_cast<float2*>(d_input), reinterpret_cast<float2*>(d_input), CUFFT_FORWARD));

    // make sure all threads are done
    if (cudaDeviceSynchronize() != cudaSuccess) {
        fprintf(stderr, "Cuda error: failed to synchronize\n");
        return 0;
    }

    // copy back over to cpu mem
    printf("%i\n", i);
    checkCudaErrors(cudaMemcpy(res_output, d_input, M*N*sizeof(float2), cudaMemcpyDeviceToHost));
    glutDisplayFunc(display);
    glutIdleFunc(redisplay);
}
my_init();
glutMainLoop();

This is the for loop computing the data. cudaMemcpy(res_output, d_input, M*N*sizeof(float2), cudaMemcpyDeviceToHost) copies the data back to host (CPU) memory, and display() makes vertices from it to plot to the window.

void redisplay(){
	glutPostRedisplay();
}

That is how I used glutPostRedisplay() and registered redisplay() with glutIdleFunc(redisplay). It's only plotting the data from the last iteration, not plotting for each iteration of the loop. I know glutDisplayFunc, glutIdleFunc and glutPostRedisplay set event flags to redraw the window and so on, but nothing seems to get executed until glutMainLoop() is called, and by then it seems to have forgotten about the previously computed data and the flags that were set. Is this close to the right way to use them, or completely off?

There shouldn’t be any loops in main(). Create the window, register the callbacks (including the idle and display callbacks), initialise your state, then call glutMainLoop.

The idle callback should do one “chunk” of computation then call glutPostRedisplay.
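As a sketch, assembled from the code in the question (kernel, d_input, res_output, my_init and display are the asker's own symbols, and the 10000-iteration cap is simply carried over from the original loop):

// One "chunk" of GPU work per idle call.
void compute_data() {
    static int iteration = 0;
    if (iteration >= 10000) {          // cap carried over from the original for loop
        glutIdleFunc(NULL);            // nothing left to compute; stop idling
        return;
    }

    // run the kernel
    kernel<<<blocks_per_grid, threads_per_block>>>(d_input, d_eyes, d_constants, d_map);

    // execute the FFT on the kernel-computed data
    checkCudaErrors(cufftExecC2C(plan, reinterpret_cast<float2*>(d_input),
                                 reinterpret_cast<float2*>(d_input), CUFFT_FORWARD));

    // wait for the GPU, then copy the results back to host memory
    checkCudaErrors(cudaDeviceSynchronize());
    checkCudaErrors(cudaMemcpy(res_output, d_input, M * N * sizeof(float2),
                               cudaMemcpyDeviceToHost));

    ++iteration;
    glutPostRedisplay();               // ask GLUT to call display() again
}

int main(int argc, char** argv) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
    glutInitWindowSize(700, 700);
    glutInitWindowPosition(0, 0);
    glutCreateWindow("Points");
    my_init();
    glutDisplayFunc(display);      // draws the points from the copied-back data
    glutIdleFunc(compute_data);    // one chunk of computation per idle call
    glutMainLoop();                // all further work happens inside the callbacks
    return 0;
}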

Thank you! That is working: it is continuously computing and displaying the newly computed data. The only thing is that it seems to clear the window and re-plot just the new points; I want to plot on top of the old points repeatedly. Is there a way to do that? I've been trying to call glutUseLayer and, alternatively, glutPostOverlayRedisplay(), but they don't seem to really change the behaviour. My assumption is that the layer is the current window being displayed, so maybe I have to save it or something like that?

void compute_data() {
    kernel<<<blocks_per_grid, threads_per_block>>>(d_input, d_eyes, d_constants, d_map);

    // execute the fft on the kernel computed data
    checkCudaErrors(cufftExecC2C(plan, reinterpret_cast<float2*>(d_input), reinterpret_cast<float2*>(d_input), CUFFT_FORWARD));

    // make sure all threads are done
    if (cudaDeviceSynchronize() != cudaSuccess) {
        fprintf(stderr, "Cuda error: failed to synchronize\n");
    }

    checkCudaErrors(cudaMemcpy(res_output, d_input, M*N*sizeof(float2), cudaMemcpyDeviceToHost));
    glutUseLayer(GLUT_OVERLAY);
    glutPostRedisplay();
}

This is then registered in the main() function with:
glutIdleFunc(compute_data);

display() hasn’t been changed

Draw onto a framebuffer object (FBO). Use glBlitFramebuffer to copy the contents to the window.

Both of these require OpenGL 3.x or the ARB_framebuffer_object extension. If you have CUDA, I’m assuming the card will support those.

If you’re using single-buffered rendering (GLUT_SINGLE), you could just remove the glClear call from the display callback. But if the front buffer is damaged (e.g. due to another window being rendered in front of it), you lose the window contents and would have to start over. FBOs are the only reliable (and portable) way to store the results of rendering operations indefinitely.
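As a rough sketch of the FBO route, assuming an OpenGL 3.x context (or ARB_framebuffer_object), a loader such as GLEW providing the entry points, and the 700×700 window size from the code above; draw_new_points() is a placeholder for the existing point-drawing code:

GLuint fbo = 0, color_rb = 0;

// Once, after the GL context exists: an FBO backed by a colour renderbuffer.
void init_fbo(int width, int height) {
    glGenRenderbuffers(1, &color_rb);
    glBindRenderbuffer(GL_RENDERBUFFER, color_rb);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                              GL_RENDERBUFFER, color_rb);
    glClear(GL_COLOR_BUFFER_BIT);              // clear the FBO once, not every frame
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}

// Every frame: add the new points to the FBO, then blit it to the window.
void display() {
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
    draw_new_points();                         // the existing glBegin/glVertex2f/glEnd code

    glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0); // 0 = the default framebuffer (the window)
    glBlitFramebuffer(0, 0, 700, 700, 0, 0, 700, 700,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);
    glFlush();
}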

Thank you this helps a lot, I will work at it.

Yep, it was the glClear(GL_COLOR_BUFFER_BIT). It's now working how I was expecting; pretty cool to watch the image change as values are computed. I just moved the glClear to the init function I have for the window I'm making. I didn't realize it was clearing the window buffer every frame. Framebuffers are cool; it seems like you use them when you have an image, want to compute on its pixels, and re-draw it to the window, at least from what I know of them so far. Thanks again for the help.

It segfaults after drawing 2147221504 pixel values; maybe the buffer can't hold that many… so I might have to use framebuffers.

Sounds like a memory leak, i.e. allocating memory in each iteration and not releasing (all of) it.

This isn't something that FBOs will fix. FBOs (more precisely, the renderbuffers or textures attached to them) just provide a drawing surface which is guaranteed not to be cleared unless you explicitly clear it.

For the system framebuffer (i.e. the window), the front buffer can be overwritten at any time by the windowing system (although compositing window managers make this less likely), while the back buffer becomes invalid after copying it to (or swapping it with) the front buffer.
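To make the leak pattern mentioned above concrete, here is a hypothetical example (not the asker's code) of the kind of per-iteration allocation that would eventually exhaust memory:

void compute_data() {
    // Leak: a fresh host buffer is allocated on every idle call and never freed.
    float2* res_output = (float2*)malloc(M * N * sizeof(float2));
    checkCudaErrors(cudaMemcpy(res_output, d_input, M * N * sizeof(float2),
                               cudaMemcpyDeviceToHost));
    // ... draw from res_output ...
    // Fix: free(res_output) here, or better, allocate the buffer once at startup
    // and reuse it on every iteration.
}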

That's a good point; I'll look for spots where I'm allocating memory. OK, I won't try using framebuffers to fix this; I appreciate the warning.

It was the display function

void display() {
    glClear(GL_COLOR_BUFFER_BIT);
    glColor3f(0.0, 0.0, 0.0);
    point graph[M*N];
    glBegin(GL_POINTS);
    for (int i = 0; i < M*N; i++) {
        graph[i].x = output[i].x;
        graph[i].y = output[i].y;
        glBegin(GL_POINTS);
        glVertex2f(graph[i].x, graph[i].y);
    }
    glEnd();
    glFlush();
}
glutDisplayFunc(display);

The second call to glBegin: I honestly have no idea how that got in there, but at a certain point I took it as being there on purpose, lol. It's working great now though.
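For reference, a cleaned-up version of that display callback with the stray glBegin removed; the intermediate graph array is also dropped here since the coordinates can go straight to glVertex2f, though the asker's actual fix was just deleting the extra glBegin:

void display() {
    // glClear(GL_COLOR_BUFFER_BIT);   // moved to the init function so points accumulate
    glColor3f(0.0, 0.0, 0.0);
    glBegin(GL_POINTS);                // begin the point batch once, outside the loop
    for (int i = 0; i < M * N; i++) {
        glVertex2f(output[i].x, output[i].y);
    }
    glEnd();
    glFlush();
}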