What Is the Best Approach to Downsampling the Data in Buffers Before Drawing Them?

Hi everyone,
I have a VBO containing 512 * 8192 points which are to be displayed using glDrawArrays as a GL_LINE_STRIP (basically, a visual plot of a huge dataset). Right now I have minor problems with aliasing around the jagged, sharp edges of the plot.
Something like this: http://docs.enthought.com/chaco/_images/line_plot.png

To include a 2D history, I keep 200 of the above plots over a time range in a Framebuffer Object and blit it to the main framebuffer for viewing. Since the plot now moves along the time axis, the aliasing becomes even worse and results in ugly distortions in the overall view.
Something like this: http://upload.wikimedia.org/wikipedia/commons/6/6c/Dolphin1.jpg

So my questions are:

[ol]
[li]What is the best way to tell OpenGL to downsample data in VBO before drawing it? (and I mean linear or even cubic interpolation, not the simple nearest-sampling method)
[/li][li]What is the best way to downsample an FBO before showing it?
[/li][li]Since we have a limitation of 8192*8192 for texture sizes, if I am to use a texture, how could I map such a large space into a single texture?
[/li][/ol]

Thanks in advance :slight_smile:

What is the best way to tell OpenGL to downsample data in VBO before drawing it? (and I mean linear or even cubic interpolation, not the simple nearest-sampling method)

I’m not sure what you mean by “data in VBO” here. As I understand your problem, the data in your buffer objects are the vertex data for the plot.

Your vertex data is a bunch of positions and other vertex attributes; essentially the input to a vector-graphics process. You can’t “downsample” vector graphics, because vector graphics haven’t been sampled yet. “Sampling” is how you turn vector graphics into a pixel image. In short: sampling is rendering; until you render it, there aren’t any samples yet.

What is the best way to downsample an FBO before showing it?

Blitting it.

It’s not the “best”; you could do better from an image-quality standpoint by using a fragment shader to do the downsampling. Blits only support nearest and linear filtering, while a fragment shader can use whatever filter you like (bicubic, Gaussian, etc.).
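For reference, a blit-based downscale is only a few calls. A minimal sketch (srcFBO and the rectangle sizes here are placeholders for whatever you actually use):

[code]
// Downscale a large offscreen framebuffer to the window with linear filtering.
glBindFramebuffer(GL_READ_FRAMEBUFFER, srcFBO);  // high-resolution source FBO
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);       // default (window) framebuffer
glBlitFramebuffer(0, 0, 8192, 8192,              // source rectangle
                  0, 0, 1024, 1024,              // destination rectangle
                  GL_COLOR_BUFFER_BIT, GL_LINEAR);
[/code]

A fragment-shader version would instead draw a full-screen quad that samples the source texture with whatever filter kernel you like.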

Since we have a limitation of 8192*8192 for texture sizes, if I am to use a texture, how could I map such a large space into a single texture?

By rendering it. Your rendering process is what decides how primitives (points, lines, etc) in your buffer object map to pixels in an image. If you’re using a vertex shader, that’s exactly what it does. If you’re using fixed-function, that’s the responsibility of the various matrices you set up.

Rather than keeping around a bunch of images to go backwards and forwards in time, you should just render different parts of the dataset based on the current time. Also, you’ll be able to use multisample antialiasing (or other pseudo-AA techniques).
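For example, with the whole dataset in one buffer object, a “window” into it is just an offset and a count passed to glDrawArrays. A rough sketch (pointsPerRow, currentTimeIndex and visibleRows are placeholder bookkeeping):

[code]
// Draw only the rows that fall inside the visible time window,
// instead of blitting previously rendered images around.
const int pointsPerRow = 8192;
int firstRow = currentTimeIndex - visibleRows;
if (firstRow < 0) firstRow = 0;

glEnable(GL_MULTISAMPLE);   // assuming a multisampled framebuffer is available
for (int row = firstRow; row < currentTimeIndex; ++row)
    glDrawArrays(GL_LINE_STRIP, row * pointsPerRow, pointsPerRow);
[/code]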

Yeah, you’re right. But there are about a thousand vertices rendered onto the same pixel (I can tell this by setting the per-vertex transparency to 0.001 and seeing that I end up with a pixel whose transparency is 1).

Unfortunately I’m still using the fixed pipeline. And I didn’t get good results with blitting (which only offers nearest and linear filtering): the edges are still extremely jagged and the aliasing is apparent.

The plot data is calculated and generated from the dataset on the fly, at a high update rate, so I can only keep a small number of plots in a circular queue as a buffer.
I’m going to try render-to-texture for my purpose, although I’m still wondering if there’s a better alternative.
Many thanks Alfonse :slight_smile:

I’m not sure how you defined the alpha “per vertex” (was it in an attribute? If so, what format did you use to upload it? Things like that), but unless you did everything correctly, you may not have proven what you think. Especially if your framebuffer didn’t have alpha.

Is this result any different from what you see normally?
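For comparison, here’s a minimal fixed-function sketch of what per-vertex alpha accumulation needs: an RGBA color buffer, blending enabled, and the alpha supplied through the color array rather than the position (colorOffset and pointCount are placeholders):

[code]
// Assumes the framebuffer actually has an alpha channel.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE);   // additively accumulate faint lines

glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, 0);                   // positions from the bound VBO
glColorPointer(4, GL_FLOAT, 0, (void*)colorOffset);   // RGBA per vertex, alpha = 0.001
glDrawArrays(GL_LINE_STRIP, 0, pointCount);
[/code]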

[QUOTE=sinaxp19;1266619]Plot data from dataset is calculated and generated on the fly and with a high update rate. I can only keep a small number of them in a circular queue as a buffer.
I’m going to try rendering to texture for my purpose.[/quote]

And you believe that you’ll be able to keep more data around in texture form? 8K×8K textures aren’t exactly small in terms of memory: that’s 256 MB for a 32-bpp texture format (8192 × 8192 × 4 bytes). Even on an 8 GB card, that’s only 32 images, and that’s assuming the framebuffer and other buffer objects take up no space.

I used vertex4(x, y, z, 1), if that’s correct.

I don’t have experience with previous code for visualizing such a large amount of data. I initially wrote my program in MATLAB, which seemed to handle these problems correctly (perhaps because MATLAB uses OpenGL in its lower layers).

[QUOTE=Alfonse Reinheart;1266620]
And you believe that you’ll be able to keep more data around in texture form? [/QUOTE]
No, I was hoping to render into a maximum-sized texture via a framebuffer object and rescale it for the output view. That way I could perhaps have compensated for these problems to a degree.

There is no “best” way. There are enough different algorithms to fill several books, and they all have strengths and weaknesses.

If I understand your problem correctly, there are two basic approaches, each of which will produce different results:

[ol]
[li] Downsample the data before plotting using some form of low-pass filter (box, triangle, cosine, Gauss, Lanczos, etc).
[/li][li] Plot the graph at a sufficiently high resolution, then downsample the rendered image.
[/li][/ol]
If you’re downsampling, interpolation isn’t of much use to you (it only uses the two values closest to each sample point, whereas you need to use at least all of the values which lie within a pixel and possibly some adjacent values as well).

The first option can be performed in the application, or it can be offloaded to the GPU via a compute shader.
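As a rough illustration of the first option, a CPU-side box filter over the raw samples could look like this (just a sketch; the function and parameter names are made up, and you’d substitute whatever kernel you prefer):

[code]
/* Downsample src (nSrc samples) to dst (nDst samples, nDst <= nSrc)
 * with a box filter: each output value is the mean of the source
 * samples falling into its bucket. */
void downsampleBox(const float *src, int nSrc, float *dst, int nDst)
{
    for (int i = 0; i < nDst; ++i) {
        int begin = (int)((long long)i * nSrc / nDst);
        int end   = (int)((long long)(i + 1) * nSrc / nDst);
        float sum = 0.0f;
        for (int j = begin; j < end; ++j)
            sum += src[j];
        dst[i] = sum / (float)(end - begin);
    }
}
[/code]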

The second option realistically needs to be performed on the GPU (as the rendered plot will already be in video memory), but can use any of the algorithms available for the first (you’ll just be filtering intensity values rather than Y coordinates). If you don’t care about the choice of filter, you can just use glGenerateMipmap() on the texture.
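For the second option, once the plot has been rendered into a texture, the mipmap route is only a couple of calls (plotTex being whatever texture your FBO rendered into):

[code]
glBindTexture(GL_TEXTURE_2D, plotTex);   // the texture the plot was rendered into
glGenerateMipmap(GL_TEXTURE_2D);         // builds a (box-filtered) mip chain
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
// Now draw a window-sized quad with this texture; the appropriate
// mip level is selected automatically, which removes most of the aliasing.
[/code]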

If you really are mapping hundreds or thousands of data points to a single pixel, is downsampling to an average value really what you want? Consider alternate visualizations that retain the distribution of the original data.

Thanks GClements, I’ve decided to go with rendering into a maximum-sized texture via a framebuffer object and then mapping a down-scaled version of it onto a quad.
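Roughly what I’m planning to set up (a sketch only; the sizes and names are placeholders, and I’m assuming the GL 3.0 / ARB_framebuffer_object entry points):

[code]
GLuint fbo, plotTex;

glGenTextures(1, &plotTex);
glBindTexture(GL_TEXTURE_2D, plotTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 8192, 8192, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, plotTex, 0);

// ... render the line strips into the FBO here ...

glBindFramebuffer(GL_FRAMEBUFFER, 0);
// then bind plotTex and draw a textured quad scaled down to the window
[/code]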

[QUOTE=arekkusu;1266666]If you really are mapping hundreds or thousands of data points to a single pixel, is downsampling to an average value really what you want? [/QUOTE] I need to retain the overall shape of the plot (especially the minima and maxima), and at a large scale simple interpolation wouldn’t remove them.

That depends on your data. If your data is [min, max, min, max, min, max…], the average is a flat line and all of the signal is lost.
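For what it’s worth, min/max decimation is a common way to keep those spikes visible: each output column stores the minimum and maximum of its bucket, and you draw them as a vertical segment. A sketch (the function and parameter names are just illustrative):

[code]
void downsampleMinMax(const float *src, int nSrc,
                      float *outMin, float *outMax, int nDst)
{
    for (int i = 0; i < nDst; ++i) {
        int begin = (int)((long long)i * nSrc / nDst);
        int end   = (int)((long long)(i + 1) * nSrc / nDst);
        float lo = src[begin], hi = src[begin];
        for (int j = begin + 1; j < end; ++j) {
            if (src[j] < lo) lo = src[j];
            if (src[j] > hi) hi = src[j];
        }
        outMin[i] = lo;   // draw [lo, hi] as a vertical segment per pixel column
        outMax[i] = hi;
    }
}
[/code]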