I have a program that displays point cloud data, and I am researching the idea of using VBOs to speed up the display. The catch is that I could be displaying 10-30 million points. Each point requires 28 bytes for its position and color data, so 10 million points would need about 280 MB of video RAM, 20 million points about 560 MB, and so on.
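For reference, here is roughly what I mean by 28 bytes per point. This is only a sketch: it assumes three floats for position and four floats for color (which is what adds up to 28), and uses GLEW to stand in for whatever extension loader provides the buffer-object entry points.

    #include <GL/glew.h>
    #include <vector>

    // One plausible 28-byte layout: three floats of position (12 bytes)
    // plus four floats of RGBA color (16 bytes). My real layout may differ;
    // this is just to make the arithmetic concrete.
    struct PointVertex {
        float x, y, z;      // 12 bytes of position
        float r, g, b, a;   // 16 bytes of color -> 28 bytes per point
    };

    GLuint uploadPoints(const std::vector<PointVertex>& points)
    {
        GLuint vbo = 0;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        // 10 million points * 28 bytes = ~280 MB in one allocation request.
        glBufferData(GL_ARRAY_BUFFER,
                     points.size() * sizeof(PointVertex),
                     points.data(), GL_STATIC_DRAW);
        return vbo;
    }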
I read an older question found here:
It says that the video card breaks the memory up into chunks, and that I may only have 256 MB available. However, that posting is also several years old.
So my questions are: does this limit still apply? Is there a way to find out how much video memory is available and how big the chunks are? And am I trying to use the video card beyond what it was intended for?
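To frame the second part of the question, this is the kind of query I had in mind. A sketch only: it relies on the GL_NVX_gpu_memory_info and GL_ATI_meminfo extensions (the only OpenGL-level ways I know of to ask for this), with GLEW doing the extension checks. Both report values in kilobytes.

    #include <GL/glew.h>
    #include <cstdio>

    // Sketch of querying free video memory. There is no core-GL query for
    // this; the token names come from the NVIDIA and ATI extension specs.
    void reportVideoMemory()
    {
        if (GLEW_NVX_gpu_memory_info) {
            GLint freeKB = 0;
            glGetIntegerv(GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX,
                          &freeKB);
            std::printf("NVIDIA: ~%d MB of video memory free\n",
                        freeKB / 1024);
        } else if (GLEW_ATI_meminfo) {
            // info[0] = total free KB, info[1] = largest free block KB,
            // info[2]/info[3] = the same for auxiliary memory.
            GLint info[4] = { 0, 0, 0, 0 };
            glGetIntegerv(GL_VBO_FREE_MEMORY_ATI, info);
            std::printf("AMD: ~%d MB free for VBOs, largest block ~%d MB\n",
                        info[0] / 1024, info[1] / 1024);
        } else {
            std::printf("No memory-info extension exposed by this driver\n");
        }
    }

The ATI query's "largest free block" value would seem to answer the chunk-size question directly, if that extension is available.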
I am already using vertex arrays. One way to get around this problem would be to skip VBOs and use decimation instead, displaying only part of the data while the view is rotating (roughly sketched below). Would that be a better approach?
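To make that fallback concrete, here is roughly the scheme I am weighing: split the cloud across several modest VBOs so no single allocation has to fit in one free block, and skip chunks while the user is rotating. The 1-million-point chunk size, the stride of 4, and the round-robin assignment of points to chunks are all assumptions, not tuned values.

    #include <GL/glew.h>
    #include <cstddef>
    #include <vector>

    struct Chunk {
        GLuint  vbo;    // one VBO of ~1 million points (~28 MB each)
        GLsizei count;  // number of points in this chunk
    };

    void drawCloud(const std::vector<Chunk>& chunks, bool rotating)
    {
        // Draw every chunk when static, every 4th chunk while rotating.
        // If points were assigned to chunks round-robin, skipping chunks
        // approximates a uniform subsample of the cloud.
        std::size_t stride = rotating ? 4 : 1;

        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_COLOR_ARRAY);
        for (std::size_t i = 0; i < chunks.size(); i += stride) {
            glBindBuffer(GL_ARRAY_BUFFER, chunks[i].vbo);
            // 28-byte interleaved layout from the sketch above:
            // position at offset 0, color at offset 12.
            glVertexPointer(3, GL_FLOAT, 28, (const void*)0);
            glColorPointer(4, GL_FLOAT, 28, (const void*)12);
            glDrawArrays(GL_POINTS, 0, chunks[i].count);
        }
        glDisableClientState(GL_COLOR_ARRAY);
        glDisableClientState(GL_VERTEX_ARRAY);
    }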
Thank you for any insights.