I need to render a large number of point cloud points, maybe up to 300 million. My concern is blowing out the memory buffer on the graphics card.
My question is about how OpenGL actually performs the rendering. Does it have to store all of the points in memory before it can render them, or does it render them on the fly? If it renders on the fly, then the memory problem is trivial, since it would only need to store roughly as many points as there are screen pixels, and there would effectively be no limit on the number of points.
But if it has to store them all internally before rendering, then I could have a problem, especially since, as far as I can tell, there is no command along the lines of "How much memory is available?", so I wouldn't even have any idea what its limits are.