Sorry for the stupid question, first of all. I would like to visualize a “corner-point grid” model (Corner-point grid - Wikipedia) using OpenGL only, but it looks like it’s not as obvious as I thought.
A typical model looks like this:
So, am I right in thinking that OpenGL doesn’t provide “hidden surface removal” and I have to implement all that stuff on the CPU (I mean all those terrible words: BSP trees, z-buffers, painter’s algorithm, etc.) instead of using my GPU?
Or maybe someone can give me advice on how to visualize such models; that would be really cool.
No. OpenGL comes with built-in Z-buffer “hidden surface removal” support. To use it:
[li] Allocate a depth buffer (aka Z buffer) with your render target
[/li][li] Each frame, clear the depth buffer (i.e. glClear( GL_DEPTH_BUFFER_BIT ))
[/li][li] Enable depth testing: glEnable( GL_DEPTH_TEST ) and set the depth comparison function you want – by default it is glDepthFunc( GL_LESS ).
[/li][li] When you define your PROJECTION transform, set up your NEAR and FAR clip planes to sandwich the objects in your scene (e.g. gluPerspective or glFrustum for perspective projection, or glOrtho or gluOrtho2D for orthographic projection).
[/li][li] Now just draw stuff, and only the closest fragment at each pixel (or sample) will be kept.
[/li]
For higher quality, allocate a render target with multisampling (MSAA) or supersampling (SSAA) capability. This gives you multiple samples per pixel, which are downsampled into a nice antialiased result at the end.