Does anyone know of a method that guarantees that points/vertices will display in front of filled polys?
In my modelling software I’m trying to draw GL_POINTS at each of my model’s vertices, but no matter which method I use they always appear half behind the filled polys.
Here’s an example:
You can see the points at the back are sitting behind the object.
I’ve tried various ways to overcome this, most notably setting glPolygonOffset and enabling GL_POLYGON_OFFSET_POINT, GL_POLYGON_OFFSET_FILL, GL_POLYGON_OFFSET_LINE, etc., but none of them seem to work.
I can’t disable depth testing, as that would make my ‘back facing’ vertices show through as well.
Any clues how I can get round this?!
As for polygon offset not having an effect on the points: polygon offset only affects polygonal OpenGL primitives, not lines or points, and the _POINT, _LINE, and _FILL suffixes relate to the glPolygonMode setting, not to line or point primitives.
Mixing polygons with real line or point primitives will never work perfectly with polygon offset.
You could offset the shaded polygons farther away and leave the points where they are, or render the whole thing again with polygon mode GL_POINT and move those polygons to the front. But this doesn’t really solve the issue, because big points render their fragments with flat screen z-values, and the depth offset required would depend heavily on the polygon-to-viewer angle and the point size.
An occlusion query is what you’re looking for, but you would need a separate query for every point in your scene, and that could get ugly.
So, perhaps image based approach?
After rendering your scene, draw your points with size = 1 and a small polygon offset, using some unique color (or alpha value). Now copy the scene to a texture.
Disable depth testing and draw all the points again, this time larger, using a shader:
-the vertex shader uses ftransform and converts the point’s screen coordinates to texture coordinates
-the fragment shader samples the texture at those coordinates; if the unique color is found it draws the fragment, otherwise it discards it.
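A rough sketch of that shader pair in 1.20-era GLSL (to match the ftransform suggestion); sceneTex and markerColor are made-up names, and the 0.01 color tolerance is arbitrary:

```glsl
// --- vertex shader ---
varying vec2 centerUV; // the point's own position as a texture coordinate

void main()
{
    gl_Position = ftransform();                             // same transform as the scene pass
    centerUV = gl_Position.xy / gl_Position.w * 0.5 + 0.5;  // NDC -> [0,1]
}

// --- fragment shader ---
varying vec2 centerUV;
uniform sampler2D sceneTex;    // copy of the scene containing the 1-pixel test points
uniform vec3      markerColor; // the unique color used for the test points

void main()
{
    vec3 c = texture2D(sceneTex, centerUV).rgb;
    if (distance(c, markerColor) > 0.01)   // test point not visible at this vertex
        discard;                           // the vertex is occluded, draw nothing
    gl_FragColor = vec4(markerColor, 1.0); // draw the big point
}
```

Since GL_POINTS get a single vertex, the varying is constant across the whole point, so every fragment of the large point tests the same texel: the one where the small test point was (or wasn’t) visible.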
You could try an occlusion query if you want.
The basic idea is the same: draw small “test” points with depth testing enabled, then draw the large points with depth testing disabled for each point that passed the test.
Yeah, as Relic said - you should offset everything but lines and points.
Thanks for the suggestions. I’ve already tried offsetting the vertices so they’re slightly closer, and it does work, but only when you’re viewing from a set distance.
I think maybe the best way is simply to use a ‘fudge’ factor that changes the offset as the viewing distance changes, keeping everything in line. Any other method would simply cost too much processor time.