I’ve stumbled upon an issue while porting ancient OpenGL code. I used that code years ago to display 3D models with display lists and the fixed pipeline; the purpose was to show these models both in a shaded mode and as a “white” model with a wireframe overlay. This worked perfectly fine with the models I had back then.
Now I’ve ported most of the code to modern OpenGL versions (VBOs instead of display lists, shaders instead of the fixed pipeline). The respective functions, i.e. displaying the model with lighting and displaying a wireframe overlay, still basically work the same way (using polygon offset for the wireframe overlay). This time I’ve stumbled upon strange artifacts:
I’ve seen the same problem before when I wrote the original OpenGL 1.1 code; back then it was a depth buffer issue. So I tried increasing the depth buffer size again in the new version, only to realize it’s already 24 bit. My second thought was the model coordinate range: while models with coordinates ranging from -10 to 10 worked fine, the issues appeared on models with a range from -7000 to 7000. I divided the coordinates of the larger models by a value close to their maximum and it worked:
I’m not quite sure WHY this works or what caused the artifacts. My best guess would be something related to float precision, but I don’t know.
My question is: has anybody experienced similar problems? If so, what might cause these artifacts?