Wireframe flicker when using glPolygonMode

I’ve been getting this strange flickering when rendering my scene in wireframe mode. I don’t always get the flickering; sometimes objects just vanish in wireframe mode.

Video of the problem here: https://youtu.be/MXcX45gqVe8

In the video I’m just toggling enable_wireframe from false to true. Maybe a driver issue? Or an OpenGL setup issue? I am stumped! This is how I set the polygon mode:

glPolygonMode(GL_FRONT_AND_BACK, camera->enable_wireframe ? GL_LINE : GL_FILL);

Then I draw my scene like this:

glBindVertexArray(mesh->vao);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mesh->ebo);

glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT, (const void*)(mesh->id * sizeof(DrawBufferCommand)), 1, 0);
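
For reference, my DrawBufferCommand is intended to match the tightly packed command layout the spec defines for glMultiDrawElementsIndirect (sketch below, using the spec’s field names; the helper just mirrors my mesh->id * sizeof(DrawBufferCommand) offset math):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// One GL_DRAW_INDIRECT_BUFFER entry, as the GL spec lays it out
// (20 bytes, tightly packed when the MDI stride argument is 0).
struct DrawElementsIndirectCommand {
    uint32_t count;         // number of indices in the subdraw
    uint32_t instanceCount; // 1 for a plain, non-instanced draw
    uint32_t firstIndex;    // offset into the index buffer, in indices
    uint32_t baseVertex;    // added to each fetched index
    uint32_t baseInstance;  // starting instance ID
};
static_assert(sizeof(DrawElementsIndirectCommand) == 20,
              "commands must be tightly packed for stride == 0");

// Byte offset of mesh N's command in the indirect buffer,
// matching (mesh->id * sizeof(DrawBufferCommand)) in the draw call above.
size_t commandOffset(uint32_t meshId) {
    return meshId * sizeof(DrawElementsIndirectCommand);
}
```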

I can show more code if needed.

Hmm. Are you using different shaders that skip texture lookups etc., which could potentially be setting the fragment color+alpha to something transparent? If not, try that. Are you using an opaque render state, or are you leaving alpha test and blend enabled? Try disabling those.

Also, I would make sure you’re clearing the entire COLOR and DEPTH buffers of your FB/FBO (with no limiting state active, e.g. a scissor test) before rendering each frame.

I tried everything you said, but it made no difference. This is what I’m enabling at setup time. I got rid of everything except glViewport, and it was still happening.

glViewport(0, 0, width, height);
glEnable(GL_DEPTH_TEST);
glFrontFace(GL_CW);
glDepthFunc(GL_LESS);

glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

I tried this simple frag shader, still didn’t help.

#version 450
#extension GL_ARB_bindless_texture : enable

out vec4 FragColor;


void main() {
    FragColor = vec4(1.0f);
}

Could I be setting up the context wrong?

Ok. I confess to having seen the problem you describe, but I haven’t cared enough about it to go digging for the state change(s) needed to get rid of it. :slight_smile: So you’re blazing fresh ground!

For your wireframe rendering, you might also try:

glDisable( GL_BLEND );
glDisable( GL_ALPHA_TEST );
glDisable( GL_DEPTH_TEST );
glDisable( GL_CULL_FACE );
glDisable( GL_SAMPLE_ALPHA_TO_COVERAGE );
glDisable( GL_SAMPLE_ALPHA_TO_ONE );
glDisable( GL_MULTISAMPLE );

Also for testing purposes, I’d just totally kneecap any potential for MSAA or SSAA anything from happening here. That is, create 1X (single-sample) FB attachments. No MSAA textures or renderbuffers.

It’ll be interesting to see what you find is needed to nuke the translucency and obtain opaque outlines for your draw-call meshes. If you find something that works for you, please follow up with that here. I’m interested!

This may go without saying, but you’re not using the fixed-function pipeline for shading anything in your frames, are you? That’s a case where your test frag shader may be completely academic (…or at least, not used for everything).

If you are, things like glDisable( GL_LIGHTING ) come into play, along with your active material, ColorMaterial mode, vertex colors, etc. All the old “fun stuff” :wink:

Thank you, I did try all that. I did this to ensure everything was being disabled at the correct time before drawing.

glPolygonMode(GL_FRONT_AND_BACK, camera->enable_wireframe ? GL_LINE : GL_FILL);
if (camera->enable_wireframe) {
    glDisable( GL_BLEND );
    //glDisable( GL_ALPHA_TEST );  // I'm using core 4.5; this threw an invalid-enum error
    glDisable( GL_DEPTH_TEST );
    glDisable( GL_CULL_FACE );
    glDisable( GL_SAMPLE_ALPHA_TO_COVERAGE );
    glDisable( GL_SAMPLE_ALPHA_TO_ONE );
    glDisable( GL_MULTISAMPLE );
}

I tried a few other things from here that seemed like they might be relevant, but with no success.

This may go without saying, but you’re not using the fixed-function pipeline for shading anything in your frames, are you?

Nah, I’m not doing anything like that. :slight_smile:

Also for testing purposes, I’d just totally kneecap any potential for MSAA or SSAA anything from happening here. That is, create 1X (single-sample) FB attachments. No MSAA textures or renderbuffers.

Yeah, my FB is just single-sample at the moment. I’m scratching my head as to what to try next. It’s odd that I have no issues using GL_FILL; I would have thought that if there were framebuffer and/or other buffer issues, they would show up in both polygon modes.

Yes, that is interesting.

I thought I knew what you were seeing when you said “sometimes objects just vanish in wireframe mode”. But I just played your video, and those popping artifacts certainly suggest that (in wireframe mode):

  • Different content is being rendered in some frames vs. others,
  • The content is being rendered with different GL state active, and/or
  • Maybe there’s a driver bug at work here (as you said) … possibly related to wireframe + MDI.

If you’re not ready to shelve this yet, a few questions:

  • I see you’re using glMultiDrawElementsIndirect() (MDI). Do you see this wireframe strangeness if you switch your dispatch to basic glDrawElements()?
  • I see that your MDI draw call has drawCount == 1. Is the single subdraw an instanced draw call? If so, do you get better wireframe rendering with glDrawElementsInstanced()?
  • Which GPU and driver is this on?
  • If not NVIDIA, can you try it on an NVIDIA GPU with NVIDIA GL drivers?
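
To make that first experiment concrete: firstIndex in the indirect command counts indices, while glDrawElements() takes a byte offset into the bound element array buffer, so the conversion (for GL_UNSIGNED_INT indices, and assuming your subdraw has instanceCount == 1 and zero baseVertex/baseInstance) is just:

```cpp
#include <cassert>
#include <cstdint>

// glMultiDrawElementsIndirect's 'firstIndex' field is in units of indices;
// glDrawElements wants a byte offset into the element array buffer.
uint64_t indexByteOffset(uint32_t firstIndex) {
    return uint64_t(firstIndex) * sizeof(uint32_t); // GL_UNSIGNED_INT indices
}

// With a CPU-side copy of the command, the equivalent plain draw would be:
//   glDrawElements(GL_TRIANGLES, cmd.count, GL_UNSIGNED_INT,
//                  (const void*)indexByteOffset(cmd.firstIndex));
```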

Thinking about the “cube popping” with wireframe that you’re seeing, plus the fact that you are dispatching using MDI, … I wonder whether, in wireframe, your driver is failing to dispatch all of the instances in the single subdraw you’re passing to glMultiDrawElementsIndirect().

Another possibility that occurs to me: could your shader not be correctly indexing into, looking up, and applying the per-instance uniform state (e.g. the MODELING transform) for each instance? Maybe sometimes it computes the wrong offset and renders a bunch of cubes on top of each other. Or maybe sometimes the buffer object containing those uniforms isn’t bound properly … or is bound with the wrong starting offset.

You can debug problems like this by capturing a buggy frame using a tool like Nsight Graphics and inspecting the details.

If you’re running AMD drivers…

Admittedly, this is a post from 10 years ago. But it sounds somewhat like what you’re seeing:

I’m using an NVIDIA GTX 1050 in my laptop. I thought I might have had a corrupt driver, as my last update didn’t work. But having cleaned my C drive and updated to the latest driver (576.40), the issue is still there.

I’ve got a 980 Ti in my desktop machine so I’m going to try it there very soon.

All the cubes are unique geometry (no instancing). If I use instancing there is no apparent issue, I think because it’s all in one draw call. But the issue will be there in other geometry I add. It seems the more unique meshes I have, the more evident the problem.

  • I see that your MDI draw call has drawCount == 1. Is the single subdraw an instanced draw call? If so, do you get better wireframe rendering with glDrawElementsInstanced()?

drawCount is 1 (for now) as I’m drawing each object separately for the time being. I’m not positive I’m using it right… but I’m going to try glDrawElements() and see what I get.

I did think about that. But I don’t see how the indexing could be different between the GL_FILL and GL_LINE states. The shaders do work when using GL_FILL, and sometimes (not sure if the video captured it or not) the cubes will flicker a lot, then suddenly all pop in and look perfect. Then there’s the occasional flicker, and suddenly they’re all gone again, flickering on and off. Their transforms do look correct to me.

Going to try this now. I’ll report back when I have some more info! Thanks for your help.

Ok. So each cube = one glMultiDrawElementsIndirect() call. I guess then it just boils down to the contents of the GL_DRAW_INDIRECT_BUFFER at the time of each draw call. And I’m at a loss for why that would be different between wireframe Y/N passes.

Strange.

Sure thing! Interested to see what you find.

I finally got my engine working on my desktop with a GTX 980 Ti. I’m happy to say that the issue is not present on that machine! Now, does that mean it is definitely a driver issue with the GTX 1050 in my laptop, or do you think other things could be at play?

I tend to think that if it is an issue with my buffers / OpenGL setup that it would be present on both machines. But I could be wrong…

Interestingly, simply launching the project with Nsight Graphics makes the problem go away.

Nsight Graphics made it go away because it was forcing the app to actually use my GTX 1050. By default, Windows was making my app use the Intel integrated graphics, so there is a bug in Intel’s drivers with wireframe rendering. I should probably report it… one day.
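
For anyone who finds this later: one way to nudge hybrid-graphics laptops toward the discrete GPU is to export the well-known NvOptimusEnablement / AmdPowerXpressRequestHighPerformance globals from the executable. A minimal sketch (the #ifdef just keeps it compiling off Windows; the hint itself only matters there):

```cpp
// Exporting these globals from the .exe asks the NVIDIA Optimus and
// AMD PowerXpress drivers to run the app on the discrete GPU instead
// of the integrated one.
#ifdef _WIN32
#define GPU_EXPORT __declspec(dllexport)
#else
#define GPU_EXPORT  // no-op elsewhere; the hint is Windows-specific
#endif

extern "C" {
GPU_EXPORT unsigned long NvOptimusEnablement = 0x00000001;
GPU_EXPORT int AmdPowerXpressRequestHighPerformance = 1;
}
```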

Problem solved here.

Thanks for your help, @Dark_Photon! :grin: