Programmatically controlling level of AA

I am using an NVIDIA graphics card, and I can see that in the driver control panel I can set the level of antialiasing to anything up to 32x. Can this be controlled from my program itself? That is, is there a way to programmatically set this level of antialiasing?

I’m no expert here, but yes, for the default framebuffer (window) there are a couple of ways I know of. You can read about all but one of these in NVidia’s driver README – search for GL_FSAA_MODE.

One is (in Linux):

putenv( "__GL_FSAA_MODE=##" );

before you init the GL context (either set it in the app or in the application’s environment before you run it), where ## is an AA mode given by "nvidia-settings --query=fsaa --verbose". In combination, you may or may not also need to allocate your window in a multisample visual/FBConfig and enable multisample rasterization (GL_MULTISAMPLE).
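For example, a minimal sketch (the mode number 5 is just a placeholder; use whatever "nvidia-settings --query=fsaa --verbose" reports for your card):

#include <cstdlib>

// Must be set before the GL context is created so the NVIDIA driver picks it up.
// The mode number here is only a placeholder.
setenv( "__GL_FSAA_MODE", "5", 1 );   // or: putenv() with the same string

// ... then create your X window / GLX context and init GL as usual ...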

Or, you can also just call nvidia-settings to force a specific AA mode:

nvidia-settings --assign FSAA=##

The NVidia README says this is equivalent to setting the env var. Be aware that the available FSAA modes vary per card.

You can also fall back and specify the level via the FBConfig, but this doesn’t afford you as much flexibility. For instance, I don’t think you can get to the CSAA/SSAA modes this way, and I’m unclear whether GLX_SAMPLES in the FBConfig was ever extended to support more than 4 (you don’t see more than 4 in glxinfo output anyway, even though MSAA modes > 4 samples/pixel are available; up to 32x MSAA on the latest HW).

There’s also a way to get at these FSAA modes via the NV-CONTROL X extension (google NV_CTRL_FSAA_MODE for source code), but I don’t know much about it other than NVidia offers a free libXNVCtrl library to make using this extension easy.

Now all that is for the default framebuffer (window) AA. It’s specified differently for FBOs. See ARB_framebuffer_multisample, NV_framebuffer_multisample_coverage, and ARB_texture_multisample for details there. For instance, with the latter you can use MAX_COLOR_TEXTURE_SAMPLES and MAX_DEPTH_TEXTURE_SAMPLES to query how high an MSAA level you can allocate on a particular GPU (e.g. 16x on a GTX285, 32x on a GTX480), and then target that with an FBO. You declare how many samples you want when you allocate the texture via TexImage[23]DMultisample. Those let you get at MSAA and CSAA with an FBO. And I believe there’s an OpenGL extension that’ll let you run the frag shader at sample rate instead of pixel rate (ARB_sample_shading), which I think opens up SSAA.
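For instance, here’s a rough sketch of the texture route (assumes a context exposing ARB_framebuffer_object + ARB_texture_multisample; width, height, and the sample count are placeholders):

GLint maxSamples = 0;
glGetIntegerv( GL_MAX_COLOR_TEXTURE_SAMPLES, &maxSamples );   // e.g. 16 or 32
GLsizei samples = ( maxSamples < 8 ) ? maxSamples : 8;        // clamp the request to what the GPU offers

// Allocate an MSAA color texture with that many samples
GLuint msTex = 0, fbo = 0;
glGenTextures( 1, &msTex );
glBindTexture( GL_TEXTURE_2D_MULTISAMPLE, msTex );
glTexImage2DMultisample( GL_TEXTURE_2D_MULTISAMPLE, samples, GL_RGBA8,
                         width, height, GL_FALSE );

// Attach it to an FBO (do the same for a multisample depth attachment if you need one)
glGenFramebuffers( 1, &fbo );
glBindFramebuffer( GL_FRAMEBUFFER, fbo );
glFramebufferTexture2D( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                        GL_TEXTURE_2D_MULTISAMPLE, msTex, 0 );

if ( glCheckFramebufferStatus( GL_FRAMEBUFFER ) != GL_FRAMEBUFFER_COMPLETE )
    ; // retry with a lower sample count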

Hope that made some sense. I too wish this were more canonical, but AA setup is one of those areas where vendors have been able to vary and differentiate themselves.

Thanks for the comprehensive answer!

What I am hoping for is some platform- and GPU-vendor-independent way of doing this. From what you have said, it appears that using FBOs would be the only way of achieving this. Is that correct?

That is, is there a way to programmatically set this level of antialiasing?

What exactly do you mean by this? Are you trying to actually set that value in NVIDIA’s control panel application, or do you simply want to set the antialiasing for your program?

If it’s the latter, you have two options. You can use WGL/GLX_ARB_multisample when picking your pixel format. Or you can use a multisample FBO.

The FBO method is the only way to ensure that you get exactly what you want, because that control panel can override your pixel format preferences.
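Roughly, at the end of the frame you resolve the multisample FBO into the window with a blit; a minimal sketch (fbo, width, and height come from whatever setup you used):

// Draw the scene into the multisample FBO
glBindFramebuffer( GL_FRAMEBUFFER, fbo );
glViewport( 0, 0, width, height );
// ... render ...

// Resolve (downsample) into the default framebuffer, i.e. the window
glBindFramebuffer( GL_READ_FRAMEBUFFER, fbo );
glBindFramebuffer( GL_DRAW_FRAMEBUFFER, 0 );
glBlitFramebuffer( 0, 0, width, height, 0, 0, width, height,
                   GL_COLOR_BUFFER_BIT, GL_NEAREST );
// then SwapBuffers()/glXSwapBuffers() as usual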

It’s the latter. I am trying to achieve a certain level of antialiasing in my program without having to rely on the end user setting it in the NVIDIA Control Panel (or the equivalent for other GPU manufacturers).

OK, so with multisample FBOs, I would basically render everything to the FBO and then blast it onto the screen as required, correct?

There are several ways to achieve this.

The easiest way is to choose a pixel format that supports the AA level you prefer when creating your GL rendering context. Use the attribute WGL_SAMPLES_ARB to set the AA factor, for example 32 for 32x AA, and pass the attributes to the appropriate function, wglChoosePixelFormatARB() on Windows. You don’t need an FBO for this. But if the function does not succeed, try again with a lower AA factor.
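For example, a sketch only (it assumes wglChoosePixelFormatARB was already fetched with wglGetProcAddress through a temporary context, and that hdc is your window’s device context):

const int attribs[] = {
    WGL_DRAW_TO_WINDOW_ARB, GL_TRUE,
    WGL_SUPPORT_OPENGL_ARB, GL_TRUE,
    WGL_DOUBLE_BUFFER_ARB,  GL_TRUE,
    WGL_PIXEL_TYPE_ARB,     WGL_TYPE_RGBA_ARB,
    WGL_COLOR_BITS_ARB,     32,
    WGL_DEPTH_BITS_ARB,     24,
    WGL_SAMPLE_BUFFERS_ARB, 1,
    WGL_SAMPLES_ARB,        32,   // ask for 32x; retry with 16, 8, ... if it fails
    0
};

int format = 0;
UINT numFormats = 0;
if ( !wglChoosePixelFormatARB( hdc, attribs, NULL, 1, &format, &numFormats ) || numFormats == 0 )
{
    // no 32x format available -- lower WGL_SAMPLES_ARB and try again
}
// SetPixelFormat( hdc, format, &pfd ); then create the context as usual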

The other way is specific to NVIDIA and involves using NVAPI to set whatever you want. You can create profiles and do everything you can do in the NV Control Panel (and even more). Be aware that some settings are not safe if they are accessed simultaneously from several applications.

But WGL is Windows specific I believe. I think I will use FBOs because I need a platform independent solution.

Yes, I think so.

For window AA config, the most cross-vendor method is to use the canonical “select format” routine in your GL windowing API (GLX on UNIX, WGL on Windows, AGL on Apple, etc.) – e.g. glXChooseFBConfig, wglChoosePixelFormatARB, etc. These are cross-vendor but GL windowing API specific. (This is one area where the future “Desktop EGL” can help us out.)
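E.g., a GLX sketch (dpy and screen are assumed to be your X display and screen; the sample count is a placeholder):

const int attribs[] = {
    GLX_X_RENDERABLE,   True,
    GLX_DRAWABLE_TYPE,  GLX_WINDOW_BIT,
    GLX_RENDER_TYPE,    GLX_RGBA_BIT,
    GLX_DOUBLEBUFFER,   True,
    GLX_RED_SIZE, 8, GLX_GREEN_SIZE, 8, GLX_BLUE_SIZE, 8,
    GLX_DEPTH_SIZE, 24,
    GLX_SAMPLE_BUFFERS, 1,
    GLX_SAMPLES,        8,   // requested sample count; lower it if nothing matches
    None
};

int count = 0;
GLXFBConfig *configs = glXChooseFBConfig( dpy, screen, attribs, &count );
if ( configs && count > 0 )
{
    GLXFBConfig chosen = configs[0];   // or sort the list by GLX_SAMPLES
    // ... create the window and context from `chosen` ...
}
XFree( configs );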

Whereas the FBO interface is cross-vendor and GL windowing API independent.

But again, like the FBO interface, I don’t think you can get to all vendor-offered modes using the “select format” GL windowing API technique.

But WGL is Windows specific I believe.

Yes, but this is OpenGL initialization code, which is always platform specific. You’re either using (directly or indirectly) WGL or GLX to set up your window. Just modify this code so that you can use the appropriate multisample extension.

The downside of using a pure FBO solution is that you prevent the driver from allowing the user to set a particular antialiasing outside of your specific application’s settings. That is, they can’t set a single global setting for antialiasing that affects all programs.

Of course, the upside of using a pure FBO solution is that you prevent the driver from allowing the user to set a particular antialiasing outside of your specific application’s settings. It all depends on how convenient you want your application to be.

Well, that and whether you’re doing something in your application that requires the FBO version.