Core Profile Performance Advantage

Is there a performance advantage from using a core OGL profile, i.e. setting WGL_CONTEXT_CORE_PROFILE_BIT_ARB?
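For reference, this is roughly how that bit gets set at context creation — a minimal sketch using the WGL_ARB_create_context extension, assuming a dummy context is already current so `wglGetProcAddress` works; error handling is trimmed for brevity:

```c
/* Sketch: requesting a GL 3.2 core-profile context on Windows.
 * Assumes a (dummy) context is current so wglGetProcAddress works. */
#include <windows.h>
#include <GL/gl.h>
#include "wglext.h"   /* WGL_CONTEXT_* tokens and the function typedef */

HGLRC create_core_context(HDC hdc, HGLRC shareCtx)
{
    PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB =
        (PFNWGLCREATECONTEXTATTRIBSARBPROC)
            wglGetProcAddress("wglCreateContextAttribsARB");
    if (!wglCreateContextAttribsARB)
        return NULL;  /* WGL_ARB_create_context not supported */

    const int attribs[] = {
        WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
        WGL_CONTEXT_MINOR_VERSION_ARB, 2,
        WGL_CONTEXT_PROFILE_MASK_ARB,  WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
        0  /* attribute list terminator */
    };
    return wglCreateContextAttribsARB(hdc, shareCtx, attribs);
}
```

Swapping the profile bit for `WGL_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB` (or omitting the mask entirely) is the comparison point for any benchmark.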

None currently, according to NVIDIA; I've never heard of any measurable difference either.
Feel free to benchmark and post results :slight_smile:

There is one thing you gain from using a core profile, though it isn't performance: compatibility with other platforms (i.e. non-desktop ones) that do not have GL on them yet. The EGL specification recommends that implementors expose a GL 3.2 core profile when bringing GL (not GLES1/2) to a new platform.
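On such a platform you'd request the core profile through EGL rather than WGL. A sketch, assuming the EGL_KHR_create_context tokens are available (they were folded into core EGL 1.5); display and config setup are omitted:

```c
/* Sketch: requesting a desktop-GL 3.2 core context through EGL,
 * using EGL_KHR_create_context attribute tokens. */
#include <EGL/egl.h>
#include <EGL/eglext.h>

EGLContext create_gl32_core(EGLDisplay dpy, EGLConfig cfg)
{
    /* Bind desktop OpenGL, not OpenGL ES. */
    eglBindAPI(EGL_OPENGL_API);

    const EGLint attribs[] = {
        EGL_CONTEXT_MAJOR_VERSION_KHR, 3,
        EGL_CONTEXT_MINOR_VERSION_KHR, 2,
        EGL_CONTEXT_OPENGL_PROFILE_MASK_KHR,
        EGL_CONTEXT_OPENGL_CORE_PROFILE_BIT_KHR,
        EGL_NONE  /* attribute list terminator */
    };
    return eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, attribs);
}
```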

As of now, there are no platforms aside from the desktop that do GL3.x, but at the beginning of the year Imagination Technologies announced that they (or rather, someone working for them) were creating GL 3.2 drivers for their (I think) PowerVR 545 GPU.

Actually, at least for NVIDIA, there kind of is a difference. They suggest using the compatibility profile: they say that when you use a core profile, the driver performs extra checks on each call to verify that it is a core-profile function.

In the end, even though they say that, I don't think it would cause any real performance issues, and as far as I know nobody has ever measured the extent of it.

A couple of previous posters have pointed out that NVIDIA says a core profile might be slower. Here's a link for anyone who wants to check it themselves (see p. 97).

Edit: Does anybody know if AMD has said anything about this?

Thanks Foobar. That's a helpful link. I know that Mark Kilgard has been critical of deprecating features.