Attempting to obtain an OpenGL context <= 3.0

I understand it’s heavily discouraged to use an older OpenGL and that I should target the latest OpenGL. My machine is currently capable of OpenGL 4.6, and I own an RTX card.

But this is for my personal enjoyment and research.

What I’m attempting to do is see whether it’s possible, on an NVIDIA RTX card with the latest drivers, to use an older OpenGL version. So far, the lowest I can manage without third-party libraries is initializing OpenGL 3.1 on the RTX card.

With the library, I can go down to OpenGL 1.1, but I can’t go down to OpenGL 1.0. But somehow, I cannot get the library to load OpenGL 1.2 ~ OpenGL 3.0. Either my initialization code is very rusty when it comes to legacy OpenGL, or RTX cards no longer support those older versions.

I am working on Windows 10 1809, because I believed that Windows’ backwards compatibility would let me load an older OpenGL context.

I’m not sure what else I need to get a context that is greater than OpenGL 1.1 and at most OpenGL 3.0.

I can’t go down to OpenGL 1.0.

GL 1.0 is not really a thing that exists at this point. Yes, there is in fact a specification for it, but Windows always provides/requires at least 1.1. So there’s really no point.

But somehow, I cannot get the library to load OpenGL 1.2 ~ OpenGL 3.0.

It would help if you tell us what that library was. It would also help if you explained how you’re initializing OpenGL, what kind of profile you’re asking for, etc.

The library I’m using is the one written by Microsoft, which only loads OpenGL 1.1.

From what I can tell, the main parts of the “so-called library” use the GLU header. Both the GL and GLU header files are provided in the Windows SDK. Technically, this isn’t a library.

As for me, I don’t use the library. Instead I used sample code found online here:

And mixed it with GLFW’s source code, starting with this function: (only the first half of glfwInitWGL() is used)

I then combined both of these sources into my own custom code.

And this is where I am. My custom code and the code from the GitHub Gist were both tested to figure out which OpenGL versions I can use: 1.1, 3.1, and greater than 3.1.

As for the profile: I’m thinking that the Core Profile is strictly for that one specific OpenGL version only, right? And the Compatibility Profile refers to forward compatibility starting from a specific OpenGL version and up, right?

So, I’m currently looking for Core Profiles for OpenGL 1.2 up to 3.0, and Compatibility Profiles for OpenGL 3.1 and up. The Compatibility Profile is easily achieved, so I have no worries or concerns about OpenGL 3.1 and up.

Well, yes. If you want to actually use later versions of OpenGL on Windows, you have to do extra work to access them. If you’re not loading those function pointers yourself, and you’re not using a loading library to load them for you, I’m curious as to how you got your 3.1+ versions working.

No, that’s not how it works.

I mentioned earlier how I used a GitHub Gist source and a small part of GLFW’s source code to create my custom source code. Both of those sources use the mechanism of loading function pointers, but neither is truly a “library” like GLEW, GLFW, or GLUT. When I said “library”, I was referring to one of those three.

My code still loads the pointers; I’m just not using libraries like GLEW, GLFW, and GLUT.

But if the “library” refers to opengl32.lib and glu32.lib, then yes, those are certainly used.

Right now, I’m trying to figure out how to get access to OpenGL 3.0 and below. I’m sure these are all just Core profiles, since “profiles” were only introduced in OpenGL 3.2 and up.

No, that’s not how it works.

Oh, so “Core” and “Compatibility” can both refer to the OpenGL Core context and profile, and the OpenGL Compatibility context and profile, right? It’s mentioned in the wiki:

You can’t force the driver to provide a specific version. Unless you use relatively-recent functions to create a context using 3.1 or 3.2+ core profile, modern hardware will typically give you 4.x compatibility profile, as that is compatible with every version from 1.0 to 3.0 and with the compatibility profile of every version since 3.2. 3.1 is an oddity as it removed the “deprecated” functionality altogether, a decision which was effectively reverted with the introduction of profiles in 3.2.

If you have code written for OpenGL 1.x or 2.x, the only thing which shouldn’t “work” with 4.6 compatibility profile is that operations which would generate an error in earlier versions may succeed in more recent versions.
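For what it’s worth, the version-and-profile request described above is made through the attribute list passed to wglCreateContextAttribsARB. Here is a minimal sketch of building that list; the constant values are copied from the WGL_ARB_create_context and WGL_ARB_create_context_profile extension specs so the snippet stands alone without windows.h, and build_context_attribs is a helper name of my own, not a real API:

```c
/* Attribute names/values from the WGL_ARB_create_context(_profile)
 * extension specifications, defined here so the sketch needs no windows.h. */
#define WGL_CONTEXT_MAJOR_VERSION_ARB             0x2091
#define WGL_CONTEXT_MINOR_VERSION_ARB             0x2092
#define WGL_CONTEXT_PROFILE_MASK_ARB              0x9126
#define WGL_CONTEXT_CORE_PROFILE_BIT_ARB          0x00000001
#define WGL_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB 0x00000002

/* Fill the zero-terminated attribute list that would be passed as the
 * last argument of wglCreateContextAttribsARB(hDC, NULL, attribs). */
static void build_context_attribs(int attribs[7], int major, int minor,
                                  int profile_bit)
{
    attribs[0] = WGL_CONTEXT_MAJOR_VERSION_ARB;
    attribs[1] = major;
    attribs[2] = WGL_CONTEXT_MINOR_VERSION_ARB;
    attribs[3] = minor;
    attribs[4] = WGL_CONTEXT_PROFILE_MASK_ARB;
    attribs[5] = profile_bit;
    attribs[6] = 0;  /* terminator */
}
```

Note that, per WGL_ARB_create_context_profile, the profile mask only has an effect when the requested version is 3.2 or higher; for lower requests it is ignored, and the driver behaves as described above.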

[QUOTE=GClements;398249]3.1 is an oddity as it removed the “deprecated” functionality altogether, a decision which was effectively reverted with the introduction of profiles in 3.2.[/QUOTE]

I see.

I am pretty sure I didn’t use any relatively-recent functions. If I recall, I’m still using wglCreateContextAttribsARB to create an OpenGL 3.1 context; I just supply it with a major version of 3 and a minor version of 1. It’s only when I supply versions 3 and 0, or 2 and 1, that I get an OpenGL 4.6 context.
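To double-check which context the driver actually handed back, I can parse the string returned by glGetString(GL_VERSION) after making the context current. A sketch of just the parsing step (parse_gl_version is my own helper name, and the version strings in the comments are made-up examples, not output from my machine):

```c
#include <stdio.h>

/* Extract major.minor from a GL_VERSION string, which starts with
 * "<major>.<minor>" optionally followed by a release number and
 * vendor text, e.g. "4.6.0 NVIDIA ..." or "3.1.0".
 * Returns 1 on success, 0 if the string doesn't start with a version. */
static int parse_gl_version(const char *version, int *major, int *minor)
{
    return sscanf(version, "%d.%d", major, minor) == 2;
}
```

In the real program, after wglMakeCurrent succeeds, this would be called as parse_gl_version((const char *)glGetString(GL_VERSION), &maj, &min).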

Thanks. Now I need to figure out what operations would definitely generate an error in earlier versions of OpenGL. If you know of any, please let me know.

That’s within my definition of “relatively recent”. The “not recent” function is wglCreateContext(), which doesn’t give you any control over the version or profile. So that gives you something which is compatible with OpenGL 1.1, which usually means the compatibility profile of the highest supported version.

That will give you either 3.1 or 3.2+ core profile. As mentioned previously, 3.1 was an exception as it removed the deprecated functionality entirely.

Requesting any version prior to 3.1 will normally give you the compatibility profile of the highest supported version. If you ignore 3.1 and 3.2+ core profile, later versions only ever added features, they never removed anything (well, unless you count errors caused by violating restrictions which were relaxed or removed in later versions).

Creating a texture whose sizes (ignoring any border) aren’t powers of two used to generate an error (GL_INVALID_VALUE) prior to the introduction of the ARB_texture_non_power_of_two extension (core since 2.0). That extension is somewhat unusual in that it doesn’t add any new functions or enumerants; it just removes a restriction on existing functions.
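To make that concrete, here is a sketch of the power-of-two check that pre-2.0 code effectively had to satisfy before calling glTexImage2D (both helper names are mine, not part of any API):

```c
/* A dimension is legal for a pre-NPOT texture if it is a power of two.
 * (The optional 1-pixel border is assumed already excluded from n.) */
static int is_power_of_two(int n)
{
    return n > 0 && (n & (n - 1)) == 0;
}

/* Before ARB_texture_non_power_of_two (core in GL 2.0), dimensions like
 * 640x480 would make glTexImage2D generate GL_INVALID_VALUE. */
static int npot_texture_would_error(int width, int height)
{
    return !(is_power_of_two(width) && is_power_of_two(height));
}
```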

On the subject of extensions: in the same way that you can’t force an OpenGL version lower than that which the driver wants to provide, you can’t disable extensions either.

In short, if you want to check whether your code will work on an obsolete system, you have to actually obtain such a system to test it on.

Thank you for all of these replies. I guess that’s enough research for me.