Ugly Core Profile Creation

AFAIK drivers only have to support core; the compatibility profile is optional.

I think the whole point of the core profile wasn’t to make it easier for OpenGL users but for OpenGL implementors. As many already pointed out, you can use all the core functions in a compatibility context too. If you don’t use e.g. stipple patterns in your application, the fact that there exist stipple functions in a compatibility context doesn’t mean that you suddenly have to change your code to make it work.

Performance-wise I wouldn’t expect a major difference between core and compatibility profiles even if they were implemented as separate drivers. After all, memory bandwidth, shader execution speed etc. will be the same. Granted, a core-only driver may have some opportunities to optimize object creation and the like, but those costs are usually one-time costs that (for most applications) don’t affect the rendering loop at all.

The real benefit of the core profile is to make it easier to implement a GL3+ driver, without having to care for all the legacy quirks in OpenGL. That also explains why those who already invested into developing drivers which can support all the legacy code have no interest in pushing users to the core profile.

That’s correct. On Mac OS X, there is no compatibility profile at all.

I doubt that anyone implements it separately in their drivers. There was a thread reporting that you actually lose performance when using the core profile on NVIDIA.

I think the whole point of the core profile wasn’t to make it easier for OpenGL users but for OpenGL implementors.

Very true and accurate.

In my opinion, neither driver implementers nor even users should be involved in the specification. It should only be designed and maintained by academia - computer graphics professors and some well-recognized names in the industry.

I guess menzel needs to get another round …

In my opinion, neither driver implementers nor even users should be involved in the specification. It should only be designed and maintained by academia - computer graphics professors and some well-recognized names in the industry.

If you’d let the professors I got to know specify OpenGL, then it’s goodbye OpenGL. Leaving the implementors out of the picture is simply not a good idea. At all. If a professor has never written software that surpasses the simple examples used in lectures - and I bet there’s a lot of those out there - then how is he supposed to anticipate whether a specification can actually be implemented? IMHO, most people in academia (unless they have experience as GPU driver developers) simply don’t have the experience you need.

I’m not saying that there aren’t capable people in academia, but specifications on that scale need to be done by people who have experience with software engineering on that scale. Writing a little shadow-mapping demo or something to show stupid students how it’s done in principle isn’t going to cut it. I can say without a doubt that at my university most of us had surpassed the OpenGL facilities of our professors before our studies were over - I suspect this is true in many cases, if not in most. Does that make us fit for specifying or implementing OpenGL? Hell no.

You know which people bear well-recognized names in the industry? The promoting members of the ARB …

What does knowledge of GPU details have to do with API design? You are mixing up two different things: drivers and APIs. When we talk about API design we talk at a higher level of abstraction, one that should serve a certain purpose regardless of hardware details… and remember, not every piece of hardware works the same way. An API is exposed functionality, as distinct from how it works internally. Of course there are other aspects, like what’s possible on current hardware, but those are too general to be limited to driver developers alone.
And remember, we base new designs on existing designs; otherwise how could we have a “future suggestions” forum where non-driver-developers suggest many useful and doable features (based on what we already know from another API on the same hardware)? The point is, let’s have academia - researchers, PhDs with a lot of experience in software engineering - come up with a nice, flexible design, and I’m not talking about your “professors” or mine. There are scientists in the field who can come up with the best designs ever! Rule of thumb: never let a hardware engineer do the software engineer’s job, though the opposite is possible.

What does knowledge of GPU details have to do with API design?

Performance. Just look at OpenGL 1.1 as an example.

How many features of 1.1 were never implemented in consumer hardware (at least, until they could be implemented via shaders internally)? You’ve got accumulation buffers and selection buffers, at the very least; any attempt to use these features was basically the kiss of death as far as performance was concerned. In the early days, falling off the “fast path” was ridiculously easy in OpenGL.

Indeed, it wasn’t until 1999 that OpenGL implementations could do T&L in hardware; until then, all that T&L was done on the CPU - which skilled programmers could probably have done just as well themselves, if not faster. From OpenGL’s initial release until the GeForce 256, vertex T&L was a liability to performance, not a benefit.

Look at how tortured a bad API design can get. The glTexEnv nonsense that was extended and extended until the combiner level where it was just a really horrible way to specify an assembly language for fragment shading. At one point in time, glTexEnv was a good idea. But it wasn’t very extensible and eventually led to horrible things.

Any hardware-based API needs input from the hardware makers themselves. You always want the lowest level API to be as close to the hardware as possible while still providing a reasonable abstraction. The people who best know how to make an abstraction are the people who make the hardware. Or at least have detailed knowledge of it.

All very nice in theory but the problem is this:

OpenGL has been there before and it didn’t work.

Separation of OpenGL specification from how hardware actually worked directly led to the API becoming more and more irrelevant and baroque, and was also a huge contributing factor to the rash of GL_VENDOR1_do_it_this_way, GL_VENDOR2_do_it_that_way and GL_VENDOR3_do_it_t’other_way extensions to meet the requirements of people who were actually developing and releasing programs that used OpenGL. At the same time as that was going on, D3D was tying itself closer and closer to how hardware worked and having OpenGL’s ass on a plate as a result.

What you’re proposing would be a return to the days when drivers would advertise GL_ARB_texture_non_power_of_two but not actually support it in hardware. Yes, this really happened, and you had no way of knowing until you hit that sudden crunch back down to under 1 fps. You were obviously not around back then, but I can safely assure you - nobody who was wants to go back there.

So: it didn’t work before. The D3D approach is what has already been established in the field as the one that actually works (and if you think GL core context creation is ugly, you should see what D3D7 and earlier were like…). A lot of good work has been done in the past few years to put OpenGL back into a position where it can at least try to be competitive again. And you completely fail to provide any compelling reason why an approach that didn’t work before should be any different now, or why all that good work should be undone. Given all that, it has to be said that your position on this lacks any substance.

And who says that the hardware designers themselves do the driver implementation? Do you think NVIDIA, AMD, Intel and so on only employ hardware specialists? You need both parties.

There are scientists in the field who can come up with the best designs ever!

I seriously doubt that this is true. The best designs ever come from people who have years or decades of experience designing and implementing industrial strength software - not out of the ivory tower …

I guess menzel needs to get another round …

I have to stock up on popcorn…

A question from a novice who knows nothing and wants to learn from experts.

If I’m linking at run time to OpenGL32.DLL using LoadLibrary, how can I use wglGetProcAddress, or even load other core profile functions or extensions?

First of all, why pull in the lib at runtime? Second, check this.

Got it! So:
hdll = LoadLibrary(“opengl32.dll”);
wglGetProcAddress = GetProcAddress(hdll, “wglGetProcAddress”);
then use wglGetProcAddress…

Customized error reporting in case OpenGL is not installed.

Or you can simply link to the export lib and use wglGetProcAddress directly.

You don’t install OpenGL; OpenGL is not software, and opengl32.dll will always be present (unless you really must run on pre-OSR2 Win95 or pre-NT 3.5). The real question is: does the user have an OpenGL driver for their hardware? If a driver is present then opengl32.dll just acts as a proxy for the vendor’s implementation, so your options are to (1) check your pixel format flags, (2) check your GL_VENDOR string, or (3) check for an absolutely ubiquitous extension that will be present on all civilized hardware released within the past 15-odd years - something like GL_ARB_multitexture should suffice.

Of course, if you want to run on hardware older than that too, you’re probably going a little too far, and will have to cope with the joyous world of minidrivers (and don’t forget that some of these were game-specific and may not even have been called “opengl32.dll”), missing blend modes, software fallbacks left, right and center, and other weird behaviours of the time. Since that seems to be the OpenGL world you want to go back to, you may well welcome it! I wouldn’t. But that’s the only scenario in which OpenGL may be “not installed”.

Yup. So nobody can just search for the OpenGL32.dll file and hit Delete? :slight_smile:

I believe that opengl32.dll is covered by Windows System File Protection, so they’ll have to also disable that. If they go so far then you’re dealing with active user stupidity and it’s their problem, not yours.

Anyway, after all this mess I’ve been through trying to find a nice way to create a core profile, I came up with an idea that may work - it’s worth trying, anyway. While I fully understand that the ARB is not responsible for platform-dependent stuff, I thought that we, OpenGL developers who work on different platforms, could come up with a standard platform-independent API for OpenGL context creation and management. Then IHVs could start implementing a prototype and make it public for testing.

As a starting point, let’s just forget there’s an OpenGL32.DLL.

All context creation functions should be accessible through Win32 GetProcAddress even before creating a context, unlike what we have now.

I suggest creating a separate forum specialized for this; everyone is welcome. We need a moderator though. Any volunteer? :slight_smile:

I thought that we, OpenGL developers who work on different platforms, could come up with a standard platform-independent API for OpenGL context creation and management.

Come up with an API, or write and maintain an implementation of this API? Because just saying, “here are some functions and what they should do” will do precisely nothing. And if you’re talking about an implementation, then it must run on top of the existing infrastructure (and thus is limited by that infrastructure).

Come up with an API, or write and maintain an implementation of this API?

First, we create a specification for context management on desktop platforms, just as EGL does for embedded systems.

Next, we get feedback from IHV driver developers.

Then we make any necessary changes to the specification if the IHVs report technical problems.

Then they can start implementing it.

Here’s something to start with:


// This approach would eliminate the need to create a temporary context first
// just to get access to the newer versions' entry points...

glDll = LoadLibrary("libgl.dll");

// Every entry point is resolvable through plain Win32 GetProcAddress,
// before any context exists.
xglCreateContext      = GetProcAddress(glDll, "xglCreateContext");
xglMakeContextCurrent = GetProcAddress(glDll, "xglMakeContextCurrent");
xglGetFunction        = GetProcAddress(glDll, "xglGetFunction");

XGL_CONTEXT_DESC desc;
desc.profile = XGL_CONTEXT_CORE; // or XGL_CONTEXT_COMPAT
desc.hWnd    = hWnd;
desc.FSAA    = true;
desc...                          // specify other parameters

XGL_CONTEXT ctx = xglCreateContext(&desc);

xglMakeContextCurrent(&ctx);

glXXXXX = xglGetFunction("...");

// Or, even better: no context needed at all in order to link to the GL functions -
// in that case the "libGL" file would have to export all functions of the
// supported version directly.