Ugly Core Profile Creation

Could you please tell me why it’s there then? I’m not saying that I’ve found any benefits from using the core profile, but I’m basing my argument on the fact that there is a core profile, and hence assuming a separate render path which theoretically should be faster.

Theoretically that may be true. Using a core context does have a meaning. For instance, calling legacy functions while a core context is current will generate an error, so you can enforce a policy, i.e. to only use core functions. Also, you lose unnecessary buffers like the accumulation buffer and auxiliary buffers, but that’s not much of a gain either, because you’re not forced to use them with a compat context anyway. The fact seems to be that no IHV which implements the core profile actually provides such an optimized core path. I have yet to see a performance gain on either GeForce or Radeon with a core context - so I assume it simply isn’t there. Can’t speak about Apple, and Intel is … well, Intel. This is not to say that writing core-conforming applications isn’t advisable - quite the contrary. Nowadays I’d always go for that. Granted, there are some extensions which give you useful tools but aren’t necessarily completely core conforming, like GL_EXT_direct_state_access.
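
Since the thread title is about core profile creation anyway, here is a minimal sketch of the WGL side of it, assuming WGL_ARB_create_context is available and a dummy context is already current so that wglGetProcAddress works; the 3.3 version numbers are just an example and error handling is left out:

#include <windows.h>
#include <GL/gl.h>
/* link with opengl32.lib for wglGetProcAddress */

/* tokens from the WGL_ARB_create_context extension spec */
#define WGL_CONTEXT_MAJOR_VERSION_ARB    0x2091
#define WGL_CONTEXT_MINOR_VERSION_ARB    0x2092
#define WGL_CONTEXT_PROFILE_MASK_ARB     0x9126
#define WGL_CONTEXT_CORE_PROFILE_BIT_ARB 0x00000001

typedef HGLRC (WINAPI *PFNWGLCREATECONTEXTATTRIBSARBPROC)(HDC, HGLRC, const int *);

HGLRC create_core_context(HDC hdc)
{
    PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB =
        (PFNWGLCREATECONTEXTATTRIBSARBPROC)wglGetProcAddress("wglCreateContextAttribsARB");
    if (!wglCreateContextAttribsARB)
        return NULL;                      /* extension not available */

    const int attribs[] = {
        WGL_CONTEXT_MAJOR_VERSION_ARB, 3, /* request GL 3.3 ... */
        WGL_CONTEXT_MINOR_VERSION_ARB, 3,
        WGL_CONTEXT_PROFILE_MASK_ARB,  WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
        0                                 /* attribute list terminator */
    };
    return wglCreateContextAttribsARB(hdc, NULL, attribs);
}

The “ugly” part is of course that you need an old-style context first just to be able to query wglCreateContextAttribsARB at all.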

We’ve got some employees of both major companies here, so maybe they can garnish this discussion with a little more technical foundation and insight.

Unless you are an ARB member you should have no prob with this.

Since this is an open discussion forum, it’s up to the reader to decide what’s relevant to them and what’s not. Some may simply ignore it, but some may object because they feel something isn’t portrayed correctly. Personally I like to see inaccuracies being dismantled by experienced members of the community - especially my own mistakes, simply because it widens my horizon in some cases.

[…]your move beyond version 2.1 was a horrible mistake, at least the design is just incompetent.

I think very few would disagree that compared to GL 2.1, GL 4.2 is a huge step in the right direction. Legacy GL is what’s horrible. Yes, we all know the countless discussions about how the spec is imperfect and how it would be much better to do this and do that to improve it, but IMHO GL4 is a much, much better API. Why do you feel that GL3+ was a horrible mistake to make?

Why do you feel that GL3+ was a horrible mistake to make?

The idea of keeping legacy stuff by means of compatibility profiles shows that the ARB was not so confident about the new API.

Unless you are an ARB member you should have no prob with this.

My problem with this is that you’re using our forum to talk to people who may well not be paying attention to it. Or to put it another way, your words will only possibly be read by the people you actually are complaining to, while they will certainly be read by the people you’re not complaining to. How does that make sense?

If you want to talk to the ARB directly, the Khronos Group’s website has contact information. In short, use the proper channels to talk to the people you want to talk to.

You say that as though it was the ARB’s idea and not NVIDIA shoving it down their throats. They publicly spoke out against deprecation and removal and basically sandbagged any effort to force people to upgrade by saying that they would support legacy stuff in perpetuity. After that, there wasn’t much choice except to create the compatibility profile, since it was going to be a de facto construct anyway.

It is not a short story that can be told in a few sentences here in the forum. The period from late 2006 to August 2008 was a very dramatic one for OpenGL and its prospects for further development. Unlike D3D, OpenGL is a standard developed by a consortium. New hardware, more efficient rendering, and a decade-old API with hundreds of functions and multiple paths to accomplish the same thing were arguments to cut down the drivers and make them more efficient (“lean and mean”). But, on the other hand, there were a lot of “strong players” with products based on legacy OpenGL code, years of good reputation, and tons of code that would have been thrown away instantly with a radical change in the API. The forces in favor of keeping support for the legacy API prevailed. That’s how the profiles were born.

Since drivers have to support both core and compatibility profiles, there is little chance of a significant difference in performance yet. If you are making software that needs some legacy functionality, don’t hesitate to use the compatibility profile. Your users will not be aware of it, but they will certainly notice a lack of visual elements. If you need a clean path and want to squeeze out better performance, use the core profile and find your own way to the solution.

AFAIK drivers only have to support core; compatibility is optional.

I think the whole point of the core profile wasn’t to make it easier for OpenGL users but for OpenGL implementors. As many already pointed out, you can use all the core functions in a compatibility context too. If you don’t use e.g. stipple patterns in your application, the fact that there exist stipple functions in a compatibility context doesn’t mean that you suddenly have to change your code to make it work.

Performance-wise I wouldn’t expect a major difference between core and compatibility profiles even if they were implemented as separate drivers. After all, memory bandwidth, shader execution speed etc. will be the same. Granted, the core-only driver may have some opportunities to optimize object creation etc., but those costs are usually one-time costs that (for most applications) don’t affect the rendering loop at all.

The real benefit of the core profile is to make it easier to implement a GL3+ driver, without having to care for all the legacy quirks in OpenGL. That also explains why those who already invested into developing drivers which can support all the legacy code have no interest in pushing users to the core profile.

That’s correct. On Mac OSX, there is no compatibility profile at all.

I doubt that anyone implements it separately in their drivers. There was a thread reporting that you actually lose performance when using the core profile on NVIDIA.

I think the whole point of the core profile wasn’t to make it easier for OpenGL users but for OpenGL implementors.

Very true and accurate.

In my opinion, driver implementers should not be involved in the specification, and neither should users. It should only be designed and maintained by Academia, computer graphics professors, and some well recognized names in the industry.

I guess menzel needs to get another round …

In my opinion, driver implementers should not be involved in the specification, and neither should users. It should only be designed and maintained by Academia, computer graphics professors, and some well recognized names in the industry.

If you’d let the professors I got to know specify OpenGL, then it’s goodbye OpenGL. Leaving the implementors out of the picture is simply not a good idea. At all. If a professor has never written software that surpasses the simple examples used in lectures - and I bet there are a lot of those out there - then how are they supposed to anticipate whether a specification can actually be implemented? IMHO, most people in academia (unless they have experience as a GPU driver developer) simply don’t have the experience you need. I’m not saying that there aren’t capable people in academia, but specifications on that scale need to be done by people who have experience with software engineering on that scale. Writing a little shadow mapping demo or something to show students how it’s done in principle isn’t going to cut it. I can say without a doubt that at my university most of us surpassed the OpenGL skills of our professors before our studies were over - I suspect this is true in many cases, if not most. Does that make us fit for specifying or implementing OpenGL? Hell no.

You know which people bear well-recognized names in the industry? The promoting members of the ARB …

What does knowledge of GPU details have to do with API design? You are mixing two different things, drivers and APIs. When we talk about API design we talk at a higher level of abstraction that should serve a certain purpose regardless of hardware details… and remember, not all hardware works the same. An API interface is exposed functionality, not how it works underneath. Of course there are other aspects, like what’s possible on current hardware, but those are too general to be limited to driver developers only.
And remember, we base new designs on existing designs; otherwise how could we have a “future suggestions” forum where non-driver developers suggest many useful and doable features (based on what we already know from another API on the same hardware)? The point is, let’s have Academia, researchers, PhD people with a lot of experience in software engineering come up with a nice flexible design, and I’m not talking about your “professors” or mine. There are scientists in the field who can come up with the best designs ever!!! Rule of thumb: never let a hardware engineer do the software engineer’s job, though the opposite is possible.

What does knowledge of GPU details have to do with API design?

Performance. Just look at OpenGL 1.1 as an example.

How many features of 1.1 were never implemented in consumer hardware (at least, until they could be implemented via shaders internally)? You’ve got accumulation buffers and selection buffers, at the very least; any attempt to use these features was basically the kiss of death as far as performance was concerned. In the early days, falling off the “fast path” was ridiculously easy in OpenGL.

Indeed, it wasn’t until 1999 that OpenGL implementations could do T&L in hardware; until then, all that T&L was done on the CPU - which skilled programmers could probably do just as well, if not faster. From OpenGL’s initial release until the GeForce 256, vertex T&L was a liability to performance, not a benefit.

Look at how tortured a bad API design can get. The glTexEnv nonsense was extended and extended until, at the combiner level, it was just a really horrible way to specify an assembly language for fragment shading. At one point in time, glTexEnv was a good idea. But it wasn’t very extensible and eventually led to horrible things.

Any hardware-based API needs input from the hardware makers themselves. You always want the lowest level API to be as close to the hardware as possible while still providing a reasonable abstraction. The people who best know how to make an abstraction are the people who make the hardware. Or at least have detailed knowledge of it.

All very nice in theory but the problem is this:

OpenGL has been there before and it didn’t work.

Separation of OpenGL specification from how hardware actually worked directly led to the API becoming more and more irrelevant and baroque, and was also a huge contributing factor to the rash of GL_VENDOR1_do_it_this_way, GL_VENDOR2_do_it_that_way and GL_VENDOR3_do_it_t’other_way extensions to meet the requirements of people who were actually developing and releasing programs that used OpenGL. At the same time as that was going on, D3D was tying itself closer and closer to how hardware worked and having OpenGL’s ass on a plate as a result.

What you’re proposing would be a return to the days when drivers would advertise GL_ARB_texture_non_power_of_two but not actually have it supported by the hardware. Yes, this really happened, and you had no way of knowing until you got that sudden crunch back to under 1 fps. You were obviously not around back then, but let me safely assure you - nobody else wants to go back there.

So, given that it didn’t work before, given that the D3D approach is what has already been established in the field to be what actually works (and if you think GL core context creation is ugly you should see what D3D7 and earlier were like…), given all the good work done in the past few years to put OpenGL back into a position where it can at least try to be competitive again, and given that you completely fail to provide any compelling reason why an approach that didn’t work before should be any different now (and why all that good work should be undone), it has to be said that your position on this lacks any substance.

And who says that the hardware designers are the ones implementing the drivers? Do you think NVIDIA, AMD, Intel and so on only employ hardware specialists? You need both parties.

There are scientists in the field who can come up with the best designs ever!!!

I seriously doubt that this is true. The best designs ever come from people who have years or decades of experience designing and implementing industrial strength software - not out of the ivory tower …

I guess menzel needs to get another round …

I have to stock up on popcorn…

A question from a novice who knows nothing and wants to learn from experts.

If I’m linking at run time to OpenGL32.DLL using LoadLibrary, how can I use wglGetProcAddress or even load other core profile functions or extensions?

First of all, why pull in the lib at runtime? Second, check this.

Got it! So:
hdll = LoadLibrary("opengl32.dll");
wglGetProcAddress = GetProcAddress(hdll, "wglGetProcAddress");
then use wglGetProcAddress…
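
To flesh that out a bit, here is a hedged sketch of the usual pattern, assuming a GL context is already current: GL 1.1 entry points are exported by opengl32.dll itself, while anything newer has to go through the wglGetProcAddress pointer you just fetched. The helper name get_gl_proc is made up for illustration.

#include <windows.h>
#include <GL/gl.h>

typedef PROC (WINAPI *PFNWGLGETPROCADDRESSPROC)(LPCSTR);

/* hypothetical helper: try the ICD first, fall back to the DLL exports (GL 1.1) */
static PROC get_gl_proc(HMODULE hdll, PFNWGLGETPROCADDRESSPROC pwglGetProcAddress, const char *name)
{
    PROC p = pwglGetProcAddress(name);          /* core > 1.1 and extensions */
    if (!p)
        p = GetProcAddress(hdll, name);         /* glClear, glDrawElements, ... */
    return p;
}

/* usage, once a GL context is current:
   HMODULE hdll = LoadLibraryA("opengl32.dll");
   PFNWGLGETPROCADDRESSPROC pwglGetProcAddress =
       (PFNWGLGETPROCADDRESSPROC)GetProcAddress(hdll, "wglGetProcAddress");
   PROC genBuffers = get_gl_proc(hdll, pwglGetProcAddress, "glGenBuffers");
   ... then cast genBuffers to the proper function pointer type before calling it. */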

Customized error reporting in case OpenGL is not installed.

Or you can simply link to the export lib and use wglGetProcAddress directly.

You don’t install OpenGL; OpenGL is not software and opengl32.dll will always be present (unless you really must run on pre-OSR2 Win95 or pre-NT3.5). The real question is: does the user have an OpenGL driver for their hardware? If a driver is present then opengl32.dll just acts as a proxy for the vendor’s implementation, so options are to (1) check your pixel format flags, (2) check your GL_VENDOR string or (3) check for an absolutely ubiquitous extension that will be present on all civilized hardware released within the past 15-odd years - something like GL_ARB_multitexture should suffice. Of course, if you want to run on hardware older than that too then you’re probably going a little too far, and will have to cope with the joyous world of minidrivers (and don’t forget that some of these were game-specific and may not even have been called “opengl32.dll”), missing blend modes, software fallback left-right-and-center, and other weird behaviours from the time. Since that seems to be the OpenGL world you want to go back to, you may well welcome it! I wouldn’t. But that’s the only scenario in which OpenGL may be “not installed”.
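
For option (1), a minimal sketch of the pixel format check, assuming you already have a DC and a chosen pixel format index: the generic Microsoft software implementation sets PFD_GENERIC_FORMAT without PFD_GENERIC_ACCELERATED.

#include <windows.h>

/* returns 1 if the chosen pixel format is backed by a vendor driver,
   0 if it falls back to the generic software implementation */
int pixel_format_is_accelerated(HDC hdc, int pixel_format)
{
    PIXELFORMATDESCRIPTOR pfd;
    DescribePixelFormat(hdc, pixel_format, sizeof(pfd), &pfd);
    if ((pfd.dwFlags & PFD_GENERIC_FORMAT) && !(pfd.dwFlags & PFD_GENERIC_ACCELERATED))
        return 0;   /* pure software path, no OpenGL driver for this hardware */
    return 1;
}

Options (2) and (3) boil down to checking glGetString(GL_VENDOR) or the extension string once a context is current.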

Yup. So nobody can just search for the OpenGL32.dll file and hit Delete? :)

I believe that opengl32.dll is covered by Windows System File Protection, so they’ll have to also disable that. If they go so far then you’re dealing with active user stupidity and it’s their problem, not yours.