Ugly Core Profile Creation

While struggling to roll my own OpenGL API loader…

What is it with people who want to constantly reinvent the wheel? The only reason I wrote an OpenGL loader was because GLEW was (and still kinda is) flakey with regard to core OpenGL function loading. I also took the opportunity to separate out core and compatibility so that you could get a core-only header if you choose to.
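For context, the mechanism every Windows loader ultimately wraps is quite small. Here is a minimal sketch (assuming a current GL context and the glext.h header from the Khronos registry; glCreateShader is just an arbitrary example entry point), including the classic quirk that GL 1.1 functions must come from opengl32.dll rather than wglGetProcAddress:

```c
#include <windows.h>
#include <GL/gl.h>
#include "glext.h"   /* PFN typedefs, from the Khronos registry */

static void *get_gl_proc(const char *name)
{
    void *p = (void *)wglGetProcAddress(name);

    /* wglGetProcAddress returns NULL (or small sentinel values) for
     * GL 1.1 entry points; those live in opengl32.dll itself. */
    if (p == NULL || p == (void *)1 || p == (void *)2 ||
        p == (void *)3 || p == (void *)-1)
    {
        p = (void *)GetProcAddress(GetModuleHandleA("opengl32.dll"), name);
    }
    return p;
}

/* Usage, once a context is current:
 * PFNGLCREATESHADERPROC glCreateShader_ptr =
 *     (PFNGLCREATESHADERPROC)get_gl_proc("glCreateShader"); */
```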

Trust me: writing your own loader will not give you wisdom. Completing this task will not move you towards enlightenment. It is not a worthwhile exercise that every OpenGL programmer should undertake. Use libraries to make your life easier; that’s what they’re there for. You should only write your own if there is some feature that you absolutely need which current GL loaders do not provide.

What is it with people who want to constantly reinvent the wheel?

Simply because the wheel they want to reinvent sucks.

Trust me: writing your own loader will not give you wisdom. Completing this task will not move you towards enlightenment. It is not a worthwhile exercise that every OpenGL programmer should undertake. Use libraries to make your life easier; that’s what they’re there for.

I would agree if there were an official (provided by the ARB) well maintained “library” or headers, up to date and clean.

Oh right, you wanna do everything yourself. Still, I fail to see the point. There already are good extension loaders out there and there are multiple windowing frameworks which already provide easy core context creation - on multiple platforms. So, what’s the reason for all this again?

Anyway, if you don’t see any defines, typedefs, or symbols in general from windows.h, is it a problem to include it as well? If wglext.h doesn’t pull in windows.h, I suppose there’s a reason for that.
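In case it helps, a plausible include order looks like this (a sketch; the WIN32_LEAN_AND_MEAN define is optional and merely illustrates why the header leaves that choice to the caller):

```c
/* wglext.h deliberately does not include windows.h itself, so that the
 * caller stays in control of macros like WIN32_LEAN_AND_MEAN. */
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#include <GL/gl.h>
#include "wglext.h"   /* from the Khronos registry */
```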

Simply because the wheel they want to reinvent sucks.

Which wheel? GLEW? Well, did you ever think about just not creating a core context? Unless you can prove with hard facts that you get anything out of creating a core context performance-wise, then just use a compat context and GLEW. Developing core-conforming GL apps doesn’t mean you need a core context. There are many, many OpenGL apps out there using GLEW. In this case my personal advice is: Don’t blame your tools.

I would agree if there were an official (provided by the ARB) well maintained “library” or headers, up to date and clean.

There has never been an official loader. We’ve seen this discussion many times and the verdict always is: There’s enough out there which works perfectly fine.

Unless you can prove with hard facts that you get anything out of creating a core context performance-wise, then just use a compat context and GLEW.

:doh:

There has never been an official loader. We’ve seen this discussion many times and the verdict always is: There’s enough out there which works perfectly fine.

:doh:

:doh:

Uhm … ok … :confused:

Simply because the wheel they want to reinvent sucks.

Reiterating Thokra’s question, which one? There are quite a few options available to you. Are you telling me that out of all of these options, none of them do what you need? Why? What exactly is missing from them?

Or to put it another way, what is your tool going to do that none of those other tools do?

Janika has got a valid point.

Current OpenGL is a pain in the arse to deal with. I personally have my own “opengl42.h”, “opengl3.3.h” and “opengl3.0.h” headers, each of which includes a small class to enable functionality and to allow querying of extensions.

Yes, there are many ogl loaders, but none of them actually work all the time. Defending them is pointless.

What is actually needed is an ARB DLL which does the work for the user and returns valid function addresses depending on the context version required.

Yes, there are many ogl loaders, but none of them actually work all the time.

Here’s an idea; I know it’s kinda crazy but bear with me.

Explain how they don’t “actually work all the time.”

Simply stating that there’s a problem isn’t a way to get it fixed. Go to the GLEW guys and write them a bug. File a bug in the SDK. If you find a failing in a loader tell someone; don’t just keep it to yourself.

Or even better, why don’t you fix the problems in the loaders, and then make patches and submit them to their various maintainers.

What is actually needed is an ARB DLL which does the work for the user and returns valid function addresses depending on the context version required.

Well, that’s not going to happen. The ARB isn’t going to write code for us. So you can continue to write it yourself, or you can actually do something productive for the “not you” demographic.

I hear a lot of talk on this forum about how “somebody ought to do something about X.” But when push comes to shove, when it comes down to actually getting something done, it’s all talk, no walk.

We get the OpenGL that we have built. Which means we have the OpenGL we deserve. If you want it to be better, then do something useful about it.

Otherwise quit complaining.

This was a joke post, yes?

"Explain how they don’t “actually work all the time.”

It never works… call a function that isn’t supported means you call an unsupported function! GL_ERROR vs. not supported. Call a Win16 function… it won’t work - rather than trick you into thinking it does work.

As to your reference to the GLEW library, I’ve never used it. Because it is out of date! It’s always out of date. Is that so hard to understand?

The ARB SHOULD write code for us. The future of OpenGL depends on it.

More to the point - if you read my post, you’d understand that I have already written my own code to support the various GL versions!

Maybe you don’t appreciate the size of the problem. OpenGL faces D3D. Which works. Out of the box. No fucking around. OpenGL doesn’t. Which is wholly down to the ARB. This is a driver API, not an OS one. Why don’t the ARB see this?

It never works… call a function that isn’t supported means you call an unsupported function! GL_ERROR vs. not supported. Call a Win16 function… it won’t work - rather than trick you into thinking it does work.

I don’t understand what you’re saying. What do you mean by “a function that isn’t supported?” And what does Win16 have to do with OpenGL?

As to your reference to the GLEW library, I’ve never used it. Because it is out of date! It’s always out of date. Is that so hard to understand?

It is? In what way is it out of date? It seems to work. It has some quirks with core contexts, but they have a workaround for it, and it’s documented. So in what way is it out of date?
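For anyone following along, the documented workaround amounts to a couple of lines around glewInit (a sketch; assumes a core profile context is already current):

```c
#include <stdio.h>
#include <GL/glew.h>

/* Call once, after a core profile context has been made current. */
static int init_glew_for_core(void)
{
    glewExperimental = GL_TRUE;   /* skip GLEW's extension-string checks */
    GLenum err = glewInit();
    if (err != GLEW_OK)
    {
        fprintf(stderr, "glewInit failed: %s\n", glewGetErrorString(err));
        return 0;
    }
    /* glewInit calls glGetString(GL_EXTENSIONS), which core contexts
     * removed, so it can leave a stray GL_INVALID_ENUM behind; clear it. */
    glGetError();
    return 1;
}
```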

More to the point - if you read my post, you’d understand that I have already written my own code to support the various GL versions!

Yeah, so have I. Unlike you however, I documented my code and gave it away, thus solving the same problem for other people.

What have you done to make OpenGL better for others?

Maybe you don’t appreciate the size of the problem.

I understand the problem. But complaining about it isn’t solving it.

This “problem” has existed since OpenGL 1.2, fourteen years ago. The ARB hasn’t solved it. When someone hasn’t done something for 14 years, you might think it would occur to people that they aren’t going to solve it. So either we, the OpenGL community, get together and solve it, or nothing is going to change.

I’m doing my part. The folks who maintain GLEW are doing their part.

Are you going to pitch in? Or are you just going to sit on the sidelines and complain while nothing gets done? If so, I wish you’d do it somewhere else; complaints to nobody who’s listening are becoming tiresome.

The future of OpenGL depends on it.

Please. The “future of OpenGL” is exactly where it is today, regardless of what the ARB does. Do you think that if there were some widely adopted library for loading functions (which GLEW technically is, since 90+% of tutorials will direct you to it) that this would magically cause AAA game developers to start using OpenGL? That it would increase OpenGL usage among people who don’t already use it?

No. People aren’t going to rewrite their codebases to use OpenGL just because we have a function loading API signed off on by the ARB. OpenGL is the only means for accessing hardware accelerated 3D graphics on non-Windows platforms. In most cases, this is the only reason it is used: to write platform-neutral 3D code, or to write Linux/MacOSX-specific 3D code.

So either the scope of your project requires OpenGL (because it’s on a platform where there are no alternatives), or you’re using Direct3D. That’s not “the future”; that’s the last 5 years.

The ARB SHOULD write code for us. The future of OpenGL depends on it.

And what would you have them write?

An official function loader? Simply not necessary, as there are tons of loaders out there. Use Alfonse’s one, use GLEW, use gl3w - if none of these seem to provide what you need, you can query whatever function pointer you need at runtime yourself, or do what Alfonse suggested: submit bug reports or patches and do something for the community if you feel that the community doesn’t get what it needs with the current products. BTW, stating that GLEW is outdated is simply not true, as it fully supports compat and core OpenGL 4.2. If you need up-to-date extension function pointers, just query what you need yourself. It doesn’t mean you have to rewrite everything under the hood just because one or two vendor-specific extensions are not yet supported.
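To illustrate the “query what you need yourself” route, here is a sketch for fetching a single missing entry point on Windows (glDebugMessageCallbackARB is just an arbitrary example; the PFN typedef comes from glext.h):

```c
#include <windows.h>
#include <GL/gl.h>
#include "glext.h"   /* for the PFN typedef */

static PFNGLDEBUGMESSAGECALLBACKARBPROC p_glDebugMessageCallbackARB;

/* Requires a current context; returns 0 if the driver doesn't expose it. */
static int load_debug_callback(void)
{
    p_glDebugMessageCallbackARB = (PFNGLDEBUGMESSAGECALLBACKARBPROC)
        wglGetProcAddress("glDebugMessageCallbackARB");
    return p_glDebugMessageCallbackARB != NULL;
}
```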

What else? An official implementation? Something equivalent to XNA? Other arbitrary libraries providing functionality based on the API?

I still fail to see the point.

OpenGL faces D3D. Which works. Out of the box. […] OpenGL doesn’t.

It does? Without any actions by IHVs? Out of the box? Just by installing Windows? Why doesn’t OpenGL work that way?

No fucking around.

Good point. Then please stop doing that and explain your claims instead of just shoving them down our throats and giving us unwarranted face palms.

Let me make my point clear. First, I’m saying I want to roll my own GL function loader because the existing libraries cannot do the job: I tried GLEW, and there’s a known issue with it when used with a 4.2 core profile - it calls some deprecated function.
As suggested, I could report or even correct the problem myself, but I thought about the amount of work and decided to write my own library, at least something I fully understand and can maintain. Rolling my own was an even more painful process, as I had to extract the relevant definitions, types, and function names from the ARB’s ancient, messy headers that clutter old and new functionality all together. Whether I could accomplish it, or how, or whether I should make it public or use others’ attempts is not the point. The point is: why on earth does a client of the library have to write their own? Whether it’s me, John, Dave, Sandra, … or the existing loader creators - why did we all want to write our own, regardless of whether we made it public or private??? IHVs implement the API, but they don’t define values and headers; they just implement existing stubs. So you’re telling me the library client should either swallow whatever is there or write their own??? Do you guys forget that a library should always provide a clean interface to the client? Software Engineering 101??? Helloooo???

And yeah, please don’t argue with me about why I’d use core or mixed profiles. This is something the system requirements decide. If using the core profile has no advantage, then scrap it and put everything in one big messy ball. That would probably make our lives easier.

BTW can anyone name some real professional applications that use OpenGL functionality beyond 2.1?

If you cannot make your recent library features easily accessible then you will find no consumers. Imagine a gaming console that’s very powerful, but users have to install and fix ports themselves so that they can plug their game controllers into it. Do you think anyone will bother to use it except for the gang who created it? :whistle: Get the point???

Yes, GLEW on core is broken, but it’s not too hard to fix AND there are alternatives. Yes, you might need to use core (MacOS X). Rolling your own extension loader is not the simplest way to replace GLEW, as other alternatives exist; that’s all that was said.

Yes, a cross-platform ARB loader would be nice, but history tells us it’s not going to happen - no need to argue with the people on this forum about it! If you like, file a bug report: https://www.khronos.org/bugzilla/ .

I can’t tell you which ‘professional’ applications use OpenGL > 2.1 as I don’t know what you define as ‘professional’ (there are some 3.2 games, but sticking with 2.1 might have more to do with sticking with DX9 due to consoles than anything else…).

The theory is that it could have. However, to date there is no indication that using a core profile context actually gives you a noticeable advantage in regards to rendering performance - neither with AMD nor NVIDIA hardware (Intel … well… ARM, don’t know.). If such proof is out there, please, please, share your findings.

As suggested, I could report or even correct the problem myself, but I thought about the amount of work and decided to write my own library, at least something I fully understand and can maintain.

You think fixing the dysfunctional (depending on the way you look at it) part of GLEW, as stated in the corresponding ticket, is more work than implementing a complete library from scratch? Really? It is beyond me why this admittedly quite simple fix isn’t in yet, but there you go: a few lines of code fix the whole problem. Yet you want to do it from scratch. So, I continue to state: I fail to see the point. I don’t want to degrade your efforts, but I can’t help feeling that you’re wasting your time and, if this is work done for a customer, you’re also wasting money.

Rolling my own was an even more painful process, as I had to extract the relevant definitions, types, and function names from the ARB’s ancient, messy headers that clutter old and new functionality all together.

If you realized that, why would you stick with your approach then? This completely contradicts your assumption that writing your own stuff is easier and saves you something.

Helloooo???

Hey there!

And yeah, please don’t argue with me about why I’d use core or mixed profiles. This is something the system requirements decide.

True. I just don’t see why such a requirement would be made by anyone knowing what they’re talking about. Again, you can write perfectly fine core GL code without a core context. Again, it has yet to be proven that a core context actually has real benefits.

However, to date there is no indication that using a core profile context actually gives you a noticeable advantage in regards to rendering performance - neither with AMD nor NVIDIA hardware (Intel … well… ARM, don’t know.). If such proof is out there, please, please, share your findings.

Again, it has yet to be proven that a core context actually has real benefits.

Could you please tell me why it’s there then? I’m not saying that I’ve found any benefits from using the core profile, but I’m basing my argument on the fact that there’s a core profile, and hence assuming a separate render path which theoretically should be faster.

I can’t tell you which ‘professional’ applications use OpenGL > 2.1 as I don’t know what you define as ‘professional’ (there are some 3.2 games, but sticking with 2.1 might have more to do with sticking with DX9 due to consoles than anything else…).

Professional is simply judged by how much it costs? So we cannot consider some GL screen saver, free molecule visualizer, or Tetris clone… as valid examples.
I would like to hear something like Maya, Softimage, 3DS Max, ZBrush, LightWave 3D, Chief Architect, and other high-end CAD/CAM software.

I’m very familiar with some replies I will get: “Will never happen,” “Don’t complain to us,” “They don’t care,” “Use Direct3D,” …etc. I’m saying it now, and I’ve always tried to make it clear, that I’m not complaining to you; when I write here about a problem, I’m trying to make it clear to the ARB ppl that, sorry to say this, your move beyond version 2.1 was a horrible mistake, at least the design is just incompetent. Unless you are an ARB member you should have no prob with this.

Could you please tell me why it’s there then? I’m not saying that I’ve found any benefits from using the core profile, but I’m basing my argument on the fact that there’s a core profile, and hence assuming a separate render path which theoretically should be faster.

Theoretically that may be true. Using a core context does have a meaning. For instance, using legacy functions while a core context is current will generate an error, so you can enforce a policy, i.e. to only use core functions. Also, you lose unnecessary buffers like the accumulation buffer and auxiliary buffers, but that’s no real gain either, because you’re not forced to use them with a compat context in the first place. The fact seems to be that no IHV which implements the core profile actually provides such an optimized core path. I have yet to see a performance gain on either a GeForce or a Radeon with a core context - so I assume it simply isn’t there. Can’t speak about Apple, and Intel is … well, Intel. This is not to say that writing core-conforming applications isn’t advisable - quite the contrary. Nowadays I’d always go for that. Granted, there are some extensions which give you useful tools but aren’t necessarily completely core-conforming, like GL_EXT_direct_state_access.
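For reference, this is roughly what explicit core profile creation looks like on Windows via WGL_ARB_create_context (a sketch; assumes a dummy legacy context is already current so wglGetProcAddress works, and that hdc is a valid device context):

```c
#include <windows.h>
#include <GL/gl.h>
#include "wglext.h"   /* WGL_CONTEXT_* tokens and PFN typedefs */

static HGLRC create_core_context(HDC hdc)
{
    PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB =
        (PFNWGLCREATECONTEXTATTRIBSARBPROC)
            wglGetProcAddress("wglCreateContextAttribsARB");
    if (!wglCreateContextAttribsARB)
        return NULL;   /* driver predates WGL_ARB_create_context */

    const int attribs[] = {
        WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
        WGL_CONTEXT_MINOR_VERSION_ARB, 3,
        WGL_CONTEXT_PROFILE_MASK_ARB,  WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
        0
    };
    return wglCreateContextAttribsARB(hdc, NULL, attribs);
}

/* With the core context current, removed entry points error out, e.g.
 * glBegin(GL_TRIANGLES) followed by glGetError() yields
 * GL_INVALID_OPERATION instead of rendering anything. */
```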

We’ve got some employees of both major companies here, so maybe they can garnish this discussion with a little more technical foundation and insight.

Unless you are an ARB member you should have no prob with this.

Since this is an open discussion forum, it’s up to the reader to decide what’s relevant to them and what’s not. Some may simply ignore it, but some may object because they feel something isn’t portrayed correctly. Personally, I like to see inaccuracies being dismantled by experienced members of the community - especially my own mistakes, simply because it widens my horizons in some cases.

[…]your move beyond version 2.1 was a horrible mistake, at least the design is just incompetent.

I think very few would disagree that, compared to GL 2.1, GL 4.2 is a huge step in the right direction. Legacy GL is what’s horrible. Yes, we all know the countless discussions about how the spec is imperfect and how it would be much better to do this and do that to improve it, but IMHO GL4 is a much, much better API. Why do you feel that GL3+ was a horrible mistake to make?

Why do you feel that GL3+ was a horrible mistake to make?

The idea of keeping legacy stuff by means of compatibility profiles shows that the ARB was not so confident about the new API.

Unless you are an ARB member you should have no prob with this.

My problem with this is that you’re using our forum to talk to people who may well not be paying attention to it. Or to put it another way, your words will only possibly be read by the people you actually are complaining to, while they will certainly be read by the people you’re not complaining to. How does that make sense?

If you want to talk to the ARB directly, the Khronos Group’s website has contact information. In short, use the proper channels to talk to the people you want to talk to.

You say that as though it was the ARB’s idea and not NVIDIA shoving it down their throats. They publicly spoke out against deprecation and removal and basically sandbagged any effort to force people to upgrade by saying that they would support legacy stuff in perpetuity. After that, there wasn’t much choice except to create the compatibility profile, since it was going to be a de facto construct anyway.

It is not a short story that can be told in a few sentences here in the forum. The period from late 2006 to August 2008 was a very dramatic one for OpenGL and its prospects for further development. Unlike D3D, OpenGL is a standard developed by a consortium. New hardware, the push for more efficient rendering, and a decade-old API with hundreds of functions and multiple paths to accomplish the same thing were all arguments for cutting drivers down and making them more efficient (“lean and mean”). But, on the other hand, there were a lot of “strong players” with products based on legacy OpenGL code, years of good reputation, and tons of code that would have been thrown away instantly by a radical change in the API. The forces in favor of keeping support for the legacy API prevailed. That’s how profiles were born.

Since drivers have to support both core and compatibility profiles, there is little chance of a significant difference in performance yet. If you are making software that needs some legacy functionality, don’t hesitate to use the compatibility profile. Your users will not be aware of which profile you use, but they will certainly notice a lack of visual elements. If you need a clean path and want to squeeze out better performance, use the core profile and follow your own way to the solution.