New Headers

I’m not entirely sure how they intend to ‘package’ OpenGL 2.0. I personally think this is a great opportunity to change some of the internals, though others might disagree (on the principle that if it works, don’t fix it). On another note, I think there should be a brand new OpenGL 2.0 SDK distributed here or on SGI’s site.

Let’s say GL2’s functionality is exposed via extensions; in that case it might as well just be OpenGL 1.6. If so, I would suggest making a gl2.h for people to include which automatically gets pointers to all the core functions that a GL2 card should support. And if a certain platform isn’t supported by the new header, people could easily add that support themselves; failing that, they could still get at the functionality through the traditional method of fetching function pointers.
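
To make that concrete, here is a minimal sketch of what such a header could look like. Everything in it is hypothetical: gl2Init, the PFNGL2VERTEX3F typedef and the rest are illustrative names, not part of any official or proposed header.

    /* gl2.h -- hypothetical sketch only; none of these names are official */
    #ifndef GL2_H
    #define GL2_H

    /* every core entry point is exposed as a function pointer; this header
       would be used instead of GL/gl.h, not alongside it */
    typedef void (*PFNGL2VERTEX3F)(float x, float y, float z);
    extern PFNGL2VERTEX3F glVertex3f;

    /* fetches pointers to every GL2 core function and returns 0 on failure;
       passing NULL picks the platform's default OpenGL library */
    int gl2Init(const char *libname);

    #endif /* GL2_H */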

Well, there’s my two cents. I personally hope they intend to review the GL 1.x (internal) code for GL2, but at the very least there should be some official new headers available for use with GL2.

  • DarkJedi

Well, I hope they will unify everything into gl.h (and kill off glext.h). No more glati.h and NVIDIA’s own glext.h.

And put OS-specific parts (if any exist) in another header, as is being done now.

And finally, provide a new DLL and import library for Windows. I find this business of loading function pointers stupid.

Getting extension pointers isn’t going anywhere anytime soon. That’s the nature of dealing with a .dll file that may export any number of arbitrary symbols: there’s no way to automate the process of getting function pointers. Not to mention that not all versions of OpenGL expose the same extensions.

You will simply have to continue to do it yourself.

Originally posted by V-man:
Well, I hope they will unify everything into gl.h (and kill off glext.h).

I’d prefer a puregl.h with the new core functionality, plus a gl.h which #includes puregl.h and adds in the backward-compatibility stuff.

I also think it’s hugely important that GL2 has an import library and an interface to the WGL functionality which aren’t held hostage (technically or legally) by MSFT. Even if hell freezes over and they update opengl32.dll etc., I don’t think anyone’s going to trust their commitment in the future.

I’m glad there seem to be others who agree with me, but updating the existing gl.h is not the way to do it.

They can’t update the existing gl.h because it would cause too much trouble. Picture someone trying to compile an engine written for GL2 against his gl.h (the original one shipping with VC6). It would be more feasible to create a gl14.h for GL 1.4, or a gl20.h for GL 2.0.

But they can’t update the gl.h that exists now, because different people would end up running different versions of the header, with no versioning in place to keep track of which one you have.

Hence, coming back to my original suggestion, they should release either a gl2.h or a gl20.h for OpenGL 2.0 which defines all the core parts of the library and provides a function (say gl2Init()) that gets pointers to all the core functions a GL2 card should support. (And of course, using #defines this can be made cross-platform compatible.)

By having a separate header for GL1 (gl.h) and a header for GL2 (gl2.h), it will be easier for people to know which version of GL an engine was written for, and, since the core functionality is initialized through the header, it could help with cross-platform compatibility.
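
A rough sketch of what the loader behind gl2Init() might look like, to illustrate the #define-based cross-platform idea. The names gl2Init, OPEN_GL_LIB and GET_GL_FUNC are made up for this example, only glVertex3f is shown, and the Windows APIENTRY calling convention is omitted from the pointer typedef; a real file would repeat the last step for every core entry point.

    /* gl2init.c -- illustrative sketch, continuing the hypothetical gl2.h above */
    #include "gl2.h"

    #ifdef _WIN32
    #include <windows.h>
    static HMODULE libgl;
    #define OPEN_GL_LIB(name) (libgl = LoadLibraryA((name) ? (name) : "opengl32.dll"))
    #define GET_GL_FUNC(name) ((void *)GetProcAddress(libgl, (name)))
    #else
    #include <dlfcn.h>
    static void *libgl;
    #define OPEN_GL_LIB(name) (libgl = dlopen((name) ? (name) : "libGL.so.1", RTLD_NOW))
    #define GET_GL_FUNC(name) dlsym(libgl, (name))
    #endif

    PFNGL2VERTEX3F glVertex3f;

    int gl2Init(const char *libname)
    {
        if (!OPEN_GL_LIB(libname))
            return 0;
        glVertex3f = (PFNGL2VERTEX3F)GET_GL_FUNC("glVertex3f");
        /* ...repeat for every other core entry point... */
        return glVertex3f != NULL;
    }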

They can’t get rid of the extension mechanism, though; it is a core (and essential) part of the GL library. I only suggested that the gl2.h header should automatically fetch the core function pointers.

Or, as I’ve said, perhaps a gl20.h would be better, because when GL 2.1 is released a new gl21.h could follow.

Only a thought though… think it over.

  • DarkJedi


I’ve spoken to Forest ‘LordHavoc’ Hale, and, with his permission, I am pasting his reply below…

what I would suggest is:
gl200.h
gl200.c

gl200.h defines all the functions as being local to the program
gl200.c provides a function called gl_init_200 (as well as gl_init_140, gl_init_131, gl_init_130, gl_init_121, gl_init_120, gl_init_110) which fetches all the functions that comprise gl 2.0.0 (or the respective lesser version) and errors out (returns an error code) if anything is missing

the great thing about this is that gl200.c would have all those functions call the next lesser init function, so the result isn’t much enlarged compared to normal

also the functions would probably take a dll name to open (opening NULL gets you the system default)

anyway, those files provide all the functions that are in the core feature set of each respective version, so the program just happily uses them like normal after it has called gl_init_200 or similar

and it would be good if these files are upgraded fairly often to add gl_init_ext_ functions for each extension, which would operate as expected (of course this would also provide function pointers for those too)

this would finally make it easy to use opengl extensions and core features beyond 1.1.0

note that all gl function declarations in those files would be function pointers, and obviously only valid once the init function for the respective feature set is called
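
A rough sketch of the chaining described above might look like the following. All names here, including get_gl_func and the p_ prefix, are illustrative rather than taken from any real gl200.c, and the Windows APIENTRY calling convention is again omitted for brevity.

    /* sketch of the chained-init idea: each version's init pulls in the one below */
    typedef void (*PFNDRAWRANGEELEMENTS)(unsigned mode, unsigned start, unsigned end,
                                         int count, unsigned type, const void *indices);
    static PFNDRAWRANGEELEMENTS p_glDrawRangeElements;

    extern void *get_gl_func(const char *name);   /* GetProcAddress/dlsym wrapper */
    extern int   gl_init_110(const char *dllname);

    int gl_init_120(const char *dllname)
    {
        if (!gl_init_110(dllname))                /* chain to the lesser version first */
            return 0;
        p_glDrawRangeElements = (PFNDRAWRANGEELEMENTS)get_gl_func("glDrawRangeElements");
        /* ...fetch the rest of the 1.2 core here; report an error if any is missing */
        return p_glDrawRangeElements != NULL;
    }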

And precisely who is going to control these files? Individual driver developers? Every driver release (that involves new extensions) would have to come with a new .c file. The driver installer would have to decide where to put these .c files. It would also require recompiling code against the new extension-registering functions.

The ARB might do it, but the ARB is a slow-moving body that would only infrequently update these files (every 6-12 months or so).

Really, how hard is it to write code to add new extensions? Is it really something you want driver writers and so forth spending precious time on? Besides, if I recall, nVidia’s SDK has an extensible extension registering library in it. You can expand it to include ATI or anyone’s extensions.

This functionality doesn’t belong in official OpenGL distributions.

Well, initially, a few developers here would start writing the new headers (I’d be happy to coordinate the project, but I’d have to wait a few weeks to see if I’d have time…)

Once the initial library is built, it can dynamically load the entire GL library, and it would be maintained either by the few developers working on it or by SGI as they approve new extensions.

I’m starting to think they should be released as one set of headers (newgl.h, or something), updated frequently on the opengl.org site or of course on SGI’s site. When a new revision of OpenGL comes out, you could download the newer header and drop it in as a straight replacement: the function for loading the older version would still exist, and if you want all the latest core features you call the new function, which in turn calls the earlier one, and so on. When OpenGL 1.5 comes out, for example, they’d add an init_gl_150() function which calls init_gl_140(). So it would be an easy header to maintain.

By doing this, people would no longer have to dynamically load GL by hand, and could also do extra things like specifying which GL library they’d like to use (especially useful on Linux, where different flavours of GL are commonly used). I find that everyone who’s even half serious about using OpenGL ends up loading the entire library through the extension mechanism, since this solves some cross-platform compatibility issues. And of course you then also have the option of choosing which GL library you’d like to use.

So what I’m proposing is a new header that dynamically loads all of OpenGL, extensions included, and obviously reports failure if any of the required core functions are not available.

Hope that helps clear things up.

  • DarkJedi


You were clear before.

You still haven’t explained how this is, really, in any way different from the functionality that nVidia’s tool and library provides. You request a particular extension’s functions to be loaded. You can walk the entire extension string to load them all.

I find that everyone who’s even half serious about using OpenGL ends up loading the entire library through the extension mechanism, since this solves some cross-platform compatibility issues. And of course you then also have the option of choosing which GL library you’d like to use.

Why bother? You load the extensions you need and ignore the rest. How often have you used esoteric extensions like NV_texgen_emboss or that ATi envbump extension?

Not only that, you haven’t addressed the vital problem of a driver having to come with a header and an object file, along with the requisite .dlls.

Also, you don’t seem to understand where extensions come from. SGI has little input into any particular extension. I’m not firmly knowledgeable about the extension approval mechanism, but I wouldn’t be terribly surprised if the ARB itself didn’t have much to say about any particular implementation developer exposing whatever extensions it wanted.

BTW, the header files are already written (see nVidia or ATi’s website for headers that have their extensions built in). It’s the actual function pointer loading that is missing. That isn’t going into a header file (or, if it is, it’ll be a very big header file. Preferably one with a special define so that it only gets compiled once).

And what happens if a driver developer decides to deprecate an old driver? Do programs that use that new driver mysteriously fail (since they can’t error-check the extension process)?

What about nVidia’s predilection for exposing extensions, not on the ARB’s timetable, but on their card release schedule? Would you then need an “init_gl_nVidia_detonator_version_number” function?

Of all the things to complain about in OpenGL, having to get function pointers is ridiculously far down the list in terms of importance. It’s an “I wouldn’t mind having it, but I won’t complain much about it”, rather than a “need”, or a strong “want”, or even a “suggestion.”

Korval, you’re kinda missing the mark here; my explanations are obviously not good enough, so I’ll try again. The new headers have nothing to do with vendor-specific extensions. They simply implement dynamic loading of the OpenGL library, so ALL core functions are loaded through the extension mechanism (even glVertex3f() etc.).

This simply means that instead of linking to opengl32.lib, you just #include <GL/newgl.h> and call a simple init function which loads the OpenGL DLL on the fly.

Doing it this way has several advantages:

  1. You don’t link to any specific GL library, thus allowing you to choose your desired library at program startup (as opposed to being forced to use a specific GL library, e.g. Microsoft’s implementation vs. SGI’s implementation)
  2. Solves some problems under Linux when compiling using the nVidia GL libraries and intending to distribute executables.
  3. Makes it easy to maintain one set of headers that works across all platforms
  4. This is a common way to load the GL library and has been used by people like John Carmack, and hence is definitely a tested and working solution
  5. Some 3D-only add-on cards (like the Voodoo 1/2 cards) could only use OpenGL when the DLL was dynamically loaded.

So in effect, my proposal doesn’t affect extension loading in any way; it simply suggests that everything should be loaded using this method, which clearly offers many advantages.
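
For illustration, calling such a loader might look like this. gl2Init is the hypothetical function sketched earlier in the thread, and the --gldriver option name and the example DLL name are purely illustrative.

    #include <stdio.h>
    #include <string.h>

    extern int gl2Init(const char *libname);      /* hypothetical loader; NULL = default */

    int main(int argc, char **argv)
    {
        const char *gldriver = NULL;
        int i;

        for (i = 1; i < argc - 1; i++)
            if (strcmp(argv[i], "--gldriver") == 0)   /* e.g. --gldriver othergl.dll */
                gldriver = argv[i + 1];

        if (!gl2Init(gldriver)) {
            fprintf(stderr, "could not load an OpenGL library\n");
            return 1;
        }
        /* ...create a window, choose a pixel format, and create a context as usual... */
        return 0;
    }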

  1. You don’t link to any specific GL library, thus allowing you to choose your desired library at program startup (as opposed to being forced to use a specific GL library, e.g. Microsoft’s implementation vs. SGI’s implementation)

I think the current mechanism of loading a library based on a pixel format is much better. You don’t have to know which library you want (ATi’s, nVidia’s, somebody else’s, etc.); it just magically gets the right one behind the scenes.

Do you really want people to have to know where ATi/nVidia/everybody else stores their .dll’s, what they are called, etc, just to use OpenGL?

  4. This is a common way to load the GL library and has been used by people like John Carmack, and hence is definitely a tested and working solution

Just because it has been done before doesn’t mean it is a good solution. Carmack is privy to information that we, as regular programmers, aren’t. So it’s not really surprising that he’s willing to load the .dll himself for whatever reason.

  5. Some 3D-only add-on cards (like the Voodoo 1/2 cards) could only use OpenGL when the DLL was dynamically loaded.

So? OpenGL should not be pandering to dead cards like the Voodoo 1/2.

DarkJedi wrote:
This simply means that instead of linking to opengl32.lib, you just #include <GL/newgl.h> and call a simple init function which loads the OpenGL DLL on the fly.

You are reinventing SDL, and I have a hunch you’ll be doing it poorly.

  2. Solves some problems under Linux when compiling using the nVidia GL libraries and intending to distribute executables.

Care to elaborate?

  3. Makes it easy to maintain one set of headers that works across all platforms

Well, <GL/gl.h> works on all platforms. It’s Windows programmers who don’t understand how the extension mechanism works that end up making a mess of it. You mentioned glext.h. You are aware that’s a non-standard header, right? There’s no such thing on IRIX, for example.

  4. This is a common way to load the GL library and has been used by people like John Carmack, and hence is definitely a tested and working solution

Dunno what you are trying to say… I guess you mean writing a wrapper around dlopen (and all the other functions that do the same job). Pling! That’s SDL. Yeah, yeah, I know, you don’t want all the other stuff SDL has, you just want to load the GL library. But the point stands.

So in effect, my proposal doesn’t affect extension loading in any way; it simply suggests that everything should be loaded using this method, which clearly offers many advantages.

Clearly is a relative term. The other part of your proposal has been done, too: GLext, or something like that; it appears every now and then on the front page of this very website.

Korval, you don’t need to know where everyone keeps their DLL. In Quake 3, does Carmack need to know where the user puts the OpenGL DLL? No. You can make it load the default (opengl32.dll on Windows) if the user passes NULL to the function, in which case it would automatically choose the best/default one.

As for pixel formats, I have no idea why you’re even mentioning them. Dynamically loading the GL library still requires you to choose pixel formats etc. just like you did before; nothing’s changed. Like, no offence, but I get the feeling you’re not sure what’s going on, and instead you’re arguing about things you’re not sure about. It’s not that difficult: the only thing you would need to change in your programs is that you don’t link to opengl32.lib, but instead call a simple init function.

As for your comment that “it just magically gets the right one behind the scenes”: no it doesn’t, it gets opengl32.dll in the background, i.e. Microsoft’s implementation. So what happens to people running Mesa’s, SGI’s, or 3dfx’s OpenGL? They’re screwed. They could either 1) rename the appropriate DLL to opengl32.dll, or 2) change the DLL that is loaded at program startup, IF the programmer didn’t link directly to the Microsoft implementation. If you linked to opengl32.lib, it will load opengl32.dll; I really hope you at least get that. This isn’t even OpenGL theory, it’s basic C theory: if you link to a .lib, it will use one specific DLL, in your case I’m guessing opengl32.dll (if you link to Microsoft’s implementation).

As for the Carmack comment, it was just an example. Most (as in 99% of) professional 3D game engines load the library dynamically instead of linking directly to a specific GL library. So if I wanted to not use Microsoft’s opengl32.dll (in Serious Sam, for example), I could tell it to use SGI’s implementation, or Mesa’s, or even 3dfx’s (though I don’t use Mesa personally, apart from when I wish to have a look at how a GL library would implement something).

And on the topic of 3dfx’s cards, you can’t say “OpenGL should not be pandering to dead cards like the Voodoo 1/2”. Reality exists, and a lot of people who build real engines need to make sure these older cards are supported, even if only to a minor extent, where at all possible. And I’m not saying OpenGL should be maintained for them; that’s impossible, since the 3dfx implementation is closed source. But the point is, if you’re not dynamically loading the GL library, your engine WON’T support the Voodoo 1/2 cards, because it will try to load the WRONG DLL.

I don’t see why most newbies don’t get the whole dynamic loading thing, or see where it has advantages over the other method. If you understand extensions, surely dynamically loading OpenGL shouldn’t be so hard to comprehend.

Let me ask you a question: have you ever dynamically loaded the functions from the GL library that you will be using, like glVertex3f() etc.? Didn’t think so. How do I know? Because you keep getting your facts wrong. Please research the topic before posting more comments. It’s simple: open a web browser, go to Google, and search for info on dynamically loading DLLs.

I wish I could spend more time explaining, but I am very busy and really don’t have the time. Please, could we keep this thread limited to advanced OpenGL users? It’s not a thread intended for people still learning the API or the extension mechanism. This thread was aimed at people who have already dynamically loaded their DLLs, or at least understand the basic C fundamentals as to why programmers would want to do this.

  • DJ


Finally some intelligent comments; thanks for replying, m2.

Yes, I realize that SDL supports dynamically loading the GL library. I haven’t used SDL myself (since I’m not allowed to, because it provides too much functionality, and I’m supposed to be writing everything myself).

The idea is to get rid of the common problems plaguing the current method of loading the OpenGL DLL by linking against a .lib file. As soon as you link to opengl32.lib, you force your software to load opengl32.dll, when that should really only be a default, not a requirement.

This goes directly against everything OpenGL stands for; the idea is to have an open system, one that is hopefully flexible enough that I can choose which library I want to run at startup.

That is the main advantage, and I’m not gonna list the others, because I’m supposed to be making this virtual glove work at the moment. You seem pretty intelligent, so I guess I won’t have to repeat myself three times.

But yes, implementing something like SDL’s mechanism in header form is the idea. The user could either say “load this OpenGL DLL”, or say nothing and get the default opengl32.dll. Of course, to my knowledge you still have to manually get pointers to all the functions even in SDL; I’m proposing a header that does all that for you.
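
For comparison, this is roughly what the SDL 1.2 route looks like. SDL_GL_LoadLibrary and SDL_GL_GetProcAddress are real SDL calls; the wrapper function, the p_glClear pointer, and the assumption that passing NULL lets SDL pick a default library are this example’s own, and the per-function fetching is still left to the application.

    #include <SDL.h>

    typedef void (*PFNGLCLEAR)(unsigned int mask);   /* APIENTRY omitted for brevity */
    static PFNGLCLEAR p_glClear;

    int load_gl_with_sdl(const char *libname)        /* NULL: assume SDL's default */
    {
        if (SDL_Init(SDL_INIT_VIDEO) < 0)
            return 0;
        if (SDL_GL_LoadLibrary(libname) < 0)         /* must precede SDL_SetVideoMode */
            return 0;
        if (!SDL_SetVideoMode(640, 480, 0, SDL_OPENGL))
            return 0;
        p_glClear = (PFNGLCLEAR)SDL_GL_GetProcAddress("glClear");
        return p_glClear != NULL;
    }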

Thanks for replying,

  • DJ


Yes, I realize that SDL supports dynamically loading the GL library. I haven’t used SDL myself (since I’m not allowed to, because it provides too much functionality, and I’m supposed to be writing everything myself).

Thought so. :)

Back to the more fundamental point as to why this is needed… I still don’t see it. Let’s try another angle. Why does, for example, Q3A do what it does (dynamically loading the OpenGL library instead of directly linking to it)? Twofold: 1. It’s a commercial game; this kind of application can’t afford to die with something like “can’t resolve symbol glSoAndSo, aborting”. Instead it wants to pop up a dialog, put that message there, maybe provide a hint regarding the existence of a FAQ, and then die. 2. Q3A is programmed as if multitexturing were an extension (in other words, it also runs on hardware with a single texture unit). Remember that multitexturing is an extension to OpenGL 1.0 (or was it 1.1?). Since the POS that’s the standard OpenGL library on Windows comes back from the Stone Age, this is the standard case on that platform. I guess it’s OK to cater for legacy systems.

Of course, to my knowledge you still have to manually get pointers to all the functions even in SDL; I’m proposing a header that does all that for you.

Sure, SDL only takes care of loading the GL library. For the extensions you need what I mentioned before, namely http://www.levp.de/3d/. It just needs some clean-up (and it needs to be made less Windows-centric, too). For example, AFAICT, it won’t work properly if the system’s GL is 1.4, because in 1.4 the multitexturing macros are defined in GL/gl.h for backwards compatibility and the functions are exported by the DLL.
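
The fallback this alludes to is the usual Windows workaround: entry points that the library exports directly (only the GL 1.1 set on Microsoft’s opengl32.dll) come from the DLL’s export table, while everything newer goes through wglGetProcAddress. A sketch, with get_gl_func being a name made up for this example:

    #include <windows.h>     /* link against opengl32.lib for wglGetProcAddress */

    static HMODULE opengl32;

    void *get_gl_func(const char *name)
    {
        void *p = (void *)wglGetProcAddress(name);   /* extensions and post-1.1 core */
        if (p == NULL) {                             /* fall back to the export table */
            if (opengl32 == NULL)
                opengl32 = LoadLibraryA("opengl32.dll");
            p = (void *)GetProcAddress(opengl32, name);
        }
        return p;
    }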

It would indeed be nice if this could be standardized. The current method of picking single functions out of an extension might have been OK when extensions were rare; nowadays it’s too cumbersome. I find it funny when DirectX people complain that OpenGL’s extension mechanism is too complicated, when they have the exact same thing, only under a different name (capabilities). The key difference is that DirectX does the repetitive work for them, whilst OpenGL doesn’t, which is, in turn, the core of your proposal.

There are some things you rather shouldn’t have said.

Originally posted by DarkJedi:
If you linked to opengl32.lib, it will load opengl32.dll; I really hope you at least get that. This isn’t even OpenGL theory, it’s basic C theory: if you link to a .lib, it will use one specific DLL, in your case I’m guessing opengl32.dll (if you link to Microsoft’s implementation).
No.
It will load whatever is called opengl32.dll at load time, is in the path, and exports all wanted symbols. If you restrict your function imports to some baseline, this may very well be a mini-GL. Have a look at the GLQuake (part I!) distro and look at how it works on Voodoo 1s.
Right: the mini-GL is called OpenGL32.dll, is in the path, and exports all the needed symbols.

Originally posted by DarkJedi:
This thread was aimed at people who have already dynamically loaded their dll’s or at least understand the basic C fundamentals as to why programmers would want to do this.
Basic C fundamentals - such as “never put non-inline function bodies into a header, and never define variables (such as void (*glVertex3f)(float,float,float);) in a header”?
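
In other words (the names below are illustrative), the pointer variables belong in a .c file, with only a declaration in the header:

    /* newgl.h -- declaration only, no storage allocated here */
    typedef void (*PFNVERTEX3F)(float x, float y, float z);
    extern PFNVERTEX3F p_glVertex3f;

    /* newgl.c -- the single definition lives in exactly one translation unit */
    PFNVERTEX3F p_glVertex3f = 0;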

That’s exactly what I said… it loads opengl32.dll… dunno how that’s different from what you’re saying. And I know it doesn’t matter as long as it exports the right stuff… which is why I said you have two options: either rename your DLL to opengl32.dll (if you read further) or <second reason>

As for the header issue, that’s a matter of opinion… in my opinion.

  • DJ

It’s not that difficult: the only thing you would need to change in your programs is that you don’t link to opengl32.lib, but instead call a simple init function.

A “simple” init function?

The only reason that pixel format stuff works on most OpenGL .dlls is that Microsoft’s OpenGL32 mandates that implementers add this functionality. Voodoo 1/2 mini-GL drivers probably don’t have these functions. And even if they do, they could very well be different functions from the wgl* functions.

As such, the init function must somehow know whether the implementation is one that would fit into GL32, or whether it has separate pixel-format setup code, or otherwise know what functions it needs to pull out of the .dll in order to do initialization calls.

Though all GL libraries implement the same functions with (hopefully) the same .dll export names, they do not have to all implement the same extra functions (like wgl*, etc). Only those that would function under the OpenGL32.lib system are forced to do that.

And on the topic of 3dfx’s cards, you can’t say “OpenGL should not be pandering to dead cards like the Voodoo 1/2”. Reality exists, and a lot of people who build real engines need to make sure these older cards are supported, even if only to a minor extent, where at all possible. And I’m not saying OpenGL should be maintained for them; that’s impossible, since the 3dfx implementation is closed source. But the point is, if you’re not dynamically loading the GL library, your engine WON’T support the Voodoo 1/2 cards, because it will try to load the WRONG DLL.

And my point still stands: who cares?

Nobody’s making games out of MesaGL (and those who are already link to it directly). SGI’s GL is very much unsupported. And nobody really cares about 3dfx GL support on Voodoo 1/2s. The low-end target card for most developers these days is, at worst, a TNT2.

If you linked to opengl32.lib, it will load opengl32.dll; I really hope you at least get that. This isn’t even OpenGL theory, it’s basic C theory: if you link to a .lib, it will use one specific DLL, in your case I’m guessing opengl32.dll (if you link to Microsoft’s implementation).

It may load OpenGL32.dll, but from that it will eventually (when I request an appropriate pixel format) load my nvgl32.dll or whatever nVidia calls their OpenGL .dll. As such, it does indeed “magically get the right one behind the scenes.” Whether that required OpenGL32.dll, ScrewYouGLUsers.dll, or whatever, it got what I wanted. Do I, as a programmer, really care that OpenGL32.dll was loaded? No. All I care about is that my code links itself properly to whatever implementation is correctly installed on the target system at runtime.

Let me ask you a question: have you ever dynamically loaded the functions from the GL library that you will be using, like glVertex3f() etc.? Didn’t think so. How do I know? Because you keep getting your facts wrong. Please research the topic before posting more comments. It’s simple: open a web browser, go to Google, and search for info on dynamically loading DLLs.

No, I have not dynamically loaded core OpenGL functions (because, surprise, OpenGL32.lib does it for me. Why should I waste my time?). Yes, I do understand .dlls and dynamically loading them. Indeed, considering that I have written .dll-plugin apps (both the main apps and plugins), I would say I’m very familiar with dynamically loading .dlls.

This thread was aimed at people who have already dynamically loaded their DLLs, or at least understand the basic C fundamentals as to why programmers would want to do this.

Most of the people who use OpenGL, on any level, do not dynamically load their .dlls. Why? Because there’s no reason to. The OpenGL32.lib mechanism works for nVidia, ATi, 3DLabs, and most other important manufacturers.

I suppose I could go spend the time to do dynamic .dll loading. But, since the result, for the people I actually care about using my software, is exactly the same, what good does it do me?

Most (as in 99% of) professional 3D game engines load the library dynamically instead of linking directly to a specific GL library.

These are engines, not users. By definition, they have to be versatile.

Also, it is a legacy feature. Once upon a time, the Unreal and Quake engines had to support old Voodoo 1&2 cards. Now, they don’t, but there’s no reason to throw away perfectly good code. So, that functionality still exists. Indeed, I would imagine that the core GL-loading functionality is so deep in the engine that they haven’t looked at it for years. It isn’t as though they intended to still support loading non-GL32 stuff; they just had more important things to do than take it out.

What you’re looking for is an alternative OpenGL32.lib that allows the person to select a .dll. If that’s all you wanted, why all the stuff about headers and other crap that is, ultimately, unimportant to the actual functionality of being able to select a .dll? Such a thing is way too large to go into a .h file anyway (especially since it’d have to have special-case code to handle something like Voodoo 1/2 GL .dlls).

Your first post was rather cryptic about what you’re looking for, if all you’re asking for is an OpenGL32.dll that takes a string parameter.

Korval, I’ve already wasted enough time on you. I skimmed over your posted message, and you’re wrong on every count.

And I really don’t believe you’ve dynamically loaded any DLLs. It sure as hell doesn’t seem like it from the stupid arguments you put forth.

Pixel formats have nothing to do with Microsoft’s implementation of OpenGL; they are used on ALL OpenGL implementations (on Windows, at least; PFDs are not part of OpenGL but part of the GDI).

And don’t be a dumbass; why should we not support the Voodoo 1/2 cards or the other cards with mini-drivers? It is as simple as giving the user the option to change which DLL they wish to load. If you tell it to load opengl32.dll, it is functionally identical to how programs linked against opengl32.lib work now. Of course, in the spirit of OpenGL and cross-platform compatibility, you wouldn’t actually hard-code it to load that DLL. Well, you might.

And again, for the second time: it doesn’t magically load the right DLL, it loads opengl32.dll and expects it to handle the rest; unfortunately, some cards don’t work via opengl32.dll.

And for the second time, nothing changes pixel-format-wise when you dynamically load OpenGL. You still go through the same procedure as you did before, so I don’t see why you keep going on about pixel formats (and I’ve told you this twice!). Pixel format descriptors are not even part of OpenGL; they’re part of the Windows GDI.

The only difference is that instead of forcing your application to use opengl32.dll, you can tell it to use any version of the GL library.

As for no one using Mesa… you are aware that Mesa is the most widely used OpenGL implementation on the Linux platform, right? I hope you don’t bother replying, but I can already see you saying “Why should we support the Linux platform?”

So please, stop dominating this thread with incorrect and meaningless crap. If you don’t want to load OpenGL dynamically (not that you even know what that means), then don’t reply to this thread. You don’t have a clue what you’re doing, and you haven’t done the research I asked you to do before replying again.

Some of us actually have things to do, so unless you have anything correct and intelligent to say, please stay out of the conversation. And keep your replies short and to the point, because random rambling just wastes my (and everyone else’s) time.

Try and actually post something constructive for a change (to some other thread please).

  • DJ


Pretty abusive today. Can you stop that?

Korval has some points you should take more seriously, IMO.

“Pixel formats have nothing to do with Microsoft’s implementation of OpenGL; they are used on ALL OpenGL implementations (on Windows, at least; PFDs are not part of OpenGL but part of the GDI)”

The pixel format you choose on a display context is a modifier to the choice of GL implementation. The popular example is 16-bit color + stencil on a GF2 MX or something. It does not even go through to NVIDIA’s ICD, as is indicated by the renderer string. So it chooses the MS implementation, based on pixel format. Now you could argue that you would just load NVOGL.DLL yourself and hope that it falls back to software itself. But…

“nothing changes pixel-format-wise when you dynamically load OpenGL. You still go through the same procedure”

Please outline ‘the same procedure’. You’re aware that you’d somehow have to connect the GL implementation to the windowing system and all that crap, everything that MS’s OpenGL32.dll handles right now. It’s not just DLL loading. And, as evidenced by comments from NVIDIA employees right here on this board, the ICD isn’t even designed to know about PFDs.
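
For reference, ‘the same procedure’ on Windows is roughly the following; the pixel format and context calls live in GDI and opengl32.dll, which is exactly the plumbing an application-side DLL loader would still be leaning on. The function name and the particular format fields are just an example.

    #include <windows.h>

    int setup_gl_on_window(HWND hwnd)
    {
        PIXELFORMATDESCRIPTOR pfd = { sizeof(PIXELFORMATDESCRIPTOR), 1 };
        HDC dc = GetDC(hwnd);
        int fmt;
        HGLRC rc;

        pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
        pfd.iPixelType = PFD_TYPE_RGBA;
        pfd.cColorBits = 32;
        pfd.cDepthBits = 24;

        fmt = ChoosePixelFormat(dc, &pfd);   /* this choice decides which ICD ends up running */
        if (fmt == 0 || !SetPixelFormat(dc, fmt, &pfd))
            return 0;

        rc = wglCreateContext(dc);
        return rc != NULL && wglMakeCurrent(dc, rc);
    }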

The only exception is - ta-da - the 3dfx MiniGL, because it just ignores the window system and pops up a 16-bit fullscreen mode without bothering about anything. You don’t want to make that common behaviour.

Yet more fun: you might have multiple monitors driven by totally different cards in your machine. The choice of ICD would depend on the window position. If the window overlaps both monitors, it would fall back to software; otherwise it would choose the card the monitor is plugged into. Application programmers are simply in no position to choose an implementation; that’s OS stuff.

I could pick up some more, but I’m not quite in the mood. In the end I think that, if there were no OpenGL32.dll as it is right now, someone would have to make one. I agree that the ‘support’ has been less than stellar, but it works, and every vendor making cards that run in Windows supports it.

I think a far better way of solving all this is just wrapping a few hundred wglGetProcAddress calls into a library, rather than trying to replace the ICD loading mechanism.
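
One common shape for such a wrapper is an X-macro list, so the declarations and the loading code cannot drift apart. This is only a sketch under the assumption that a GL context is already current; the list would really contain a few hundred entries, and every name here apart from the wgl and GL calls themselves is made up.

    #include <windows.h>   /* link against opengl32.lib for wglGetProcAddress */
    #include <GL/gl.h>

    #define GL_FUNC_LIST(X) \
        X(void, glActiveTextureARB,   (GLenum texture)) \
        X(void, glMultiTexCoord2fARB, (GLenum target, GLfloat s, GLfloat t))
    /* ...the other few hundred entries would go here... */

    #define DECLARE(ret, name, args) static ret (APIENTRY *p_##name) args;
    GL_FUNC_LIST(DECLARE)

    #define LOAD(ret, name, args) \
        p_##name = (ret (APIENTRY *) args)wglGetProcAddress(#name);

    int load_gl_extensions(void)          /* call with a GL context already current */
    {
        GL_FUNC_LIST(LOAD)
        return p_glActiveTextureARB != NULL && p_glMultiTexCoord2fARB != NULL;
    }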