Talk about your applications.

I work for a small game development firm, Mindware Studios, on a currently undisclosed game for PC and X360. Unfortunately, many of our past projects ended in the prototype state because we were unable to find a publisher for them (some YouTube videos).

My current top list:

  • Better predictability of driver actions. I do not like it when the driver decides that now would be a good time to recompile a shader for some values of float constants to gain a few fps, or something like that. That really ruins the day (or at least the beginning of it). An API that would allow me to opt out of such optimizations (where applicable), or at least say that they should be done at a time I choose (without needing to simulate fake rendering in an attempt to outsmart the driver; see the warm-up sketch after this list), would be nice. The same goes for the upload and management of various resources.

  • Development support for debugging stalls. If there is a rendering stall (e.g. the driver is reorganizing something), I would like to get information about the reason. Chances are that I can do something to avoid it, if I have information about what happened instead of “this frame took 300ms”.

  • Reliable drivers with good memory management. If the management and amount of video memory are supposed to be transparent, the driver should be good at that. A failing FBO bind, or a long-term slowdown because too many textures were used in the past, is not nice.

  • The ability to cache compiled shaders. I have already been forced to revert to more passes (with the associated performance costs) because shader compilation (and warm-up) time was becoming a significant issue.

  • Support for shared uniforms, or better yet, EXT_bindable_uniform in core, supported by all vendors.
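
To illustrate the “fake rendering” hack from the first point, here is a minimal warm-up sketch. The scratch_fbo (a pre-made 1x1 off-screen render target) and dummy_vbo (a one-vertex buffer) are hypothetical objects created elsewhere, and nothing obliges a driver to honor this; that unreliability is exactly the complaint:

    /* Try to force the driver to finish deferred shader compilation and
     * specialization at a time of our choosing, via one throwaway draw. */
    static void warm_up_program(GLuint program, GLuint scratch_fbo, GLuint dummy_vbo)
    {
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, scratch_fbo);
        glViewport(0, 0, 1, 1);
        glUseProgram(program);

        glBindBuffer(GL_ARRAY_BUFFER, dummy_vbo);
        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0);

        glDrawArrays(GL_POINTS, 0, 1);  /* one throwaway point, result ignored */
        glFinish();                     /* make the driver do the work right now */

        glDisableVertexAttribArray(0);
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
    }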

All I’m working on right now is a generalized shader system that makes writing shaders and reusing shader code easier. Most of my work is in parsing, parse tree transformation, and making sure that the output is functional glslang. But I do intend to have an actual previewer for this at some point.

The main way OpenGL gets in the way of this is something I’ve discussed here before: vertex attributes being bound to numbers, not the names of actual attributes. It forces me to create a hash-table/mapping layer for names, which is not in keeping with a generalized system.
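
A minimal sketch of that mapping layer under the GL 2.x attribute API; the helper names and attribute strings are hypothetical examples:

    /* Nothing ties an attribute number to a name across programs, so every
     * user-supplied name has to be resolved per linked program. */
    GLint resolve_attrib(GLuint program, const char *name)
    {
        return glGetAttribLocation(program, name);  /* -1 if not active here */
    }

    /* The alternative is to pin a naming convention before glLinkProgram: */
    void pin_attrib_conventions(GLuint program)
    {
        glBindAttribLocation(program, 0, "position");    /* slot numbers arbitrary */
        glBindAttribLocation(program, 1, "userTangent");
    }

Either way, a generalized tool ends up maintaining a name-to-number table on the side, which is the complaint.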

I don’t really need hardware features from OpenGL as an API. Mentally, I’m at the point Carmack talked about, where hardware only matters in terms of performance and hardware limits (number of attributes, uniforms, size of programs, etc.). Yes, I know there are and will be hardware features beyond this (geometry, tessellation, blend shaders, etc.). But I simply don’t care anymore; hardware can do most everything I would ask of it. I feel that game graphics have hit the point of diminishing returns, and anything more than we have now is basically just making graphics cost more and more, rather than achieving something.

My priorities for the API are, in order:

1: That it works. That the ARB do whatever must be done to make OpenGL implementations as stable and ubiquitous as D3D ones. Longs Peak was supposed to be that, but, well, you know what happened.

2: Reasonable simplicity. That is, the API isn’t cluttered with a bunch of legacy crap that makes you jump through hoops to do “normal” things (the VAO issue, for example).

3: Breadth of use. DX10 hardware as the “floor” is not good enough; any hardware that has real vertex and fragment shaders should be supported by an API that has the above features.

4: Keeping the API reasonably performant. That is, the API doesn’t make you do heavy-weight operations for common tasks (constantly changing “constant” uniforms just because you need to use the same program for two different objects).

5: Exposing new hardware features.

The simple fact is that I understand that the ARB’s priorities are not these. #2 is basically a pipe-dream at this point; the best we’re likely to see is a “not obtuse” API in GL 3.3 or so. And #1 is probably even more so; trusting ATi or Intel Win32 OpenGL drivers is, unless you’re developing on the WoW or id Tech engines, basically foolish. The ARB has formally rejected #3, since all of the API cleanup (removal of old crap) that could lead to more stable drivers and such will be unavailable on DX9-class hardware. They had the opportunity to fix these things (or make a good stab at fixing them), and they decided not to. Oh well.

So really, there’s simply not much the ARB can do from my perspective. They can spot-fix OpenGL, but they’ll never get a good API out of it.

Sorry that you feel that way, Korval. If at some point you do have any specific suggestions for OpenGL beyond negativity and defeatism, the thread awaits your (concrete) contribution.

[snip - sorry, wrong thread!]

Though this isn’t constructive for this thread, I would like to say that Korval’s summary is exactly how I feel. OpenGL doesn’t need a lot of features, but it does need a thorough API cleanup, both for simpler development and for more reliable driver implementations.

Apart from that, raising some hardware limits (like the number of uniforms available) would be great.

After the GL3 disaster, what beyond “negativity and defeatism” do you expect from long-term GL users? No need to answer this one.

Jan.

I think Rob is just trying to make the most of a less than perfect situation, Jan.

“Let us part these dark clouds of discontentment, that we may bask in the light of the blessings that have been bestowed upon us. Let us reach out to our fellow GLian in peace and harmonious contemplation, that we may not perish at our own spleen. Let us forge new boundaries of our domain, that we may set ablaze the hallowed walls of our consternation; and in the sublime warmth of its glow may we lift our eyes to behold the fleeting twinkle of its heavens.”
– Anonymous OpenGL User

Naturally everyone is entitled to their opinions. The point of this thread is that it’s expressly for the people that feel differently from you, and for them to share specific issues they are running into with OpenGL in their apps.

I expect 3.1 to address a subset of the specific issues/suggestions raised in this thread (and in other discussion areas), and I expect 3.2 to continue on that path. The key is that these are not going to be years apart, and that driver development is once again pipelined alongside spec evolution, as you can see from the way 3.0 drivers are shaping up even as the 3.1 spec starts to converge.

The status quo is that applications have a pretty clean evolution path under OpenGL up to 3.0, since no features of the API have yet been cut (and if/when that does happen, it will be under a context version greater than 3.0 which an application will have to explicitly request, so this does not represent a change that would break existing binaries).
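
(Concretely, that explicit request happens at context creation. A sketch using WGL_ARB_create_context, assuming wglCreateContextAttribsARB has already been fetched via wglGetProcAddress from a legacy context, with error handling omitted:)

    const int attribs[] = {
        WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
        WGL_CONTEXT_MINOR_VERSION_ARB, 1,   /* a version > 3.0 must be asked for */
        WGL_CONTEXT_FLAGS_ARB, WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB,
        0                                   /* attribute list terminator */
    };
    HGLRC ctx = wglCreateContextAttribsARB(hdc, 0, attribs);
    /* Binaries that never call this keep getting the full 3.0 feature set. */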

At some point a 3.x spec and drivers will arrive that deletes some set of the deprecated features for new contexts. This will represent the first revision of OpenGL that necessitates significant source code changes (but again, only for apps that want to move beyond 3.0 to this 3.x version).

What happens after that will be interesting, i.e. how long will it take for say 80% of the application base using OpenGL to get onto 3.x. If many developers keep using the 3.0 set in order to have access to legacy features, then that means that those developers are deriving more benefit from the availability of those features than they would gain by migrating their code to 3.x. They may not be here on OpenGL.org banging the drum for a completely minimized API, but we know they exist.

Sorry that you feel that way, Korval. If at some point you do have any specific suggestions for OpenGL beyond negativity and defeatism, the thread awaits your (concrete) contribution.

But I don’t need something specific from GL 3.1. Other people have mentioned the common pain points, and I started a thread in this forum about the only one that is unique to me.

What I need is for OpenGL to work. I posted my priorities because that is what tells you my API concerns.

My #1 priority is an API that works; first and foremost, I need to be able to trust my code to this thing. That’s what my application needs: to be connected to an API that I can reasonably trust my code to. Anything else is secondary. That doesn’t seem too much to ask for.

I want an API where I can use an nVidia card, and have some reasonable assurance that GL code I write on it will be functional on ATi hardware. Oh, I expect the occasional glitch or hardware difference (size of shader), and certainly it wouldn’t work for a full-fledged game. But I should not have to have a bunch of “#if ATi Driver Version X, do my HDR rendering this way” in my code.

Is the ARB going to be getting on that anytime soon? Because 3.1 isn’t that. 3.0 certainly isn’t. The ARB’s plan of snipping off a few things here, a few things there, is simply not going to get the job done in terms of getting good-quality Windows implementations out there. The API as a whole needs to be simple to implement if one is reasonably going to have good-quality implementations. Even if 3.1 removed 100% of 3.0’s deprecated features (and things you’ve said in this thread suggest I shouldn’t get my hopes up), it still wouldn’t be an easy-to-implement API.

Oh, I could (and have in other threads) list certain pain-points on the API. I even came up with a very concrete solution for a few of them. But the simple fact is that even if they were all written into 3.1 tomorrow, OpenGL would still not be a good API. It would merely be “less bad”. You still wouldn’t be able to trust an ATi implementation with your money. I noticed that, while WoW relies on an OpenGL implementation on Windows, you appear to have switched to D3D for StarCraft II and Diablo III. If Blizzard can’t trust OpenGL on Windows, why should anyone else?

That is what my application needs: an API that is dedicated to working.

In short, my post was a way of saying, “OpenGL is too broken from my perspective for ‘specific suggestions’ or ‘spot fixes’ anymore.” Again, I recognize that the ARB isn’t going to do what is necessary to provide what I need. That isn’t defeatism; that’s reality. The ARB’s priorities aren’t in line with that anymore.

“Let us part these dark clouds of discontentment, that we may bask in the light of the blessings that have been bestowed upon us. Let us reach out to our fellow GLian in peace and harmonious contemplation, that we may not perish at our own spleen. Let us forge new boundaries of our domain, that we may set ablaze the hallowed walls of our consternation; and in the sublime warmth of its glow may we lift our eyes to behold the fleeting twinkle of its heavens.”

Until those “new boundaries of our domain” include “substantive changes to the API that make it vastly easier to implement, such that a developer can actually trust Windows implementations again,” those are meaningless platitudes.

From the perspective of making a working API, the ARB turned along a path that does not lead there.

In my opinion, the ARB can hardly address ATI’s issues. You can’t make a company reliable by introducing a brand-new API and expecting everything to change overnight. A complete API rewrite wouldn’t necessarily have helped here. I have no idea what’s behind ATI’s lack of interest in making their GL implementation as good as nVidia’s (probably unwillingness?), but I don’t believe that the way GL is designed is the only thing standing in their way.

I do believe that the way OpenGL is designed makes it less reliable, because writing a driver for OpenGL is really complicated.

If OpenGL had only the features we need, there would be less code to make reliable, and less work for ATI.

I really would like some hard numbers on this. A good one would be: how many lines of code are in ATI’s D3D9 driver versus its D3D10 driver?

From that, I think we could expect an even bigger difference between OpenGL 3 and a cleaned-up OpenGL 3. Less code to debug means less debugging work.

But you are right: new, cleaned-up OpenGL drivers won’t happen in a day, but they could at least happen some day. Currently, a lot of OpenGL programmers don’t believe it will happen at all. That’s a big difference.

Personally, I don’t believe we will ever have reliable drivers on Linux, and for me it’s as simple as that: I don’t care about it. A waste of time.

My previous project (Vue 7) gives a good example of this. I tried to make the OpenGL renderer work on Intel chips … We complain about ATI drivers, but I’m sorry, they are awesome compared to Intel’s. My boss wanted the software working on Intel chips. I tried, wasted time, and told him those drivers are crap and that I would just be wasting my time. Since my boss really wanted it to work, I told him “OK”, but warned that it would look like [censored]. So we made a deal: I simply detect that the chip is an Intel one and fall back to OpenGL 1.1 for those.
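
(In practice that detection is just a vendor-string sniff at startup. A sketch; the enum and its path names are hypothetical:)

    #include <string.h>
    #include <GL/gl.h>

    /* Route Intel chips to a fixed-function GL 1.1 path, everyone else
     * to the full renderer. */
    typedef enum { PATH_GL11_FIXED, PATH_FULL_RENDERER } render_path;

    render_path pick_render_path(void)
    {
        const char *vendor = (const char *)glGetString(GL_VENDOR);
        if (vendor && strstr(vendor, "Intel"))
            return PATH_GL11_FIXED;   /* drivers deemed too unreliable */
        return PATH_FULL_RENDERER;
    }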

Now the company is considering using Direct3D, which a few years ago I would have thought crazy for this kind of software; now it seems to make sense to me. D3D development is so much faster, and D3D9 has shown that it can survive for quite a few years, which makes the investment worthwhile.

My fear is that the game industry’s lack of interest in OpenGL is just one example of a lack of interest across the whole graphics industry; it’s just that the development cycle in games is so short that we see the result there first.

Summary: Greatly thin out OpenGL’s API to maximize the chances that all vendors can actually implement solid drivers.

In a perfect world, that makes perfect sense.

However, faced with the reality that said vendor(s) will not allow the deprecation of old GL code paths right now, said simplification will not occur, driver-writer level of effort will not decrease but will continue to increase due to new/modified APIs (more code paths), and that “easier for the driver writer yields increased quality” argument just evaporated on us.

The ARB’s plan of snipping off a few things here, a few things there, is simply not going to get the job done in terms of getting good quality Windows implementations out there.

As I recall, the ARB’s plan was a gut job, but the vendor(s) threw their weight around on this and said, “Take a hike. We’re not doing that.”

If a spec lies in a forest but no one implements it, does it even exist? Dunno, but it certainly would be stupid to write it.

From the perspective of making a working API, the ARB turned along a path that does not lead there.

Seems they had a gun to their head from the vendors on this one. Vendors are the ones that need convincing here, not the ARB.

Theoretical perfection rarely matches reality. Any engineer knows that. Time to just buck up and deal with it, and be thankful the MS/DX/Vista camp foobarred badly in parallel with the GL3 fiasco, so we haven’t lost that much ground. Both are examples of what happens when a large part of the user base is ignored in favor of one small faction. So let’s pick ourselves up, learn from it, and move on.

And let’s all keep our criticism constructive. Don’t just list problems, but suggest and encourage discussion of solutions (or at least a deeper understanding of the problems). Rob and Cass for instance are both very good at this and we’re lucky to have them.

@Groovounet: That stuff looks [censored] awesome!

OpenGL ES 1.1 AND OpenGL ES 2.0 implementations on PC. Honestly, for what I have done at work, that would be just perfect. With as many extensions as possible. Not really to use them in shipping products, but to get an idea of what’s coming and to orient software design in that direction.

OpenGL ES on PC is insane? No more insane than OpenGL ES on PS3 …

I think both ATI and nVidia already have OpenGL ES implementations; how close are they, and would it take long to develop?

For me, first of all, it’s all about trust and development efficiency. (That’s just my professional side; my amateur side expects a lot more, obviously! XD)

@Jan: thanks! :)

Vendors are the ones that need convincing here, not the ARB.

Vendors of what? The whole Longs Peak thing was started as a joint venture between nVidia and ATi.

but suggest and encourage discussion of solutions

The problem is that there is no solution. There was only ever one solution: rebuild the API from scratch. They started to do it, then stopped.

Korval, as politely as I can put this: if you really want to draw focus to things that don’t encompass discussion of your GL applications, or specific improvements to GL that would help you develop and improve said applications, can you do it in a different thread? OK by you?

Things are quite similar in my application.

I am working on an out-of-core volume ray-casting system for extremely large volumes (academic use, and a prototype for oil and gas applications). With respect to my comment about usable uniform buffers: around 15 uniforms need to be set for the internal shaders, with some overlap between different shaders. Looking at the current bindable uniform extension, it seems ridiculous to create a buffer object for each uniform, so the solution is to group them into logical structs and store those in buffers. This seems the best approach, BUT the extension spec is missing information about the alignment and sizes of the types, which introduces uncertainty about whether the driver ‘gets’ what is going on in the buffer object, or whether there is some kind of marshalling going on that can hurt performance.
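
A sketch of that grouping with EXT_bindable_uniform, querying the driver for size and offset instead of assuming a layout (the uniform name and count are placeholders; whether these queries fully pin down member packing is exactly the uncertainty described above):

    // GLSL side
    #extension GL_EXT_bindable_uniform : enable
    bindable uniform vec4 sharedParams[15];   // the ~15 shared values, packed as vec4s

    /* C side: ask the driver how big the backing buffer must be and where
     * the data starts, rather than guessing at alignment. */
    GLint    loc  = glGetUniformLocation(prog, "sharedParams");
    GLint    size = glGetUniformBufferSizeEXT(prog, loc);  /* may exceed 15*16 bytes */
    GLintptr off  = glGetUniformOffsetEXT(prog, loc);      /* where the data starts */

    GLuint buf;
    glGenBuffers(1, &buf);
    glBindBuffer(GL_UNIFORM_BUFFER_EXT, buf);
    glBufferData(GL_UNIFORM_BUFFER_EXT, size, NULL, GL_DYNAMIC_DRAW);
    glUniformBufferEXT(prog, loc, buf);  /* the same buffer can back several programs */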

Another thing that bothers me is the missing include mechanism in shader files. Large parts of my shaders can be shared. There was talk about some kind of text buffers that could be referenced from the shaders. I use a custom include mechanism that stitches the shader strings together, but this clashes with certain things (like the #version 120 statement).
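
One workaround sketch: glShaderSource already takes an array of strings, so the stitcher can keep the #version line as its own first string and splice the shared chunks in after it. The chunk names here are hypothetical, and the chunks themselves must not contain a #version line:

    const GLchar *sources[] = {
        "#version 120\n",       /* must precede all other tokens */
        shared_lighting_chunk,  /* spliced-in shared code */
        shader_body             /* the shader's own main() etc. */
    };
    glShaderSource(shader, 3, sources, NULL);  /* NULL: strings are null-terminated */
    glCompileShader(shader);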

Korval, as politely as I can put this: if you really want to draw focus to things that don’t encompass discussion of your GL applications, or specific improvements to GL that would help you develop and improve said applications, can you do it in a different thread? OK by you?

Very well.

First, there’s this, which was discussed extensively in that thread. It’s an annoyance for any kind of tool where the user has control over the attributes rather than some internal coding convention.

Second, there’s this: a conformance test. A real, comprehensive conformance test, one that can stop vendors from shipping drivers if they fail it. The reason (well, one of them) I’m writing an HLSL exporter for my shader tool is that I can’t guarantee that my glslang shaders will compile. I’ve had perfectly functioning shaders fail to compile, even on nVidia hardware, simply after a driver update. Not long shaders, either; fairly trivial “multiply the texture color by a diffuse color and store into gl_FragColor” things.
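
(For scale, the kind of trivial shader meant here is roughly this, in GLSL 1.20:)

    uniform sampler2D tex;
    uniform vec4 diffuse;

    void main()
    {
        // multiply the texture color by a diffuse color, store the result
        gl_FragColor = texture2D(tex, gl_TexCoord[0].st) * diffuse;
    }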

I need to be able to compile my shaders on one implementation, and have them fail compilation on another implementation only if they exceed some hardware limit. A reasonable conformance test would do that.

One of the things we should get onto the OpenGL wiki is a list of “where to go / what to do” when a developer runs into a bug. I can express the same desire for zero bugs in drivers as anyone else, but closing the loop so that the vendors can move closer to that goal really necessitates filing bugs on a regular basis. (One might think that posting about them in a public forum would be sufficient, but no.)

A simple thing we could do is just put bug reports in the wiki (along with repro code where appropriate). Or set up a Bugzilla DB?

Brilliant idea, especially for small-timers like myself, who don’t have a direct line through registered developer programs and whatnot.

For a lot of folks the public forum is the only recourse; and inappropriate though it may be as a venue for filing bug reports, I think there’s some virtue in keeping things as public and transparent as possible (to the possible chagrin of the vendors concerned).

You mean an IHV-independent bug tracker for OpenGL? Where I could file bugs against specific driver/hardware configurations? I’m all for it, but, haha, who exactly will maintain it?

Reporting bugs is not the problem. This thing will fill up with hundreds of reports within days. But who will open, fix, and finally close them, huh? Note: this involves fixing bugs! Yeah, that’s truly a brilliant idea!!

(The message well I hear, my faith alone is weak.)

CatDog