Rage - what a mess

So Rage, the new flagship title from id Software, was released and found to be unbelievably buggy when played on PC. Both AMD and NVIDIA are scrambling to update their drivers so that the game becomes at least nominally playable.

While I have not yet seen any analysis of what went wrong, the fact that it can (and apparently must) be fixed at the driver level is surely bad news. Here is the first (and only?) major OpenGL title on the market, and it turns out to be hideously badly supported, with drivers that are apparently not tested at all. Clearly not good for AMD or NVIDIA (who promised OpenGL support but apparently cannot deliver), or for id (who see their flagship title and engine technology get some incredibly bad press) - but also bad news for OpenGL itself. Who in his right mind will do any kind of mass-market development in OpenGL now, knowing that for anything less high-profile than “the new id title” you are NOT likely to see AMD and NVIDIA scramble to support you?

Which leads me to wonder: is there anybody else out there who is seriously pushing OpenGL (and not just sticking to the v1.0 profile!)? Are you running into the same problems with driver support apparently just not being there?

Also, what did ID use to develop this game? Do they have access to beta drivers that fix all their issues? How come such drivers are not available for general use now?

Questions, questions…

My interpretation is that Rage was polished for the consoles.
For the PC, development may well have been done on bleeding-edge, unreleased, made-for-id drivers from both NVIDIA and ATI, which should have been out before the game :slight_smile:
QA testing for the PC is indeed more involved than for a console, but of course that is no excuse to skip it.

While I have not yet seen any analysis of what went wrong

Wouldn’t it be reasonable to wait until we know exactly what went wrong before making any analysis or judgment?

As I understand, most of the performance issues around Rage center on “Megatexture”, which is generally not a good idea in any environment where you don’t have direct access to the hardware.

In any case, there have been plenty of major releases of games, OpenGL and non-OpenGL, that have prompted scrambles by driver writers to get new versions out. BF3 did it, as did CivV. Indeed, I recall that there was some driver issue around StarCraft II for NVIDIA last year or something. I don’t see this as being anything more or less than those, and all of them were D3D games.

id Software is also unfortunately in a market position where they’re never wrong. If their game isn’t working on a system, then it’s clearly Microsoft/NVIDIA/AMD’s fault, not id’s. So even if they’re effectively abusing the API or doing non-standard (or just ill-advised) things, driver writers will conform their drivers to them. Not vice versa, as with other game developers.

If anything, I’d say that the biggest problem is that it takes away from driver development time that would otherwise be spent on fixing real driver bugs. But there’s not much that can be done about that.

Who in his right mind will do any kind of mass-market development in OpenGL now

Only expensive “legacy” software. Indeed, some giants are switching gears now and heading in the “right” direction.

So the ones to blame are id, NVIDIA, AMD, etc. etc.

OpenGL needs one and only one thing: something to blame. OpenGL simply needs a real SDK, not a collection of wrappers and image loading libraries…

Create a downloadable SDK for each platform that communicates with a minimal unified driver architecture, and then we will have one thing to blame: the SDK. But guess what, if this ever happens then there will be nothing left to blame. :slight_smile:

Implementing a bug-free OpenGL requires full dedication from a big developer, not a team working on it as a secondary project. :wink:

Good luck!

That, the ‘embedded’ guys (sorta kinda), or those targeting OS X.

The OP speaks the truth. OpenGL drivers on Windows are just a mess. Certain vendors do better than others, but in my experience it’s been a total crap shoot as soon as you try to use any even remotely new feature. The fact that Rage has had big problems did not surprise me in the slightest.

We’ve had an OpenGL app in deployment for just over a year now, an art tool. It is basically written against OpenGL 2 with a few common extensions. Nothing super fancy. We have constant problems; it’s a moving target. We require all our users to be up to date with video drivers, which helps most of the time, but every once in a while a vendor will push out an update that breaks our app. GLSL compilation has been one of the main problems, sRGB color space another. On and on it goes…
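To give a flavor of the kind of defensive coding this forces on you, here is a minimal sketch (not our actual code, and assuming a loader like GLEW) of what every shader compile ends up looking like; we dump the info log even on “success”, because some drivers only hint at problems there:

```cpp
// Minimal sketch: compile a shader and always dump the info log, even on
// "success", since some drivers report warnings there that foreshadow trouble.
// Assumes an existing GL 2.x context and a loader such as GLEW.
#include <GL/glew.h>
#include <cstdio>
#include <vector>

GLuint CompileShaderChecked(GLenum type, const char* source)
{
    GLuint shader = glCreateShader(type);
    glShaderSource(shader, 1, &source, NULL);
    glCompileShader(shader);

    GLint logLength = 0;
    glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &logLength);
    if (logLength > 1) {
        std::vector<char> log(logLength);
        glGetShaderInfoLog(shader, logLength, NULL, &log[0]);
        std::fprintf(stderr, "GLSL info log:\n%s\n", &log[0]);
    }

    GLint status = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &status);
    if (status != GL_TRUE) {
        glDeleteShader(shader);
        return 0;  // caller falls back to a simpler shader path
    }
    return shader;
}
```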

I like OpenGL, I want it to stay. I’m optimistic about its future. But it needs a reset. Come up with an OpenGL 5 spec or something that totally breaks compatibility with the old version. Just start over. Please. When I hear vendors say things like “oh, we shouldn’t deprecate features, we can continue to support the old ones just fine…” I look at the state of OpenGL drivers today and ask “O RLY?”.

A functioning conformance test suite would be far more useful to this end than a backwards-incompatible API. And supposedly the ARB is working on getting one for 3.3. I say “supposedly” because it’s supposed to have already come out.

Conformance tests would be great! I hadn’t heard about that. It will be very hard to get good test coverage, but it’s better than nothing. Hopefully one will come out for GL 4.x rather than 3.3, and it will stay up to date.

There are a lot of other reasons I’d like to see OpenGL start over; driver quality is only one of them. Let’s do both! Driver quality is a serious problem, more than just a road bump. We’re probably going to experiment with a Direct3D 11 build for future revisions of our app, if for no other reason than maintainability.

OK, let’s assume that the ARB came up with an “OpenGL 5.0” that was not backwards compatible with previous versions. And that it came out tomorrow.

Who would support it? No one. A specification is just a piece of paper; until it is implemented, the specification means nothing.

So you’ll have to wait 6 months (minimum) for it to be implemented. And since that’s just a first pass at implementing it, it will be incredibly slow. Also, it will be buggy.

Since it will more than likely include some form of high-level shading language, it will also include a compiler for said language. So that compiler will be buggy too. You could make the language GLSL, but then you haven’t improved anything. Even if you only use a lower-level shading language, that’s still a completely new compiler that they need to write code for. Code that will initially be buggy.

It will take a good year or two to work out the bugs. The big ones will be fixed right away, of course. But the little ones, the outside cases, like the ones you hit in OpenGL now, those are the ones that will take time to ferret out. Sure, a simpler spec will be simpler to implement, but since the implementation must be written from “scratch” (not the low-level parts, but the interfaces to them, which is where 99% of all OpenGL bugs are), this will introduce a lot of bugs.

So you’re looking at two years minimum before this fixes anything. And even that assumes that the IHVs drop all pre-5.x GL development and devote all of their GL resources to 5.0. That would of course be incredibly stupid, since there are still many applications that use OpenGL out there.

So you’re effectively telling IHVs to support both the old API and the new one. The same IHVs that you say are not supporting the old API very well. Do you honestly think that giving them more work will help improve their drivers?

Equally importantly, why would they do it? Zero code out there would support GL 5.0. So why would an IHV invest real programmer time to support it? There’d be no real point; nobody uses it. Nobody would use it until it’s implemented. And since nobody implements it, nobody would use it.

See the problem? It would solve nothing.

Back on RAGE itself: just started playing it on my two-year-old PC (GeForce GTX 275), absolutely no problems, 60 fps framerate, way quicker than Fallout 3 for example.
The great part about PC is choice of components. The bad part about PC is the variety of hardware.

I’m assuming you are talking about nVidia.

That won’t happen because NVIDIA doesn’t want to break compatibility. There was a big stink back in 2008 when GL 3 was finally announced; it wasn’t at all what we expected. It was just GL 2.2.

The guys at NVIDIA are very confident that they can provide high-quality drivers with backwards compatibility. They also don’t like GL 3 deprecating things that exist in their hardware.

And apparently, there are some big software vendors who don’t like deprecation either.

You’re quite right that it would take a while before something like “OpenGL 5” would be implemented from spec. And it would take a while longer yet before it was high quality software, relatively free of bugs. But at least we would get there. I would be happy to go through such a transition.

All the objections you’ve raised against an “OpenGL 5”, Direct3D has met and overcome.

First: GL 5 would be supported for the same reason that developers would use it: because it would be modern, lightweight, and better.

Second: It would not double the work of driver authors because old versions of OpenGL would no longer be under active development. New features for those versions would stop coming in and they could be frozen for the most part. D3D has done exactly this.

Third: As for shader languages like GLSL, they should be decoupled from the driver. ARB could create an intermediate byte code and supply a compiler front end for separate use by users. This allows for code obfuscation, offline compiling, actually unified syntax etc. as well. Again, like D3D.
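To make that concrete, here is roughly what the offline flow already looks like on the D3D side today. This is just a sketch; the source name, entry point, and target strings are placeholders:

```cpp
// Rough sketch of D3D-style offline shader compilation: compile HLSL once at
// build time, ship only the bytecode blob, and never run a source-level
// compiler on the user's machine. Source name and entry point are placeholders.
#include <d3dcompiler.h>
#include <cstdio>
#pragma comment(lib, "d3dcompiler.lib")

bool CompileToBytecode(const char* hlslSource, size_t sourceLength,
                       const char* entryPoint, const char* target,
                       ID3DBlob** outBytecode)
{
    ID3DBlob* errors = NULL;
    HRESULT hr = D3DCompile(hlslSource, sourceLength, "offline.hlsl",
                            NULL, NULL,                  // no macros, no includes
                            entryPoint, target,          // e.g. "mainPS", "ps_5_0"
                            D3DCOMPILE_OPTIMIZATION_LEVEL3, 0,
                            outBytecode, &errors);
    if (errors) {
        std::fprintf(stderr, "%s\n", (const char*)errors->GetBufferPointer());
        errors->Release();
    }
    return SUCCEEDED(hr);  // the blob is what a build step would write to disk
}
```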

Some developers have a great aversion to breaking backwards compatibility. I guess I don’t. The world of software moves quickly and thrives on change. We’d be fine.

PS - As for Rage it does run pretty well now. I’ve been enjoying it :slight_smile:

All the objections you’ve raised against an “OpenGL 5”, Direct3D has met and overcome.

Yes: because Microsoft implements half of it.

But also because D3D code gets tested 20x more often than OpenGL code. You can’t find and repair bugs in a system that’s never used; it has to be tested in order to be fixed.

Take that silly ARB_sampler_objects bug in ATI drivers. It took over nine months for them to even find out it existed. And it’s taken them that much time to fix it properly. That’s 18 months that a basic 3.3 feature has been non-functional.

Why? Because nobody used it! If there were 30 games released in the 5 months after 3.3 hit that used it, you can bet that they would have hit the bug. Those developers would have informed ATI of the bug and the exact circumstances of it. And the fact that these were actual professional game developers (instead of hobbyists on a forum) would mean that the problem would be urgent. Possibly to the level of issuing a driver out of cycle. But at the very least, it would be fixed in the next monthly release.
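(For reference, the feature that sat broken all that time is tiny. A sampler object just holds filtering and wrap state that overrides the bound texture’s own parameters for a unit; a rough sketch, assuming a 3.3 context and a loader like GLEW:)

```cpp
// Minimal sketch of ARB_sampler_objects (core in GL 3.3): sampling state lives
// in its own object and overrides the bound texture's own parameters for that
// unit. Assumes a GL 3.3 context and a loader such as GLEW.
#include <GL/glew.h>

GLuint MakeClampedLinearSampler()
{
    GLuint sampler = 0;
    glGenSamplers(1, &sampler);
    glSamplerParameteri(sampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glSamplerParameteri(sampler, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glSamplerParameteri(sampler, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glSamplerParameteri(sampler, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    return sampler;
}

void BindTextureWithSampler(GLuint unit, GLuint texture, GLuint sampler)
{
    glActiveTexture(GL_TEXTURE0 + unit);
    glBindTexture(GL_TEXTURE_2D, texture);
    glBindSampler(unit, sampler);  // this unit now samples with the object's state
}
```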

The problem with OpenGL is that its most recent iterations go unused. If it’s not used, then it’s not being tested. And if it’s not tested, then it’s not debugged. And if it’s not debugged, then it’s buggy.

Your GL 5 does nothing for this fundamental problem.

GL 5 would be supported for the same reason that developers would use it: because it would be modern, lightweight, and better.

Better than what? GL 4 would do everything that 5 does. GL 4 implementations exist and are relatively mature; GL 5 implementations don’t exist and wouldn’t be mature for years.

It would not double the work of driver authors because old versions of OpenGL would no longer be under active development. New features for those versions would stop coming in and they could be frozen for the most part.

Do you really believe that this is what IHVs spend most of their OpenGL time on? Implementing new extensions and features?

No, most of their time is spent on bug fixing, supporting new hardware, and performance optimization. As well as having to pretend that up is down whenever Rage or some newfangled game comes out that assumes the API does whatever the developer thinks it does.

NVIDIA spent a hell of a lot of time working on NV_path_rendering. And they’d have to re-implement it for GL 5.

As for shader languages like GLSL, they should be decoupled from the driver. ARB could create an intermediate byte code and supply a compiler front end for separate use by users. This allows for code obfuscation, offline compiling, actually unified syntax etc. as well.

And who’s going to write that intermediate compiler? Unlike Microsoft, the ARB is a volunteer organization; they don’t actually have resources. All they produce is paper. They have to contract out everything; even the conformance test they finally got around to making is being contracted out.

Are they going to contract out maintenance for it too? With whom? For how long?

Also, this doesn’t guarantee squat as far as compiler functionality goes. Sure, you won’t have basic front-end compiler bugs. But you still have all the bugs in the optimizers and other general stupidity. Looking at most of the GLSL bugs in the driver forum, maybe 20% of them are purely front-end compiler bugs.

The best you could say is that you wouldn’t get NVIDIA’s nonsense of using their Cg compiler for GLSL.

Oh, and you’d have to lose extensions too. At least in the high-level language. So the only way to use extensions is to use the “obfuscated” low level language.

And what does “actually unified syntax” even mean?

The world of software moves quickly and thrives on change. We’d be fine.

Who is “we”?

That I find quite understandable. If Cg is a functional superset of GLSL (and I have no idea if that’s the case), then developing a new compiler for GLSL can’t be in the best interest of NVIDIA. In theory, if you can do a 1:1 transform of GLSL to Cg code, all you need to do is alter the compiler front end to include a transformation stage. Then pass the transformed source to the lexer, parser and so on. If I were to support GLSL and had a working compiler already, I’d attempt that too.

On the issue of OpenGL driver quality: as long as there aren’t real incentives for top-quality GL implementations, IHVs aren’t going to be as fast when it comes to bug fixing and improvements. Furthermore, they’re not going to do anything if there aren’t any economic incentives. Take Intel, for instance. Years ago, they didn’t give a damn about OpenGL on Linux. Now they’ve pulled the first ever Mesa driver that reports GLSL 1.30 into the kernel. I think you can call the big ones many things, but they most definitely aren’t charity organizations who push OpenGL forward and implement the specs simply because they’re so generous.

Every time I read discussions like this one, I can’t help but feel that some people disconnect software, especially free (not necessarily open-source) software, from business. It seems like walking into a shop and taking stuff for free while expecting the shop owner to do his best to keep the shelves full.

This is a very good point. OpenGL sees a lot less use these days, and so gets inferior test coverage. This is probably a more substantial effect than the complexity of the API in terms of driver quality. I still think a GL rewrite would improve driver quality in the long run, but again that’s not the primary motivation in my mind for creating such an API.

The real question that needs to be asked is why is OpenGL seeing less use than Direct3D?

For years and years, Direct3D sucked. OpenGL was much better. SGI did a good job of mapping it to the hardware with version 1.0 and it showed. So Microsoft kept reinventing the wheel. They would start over, making big changes that broke code that used old versions. By the time they got to Direct3D 9, they were ahead, both in terms of features and user base. All of this despite the notable handicap of not being cross platform - an advantage OpenGL has always retained.

Can you imagine what D3D11 would look like today if it contained every single interface that Direct3D has ever used? This is the reward of rewrites.

It’s been mentioned that Microsoft implements the front end of D3D, and this is true. It’s a good thing. It benefits driver authors in a big way to simplify implementation, and benefits users of the API by enforcing a unified interface that isn’t just a collection of extensions that have come onboard at varying times with varying support.

You mention the ARB just kicks out specs and has no resources to implement. Well, get some resources. Start an open source project with volunteers if money is the issue. Do something because the current strategy is clearly not working.

What? No. Maybe you misunderstand. Take the next batch of features that GL 4 doesn’t support yet and roll them into “GL 5”. The advantage of taking a big non-backwards-compat leap forward is that you can include features that are new today as basic functionality. That is, the API design can actually reflect its usage on hardware. You can support older systems through a query/feature level system if you like (again, D3D does this).
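(For instance, this is roughly what the feature-level query looks like when creating a D3D11 device; just a sketch:)

```cpp
// Rough sketch of D3D11 "feature levels": ask for the best the hardware can do
// and get told which level you actually received, so one API spans old GPUs.
#include <d3d11.h>
#pragma comment(lib, "d3d11.lib")

bool CreateDeviceWithFeatureLevel(ID3D11Device** outDevice,
                                  ID3D11DeviceContext** outContext,
                                  D3D_FEATURE_LEVEL* outLevel)
{
    const D3D_FEATURE_LEVEL wanted[] = {
        D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_10_1,
        D3D_FEATURE_LEVEL_10_0, D3D_FEATURE_LEVEL_9_3,
    };
    HRESULT hr = D3D11CreateDevice(
        NULL, D3D_DRIVER_TYPE_HARDWARE, NULL, 0,
        wanted, sizeof(wanted) / sizeof(wanted[0]), D3D11_SDK_VERSION,
        outDevice, outLevel, outContext);
    return SUCCEEDED(hr);  // *outLevel reports the tier the driver gave us
}
```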

I realize I sound like a D3D fanboy here. I guess I sort of am, though I don’t like to admit it. I grew up on OpenGL, it’s the reason I taught myself C so many years ago and probably the reason I went into programming as a profession. I’ve only recently started using Direct3D 9 and 11, but the difference (particularly for d3d11) is night and day compared to OpenGL. I wondered for a long time why people used D3D and not GL, but not anymore. If anyone reading this has not tried out D3D11, take a look. Read the docs, and write a simple app in it. When you come back to OpenGL it will be with a different perspective.

GL has some lessons to learn from the competition. It will ignore them at its peril.

Oh, I guess I should add that I consider OpenGL ES / WebGL to be a noble experiment. Even though it’s really just more of a subset of features from OpenGL and not really reinventing the API, I feel that even this small departure in spirit represents some progress in thinking here.

For years and years, Direct3D sucked. OpenGL was much better. SGI did a good job of mapping it to the hardware with version 1.0 and it showed.

And yet, for all of those years that D3D sucked, D3D was also being used. D3D v3.0 was utter garbage, yet it was also used. D3D v5.0 was minimally decent, yet it was used. D3D 6.0, 7.0 were better but still kinda crappy. Yet they were still used.

Why? Because it was Microsoft. Because they had the resources behind it. Because however terrible the API was, it actually worked.

Game developers will complain about an API, they will hem and haw, they will hold forth at length. But at the end of the day, what they care about is getting it done. And if a crappy API gets the job done, then they will use a crappy API to do that job.

The secret to D3D’s success is not that it was constantly reinventing itself. The secret to its success is that it was more stable and reliable than OpenGL. It always has been.

And that is due primarily to its driver model.

It’s been mentioned that Microsoft implements the front end of D3D, and this is true. It’s a good thing.

Yes it is. This model is how D3D retains backwards compatibility: because Microsoft implements a conversion layer for older D3D versions to talk to new D3D version drivers. Without this model, you could not effectively change the API every few years and retain reasonable drivers.

Of course, it’s also not a model you can use for OpenGL, because OpenGL is cross-platform. You can’t do this kind of abstraction cross-platform.

And also because someone would have to write and maintain it.

You mention the ARB just kicks out specs and has no resources to implement. Well, get some resources. Start an open source project with volunteers if money is the issue. Do something because the current strategy is clearly not working.

Resources do not appear ex nihilo; they require lots of money. And Khronos is not exactly rolling around in cash.

And quite frankly, I wouldn’t trust an open source project with something like this across multiple platforms. They’ve had hardware specifications for various hardware for a couple of years now, and their GL drivers are still inferior to the IHVs’. Even ATI’s. So their track record on this point isn’t exactly good.

It will ignore them at its peril.

And that peril is… what exactly? That OpenGL will be marginally used, particularly in high-end games? We’re already there. That OpenGL is principally used for its only real strength: cross-platform development? Again, that’s a bridge we’ve already crossed.

There’s no further peril out there. OpenGL will survive just fine on being the only cross-platform alternative.

Also, need I remind you that the ARB has tried twice to rewrite the API, and both times they abandoned it in favor of keeping what they had?

And the second time, they squandered a golden opportunity to make up some ground over D3D, because it was during the D3D10 transition. D3D10 is locked to Vista, but because Vista underperformed, game developers were stuck with D3D9, even though a lot of D3D10 hardware was sold. If the ARB hadn’t been trying to reinvent their API for two years, if GL 3.3 had been out 2-3 years earlier, it would have gone over much bigger with game developers.

But by 2010, Vista adoption was up, Win7 was out and selling well, and cross-platform game developers were stuck with D3D9-level tech for consoles.

Take the next batch of features that GL 4 doesn’t support yet and roll them into “GL 5”.

There are no more “features” for 4.x level hardware. Or at least, not any significant ones. Just look at 4.2; most of the stuff there is API cleanup: texture_storage, shader_language_420pack, etc. Indeed, the biggest “features” of 4.1 were separate_shader_objects and get_program_binary, which could have been implemented back in 2.0 (and NVIDIA even implements them in 2.1-level hardware).
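(To illustrate the kind of cleanup texture_storage is: the old path specifies every mip level separately, while the new call allocates the whole immutable mip chain at once; a rough sketch, assuming a loader like GLEW:)

```cpp
// Rough illustration of the "API cleanup" in ARB_texture_storage (GL 4.2):
// the old path specifies each mip level separately and the texture can end up
// incomplete; the new call allocates the whole immutable mip chain at once.
// Assumes an existing context and a loader such as GLEW.
#include <GL/glew.h>

void AllocateOldStyle(GLuint tex, GLsizei w, GLsizei h, GLint levels)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    for (GLint i = 0; i < levels; ++i) {
        GLsizei lw = w >> i; if (lw < 1) lw = 1;
        GLsizei lh = h >> i; if (lh < 1) lh = 1;
        glTexImage2D(GL_TEXTURE_2D, i, GL_RGBA8, lw, lh, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    }
}

void AllocateWithTexStorage(GLuint tex, GLsizei w, GLsizei h, GLsizei levels)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexStorage2D(GL_TEXTURE_2D, levels, GL_RGBA8, w, h);  // complete, immutable
}
```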

Notice how Microsoft only does API rewrites when new hardware comes out. There’s a reason for that.

Heh. Well, I guess by “peril” I meant “things will stay as they are”. If you feel that the current status of OpenGL adoption is acceptable, then more power to you.

Also, amen to missing out on an opportunity during the d3d9 -> d3d10 transition.

Well, true. But again, we’re talking about a future API, not a present one. OpenGL could take the opportunity to take the lead for once and try to define what “GL 5” hardware would look like. You can bet Microsoft will (again) if the ARB doesn’t. Such an initiative would mesh well with an API redesign.

I have two questions now.

First: when the ARB tried (twice) to redesign the API, and failed, why did they fail? I’ve been hazy on the details there. I would be very depressed to learn that it was simply because people on the board didn’t want to break compatibility.

Second: To Alfonse, and anyone else reading along - if a major API redesign is not the way to go, then what is? Or are we already on the track for success here? I think I’ve already made my own thoughts clear on this matter.

Just out of interest, do you remember on what grounds? Did they give any official statements?

I never understood why they didn’t make a clean cut with 3.0. Honestly, who ports their entire vintage 1.5/2.0/2.1 codebase to 3.0+?

If you feel that the current status of OpenGL adoption is acceptable, then more power to you.

Define “acceptable”? I accept the fact that the current status exists. I accept the fact that the current status is unlikely to change appreciably in the near future.

Anything else is wishful thinking.

What I feel are two things:

1: Redoing the API alone will not help. And most of the things you would have to add that would help would help just as much without an API redo.

2: The ARB does not have the resources to do most of the things that would help.

Therefore, I “accept” that OpenGL’s current status is what we can expect from it for the foreseeable future. And that’s only if ARM-based Windows doesn’t become popular; if it does, you can expect OpenGL to be banished entirely from the Windows ecosystem. ARM-based Windows programs will not have access to OpenGL at all, because you can’t access OpenGL via WinRT.

First: when the ARB tried (twice) to redesign the API, and failed, why did they fail?

The ARB’s internal discussions aren’t exactly available for comment. These days, the only real interaction we get is at GDC and Siggraph, where they present some things and maybe drop some specifications on us. But here’s an outsider’s perspective.

The 3D Labs “OpenGL 2.0” effort didn’t really seem (again, from an outsider’s perspective) to get much attention. They made a presentation at GDC, but it was clear that some of their stuff was kinda pie-in-the-sky. Their equivalent to FBOs and texture specification was… pretty much unimplementable. Indeed, most of the problems with GLSL can be traced directly back to 3D Labs being way too forward thinking.

The Longs Peak effort, which was ostensibly led by NVIDIA and ATI, seemed more likely to succeed. There are a few older threads from 3.0’s release where members of the ARB tried to console the understandably peeved users about the failure. Here’s a post from Barthold Lichtenbelt, chair of the ARB at the time (and still is, I think), that gives his explanation of things.

Reading the entire thread can be interesting if you like to see naked anger, bitterness, and desperate attempts at damage control from certain ARB members :wink: If you want to know how much my perspective has changed since those days, I used the username “Korval” back then.

Or are we already on the track for success here?

We are on track for… treading water. That’s where the ARB has been since GL 3.0: treading water. Not sinking. Not swimming. Simply staying afloat.

A conformance test is probably the best improvement we’re going to get.