OpenGL 2.0 news

Originally posted by crystall:
That’s what OpenGL 2.0 “pure” was for. The original propositions suggested a full-fledged version which supported both 1.x and 2.0 functionality and a slimmed down version with only 2.0 functionality. That was supposed to make the transition from legacy code to OpenGL 2.0 code smooth.
Crystall,

I understood what the intent of OpenGL 2.0 “pure” was. I just think it was a bad idea.

There is an immense volume of OpenGL code out there, and it’s unreasonable to expect it to all get transitioned. There’s no need to rewrite perfectly good code either, just because someone thinks that some new API style is better.

I’m sure I sound like an old codger, but this strategy has been very successful for OpenGL at a pretty minimal long-term cost.

Fully refactoring a solid, working API just doesn’t help anything. What if people did this with the C stdlib?

Thanks -
Cass

Originally posted by cass:
[b]I’m sure I sound like an old codger, but this strategy has been very successful for OpenGL at a pretty minimal long-term cost.

Fully refactoring a solid, working API just doesn’t help anything. What if people did this with the C stdlib?

Thanks -
Cass[/b]
Just curious, but as an “old codger” and OpenGL veteran, what do you think of the current state of the API? Do you think that the current (1.5) spec is at all cluttered or beginning to lose any of the elegance that OpenGL might have had?

From what I’ve seen, the complexity difference between OpenGL and D3D drivers is immense. Some of that advantage is because Microsoft handles a layer between the IHV drivers and the applications. That Microsoft layer doesn’t account for everything, though, because Apple handles a similar layer on the Macs, and the Macs’ OpenGL drivers, while simpler than the Windows OpenGL drivers, are still much more complex than the D3D drivers I’ve seen. I believe the main reason for all this added complexity is that OpenGL keeps all the legacy extensions around. I do see the advantage that Cass points out, but I believe all this old baggage being carried around will eventually stifle progress. All this complexity also raises barriers to entry for new IHVs.

To show how much more complicated OpenGL is than D3D, let’s look at drawing a triangle. D3D has one way to do this. OpenGL has immediate mode, display lists, vertex arrays, compiled arrays, VBO, VAR, and VAO (did I miss any?).
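To make the contrast concrete, here is a minimal sketch of two of those paths submitting the same triangle. Since no GL context is available here, the GL entry points are stand-in stubs that just record what each path would hand the driver; `drawArrays` is an invented stand-in for the whole family of array paths (vertex arrays, compiled arrays, VBO, VAR, VAO), which differ mainly in where the array memory lives, not in the call pattern.

```cpp
#include <cassert>
#include <vector>

// No GL context is available here, so these minimal stubs stand in for the
// real entry points and simply record what each path would hand the driver.
typedef float GLfloat;
static const int GL_TRIANGLES = 0x0004;   // enum value from gl.h
static std::vector<GLfloat> lastSubmitted;

static void glBegin(int /*mode*/) { lastSubmitted.clear(); }
static void glVertex3f(GLfloat x, GLfloat y, GLfloat z) {
    lastSubmitted.push_back(x);
    lastSubmitted.push_back(y);
    lastSubmitted.push_back(z);
}
static void glEnd() {}

// Invented stand-in for the array paths: vertex arrays, compiled arrays,
// VBO, VAR, and VAO all boil down to "here is a block of vertex data".
static void drawArrays(const GLfloat* verts, int count) {
    lastSubmitted.assign(verts, verts + 3 * count);
}

// Path 1: immediate mode -- one function call per vertex.
void drawTriangleImmediate() {
    glBegin(GL_TRIANGLES);
    glVertex3f(0.f, 0.f, 0.f);
    glVertex3f(1.f, 0.f, 0.f);
    glVertex3f(0.f, 1.f, 0.f);
    glEnd();
}

// Path 2: array submission -- one call for the whole batch.
void drawTriangleArray() {
    static const GLfloat verts[9] = {
        0.f, 0.f, 0.f,  1.f, 0.f, 0.f,  0.f, 1.f, 0.f
    };
    drawArrays(verts, 3);
}
```

Both paths deliver identical data to the driver; the driver still has to support every one of them.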

Cass also states that supporting the old paths is fairly easy as we go forward. From my point of view (I might be wrong here) that doesn’t seem to be the case. Either the hardware needs to keep legacy transistors around to support the old fixed-function states (as the NV30 evidently did), or code needs to be added to the driver to handle the old stuff - read: fixed-function shaders (as the R300 evidently did). I can imagine that programming new programmable hardware to emulate all of tex_env_combine and its related extensions would not be a trivial amount of work.

I understand why keeping backward compatibility is important. I think the best solution would be to create a new “pure” version of OpenGL, and then add a wrapper library on top of it that supports all the old functionality. This would certainly introduce a performance hit for legacy code, but I think it would lead to more stable core functionality. It would also lower the barrier to entry for new drivers. I’ve noticed that in the last 2 years, reviews on hardware sites of any card not from ATI or NVIDIA have made note of how the OpenGL drivers were not as optimized and/or stable as the D3D drivers. Maybe creating a new “pure” core OpenGL spec and then deriving the old functionality from it would fix this.

That said I don’t think it will happen. Such a big change just doesn’t match the expectations I’ve created for the ARB over the past few years. Such a change also takes away some of the advantage NVIDIA and ATI have in the market today.

Originally posted by cass:
I understood what the intent of OpenGL 2.0 “pure” was. I just think it was a bad idea.
having an api which now presents tons of different legacy ways to do something is a good idea then? it’s not about dropping support. it’s about dropping old PROGRAMMING INTERFACES.

There is an immense volume of OpenGL code out there, and it’s unreasonable to expect it to all get transitioned. There’s no need to rewrite perfectly good code either, just because someone thinks that some new API style is better.
and this old code would not be hurt at all. because you could still have the old opengl, and you’d still have an opengl.dll which can handle it all.

I’m sure I sound like an old kodger, but this strategy has been very successful for OpenGL at a pretty minimal long-term cost.
sometimes a cleanup is a good thing. opengl is not perfect. i still have to teach people today that the array lock isn’t useful for performance anymore… people find it because of legacy apps and try to use that stuff.

Fully refactoring a solid, working API just doesn’t help anything. What if people did this with the C stdlib?

uhm. does anyone still use that old piece of **** ?

c++ shows a great example: provide the old one for backward compatibility, and provide a new one, in a different namespace, in different headers, cleaned and split from the old. and that’s what i’m talking about.

#include <gl2>

=> you can use glslang, vbo, pbo, rt, and all the fancy new stuff. but you drop the old 8-bit stuff, a lot of the old buffer stuff - simply the way the old API worked (because the new stuff built on top of it is sometimes rather ugly, simply to fit in somehow).

and if you still need it, #include <gl/gl.h> and done.
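A purely hypothetical sketch of that proposal, modelled on how C++ split the cleaned-up standard library into its own namespace and headers: the new API lives beside the old one rather than replacing it. Every name below is invented for illustration - no such `gl2` header or namespace exists.

```cpp
#include <string>

// Invented sketch: a cleaned-up API in its own namespace, next to the
// untouched legacy one.  Only the modern paths survive here -- buffer
// objects, glslang programs -- no immediate mode, no fixed-function enums.
namespace gl2 {
    inline unsigned createBuffer() {
        static unsigned next = 1;
        return next++;           // opaque, GL-assigned object name
    }
    inline unsigned createProgram(const std::string& /*glslangSource*/) {
        static unsigned next = 1;
        return next++;
    }
}

// Legacy code keeps compiling against the old header, unchanged:
//     #include <GL/gl.h>
```

New code opts into the clean namespace; old code never notices the change.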

there’s currently nothing as slim, as straightforward, and as obvious to use as the newest dx. and that will continue to be the case.

you’re quite shortsighted if you don’t take the possibilities into account. yes, gl was great without big changes for over 10 years. but hardware has never evolved this fast before, and gl has never had to change this much in how it should be coded for.

Originally posted by Ostsol:
Just curious, but as an “old codger” and OpenGL veteran, what do you think of the current state of the API? Do you think that the current (1.5) spec is at all cluttered or beginning to lose any of the elegance that OpenGL might have had?
Hi Ostsol,

That’s a good question. Though I think you can apply some objective measures, this is often largely a question of personal preference.

People like Kurt Akeley are professional designers. They come up with designs that last a long time. I didn’t realize how much effort goes into this until I worked with Kurt. His process is very methodical and very thorough. He comes up with a really nice extension API that follows the conventions of the existing API, is simple but elegant, and minimal but sufficient. Then he challenges one of his base assumptions and reworks the design to see how it affects things.

I don’t have the patience to be a designer like Kurt, but it’s no accident that OpenGL is as elegant as it is. Even by subjective assessment, most developers think OpenGL is a clean API.

The OpenGL core, that is.

Here’s where I have my own personal bias. When adding functionality to the API as a vendor extension, or as an EXT, I think there’s a relatively low bar for aesthetics. Get something in there, do the best you can, and figure out how you really should have done it (or IF you really should have done it).

When something reaches ARB level, there should be hand-wringing about the style of the API and whether it is consistent with OpenGL convention, whether it is elegant, simple, minimal, intuitive.

As a simple example, I was against the GLhandle model for objects defined by the new GLSL extensions. The reason is not that the handle approach was inherently bad, it was that it is inconsistent with every other form of object in OpenGL and that makes it less intuitive to OpenGL developers.
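The inconsistency is easy to see side by side. Below is a stubbed sketch (no driver involved) of the two creation idioms: in the classic model, binding an unused GLuint name creates the object, so the app can pick its own IDs; in the GLSL extension model, the GL hands back an opaque handle the app has no say in. The function names are simplified stand-ins for the real entry points.

```cpp
#include <cassert>
#include <set>

// Stubs contrasting the two object idioms (no driver involved).
typedef unsigned GLuint;
typedef unsigned GLhandleARB;

// Classic idiom: binding an unused name creates the object, so the app may
// assign any IDs it likes (as with textures and display lists).
static std::set<GLuint> textureNames;
void bindTexture(GLuint name) { textureNames.insert(name); }

// GLSL idiom: the GL hands back the handle; the app cannot choose its value.
GLhandleARB createShaderObject() {
    static GLhandleARB next = 1;
    return next++;
}
```

An OpenGL developer used to the first idiom has to learn a second, parallel object model to use the second - which is the objection being made.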

So to get back to your question, I am very happy with the OpenGL core as it stands today. It has some quirks, but nothing hideous. It’s got some unpleasant extension APIs - some of which came from NVIDIA - that served (and still serve) their purpose but will never be part of the core.

The thing I most worry about with OpenGL is that in our rush to add new features to the core, we don’t go through the learning process of having vendor extensions and we don’t take the experience from those extensions and design core revisions that are as tight and consistent and elegant as they could be. This is a big danger, I think.

If you throw a bunch of junk in the trunk that was untested - by time, I mean - or that was an otherwise bad idea (though you didn’t know it at the time), that solidity won’t matter anymore. OpenGL has been the “ANSI C” of real-time graphics APIs because it had a solid core.

I feel like OpenGL can move forward indefinitely without casting off any functionality from OpenGL 1.0.

Much of the future of graphics programming will revolve around language design. For better or worse, I (personally) think that belongs in the software world, where software vendors can be responsive to the needs of software developers.

OpenGL has this funny - and unfortunate - model now, where if functionality is not provided via an OpenGL implementation (driver), then it’s not provided at all. There’s no OpenGL equivalent to D3DX, and we should really ask ourselves as OpenGL developers, why not?

I actually think the lack of a standard suite of OpenGL software utility libraries that work with any compliant implementation of OpenGL is the biggest issue facing OpenGL developers today.

Well - there’s lots of rambling. These are my personal thoughts - not those of NVIDIA. They’re not intended to offend anyone. And of course, you’re free to disagree. It’s a free internet. :slight_smile:

Thanks -
Cass

I see the problem with OpenGL right now, backward compatibility and so on. But I think that if OpenGL keeps moving on in this way, it will run into problems. There are about 350 different extensions in the registry. A driver supports about 70 (that’s an estimate - please correct me if it’s wrong). 70 extensions… and a lot of them affect each other. The main problem, I think, is that you have to take care of all these extensions and avoid collisions. If OpenGL/IHVs keep adding extensions to their drivers, I think the whole thing will get really confusing.

I also think that the best way to solve this is to design a new API and use it in parallel with OpenGL 1.x, as davepermen said. Defining a new core from scratch surely won’t be done in a few weeks, but I think the ARB really should think about defining a new core.

Originally posted by Corrail:
[b]I see the problem with OpenGL right now, backward compatibility and so on. But I think that if OpenGL keeps moving on in this way, it will run into problems. There are about 350 different extensions in the registry. A driver supports about 70 (that’s an estimate - please correct me if it’s wrong). 70 extensions… and a lot of them affect each other. The main problem, I think, is that you have to take care of all these extensions and avoid collisions. If OpenGL/IHVs keep adding extensions to their drivers, I think the whole thing will get really confusing.

I also think that the best way to solve this is to design a new API and use it in parallel with OpenGL 1.x, as davepermen said. Defining a new core from scratch surely won’t be done in a few weeks, but I think the ARB really should think about defining a new core.[/b]
Confusing to whom? OpenGL is really not confusing. Perhaps you could argue that extension specs are, but that’s more because they’re written as mods to a base or extensions to an extension. The ideas they describe are usually quite simple.

The extensions I dislike most are the ones that are difficult to understand all by themselves. It is very rare that an extension has some strange interaction with the OpenGL core that I don’t understand. The texture_env_* extensions were kind of that way.

Anyway, it’s important to state the problem that you’re trying to solve. If it’s to make drivers simpler, I’d argue that that’s not a problem. Most of the numerous extensions supported are like GL_EXT_texture_object, which is an extension that a program written against OpenGL 1.0 over 10 years ago would use.

Current NVIDIA drivers export over 100 extensions, but many of these are extensions that were rolled into the core so supporting them is free.

I guess the question is, what do you object to? That there are features in the API that you don’t want to use (but don’t cost anything being there)? Or do you want to change the API fundamentally? My feeling is that if it’s the latter, then you should define a new API. Don’t bother calling it OpenGL, because it won’t be OpenGL anymore. A new API should fully start from scratch - and there’s value to that. After all, that’s what Microsoft did with Direct3D.

Thanks -
Cass

That there are features in the API that you don’t want to use (but don’t cost anything being there)?
dunno, but having drivers that happen to take seconds to boot opengl the first time doesn’t tell me it’s free and costs nothing.

there is a ton of legacy in opengl now, and this should get cleaned up. there is no need to call it by a different name, because it would still be an open graphics library, and would still try to use the best of opengl itself. but there is a lot of ‘dead code’ - stuff that no one uses that way anymore.

but it’s fine. you like it that way… i can accept it, even if i’ll never really understand it. i don’t want to see your home, dude… “i haven’t had to clean it up in the last 10 years. i don’t need most of this stuff anymore, but as long as it doesn’t get in MY way, who cares?”

I don’t think the problem will be between extensions and the OpenGL core. I’m personally afraid of a driver which provides hundreds of extensions, where I have to take care of the interactions between all of them.

and i don’t like using devil to load images, because all i want is a simple load() which loads jpg, tga, png, and bmp, and a save() - instead i have to use an api which has several tens of functions and ways to call them, and has dlls that in the end fill half an MB to a whole MB of my downloadable zipfile.

i prefer an api that evolves and, from time to time, defines a milestone, at which point you get a full compilation that never gets touched again but provides enough support for all legacy apps. then you start fresh, with a clean api again, learn what was really used (and in what way) the time before, provide only that, and restart extending.

A redesigned, simpler, and more elegant API without the current pile of extensions would be easy for new programmers to adopt and much simpler to program against. And, as pointed out, it would also be easier for IHVs to develop good drivers for.

Shouldn’t a new API be designed (if not now, at least after a few more generations)? It looks like the API will have to expose lots of new functionality, like primitive processors - and if the new ATI slides are to be believed, you can have a vertex shader both before and after a primitive shader. How can such things be exposed elegantly with the current API?
Support for the current features should be provided in the form of a wrapper, and use of the old API should be discouraged.

I think duplicated extensions should be taken out of the IHVs’ GL spec docs to reduce confusion. Make a separate doc for the old, duplicated extensions.

Originally posted by davepermen:

but it’s fine. you like it that way… i can accept it. i don’t really will understand it. there is no reason. i don’t want to see your home, dude… “i haven’t had to clean it up the last 10 years. i don’t need most of this anymore, but as long as it doesn’t get into MY way, who cares?”

There is no need to be condescending.

There are functions in the API that aren’t used in game dev world that are used in the CAD world extensively. Just because it is not useful to you, doesn’t mean others don’t find the functionality useful.

I could go for a more formal deprecation protocol for extensions. It could be as simple as an ARB vote followed by moving the extension to a deprecated list in the extension registry.
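The protocol described above could be as simple as a richer registry entry. The sketch below uses real extension names (the compiled-vertex-array and vertex-array-range extensions really were superseded by VBO, as discussed elsewhere in this thread), but the `RegistryEntry` structure itself is invented for illustration.

```cpp
#include <cassert>
#include <map>
#include <string>

// Invented sketch of a deprecation protocol: after an ARB vote, an
// extension moves to a deprecated list in the registry that also names
// its replacement, steering new code toward the current path.
struct RegistryEntry {
    bool deprecated;
    std::string replacedBy;   // empty if the extension is still current
};

std::map<std::string, RegistryEntry> makeRegistry() {
    return {
        {"GL_EXT_compiled_vertex_array", {true,  "GL_ARB_vertex_buffer_object"}},
        {"GL_NV_vertex_array_range",     {true,  "GL_ARB_vertex_buffer_object"}},
        {"GL_ARB_vertex_buffer_object",  {false, ""}},
    };
}
```

Nothing is removed from drivers; the tag just tells a newcomer which of the many overlapping paths is the one to learn.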

A new API should fully start from scratch - and there’s value to that. After all, that’s what Microsoft did with Direct3D.
It’s not the first time I’ve seen that argument (scaring us with the D3D example), and it’s not fair. You are comparing two extremes.

DirectX underwent an utterly insane, cyclical process of being rewritten from scratch nearly every year - a sort of regular reinventing of the wheel, which always ended up angular anyway.

OpenGL, on the other hand, has just been denied its FIRST major update after 12 years of existence - after it has managed to gain some very real fat.

That’s the difference: doing rewrite every year is something drastically different from doing it once per 10 years.

Sure, stdlib is even older than OGL. However, as your company likes to remind us occasionally, the speed of progress in graphics hardware is extraordinary… :cool:

I’m relatively young with OpenGL so I don’t know, but it seems to me that the IrisGL->OpenGL transition could be considered a major API update (aside from the ‘openness’ and ‘.org’ stuff). It must have been much more radical than OpenGL 2.0, which redesigns only a small part of GL 1.x and retains full backward compatibility with GL 1.x.

Another example: OpenGL ES. It really does drop some features from the core (again: GL 2.0 does not), but yet it retains the spirit of OpenGL, so there’s no need to invent a new name for it.

As a simple example, I was against the GLhandle model for objects defined by the new GLSL extensions. The reason is not that the handle approach was inherently bad, it was that it is inconsistent with every other form of object in OpenGL and that makes it less intuitive to OpenGL developers.
Introducing GLhandle and the new object model made sense ONLY if it was done once, for all types of objects in GL: shaders, textures, images, arrays, buffers, framebuffers, etc. OpenGL 2.0 was an excellent opportunity to provide completely new stuff (like GLSL) and at the same time rethink a few old things (which did need it), all in one consistent way.

Since the ARB rejected it as a whole, I see introducing GLhandle as completely pointless.

And the last thing:

Regarding the recent “awakening” of the ARB and its noticing the “slow atrophy of OpenGL developers/apps to DX”: I’d like to say that killing OpenGL 2.0 flushed down the toilet an opportunity to regain some of the lost ground.

There’s a pretty clear distinction between what ought to be in a scene graph and what ought to be elsewhere. Clearly, object-level concepts live outside of OpenGL. And various other things.

This is all well, good, and reasonable. However, OpenGL has evolved into a state where a good portion of its functionality can be implemented in itself.

Take classic vertex arrays, for example. I’m pretty sure that most drivers, on glDraw* calls, simply copy out the data into VBO buffers (or their low-level equivalent), doing appropriate data conversion as needed for fast performance. Well, we can do most of that. As such, it shouldn’t be in OpenGL. The same goes for immediate mode.
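That claim can be sketched in a few lines. Below, `Buffer` is an invented stand-in for a driver-side VBO, and the two methods mimic `glVertexPointer`/`glDrawArrays`: the layer remembers the client pointer, then copies the referenced data into a buffer at draw time - roughly what drivers do internally anyway.

```cpp
#include <cassert>
#include <vector>

// Invented stand-in for a VBO / driver-side buffer object.
struct Buffer { std::vector<float> data; };

// A layer emulating classic client-side vertex arrays on top of a
// buffer-object API, per the argument above.  Names are illustrative.
class ClientArrayLayer {
    const float* pointer_ = nullptr;
    int components_ = 0;
public:
    // analogue of glVertexPointer: just remember the app's pointer
    void vertexPointer(int components, const float* p) {
        components_ = components;
        pointer_ = p;
    }
    // analogue of glDrawArrays: copy the client data into a buffer object
    Buffer drawArrays(int first, int count) const {
        Buffer b;
        const float* src = pointer_ + first * components_;
        b.data.assign(src, src + count * components_);
        return b;
    }
};
```

Since a layer like this can live entirely above the driver, the argument goes, client-side arrays (and immediate mode) need not be core functionality.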

Display Lists don’t fit this description because of performance loss. A layer on top of OpenGL that reads and stores GL commands isn’t anywhere near as fast as a hardware-based solution defined by the driver.

Given these two examples, one would say that a clean OpenGL is one that includes only features that cannot be implemented in itself, or those where doing so would represent a significant performance burden upon the functionality. This calls for an explicit dual-layered architecture, D3DX-style. That is, you have the low-level API which consists of clean functionality as defined above. Then, you have a glu-esque set of functions that implement the rest of the stuff as needed.

The current problem is that there is no explicit division at present. This creates confusion on the developer’s part as to which API to use.

Regarding the recent “awakening” of the ARB and its noticing the “slow atrophy of OpenGL developers/apps to DX”: I’d like to say that killing OpenGL 2.0 flushed down the toilet an opportunity to regain some of the lost ground.
I wouldn’t go that far. Having a clean API is nice, but a necessary first step to any major improvement is either ending the reliance on OpenGL32.dll (and therefore having core functionality without getting function pointers) or having an ARB-approved and maintained extension loading library freely available for all purposes. Either of these would be a perfectly fine way to take a gigantic step toward an improved, cleaner OpenGL.
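For readers who haven't fought this: the boilerplate an ARB-maintained loader would hide looks roughly like the sketch below. Here `getProcAddress` is a stub lookup table standing in for `wglGetProcAddress`/`glXGetProcAddress`, and the stub entry point does nothing - a compilable illustration of the pattern, not real driver access.

```cpp
#include <cassert>
#include <map>
#include <string>

// Function-pointer plumbing every post-1.1 GL app repeats by hand today.
typedef void (*GLProc)();

static void stubBindBufferARB() {}   // does nothing; stands in for the driver

// Stub for wglGetProcAddress / glXGetProcAddress: look an entry point up
// by name at run time, returning null if the driver doesn't export it.
static GLProc getProcAddress(const std::string& name) {
    static const std::map<std::string, GLProc> exports = {
        {"glBindBufferARB", stubBindBufferARB},
    };
    auto it = exports.find(name);
    return it == exports.end() ? nullptr : it->second;
}

// What the app (or a shared loading library) declares and fills in:
typedef void (*PFNGLBINDBUFFERARBPROC)();   // signature simplified here
PFNGLBINDBUFFERARBPROC glBindBufferARB = nullptr;

bool loadBufferObjectExtension() {
    glBindBufferARB = getProcAddress("glBindBufferARB");
    return glBindBufferARB != nullptr;
}
```

Multiply this by every entry point of every extension an app uses, and the case for one shared, ARB-blessed loader makes itself.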

Even if the API never becomes clean (which, btw, only goes to prove that the ARB is ineffective at making big decisions and can only handle incremental changes), there is still plenty of work that can be done to make OpenGL more competitive with D3D - having an ARB-approved and maintained SDK with tools and example code, for instance.

Originally posted by davepermen:
dunno, but having drivers that happen to take seconds to boot opengl the first time don’t tell me it’s for free and doesn’t cost.
That’s not a good measure of driver complexity, but let’s assume gl drivers take twice as long to write when written from scratch.

How come driver developers are not here complaining about this? How come I haven’t seen anything about this issue in the ARB meeting notes?

HANDLE vs ID :

The spec gives some hints about the design decisions - the case for app-assigned IDs versus GL-assigned handles in the shading spec. Read it if you wish.

It’s not consistent with old GL, but ever since I started using GL, I’ve wondered why the decision was made to allow users to assign any ID we like to display lists and textures.

To sum up my point, newer extensions should be as good as possible.

Originally posted by V-man:
[b]That’s not a good measure of driver complexity, but let’s assume gl drivers take twice as long to write when written from scratch.

How come driver developers are not here complaining about this? How come I haven’t seen anything about this issue in the ARB meeting notes?[/b]

I intentionally keep the meeting notes at a pretty high level of abstraction. There is well-grounded concern that people will misinterpret offhand individual comments in the minutes as representing corporate positions (trade journals are particularly prone to this; if they can find any way to interpret such a comment as a “war” or “chaos” or some other headline-worthy term, they will - which is why we recently started putting a disclaimer on the minutes). The alternative would be to pass the minutes through editing by every member’s corporate legal department, by which time there would be nothing left, and it would take far longer than it does today to post them.

People in ARB discussions do often object to proposed new functionality on the grounds that it’s hard to implement in their drivers. Unfortunately, they are rarely able to provide details, out of concern that doing so would reveal secrets about their drivers and/or hardware. So such comments have to be taken on faith.

[b]HANDLE vs ID :

The spec gives some hints about the design decisions - the case for app-assigned IDs versus GL-assigned handles in the shading spec. Read it if you wish.

It’s not consistent with old GL, but ever since I started using GL, I’ve wondered why the decision was made to allow users to assign any ID we like to display lists and textures.[/b]

This is useful when capturing OpenGL command streams for replay in debuggers, among other things.
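The capture/replay point can be illustrated with a small sketch (all names invented): with app-assigned IDs, a recorded stream of commands replays verbatim, because the IDs mean the same thing on every run; with GL-assigned handles, the replayer needs a remapping table, since a second run may hand back different handle values for the same create calls.

```cpp
#include <cassert>
#include <map>
#include <vector>

// Invented command-stream model for the capture/replay argument above.
enum Op { CREATE, BIND };
struct Cmd { Op op; unsigned id; };

// App-assigned IDs: the recorded stream *is* the replay stream.
std::vector<Cmd> replayAppAssigned(const std::vector<Cmd>& recorded) {
    return recorded;
}

// GL-assigned handles: each recorded CREATE yields a fresh handle on
// replay, and every later reference must be translated through the map.
std::vector<Cmd> replayGLAssigned(const std::vector<Cmd>& recorded,
                                  unsigned firstNewHandle) {
    std::map<unsigned, unsigned> remap;
    unsigned next = firstNewHandle;
    std::vector<Cmd> out;
    for (Cmd c : recorded) {
        if (c.op == CREATE) remap[c.id] = next++;
        c.id = remap.at(c.id);   // the bookkeeping app-assigned IDs avoid
        out.push_back(c);
    }
    return out;
}
```

The second function is exactly the extra machinery a debugger has to carry for every handle-based object type.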

Originally posted by Korval:
That is, you have the low-level API which consists of clean functionality as defined above. Then, you have a glu-esque set of functions that implement the rest of the stuff as needed.
I am doing something like that with my software implementation. I’ve got a minimal core which contains only fundamental functionality. There’s no transformations, only one way to send primitives, etc… The rest of the GL is implemented in a wrapper on top of that. Naturally the core must be flexible and completely programmable in order to support the rest.

After reading through all the above posts… I have just one thing to add. I would like to see some formal way for extensions to be deprecated, but not removed from the API.

Most new programmers learning the API get caught up in the pile of extensions and duplicated functionality that all appear to do similar things, and aren’t sure which to use. I’m not asking for functionality to be removed here, just for a deprecation tag to be attached to various extensions/functionality, with a reference to the newer replacement functionality. Eventually, in maybe another 10 or so years, the functionality might get removed…

One of the areas where I think OpenGL is seriously dragging behind DX is the availability of tools/utilities. DX comes with a lot of tools to help get you up and writing your game really quickly, while in OpenGL you spend a lot of time writing them yourself or researching 3rd-party tools - things like loading images, models, model exporters, GL extension libs, a shader/effects framework… I really wish there were some ARB-sanctioned extension loader that everyone could use and standardize upon, for example.

Hi Cass,

Originally posted by cass:

I understood what the intent of OpenGL 2.0 “pure” was. I just think it was a bad idea.

There is an immense volume of OpenGL code out there, and it’s unreasonable to expect it to all get transitioned. There’s no need to rewrite perfectly good code either, just because someone thinks that some new API style is better.

Maybe I simply misunderstood your statement Cass, but here’s what the idea behind “Pure” OpenGL 2.0 is.

The idea is to provide an API based on OpenGL 1.x that is allowed to break backwards compatibility. This would allow for a thorough cleanup, if desired. This pure API would be aimed at developers who are willing to start from scratch, for whatever reason. One of the obvious uses for such an API is the embedded market, where hardware and software (driver) complexity is a prime concern, and where 3D-graphics-intensive applications are just now starting to emerge. You only get the chance to do such a cleanup maybe once in a decade or so, if you’re lucky.

At the same time as a “Pure” OpenGL 2.0, we would provide the full OpenGL 2.0 API in all its glory, fully backwards compatible, to serve the needs of all the other developers out there.

Barthold