OpenGL 2.0 news

Originally posted by barthold:
The idea is to provide an API based on OpenGL 1.x that is allowed to break backwards compatibility. This would allow for a thorough cleanup, if desired. This pure API would be aimed at developers that are willing to start from scratch, for whatever reason. One of the obvious uses for such an API is the embedded market…
Well, we have already created an API for the embedded market based on OpenGL 1.x that breaks backwards compatibility - OpenGL ES 1.0. So to the extent that Neil busted his behind attracting so much of the cell phone (and more recently, handheld and console) industry into Khronos, I’d say 3Dlabs has been extremely successful in doing this - even if the resulting API doesn’t look very much like the “OpenGL 2.0” white papers you guys were circulating a few years ago :slight_smile:

Jon (still unclear on why 3Dlabs is so interested in the mobile market when you’ve never had products for it - my pet theory is that you hope to escape the ATI/NVIDIA juggernaut by turning into a downmarket IP core provider à la Imagination - but I suppose we’ll find out someday :confused: )

I feel a bit out of place posting here, as I’m brand new to OpenGL and surely not as experienced as the other people who have posted here.

I just wanted to point out that, in general terms, the main goal when developing software should be to make it as effective and as simple as possible. While I feel there is no doubt about OpenGL’s effectiveness, 12 years of incremental add-ons to the core have made it not as simple as it used to be.

I don’t think it takes a software genius to understand that this trend, even if acceptable now, will most likely lead to bigger problems as time passes. That’s why I really hope the API undergoes a radical redesign sooner or later, while leaving it to more competent people to figure out how to do that.

Good work, pals!
Byez =8)!
Ob1

Originally posted by barthold:
The idea is to provide an API based on OpenGL 1.x that is allowed to break backwards compatibility. This would allow for a thorough cleanup, if desired. This pure API would be aimed at developers that are willing to start from scratch, for whatever reason.
And it would have been very cool. I started learning OpenGL roughly two years ago. When I saw the first docs speaking of OpenGL 2.0, which promised pure programmable functionality and uniform handling of all objects (textures, vertices, etc…), I thought that learning OpenGL 1.x wasn’t worth it; better to wait for 2.0. It looked pretty much like a container which would hold all future features. Naturally things changed and I ended up with 1.x anyway, but consider other people approaching OpenGL for the first time. The 1.x spec is enormous, and there is plenty of redundant fixed functionality in it which is not useful since you end up with vertex/fragment shaders anyway, so why bother? It just makes OpenGL harder to learn for beginners and harder to implement for new players in the gfx chip/driver market.

Well you’re talking a lot about people from my end of the spectrum (i.e. total n00bs), so I’ll give my opinion.

First, I like OGL because it’s elegant, and I want to learn OGL. However, I took one look at the extensions registry, and I was like “Holy Sh1t!” I heard about OGL 2.0, and I decided that the only reasonable way to learn OGL was to wait for 2.0. Of course, since the apparent original focus of OGL 2.0 has been smothered in the politics of the ARB, I’m left with a very large amount of info to sort through to figure out what’s important and what isn’t. D3D is looking very attractive now. I think I’ll still try to learn OGL, but without the fresh new version 2.0 it’s going to be a serious uphill battle.

I only hope that someone has the courage to put OGL on a treadmill and burn off some of the flab. There are no compatibility issues so long as you keep OGL 2.0 in a new driver (OpenGL2_32.dll and OpenGL2_64.dll) and header files (GL2.h), and the legacy crap where it is now. Old programs that use the bloated (hmm… makes Microsoft products look simple) OGL 1.x just continue using the older version. Of course, that version is fixed now, and driver writers at the IHVs can focus on the new driver, which, because it’s simpler, they can optimise more.

And will someone do something about Microsoft so that they will support a new OGL version, so no one has to use the get-extension commands? Like sue them for abuse of their monopoly, because they are hindering OGL to prop up D3D. I’m sure you would be very successful.

I wouldn’t worry too much about the extensions. After you spend time with them you’ll find them much more sensible. You can trim out a lot of those EXT extensions so the list gets smaller. I find that both NV and ATI share lots of the same extensions, which makes things easier. So if I were you I would look up the ARB extensions first, then EXT ones for things not in ARB, and lastly IHV ones. You’ll find most functionality in ARB extensions, as many EXT ones have been moved there in new gfx drivers.

Well, since everyone seems to have an opinion on this, I’ll give my two cents too :^). Why is it that some people seem to dislike the huge amount of extensions? Nobody forces one to use them … I feel that there is nothing wrong with staying with well defined (and documented) core functionality and use extensions as needed, since these are a great way of adding vendor specific functionality without “polluting” the core.

I never said I didn’t like them, just that every now and then it’s necessary to do a little spring cleaning, so to speak. This is definitely a good time to do that. I suppose it would be ok though if they separated the registry into useless crap (legacy stuff that exists only for backwards compatibility) and useful crap (extensions that developers should care about).

I think the extensions are an excellent way to allow graphics to grow at its own pace, rather than at the pace dictated by, say, Microsoft.

Why is it that some people seem to dislike the huge amount of extensions?
Well, let’s see. Would you rather use:

1: A horribly cluttered API that has 5 ways of doing everything, and the performance therein is all implementation dependent so you can’t entirely know how to use it correctly without guess-and-test, or

2: A clean API that has precisely one way of doing something.

The rational choice, barring prior experience with one or the other, or some lack of functionality in one of them, is 2.

That’s the problem that OpenGL faces with D3D. It is the cleaner API. While this may not affect the old guard who grew up with OpenGL and have been using it for 3+ years, this does affect any neophytes who may want to use it. Nowadays, the only real thing, besides the absolute best possible performance (and that’s with a good driver), that OpenGL offers over D3D is functioning on non-Windows machines. It’s pretty sad when the competing API can only boast about being compatible across multiple platforms.

4 quick points…

  1. First of all, if as a GL newbie you’re starting in the extensions registry, you’re starting in the wrong place. Try the Red Book.

  2. The problem isn’t that there are too many extensions, it’s that I don’t know which systems I can expect to support them. Although the specs are typically thorough, I’ve never had luck finding documentation on what chipsets/drivers support which extensions. It’s great that the latest CATALYST drivers support SmoothVision2.1, but I need specs that are developer friendly, not just marketing friendly. And it’s certainly not giving away sensitive information to say whether that corresponds to ATI text fragment shaders, ARB fragment programs or GLSL fragment shaders, since as a developer I’m going to find out anyway, and pretty early on in development. So, I have to check as many systems as I can get my hands on in order to see which extensions are supported by which systems. As a developer I gladly take on the responsibility of determining whether an extension’s support on a given chipset with a given driver is adequate for my app’s purposes, but a list of nominally supported extensions for each chipset/driver should be widely available information that developers shouldn’t have to perform an archaeological dig to uncover. Apple, since they’re in a convenient position to do so with their limited hardware/driver configurations, has such a list at http://developer.apple.com/opengl/extensions.html, but I’m not aware of a Windows or Linux list. And if those lists exist, the only copy shouldn’t be on Joe programmer’s web site, but should be posted on opengl.org or by individual IHVs. And if opengl.org or IHVs do publish that info, please give me a link and I’ll gladly shut up about it. :slight_smile:

  3. GL is analogous to ANSI C in that there is more standardized functionality than most people will ever use. This is a good thing. Although I can’t remember the last time I used vsnprintf, I sleep well knowing that it’s there. And although I don’t plan to use it, I defend any developer’s right to do so.

  4. GL is analogous to ANSI C in that it’s overdue for a backward-compatible yet forward-looking replacement. This is a bad thing. C got its replacement in C++ (complete with a new stdlib), but pure GL2 is no longer on the immediate drawing board. Along with a streamlined core, it would be nice to have higher levels of abstraction provided in a separate but standard library sanctioned by the ARB. I like abstraction not because I’m lazy, but because it provides more room for optimizations in the driver or on silicon, and leaves me to solve more interesting problems (OK, and sometimes because I’m lazy). For instance, without a mesh primitive, geometry optimizations are relegated to the world of triangles. There are countless abstractions for a collection of triangles, and it would be nice to have a standard set of those collections for a variety of purposes, just like STL does for C++ with vector, list, map, etc. Wouldn’t it be great to have a higher-level library for triangle collections like Mesh, IndexedFaceSet, BSPTree and OctTree; or image loaders like JPG, PNG and TIFF? Sure these libraries already exist, but the great thing about standardization is that everyone speaks the same language when talking about these features; I can talk to any C++ programmer very succinctly about std::vector, but when it comes to something like a “triangle mesh” in OpenGL, there’s a lot more explaining to do. (Is it implemented with a single VBO? What does the data structure look like? What operations does it support? Does it support LOD selection? How much of it can you typically see at once?) Since the universe of OpenGL apps is inherently less diverse than the universe of C/C++ apps (aside from non-C bindings for GL, the former universe is a subset of the latter), it’s interesting that the more diverse universe has seen more standardization through ANSI and ISO committees than OpenGL through the ARB. 
But I’m not blaming the ARB, since it’s just a collection of largely corporate interests. Developers, myself included, need to have more solidarity in pushing for standards, or even proposing them. The silent majority in the OpenGL community is the developers. There’s a reason ATI, NVIDIA and 3DLabs spend inordinate sums of money wooing developers: we write the apps that make their products look good. We need to exercise our voice in active and constructive ways. Whether it’s Carmack or Stroustrup, all revolutions can point to individuals that made a stand and pushed their movement forward, while others did little more than focus on their complacent discontent with the status quo. So which are you, a revolutionary or a malcontent?

Originally posted by {T5K}DudeMiester:
[b]I never said I didn’t like them, just that every now and then it’s necessary to do a little spring cleaning, so to speak. This is definitely a good time to do that. I suppose it would be ok though if they separated the registry into useless crap (legacy stuff that exists only for backwards compatibility) and useful crap (extensions that developers should care about).

I think the extensions are an excellent way to allow graphics to grow at its own pace, rather than at the pace dictated by, say, Microsoft.[/b]
If you want to learn about what’s what, then you should get advice, read books (The extension guide), read tutorials … not technical documents.

That’s the problem that OpenGL faces with D3D. It is the cleaner API.
Not much cleaner. For example, they have version numbers for their vertex and pixel (fragment) shaders, but there are differences between the versions; there is no backwards compatibility.
I’m assuming it will be the same with HLSL.

Anyway, no one said GL is THE most perfect API. GL2 won’t be either, if anyone was waiting for that.

…but I’m not aware of a Windows or Linux list.

Try http://www.delphi3d.net/hardware/index.php

Originally posted by Corrail:
[b]
…but I’m not aware of a Windows or Linux list.

Try http://www.delphi3d.net/hardware/index.php
[/b]
Thanks for the link! I’ll shut up now, but only about that point. :slight_smile:

A horribly cluttered API that has 5 ways of doing everything, and the performance therein is all implementation dependent so you can’t entirely know how to use it correctly without guess-and-test
I think the OpenGL core is very clear and well documented - and having several ways of (for example) sending vertex data, each one useful under certain circumstances (well ok, I won’t cry if compiled vertex arrays vanish), [b]is[/b] an advantage. Maybe it could be cleaned up, but I don’t think that there are really that many outdated functions in it.

However, I won’t blame OpenGL if several hardware vendors propose similar functionality with different extensions (and I don’t think that this behaviour would change with OpenGL 2.0, since everyone wants to sell their products, giving them sounding names - and after all: different hardware will expose different functionality and performance under different circumstances, will it not?) - if it’s useful enough, the ARB will grab it some time, and if I look only at the OpenGL core and ARB functionality, I think that I have a nice and clean API.

Originally posted by maxuser:
4. GL is analogous to ANSI C in that it’s overdue for a backward-compatible yet forward-looking replacement. This is a bad thing. C got its replacement in C++ (complete with a new stdlib)
BS, FUD, uninformed alert.

C99 is the most recent standardized version of C, ratified in, you guessed it, 1999.

C++ is good. C++ is not a replacement for C. For some projects C is the better choice, for some projects C++ is the better choice.

Read Herb Sutter’s article in this month’s C/C++ Users Journal. He outlines the work currently being done on the next versions of both C and C++.

Let me quote Stroustrup as well. “Remember that proof by analogy is fraud.”

My personal view is that the core should not be touched but extensions should be deprecated. The PDF for the OpenGL 1.5 spec is 333 pages. That’s fine. That’s manageable and concepts are pretty easy to find.

Experience is all about recognizing patterns and analogies with past experiences and applying those lessons to the present where applicable. OpenGL has accomplished this through the work of the ARB. OpenGL 1.0 was replete with individual pieces of state that the user must manage and coordinate with other related pieces of state. Every version of OpenGL since then has introduced increasingly object-oriented state management: texture objects, VBO’s, GLSL program objects.

This is precisely the direct analogy I’m making with the change in mainstream programming from C to C++, just as OpenGL itself has evolved from entirely state-based to more object-based. And it’s not a fraudulent analogy, it’s a general programming trend. (Another aspect of the analogy is “thicker” standard library support, but I’ll leave that for another time…) Sure, loads of people still prefer C over C++, and I support their right to do so; but I think it’s generally a good thing that developers have largely migrated to a language that natively supports the notion of objects, i.e. well-defined collections of data with well-defined operations on those data.

That’s all I want for OpenGL, a new or revised incarnation (largely what GL2 was intended to be) that still supports the old state-based ways, but has core facilities for thinking, designing and implementing graphics apps in an object-oriented way. The analogy with C/C++ is certainly broken in many ways and like any analogy it can be taken too far. For instance, GL has evolved a notion of objects where C never has, so GL is clearly on the right path whereas C has always been steadfast about not strongly coupling data with functionality at a fundamental level.

If anyone is fearful, uncertain or doubtful about GL’s future as a result of my previous post, I urge you to read it again with an open mind. That certainly wasn’t my intent.

P.S. PK, I can get you a great discount on a brand new abacus if you’re interested in the latest version of an outdated tool. :wink:

Originally posted by maxuser:
This is precisely the direct analogy I’m making with the change in mainstream programming from C to C++, just as OpenGL itself has evolved from entirely state-based to more object-based.
Your analogy is correct IMHO but you are missing an important point. A lot of OO languages have emerged after C++, like Java/Ruby/Objective-C/whatever. And among all of those C++ is certainly no longer the most popular; yeah, it is still widely used, but it is getting dropped in favour of ‘cleaner’ languages. Why? Because it is awkward to use, because it is a hybrid between a functional language and an OO one, because keeping compatibility with C burdens it, etc… And that’s exactly the same problem OpenGL 1.x has: the way you use it has changed towards programmability and object-orientation. So what’s the reason for keeping the legacy inside the new spec? It just makes it awkward to learn and to implement, it bloats the implementation and it requires profiling because performance is not consistent across the different methods used to achieve the same results.

Originally posted by maxuser:
1. First of all, if as a GL newbie you’re starting in the extensions registry, you’re starting in the wrong place. Try the Red Book.

You mean the original revision of the book, right? Just kidding. :smiley:

Anyway, it’d be cool to have a brand new opengl core but I like the way GL is “cluttered”. It’s easy for newbies and it’s easy to roll out quick tests.

The mess in GL is directly related to its ideal platform as a test-bed for new and specific functionality (IHV extensions). The ARB is too slow defining ARB extensions that provide general interfaces for different hardware. But that is changing… :slight_smile:

Well, if OpenGL did not expose a clean procedural API it would be much harder to integrate into different programming languages (and operating systems), since the underlying class system would have to be exported as well (which does prove, in my opinion, that there are cases where a procedural approach is superior to an object oriented one - just look at ODE, which is a great example of object oriented design, but the exposed API is “only” procedural).

How this is achieved internally, is not important - an OpenGL implementation can be (and most probably are) object oriented as hell, as long as the exposed API is procedural, it will be no problem to integrate it anywhere (and probably build another set of classes, however organized, with it) - to debate if OpenGL should be “object oriented” is meaningless therefore IMHO.

I think it would be more useful if you said what exactly the “useless legacy” is that you want removed from the core specification. 8-bit modes were mentioned somewhere (but why not keep them exposed in software - there will always be some embedded devices which may make use of them, and usable code has existed since the days of SGI’s OpenGL 1.1 software implementation); compiled vertex arrays are something I could live without (but keeping the spec won’t hurt anyone); OpenGL lighting was mentioned (I’d hate to see that go - it’s really easy to set up simple scenarios with it, which is quite important for people starting to learn OpenGL); evaluators are (for example) useful to build terrains with a small memory footprint. What is it you’d like to see vanish forever?

Originally posted by crystall:
Your analogy is correct IMHO but you are missing an important point. A lot of OO languages have emerged after C++, like Java/Ruby/Objective-C/whatever. And among all of those C++ is certainly no longer the most popular; yeah, it is still widely used, but it is getting dropped in favour of ‘cleaner’ languages. Why? Because it is awkward to use, because it is a hybrid between a functional language and an OO one, because keeping compatibility with C burdens it, etc…
C++ is functional? You mean procedural. Is this bad? OO is only a design pattern. In Python, for example, all things are objects but there are also functions (which are objects too). I don’t believe there is only one way to do all things.

Originally posted by V-man:
HANDLE vs ID :

The spec gives some hints about design decisions - the case for app-assigned IDs versus GL-assigned handles in the shading spec. Read it if you wish.

It’s not consistent with old GL, but ever since I started using GL I’ve wondered why the decision was made to allow users to assign any ID we like to display lists and textures.

Because you can only create OpenGL object IDs in the thread that has the active OpenGL context. If you are in a multi-threaded environment, only one thread can have the context bound, and that’s usually not the thread that the application works in. For these situations it’s much nicer to be able to manage the ID handling totally separately from OpenGL in your own manager. Been there, doing that, pulling my hair out about how to support the new model (not really, but it is more painful than the old way).

Probably some other systems have unique object IDs anyway, and could use those as OpenGL IDs, or ID numbers could contain meaning or be derivable from other information about the object. Not a big deal, but a lot of OpenGL has been designed to make the developer’s life easier. And it shows: OpenGL (core) really is very elegant, simple and flexible.

That also goes for many things that apparently a bunch of people consider useless, like immediate mode. The great thing about immediate mode is that it allows you to totally freely define where your data is coming from. Whatever format you store in the non-graphics part of your software, you can feed it directly to OpenGL. You can also do crazy stuff like multi-indexing, which is not possible without copying when using vertex arrays and related variants.

Yes, you can implement all the different geometry specification methods using each other. But going from immediate mode to VBO can be a lot of work for the application programmer, of which we (hopefully) have much more than driver developers. So from an economic point of view it’s better for the driver developers to spend time on this. I’m aware of the fact that driver developer time is a scarce resource, but if cass says supporting (at least most) of the old extensions is not a big deal, why am I to argue? Not to talk about backwards compatibility and software that was written 10 years ago and still works…

The success of OpenGL is partially due to its flexibility and simplicity, which lets people with whatever background use it easily for whatever purpose. Let’s not throw that away.

Just my .02$

Dirk