OpenGL 2.0 formally announced!

I just want to say that I think it’s better to have a bad object model than two mixed object models. And I don’t think the handle system has any real benefit over the Gen/Delete/Is approach, further than a bit less work for the driver developer.

Originally posted by martinho_:
I just want to say that I think it’s better to have a bad object model than two mixed object models. And I don’t think the handle system has any real benefit over the Gen/Delete/Is approach, further than a bit less work for the driver developer.
It shouldn’t be more work, because they can reuse the code they already have for texture IDs, call list IDs and such.
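For anyone who hasn’t used both, here’s the mix in a nutshell: the classic Gen/Bind/Delete/Is model with plain GLuint names next to the ARB_shader_objects handle model. Just a sketch of the existing calls side by side, assuming a current GL context and glext.h for the ARB entry points:

    // Classic name-based model: the app asks for names, binds them, and can
    // query whether a name still refers to a live object.
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    // ... glTexImage2D, draw, etc. ...
    GLboolean alive = glIsTexture(tex);   // GL_TRUE while the name is a texture
    glDeleteTextures(1, &tex);

    // Handle-based model from ARB_shader_objects: Create returns an opaque
    // handle, there is no Gen/Is pair, and deletion goes through a generic
    // DeleteObject call.
    GLhandleARB shader = glCreateShaderObjectARB(GL_VERTEX_SHADER_ARB);
    // ... glShaderSourceARB, glCompileShaderARB, attach to a program object ...
    glDeleteObjectARB(shader);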

This is not a big deal.

There are bigger issues, like the super buffer extension, which seems to be in trouble.

There are bigger issues, like the super buffer extension, which seems to be in trouble.
According to the PDF, superbuffers is, effectively, dead/stalled/in limbo. Instead, we’ll get something more like EXT_render_target, plus other extensions that add functionality to this base. This is all to the good, as EXT_render_target was a far simpler, more reasonable extension than superbuffers. It’s just that we should have had it by now.

Yes, does anybody know about the state of EXT_render_target? AFAIK even if we “get” it soon, that would only be the final spec, no? So we can assume it will take even longer until we get a working implementation.
It’s really nice to know that we’ll get such a clear and simple way to do render-to-texture, but it’s a real shame that we didn’t get it five years ago.

Jan.

Hopefully, we’ll get a copy of the most recent meeting notes sometime soon.

The EXT_rt spec was not far from final. My principal concern is that other members of the ARB don’t sabotage the cleanliness of the extension by forcing a lot of extraneous functionality into it (rather than simply extending RT with another extension).

Since nVidia/3DLabs/Apple were the ones behind the original EXT_rt spec, and since nVidia is pretty quick about getting extensions into hardware, I imagine that they’ll have an implementation available pretty quickly after the spec is released. ATi might take longer, but hopefully, they can leverage the superbuffers work they’ve done, simply converting EXT_rt commands into their internal superbuffers API (I recall that some of their drivers from a year ago did have some preliminary superbuffers entrypoints).

It’s a shame that they canceled the original 2.0 specs.

What’s really missing from my point of view is floating-point support on ALL texture formats. Not an ultra-fat überbuffer API that the ARB needs at least two years to get into the core, but something that is usable today as an ARB extension or core feature, not the fuzzy mess of vendor-specific stuff. I’ve switched to D3D (shame on me) and the support for float textures just rocks… not all of us use stencil shadows, and float textures are the way to go for shadow mapping.

Another point is the poor support for rendering to a texture. How many extensions do I need just to render to a texture? This sucks.

The slow progress of the ARB is the main reason for me not to use GL anymore. It would be nice to see features promoted to the core, or at least to ARB extensions, much faster. The quality of extensions should be better too; constantly reading “this is left to another extension” really annoys me. GL 2.0 is just another promotion of unfinished extensions to the core; when will the ARB learn to nail things down in a complete way? And having a high-level language compiler at the driver level is something I don’t really like.
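To make that concrete, here is roughly what you have to do today on Windows to render to a texture without EXT_render_target: three WGL extensions (WGL_ARB_pixel_format, WGL_ARB_pbuffer, WGL_ARB_render_texture) plus a separate context, just to draw into a texture. This is only a sketch; error handling, the wglGetProcAddress lookups and the wglext.h typedefs are left out, and hDC/hRC/colorTex are assumed to exist already.

    // 1. Pick a pbuffer-capable, texture-bindable pixel format (WGL_ARB_pixel_format).
    const int pfAttribs[] = {
        WGL_DRAW_TO_PBUFFER_ARB,      GL_TRUE,
        WGL_BIND_TO_TEXTURE_RGBA_ARB, GL_TRUE,
        WGL_SUPPORT_OPENGL_ARB,       GL_TRUE,
        0
    };
    int  format = 0;
    UINT count  = 0;
    wglChoosePixelFormatARB(hDC, pfAttribs, NULL, 1, &format, &count);

    // 2. Create the pbuffer itself and a second rendering context (WGL_ARB_pbuffer).
    const int pbAttribs[] = {
        WGL_TEXTURE_FORMAT_ARB, WGL_TEXTURE_RGBA_ARB,
        WGL_TEXTURE_TARGET_ARB, WGL_TEXTURE_2D_ARB,
        0
    };
    HPBUFFERARB pbuffer = wglCreatePbufferARB(hDC, format, 512, 512, pbAttribs);
    HDC         pbufDC  = wglGetPbufferDCARB(pbuffer);
    HGLRC       pbufRC  = wglCreateContext(pbufDC);
    wglShareLists(hRC, pbufRC);                     // so both contexts see colorTex

    // 3. Render into the pbuffer, then bind its color buffer as the texture
    //    (WGL_ARB_render_texture).
    wglMakeCurrent(pbufDC, pbufRC);
    // ... draw the shadow map / reflection / whatever ...
    wglMakeCurrent(hDC, hRC);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    wglBindTexImageARB(pbuffer, WGL_FRONT_LEFT_ARB);
    // ... draw the main scene using colorTex ...
    wglReleaseTexImageARB(pbuffer, WGL_FRONT_LEFT_ARB);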

Just my two cents.

http://pc.gamespy.com/pc/doom-3/539265p1.html

Second Video, 4th minute, 20th second.

Jan.

Maybe you can tell us what’s in the video?
Then I don’t have to install the ugly QT player or download a 100 MB file.

If it’s the bit I’m thinking of, then it’s JC basically laying into NV and ATI for not working together to get things sorted out properly in relation to render-to-texture, and saying himself that it was the closest he’s come to switching to D3D because the API is such a mess in that regard.

This is just one of those cases where the world would have been a better place if 2 years ago a couple of companies had gotten together and defined an EXT_render_to_texture extension.

Instead we tried to solve too much and get everyone to agree. OpenGL will grow fastest (and best) by accretion of proven extensions.
Committees are the wrong place for top-down design – especially committees of competitors.

I sympathize with developers and indeed share their frustration with the situation. There’s no valid excuse for letting this important functionality languish for so long.

Thanks -
Cass

Now if only NVIDIA and ATI could put that into practice a bit more (yep in some areas you’ve both done a good job). I’m still surprised glslang is viable today, so kudos for cooperating on that in the end, but to casual observers it does seem that consistently putting that sentiment into practice is the biggest sticking point.

It’s one thing to complain about the ARB, but that implies NVIDIA and ATI would get on smoothly without it. Maybe for some stuff, but in other areas the ARB has forced cooperation (or capitulation).

And FWIW I wouldn’t mind a bit if ATI & NVIDIA ran off with the ball and defined the future of OpenGL graphics cooperatively; from my perspective that would represent ‘core’ extensions, I just don’t see it happening. NVAT_* could be better than ARB_* if support from both were guaranteed; heck, just use EXT, who cares.

Before 2.0 is ready, I would like to call your attention to one issue that came up recently on this list. It is the lack of an easy and simple way to set an origin for the vertex element calls - glDrawElements, etc. Currently one can do it by re-setting the vertex pointers with a bunch of GL calls: glVertexPointer(…); glColorPointer(…); glClientActiveTexture(GL_TEXTURE1); glTexCoordPointer(…); …
It would be good if some of the ARB members noticed this issue and raised a debate about it at the ARB meetings before 2.0 is out, because there seems to be a good deal of interest in it among developers. Someone on the mailing list was already preparing a specification draft for a possible extension of this kind, so they could take a look at it.
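For reference, the workaround looks something like this in practice: every enabled array has to be re-specified with a rebased pointer before each glDrawElements call, just to move the element origin. The Vertex layout, the function name and the offsets are made up for illustration and assume an interleaved ARB_vertex_buffer_object is bound; only the GL calls themselves are real.

    #include <cstddef>   // offsetof

    // Hypothetical interleaved layout living in a bound VBO.
    struct Vertex {
        float         pos[3];
        unsigned char color[4];
        float         uv0[2];
        float         uv1[2];
    };

    // Draw a sub-mesh whose indices are stored relative to its own first vertex.
    // With no "base vertex" parameter on glDrawElements, the only option is to
    // rebase every enabled pointer by hand before the draw call.
    void drawSubMesh(GLint baseVertex, GLsizei indexCount, const GLvoid *indexOffset)
    {
        const GLsizei stride = sizeof(Vertex);
        const char *base = (const char *)0 + baseVertex * stride;  // byte offset into the VBO

        glVertexPointer(3, GL_FLOAT, stride, base + offsetof(Vertex, pos));
        glColorPointer(4, GL_UNSIGNED_BYTE, stride, base + offsetof(Vertex, color));
        glClientActiveTexture(GL_TEXTURE0);
        glTexCoordPointer(2, GL_FLOAT, stride, base + offsetof(Vertex, uv0));
        glClientActiveTexture(GL_TEXTURE1);
        glTexCoordPointer(2, GL_FLOAT, stride, base + offsetof(Vertex, uv1));

        glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, indexOffset);
    }

With a base-vertex parameter on the draw call itself, all of that pointer juggling would collapse into a single call per sub-mesh.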

Originally posted by Korval:
“OpenGL seems to be keeping pace with developments”
Which OpenGL are you looking at?

OpenGL adopts things much slower than any competing API, and therefore is not “keeping pace with developments”.

Just a question about D3D. I haven’t written D3D code since 1997, but back then I remember the API changing so drastically from release to release that you couldn’t count on DX5 code running on DX7, etc. Is D3D maintaining backwards compatibility these days, or have they settled on a core API?

OpenGL may never keep up with D3D on new features, but so what? Every “next-generation” graphics card comes out at an increasingly higher price, and it takes a year or two for the technology to become affordable to the majority of 3D card consumers anyway.

Also, about “not keeping up with competitors”: what other competitor is there besides D3D? I can’t think of any other API that competes, so I don’t see the future of GL in any kind of danger.

D3D is not changing so drastically anymore, and many parts of D3D can be directly mapped to GL. It’s in fact possible to automate the process of translating D3D to GL and vice-versa.
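A couple of examples of how mechanical that mapping is for common state and draw calls (the D3D9 call is in the comment, the GL equivalent below it; numVerts, numTris and indices are placeholders, and index/vertex buffer setup is omitted on both sides):

    // D3D9: device->SetRenderState(D3DRS_ZENABLE, D3DZB_TRUE);
    glEnable(GL_DEPTH_TEST);

    // D3D9: device->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
    glEnable(GL_BLEND);

    // D3D9: device->DrawIndexedPrimitive(D3DPT_TRIANGLELIST,
    //                                    0, 0, numVerts, 0, numTris);
    glDrawElements(GL_TRIANGLES, numTris * 3, GL_UNSIGNED_SHORT, indices);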

GL, like all technology, needs innovation, or else…
Cost is not relevant and it has nothing to do with what the consumer wants.

I don’t see the future of GL in any kind of danger
It’s nice that GL2 is making headlines.

PS: the competitors will be software renderers, methinks.