ObenJL

Korval, I’d like to offer a correction to your post.

There is significant overlap in the sets of people who participate in the OpenGL and OpenCL working groups. You are right that the OpenGL specification makes no direct reference to OpenCL. But to deliver on the interoperability promise for OpenCL/GL, there will plainly need to be implementation-level communication between each section of the driver to make it all work. Members of the OpenGL ARB have been active participants in setting the goals for that interface's design.

In the meantime, since the OpenGL 3.0 spec was completed we have been working steadily on 3.1. The goal of the working group is to stay on a steady release schedule from 3.0 onward; at present I believe we're still well positioned to meet our next goal.

There were a couple of features that did not make 3.0; they go in 3.1. There will probably be a couple of features that don't make 3.1, and we'll aim those at 3.2 if that happens.

Yes, that’s exactly the idea.

<prophet-mode>

The bright side is that in the near future the state of GL will be less of a pain for one reason: we will migrate most of our renderers out of it. The only things still done with GL will be filling g-buffers and shadow maps. The real work will be done with OpenCL.

Consider today’s deferred shading architecture. Every drawing command that submits window-aligned quad(s) as its (trivial) geometry will soon be seen as a candidate for reimplementation in OpenCL. No geometry, no need for a rasterisation API.
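To make the idea concrete, here is a minimal sketch of what such a "fullscreen pass" could look like as an OpenCL kernel instead of a rasterised quad. The G-buffer layout, kernel name, and the lighting model are invented purely for illustration; they are not from any shipping renderer.

```c
// Hypothetical deferred lighting pass: one work-item per pixel,
// no quad submitted, no rasteriser involved.
__kernel void shade_pixel(__global const float4 *gbuf_normal_depth, // xyz = normal, w = depth
                          __global const float4 *gbuf_albedo,
                          __global float4       *out_color,
                          const float4 light_dir)                   // normalised, view space
{
    size_t i = get_global_id(1) * get_global_size(0) + get_global_id(0);

    float4 nd     = gbuf_normal_depth[i];
    float4 albedo = gbuf_albedo[i];

    // Plain Lambert term, written out per component to stay within OpenCL 1.0 C.
    float ndotl = fmax(-(nd.x * light_dir.x + nd.y * light_dir.y + nd.z * light_dir.z), 0.0f);

    out_color[i] = albedo * ndotl;
}
```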

</prophet-mode>

You say potato… The beta is “out” compared to ATI. And who knows which extensions they’ll include and whether they’ll be written against 2.1 or 3.0. Not a situation I’d bet programming hours on.

From a programming perspective I would still prefer a debug profile. Unless, of course, the performance profile comes with a debug performance profile. Profiles still seem stupid to me. I’d take an improved API over a “profile” any day.

Ok, I concede. I’ll wait up to FOUR months to see if this pans out.

Yeah, don’t get me started on that one.

I’d say that if the exact same people are working on OpenCL as on OpenGL, then one would definitely suffer. While there might be some crossover, it’s unlikely that OpenCL would directly impact GL 3.1. They could be timing the releases together, which would be nice PR-wise: “Hey, look at these two great interacting APIs!”

There is significant overlap in the sets of people who participate in the OpenGL and OpenCL working groups. You are right that the OpenGL specification makes no direct reference to OpenCL. But to deliver on the interoperability promise for OpenCL/GL, there will plainly need to be implementation-level communication between each section of the driver to make it all work. Members of the OpenGL ARB have been active participants in setting the goals for that interface's design.

Implementation-level communication between the two is, well, an implementation detail. It should not have any effect on the progress of the OpenGL specification. If the development of a related-by-implementation-only specification can dramatically impact the development of the other specification, then Khronos clearly needs more people dealing with both specs, so that the development of one does not hamper the development of the other.

The goal of the working group is to stay on a steady release schedule from 3.0 onward; at present I believe we're still well positioned to meet our next goal.

There were a couple of features that did not make 3.0; they go in 3.1. There will probably be a couple of features that don't make 3.1, and we'll aim those at 3.2 if that happens.

So, is there any substantive information coming out in the near future? Like what is actually planned for 3.1, when you plan to release it, or what the long-term schedule for features is?

The bright side is that in the near future the state of GL will be less of a pain for one reason: we will migrate most of our renderers out of it. The only things still done with GL will be filling g-buffers and shadow maps. The real work will be done with OpenCL.

If you say so. I suppose that “near future” will be coming in the same “3-5 year” timeline that just about every other cool invention that never arrives will be coming in?

Consider today’s deferred shading architecture.

Which, incidentally, very few people actually use. For most developers, it’s more trouble than it’s worth.

whether they’ll be written against 2.1 or 3.0.

There are extension specs that are in common use that were written against 1.2. What they’re written against is irrelevant; what matters is the utility of the extension and who supports it.

And they’re written against 2.1 so that 2.1 users can, you know, use them.

I am using deferred shading. It is complete nonsense to say that this reduces complexity. It CHANGES how you do lighting. That’s it. You still have to push polygons through the pipeline. You still need to manage lots of shaders, textures and other state, both when filling the G-buffer and when doing the lighting passes. State management is really the biggest mess here, especially with multiple FBOs which share some state but have some private state, too. You still need to struggle with all the different texture formats, with early-z/stencil, and of course with broken drivers and a bloated API in general. None of that changes when you do deferred shading. Some of it actually gets even more troublesome, because you have to do so much render-to-texture, which is where all the state mess makes everything fall apart even more easily.
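For readers who haven't set up such a pipeline, here is a rough sketch of the kind of render-to-texture setup being described, assuming EXT_framebuffer_object and ARB_texture_float on a GL 2.x context. The resolution, formats, and function name are placeholders, status/error checks are omitted, and a real G-buffer would also need a depth attachment.

```c
#include <GL/gl.h>
#include <GL/glext.h>   /* FBO and float-texture tokens; the EXT entry points come from an extension loader */

/* Create a two-target G-buffer (albedo + normal/depth) and bind it for MRT rendering. */
void setup_gbuffer(void)
{
    GLuint fbo, tex_albedo, tex_normal;

    glGenTextures(1, &tex_albedo);
    glBindTexture(GL_TEXTURE_2D, tex_albedo);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 1024, 768, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    glGenTextures(1, &tex_normal);
    glBindTexture(GL_TEXTURE_2D, tex_normal);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F_ARB, 1024, 768, 0, GL_RGBA, GL_FLOAT, NULL);

    glGenFramebuffersEXT(1, &fbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, tex_albedo, 0);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT1_EXT, GL_TEXTURE_2D, tex_normal, 0);

    const GLenum bufs[2] = { GL_COLOR_ATTACHMENT0_EXT, GL_COLOR_ATTACHMENT1_EXT };
    glDrawBuffers(2, bufs);   /* fill both targets from one geometry pass */
}
```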

I see 3 different ways how to work with OpenGL:

  1. Just do it: Everyone on your project sets OpenGL states as he needs it. This is extremely buggy.

  2. Some higher-level abstraction: You have some things in the API abstracted (like shader and texture management). You track some states and set them only when they change. In particular, you reset some states to a default value several times per frame, just to make sure, because you mix “managed” OpenGL state changes and direct state changes, most certainly because not everyone on the team obeys your abstractions, or because not all vital states are abstracted.
    In this scenario you have SOME tolerance to wild state changes, but you most certainly do many redundant state changes to reset OpenGL to default states (see the sketch further below).

  3. You abstract the whole API and everyone only uses the abstraction. This is powerful (you prevent redundant state changes AND always have reliable states) but A LOT of work. Also, you can’t just plug such an abstraction into an existing application easily; it is better to start with it right away. But then you need such an abstraction in the first place.

I am stuck at 2), and the reason to use 3) is simply that OpenGL is such a mess. If it were cleaner (as promised for GL3) only very few people would consider/need a full abstraction, and mostly just for easier porting to other APIs.
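To illustrate the state tracking mentioned in 2), here is a minimal sketch of a cache sitting in front of two GL binding points. The struct and function names are invented for illustration; a real renderer would track far more state than this.

```c
#include <GL/gl.h>   /* glUseProgram is GL 2.0, so in practice it comes from an extension loader */

/* Remember what we last told GL, so redundant changes can be skipped
 * and the current state is always known. */
typedef struct {
    GLuint bound_texture_2d;
    GLuint bound_program;
} GLStateCache;

static GLStateCache g_cache = { 0, 0 };

void cache_bind_texture_2d(GLuint tex)
{
    if (g_cache.bound_texture_2d != tex) {   /* only touch GL on a real change */
        glBindTexture(GL_TEXTURE_2D, tex);
        g_cache.bound_texture_2d = tex;
    }
}

void cache_use_program(GLuint prog)
{
    if (g_cache.bound_program != prog) {
        glUseProgram(prog);
        g_cache.bound_program = prog;
    }
}
```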

Oh, and I don’t think OpenCL will be the killer feature if you are only doing graphics. I see its point, and maybe it will become a valuable addition for implementing specific features, but when you are doing only graphics (no scientific simulations or other number crunching), I don’t think it will be the holy grail some people think it is.

Jan.

I was opining against the utility of GL3, especially with regard to ATI. I brought up extensions as they are the biggest unknown with ATI. And if they’re written against GL3 then 2.1 users may lose out. As I said, it’s an unknown that one cannot in good faith plan around.

We assume right now that ATI will put out a fully compliant GL3 driver. However, we also know that NV has had a number of useful extensions that have yet to be rolled into 3.x. Without those extensions GL cannot claim to support D3D10 features. And so, ATI could not say that their GL drivers will be as complete as their D3D driver.

So what again do I get for moving to GL3? If I stick with NV cards I have the same feature-set as what was in 2.1. If I go with ATI I will get more than I got from them in 2.1 (since they didn’t expose many of their D3D10 features via extensions) but I still get less than NV because I know nothing of their extensions.

And if we’re lucky and the ARB puts out 3.1 soon, ATI will just be that much more behind. So what would I gain from 3.0?

Without those extensions GL cannot claim to support D3D10 features.

So, if you don’t support everything, you’re considered to not support anything? How does that logic work?

To be honest, the only significant D3D10 features (note: not API issues; actual hardware features) that are not in core GL 3.0 are instancing, geometry shaders, and bindable uniforms. Instancing is defined by ARB extensions now, and ATi said that they would be supporting ARB extensions. Granted, they didn’t say which ones.

Geometry shaders are unused trash. Nobody bothers with them for anything serious, so they’re no real loss. It’s one of those “sounded like a good idea at the time” features.

As for bindable uniforms, the EXT_bindable_uniform spec is, to put it nicely, garbage. It defines absolutely nothing about how to use the bound uniform buffers. That is, how to properly put data in them and so forth. It specifically says that this is an implementation detail, which makes the extension worthless. There’s no reason for ATi to implement a poorly written extension.

The point being that GL3 does what people actually care about from D3D10.

So what again do I get for moving to GL3?

“Moving to GL3”? You make it sound like a chore or something. “Moving to GL3” basically means one thing:

1: Supporting the new context creation mechanism. It would take you maybe 5 minutes to add this.

Because GL 3.0 doesn’t actually remove anything, you can use a 3.0 context exactly like you would a 2.1 one. If you want to start using some of the 3.0 features (VAO, new texture formats, etc), you can. You can even switch between 3.0 and 2.1, because they back-ported some of the API features of 3.0 into 2.1 through core extensions (GL extensions that don’t have a suffix for extension functions). So you can check for GL 3.0 or ARB_vertex_array_object, and you don’t even have to query different function pointers.
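For what it's worth, the "5 minute" part looks roughly like this on Windows. This is a sketch assuming WGL_ARB_create_context, with a legacy context already current so the entry point can be fetched, and all error handling omitted; the wrapper function name is invented.

```c
#include <windows.h>
#include <GL/gl.h>
#include <GL/wglext.h>   /* WGL_CONTEXT_*_VERSION_ARB, PFNWGLCREATECONTEXTATTRIBSARBPROC */

/* Create a GL 3.0 context, or return NULL so the caller can fall back
 * to the old wglCreateContext 2.1 path. */
HGLRC create_gl3_context(HDC dc)
{
    PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB =
        (PFNWGLCREATECONTEXTATTRIBSARBPROC)
            wglGetProcAddress("wglCreateContextAttribsARB");

    if (!wglCreateContextAttribsARB)
        return NULL;    /* WGL_ARB_create_context not exported by this driver */

    const int attribs[] = {
        WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
        WGL_CONTEXT_MINOR_VERSION_ARB, 0,
        0
    };
    return wglCreateContextAttribsARB(dc, NULL, attribs);
}
```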

Maybe in the new pipeline newsletter… oh wait, they wanted to do those again but never said when they’d start. :wink:

I would also like to know more about the process.

So, if you don’t support everything, you’re considered to not support anything? How does that logic work?

Um, in the next sentence you list exactly what’s missing from core GL3 that would give it “support for D3D10 features”. I’m claiming that extensions are not equivalent to true support because you cannot rely on them between vendors. I’m bemoaning the progress of GL.

You won’t hear me argue this. However, it’s a hardware feature that’s not directly supported in GL3 core. Silly. Sure, I’m against API bloat, and if IHVs decide not to develop GS further in the future then it’s a waste to include it now. Unfortunately the reality is that GS will be around for a while and, lucky for us, is likely to improve in performance if not practicality. GL not including it in core is a huge deficiency, along with the other D3D10 features that D3D10 has had in its API since release.

I wasn’t claiming it would be difficult to do so but that there is no purpose or gain for doing so. Should I rephrase? What are the benefits to adopting GL3 instead of staying with 2.1?

Right now, all I know is that I lose compatibility with pre-SM4.0 hardware. For what, exactly?

I am very interested in what will happen in 3.1 that may make me “move” to it.

They removed support for older hardware. Or, at least we’ve been led to believe that drivers for 3.0 will not be released for that older hardware. (Beta driver hardware requirements)

It’s correct that a GL 3.0 context can only be created on SM4 hardware. However, a set of extensions has been authored allowing vendors to implement subsets of the full GL3 on a GL2 context; IIRC AMD already has some of those shipping. Talk to your IHVs about exporting those if they haven’t done it yet and you have a need for them.
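As a concrete illustration of picking up such functionality at run time, here is a small sketch that treats a feature as usable either because the context is 3.0+ or because the corresponding ARB extension is exported on 2.1. The function name is invented and the version check is deliberately crude.

```c
#include <string.h>
#include <GL/gl.h>

/* Returns non-zero if vertex array objects can be used on the current
 * context, either as core 3.0 functionality or via the ARB extension
 * back-ported to 2.1. */
int have_vertex_array_object(void)
{
    const char *version = (const char *)glGetString(GL_VERSION);
    if (version && version[0] >= '3')        /* crude, but fine for "2.1" vs "3.0" */
        return 1;

    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    return ext && strstr(ext, "GL_ARB_vertex_array_object") != NULL;
}
```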

Re GL 3.1, I don’t know of anything there that would move the hardware floor again.

You know, when you are NOT an AAA game developer, it is pure luck whether someone cares about your problems. I’ve not even been accepted into nVidia’s developer forums (not that I care much, but they seem to be pretty strict about it). That is one reason why people would like to see extensions moved to core: so that they do not need to convince IHVs that something is useful, because IHVs don’t consider most of us important enough to listen to.

Jan.

Hehe, yeah, university research doesn’t seem to have a lobby either; no success on the dev forum here, too. :wink:
However, I found that just contacting certain NVIDIA staff/researchers directly was quite successful, and they would hand feedback/issues further down the pipe.

Um, in the next sentence you list exactly what’s missing from core GL3 that would give it “support for D3D10 features”. I’m claiming that extensions are not equivalent to true support because you cannot rely on them between vendors.

My point is that there is a difference between “support for D3D10 features” and “support for all D3D10 features.” OpenGL 3.0 has the former, not the latter, and nobody’s hiding that fact.

I’m bemoaning the progress of GL.

Then stop talking about vague nonsense like “D3D10 features” and start saying what specific features you want to have.

GL not including it in core is a huge deficiency, along with the other D3D10 features that D3D10 has had in its API since release.

No, it is not. A missing feature is a “huge” deficiency only if it is actually useful. Like uniform buffers. Or being able to have vertex and fragment programs as separate objects that you can mix and match without long link times. And so on.

Again, real-world priorities are what matters, not some featureset that Microsoft decided was important.

What are the benefits to adopting GL3 instead of staying with 2.1?

Right now, all I know is that I lose compatibility with pre-SM4.0 hardware. For what, exactly?

Um, why would you lose compatibility with non-DX10 hardware?

The only way you would lose compatibility with lower hardware is if you actually use 3.0-only features without any form of fallback. And if you do that, then you are explicitly accepting a DX10 hardware requirement, just as surely as using glslang means you are accepting a DX9 hardware requirement.

If you want DX9 hardware support, you would need a GL 2.1 path. Or, if that’s too much work, make your GL 2.1 path your GL 3.0 path. That is, don’t rely on 3.0 features, but do create 3.0 GL contexts.

As for what you gain, did you not read the spec? Or the various non-spec lists of core GL 3.0 features? Those are what you gain. Or, since you like D3D comparisons, everything in D3D10 except for the things I mentioned. Plus some decent API features.

These are available as extensions on 2.1 implementations, but only to the degree that they are widely supported, as Jan points out. If you can create a 3.0 implementation, then the implementation is guaranteed to support these.

GL 3.0 isn’t good if you wanted actual API cleanup, but saying that it provides no hardware features is flat-out wrong.

Right now, all I know is that I lose compatibility with pre-SM4.0 hardware. For what, exactly?

If WGL_ARB_create_context is not available, then you can make a context the old way and most of your code is just fine.
The difference between 2.1 and 3.0 is not huge.
GL 3 is another code path in your engine.

It sounds more like you were expecting GL 3 to be available on everything that was GL 2.1.
We were expecting Longs Peak to be available on everything that was GL 2.1, but Longs Peak is dead. Get that out of your mind.

Well, I personally stop at Direct3D 9. But the claim that Id’s interest in GL is only because they have a Linux port is a groundless, silly argument. Id favors GL because GL is superior to D3D in many ways that only ATI and some MS advocates don’t want to admit.

D3D is simply dying… as I see it, many giant IHVs are very interested in GL 3.0 and looking forward to the forthcoming versions.

Id favors GL because GL is superior to D3D in many ways that only ATI and some MS advocates don’t want to admit.

Well, since ATi directly controls the quality of 30-60% of OpenGL implementations, I’d say that their opinion on the matter is very relevant. Furthermore, please enlighten us on the alleged “superiority” of OpenGL over D3D that doesn’t involve cross-platform development.

As I see it, many giant IHVs are very interested in GL 3.0 and looking forward to the forthcoming versions.

Really? Name them. And provide proof that they are “very interested” in GL 3.0.

Korval, I really don’t understand your attitude as a GL guru defending D3D. But anyway, OpenGL is the industry standard, at least in the CAD world. D3D is dominant in the gaming industry, but there are technical reasons, too, not to avoid DirectX as a whole set of APIs that serve different purposes. But in terms of functionality and performance, let me remind you of Id’s Tech 5, the Quakes, the Dooms, WoW, and other dual-API games. I agree that D3D is more reliable on ATI and Intel graphics hardware, but that’s only an implementation matter.

Korval, I really don’t understand your attitude as a GL guru defending D3D

The only thing I’m defending is the truth. The problem is that you’re clinging to whatever scraps of hope for the API that you can justify.

I wish OpenGL were a better API, a more well-supported API, an API that not-big developers could trust mission-critical applications to. But the simple fact is that it is none of those. Longs Peak could have gone a long way toward making it those things, but that failed.

Best to accept it and either use OpenGL knowing that it’s second-best, or switch to D3D.

OpenGL is the industry standard

That doesn’t mean anything. The HTML standard is meaningless in the real world: the Internet Explorer standard is what the Web runs on, and your browser had best follow it to some degree.

If a tree falls in the woods and nobody’s around, does it make a sound? If a standard exists but isn’t used, does it matter?

But in terms of functionality and performance, let me remind you of Id’s Tech 5, the Quakes, the Dooms, WoW, and other dual-API games.

Games from two companies: one of them is dedicated to cross-platform Mac ports as part of its intrinsic culture; the other is dedicated to OpenGL for ideological and historical reasons.

Everyone else abandoned OpenGL long ago.