March'04 meeting notes

Nobody posted this… :stuck_out_tongue:
So, finally, we will get GL2, and the feature list is complete. No über-buffers included, but also no word on EXT_render_target…

And the marketing group seems to be on the move…

Comments?

Just to clarify… when they say that feature set will be included in OpenGL 2.0, does that mean those features will be included as extensions, or will they be included as core features?

Link for the lazy.

Originally posted by KRONOS:
No über-buffers included, but also no word on EXT_render_target…
But ATI_draw_buffers will be included, so I think there will be an extension like EXT_render_target. Does anyone know anything more about this issue?
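
For context, this is roughly what the MRT side looks like with ATI_draw_buffers today. A minimal C sketch, assuming the entry point has already been fetched via wglGetProcAddress and the current drawable (e.g. a pbuffer) actually has aux buffers:

```c
#include <GL/gl.h>
#include <GL/glext.h>

/* Assumed loaded elsewhere with wglGetProcAddress("glDrawBuffersATI"). */
extern PFNGLDRAWBUFFERSATIPROC glDrawBuffersATI;

void enable_mrt(void)
{
    /* Route a fragment program's result.color[0..3] outputs
       to four color buffers of the current drawable. */
    const GLenum bufs[4] = { GL_FRONT_LEFT, GL_AUX0, GL_AUX1, GL_AUX2 };
    glDrawBuffersATI(4, bufs);
}
```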

Originally posted by mogumbo:
Just to clarify… when they say that feature set will be included in OpenGL 2.0, does that mean those features will be included as extensions, or will they be included as core features?
AFAIK, as core features.

Unfinished extensions, like EXT_render_target, aren’t up for consideration for the core. And for good reason. Even an extension like VBO needs to live its life as an extension before absorption into the core, just to make sure that there are no kinks or unseen problems in it.
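
As an aside, that absorption is mostly mechanical: code written against the extension maps onto the 1.5 core with the ARB suffixes dropped. A sketch only (on Windows the entry points in both cases still come through wglGetProcAddress):

```c
#include <GL/gl.h>
#include <GL/glext.h>

/* VBO setup against the ARB_vertex_buffer_object extension... */
void upload_ext(const float *verts, GLsizeiptrARB size)
{
    GLuint buf;
    glGenBuffersARB(1, &buf);
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, buf);
    glBufferDataARB(GL_ARRAY_BUFFER_ARB, size, verts, GL_STATIC_DRAW_ARB);
}

/* ...and the identical setup against the OpenGL 1.5 core. */
void upload_core(const float *verts, GLsizeiptr size)
{
    GLuint buf;
    glGenBuffers(1, &buf);
    glBindBuffer(GL_ARRAY_BUFFER, buf);
    glBufferData(GL_ARRAY_BUFFER, size, verts, GL_STATIC_DRAW);
}
```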

Our dear friend John Stauffer from Apple (one of the 3 companies behind EXT_RT) did note that superbuffers was becoming complex, and that a different design could solve consumers’ problems.

Given this line “Instead, the WG will continue evolving its proposals and getting ISV feedback, allowing a chance for fully thought out alternate proposals to be brought forward.”, it sounds like they decided to hold off on superbuffers until alternatives like EXT_RT can be brought forth and tested.

BTW, nVidia, if you’re listening (and you care), I’ll buy your cards again if you give us this extension before ATi :wink:

But ATI_draw_buffers will be included, so I think there will be an extension like EXT_render_target.
Considering that a vote wasn’t taken on it at this meeting, I doubt that it (EXT_RT) will be included in the core. You have to remember, it is neither implemented nor even finished yet.

So what’s thrown out from 1.*?

And can we get implementations to fix polygon offset once and for all? (Sorry, it’s a pet peeve.)

Originally posted by Korval:
Considering that a vote wasn’t taken on it at this meeting, I doubt that it (EXT_RT) will be included in the core. You have to remember, it is neither implemented nor even finished yet.
Yes, I know. But does that mean you have to use pbuffers for MRT or render-to-texture in OpenGL 2.0? I really wouldn’t like that…
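
For reference, this is the pbuffer ceremony I’d rather not keep doing. A rough WGL sketch, assuming the wgl* extension entry points below were already fetched with wglGetProcAddress; pixel-format selection and error handling are elided:

```c
#include <windows.h>
#include <GL/gl.h>
#include <GL/wglext.h>

extern PFNWGLCREATEPBUFFERARBPROC   wglCreatePbufferARB;
extern PFNWGLGETPBUFFERDCARBPROC    wglGetPbufferDCARB;
extern PFNWGLBINDTEXIMAGEARBPROC    wglBindTexImageARB;
extern PFNWGLRELEASETEXIMAGEARBPROC wglReleaseTexImageARB;

void render_to_texture(HDC mainDC, HGLRC mainRC, int pixelFormat, GLuint tex)
{
    const int attribs[] = {
        WGL_TEXTURE_FORMAT_ARB, WGL_TEXTURE_RGBA_ARB,
        WGL_TEXTURE_TARGET_ARB, WGL_TEXTURE_2D_ARB,
        0
    };
    HPBUFFERARB pbuf = wglCreatePbufferARB(mainDC, pixelFormat, 256, 256, attribs);
    HDC   pbufDC = wglGetPbufferDCARB(pbuf);
    HGLRC pbufRC = wglCreateContext(pbufDC); /* a whole separate context: the pain point */

    wglMakeCurrent(pbufDC, pbufRC);
    /* ... render the scene into the pbuffer ... */

    wglMakeCurrent(mainDC, mainRC);          /* context switch back, not cheap */
    glBindTexture(GL_TEXTURE_2D, tex);
    wglBindTexImageARB(pbuf, WGL_FRONT_LEFT_ARB);
    /* ... draw using tex as a texture ... */
    wglReleaseTexImageARB(pbuf, WGL_FRONT_LEFT_ARB);
}
```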

What’s the shader language metafile? Is it the equivalent of the effect files in d3d?

Originally posted by krychek:
What’s the shader language metafile? Is it the equivalent of the effect files in d3d?
From my complete ignorance, I would guess it’s a binary format for GLSL (à la bytecode).

Originally posted by dorbie:
And can we get implementations to fix polygon offset once and for all? (Sorry, it’s a pet peeve.)
Hmmm, interesting comment. In what way do implementations need to be fixed, and how would you define the r factor for a floating-point depth buffer?
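
For context on that r factor: the spec computes the offset added to a fragment’s window-space depth as o = m * factor + r * units, where m is the polygon’s maximum depth slope and r is the smallest implementation-defined constant guaranteed to produce a resolvable depth difference. A single constant r only really makes sense for a fixed-point depth buffer, which is the crux of the question. A minimal decal-style usage sketch:

```c
#include <GL/gl.h>

void draw_decal(void)
{
    glEnable(GL_POLYGON_OFFSET_FILL);
    /* o = m * factor + r * units; negative values pull the
       decal toward the viewer so it wins the depth test. */
    glPolygonOffset(-1.0f, -1.0f);
    /* ... draw the coplanar decal geometry ... */
    glDisable(GL_POLYGON_OFFSET_FILL);
}
```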

So what’s thrown out from 1.*?
Nothing. 2.0 isn’t a revision; it’s just a new name for 1.6. They aren’t removing stuff or breaking backwards compatibility.

Yes, I know. But does that mean you have to use pbuffers for MRT or render-to-texture in OpenGL 2.0? I really wouldn’t like that…
There’s nothing stopping nVidia/3DLabs/Apple from developing and implementing EXT_RT on top of 2.0. You can still have extensions, after all.

What’s the shader language metafile? Is it the equivalent of the effect files in d3d?
Since nVidia is behind it, I would imagine that it would be the equivalent of a D3DX Effects file.

Originally posted by Korval:
There’s nothing stopping nVidia/3DLabs/Apple from developing and implementing EXT_RT on top of 2.0. You can still have extensions, after all.

Yes, of course. But what about ATI and other hardware vendors? OK, even if EXT_RT shows up on NV/3DLabs/Apple, you can only hope that ATI will implement it too.

I like that PDF that is linked in there. GREAT DESIGN! :smiley:

I especially like the page where there is a HUGE

API

and nothing else, really :smiley: but it fills the screen :smiley:

Fun :smiley: great art.

Originally posted by Korval:
Nothing. 2.0 isn’t a revision; it’s just a new name for 1.6. They aren’t removing stuff or breaking backwards compatibility.
But aren’t vertex/fragment programs taken out of the core? If that’s true, then it is a revision…

Vertex and fragment programs were never core to begin with, just ARB extensions.

I cannot understand why anyone would ever want to use a high-level shading language. I wouldn’t say it’s really any easier than VP and FP; you can still write quick-and-dirty code with VP and FP. It seems like a completely useless and meaningless abstraction; it doesn’t have any of the real advantages of a high-level language. I am honestly puzzled by the motivation behind this. I am also quite ignorant due to lack of interest, so perhaps someone can enlighten me?

Originally posted by Madoc:
I cannot understand why anyone would ever want to use a high-level shading language. I wouldn’t say it’s really any easier than VP and FP; you can still write quick-and-dirty code with VP and FP. It seems like a completely useless and meaningless abstraction; it doesn’t have any of the real advantages of a high-level language. I am honestly puzzled by the motivation behind this. I am also quite ignorant due to lack of interest, so perhaps someone can enlighten me?
You use high-level languages for function calls, complex control flow, scoping, and linker-style functionality. All of these are quite nice to have for larger pieces of code.

But I also think that ARB_fp and ARB_vp are quite expressive for “low-level” languages, and I prefer using them right now.
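
To make the first point concrete, here is an illustrative GLSL fragment shader (as a C string; the names lightDir, baseColor and shade are made up for the example) using exactly the things a low-level language has no syntax for, even if today’s compilers end up inlining and unrolling all of it:

```c
/* Hypothetical example: function calls, scoped locals, and a loop. */
const char *fragSrc =
    "uniform vec3 lightDir[4];                                 \n"
    "uniform vec3 baseColor;                                   \n"
    "                                                          \n"
    "vec3 shade(vec3 n, vec3 l)        /* a real function */   \n"
    "{                                                         \n"
    "    return baseColor * max(dot(n, l), 0.0);               \n"
    "}                                                         \n"
    "                                                          \n"
    "void main()                                               \n"
    "{                                                         \n"
    "    vec3 n = normalize(gl_TexCoord[0].xyz);               \n"
    "    vec3 c = vec3(0.0);           /* scoped local */      \n"
    "    for (int i = 0; i < 4; i++)   /* control flow */      \n"
    "        c += shade(n, lightDir[i]);                       \n"
    "    gl_FragColor = vec4(c, 1.0);                          \n"
    "}                                                         \n";
```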

Perhaps I’m more ignorant than I thought, but aren’t these functions strictly fictitious in SLs, i.e. always inlined away since the hardware can’t actually call them? The thought really gets on my nerves… grrr

Originally posted by Madoc:
It seems like a completely useless and meaningless abstraction; it doesn’t have any of the real advantages of a high-level language. I am honestly puzzled by the motivation behind this. I am also quite ignorant due to lack of interest, so perhaps someone can enlighten me?
Well, do you write x86, or the corresponding ISA on your platform, directly for software development?

No? Then the same arguments for not doing that apply to the GPU too. And with every GPU generation the arguments become stronger.
In the long run, GPUs will be an additional SIMD coprocessor array or vector coprocessor in your system.

Regarding the “March’04 meeting notes” topic: can anybody give additional information on the shader language metafiles?
Bytecode? An effect file format? Compatible with DirectX/CgFX? What time frame?

Regarding functions etc. in high-level languages: no, they are not just for show. Yes, in a lot of cases it is faster to inline them (so compilers will do that), but future hardware will support dynamic branching (NVIDIA’s 6800 supports dynamic loop branching at the very least).

True, at the moment it is probably better to work in a low-level language. However, compilers are improving, and the better they get, and the more complicated GPUs get, the bigger the advantage a compiler can have. The main nicety at the moment is development time: there are plenty of interviews with people saying their optimized high-level shader compiled to within one instruction of a hand-optimized low-level version that took them 7x as long to write.

I too am curious about this metafile thing. If it is indeed similar to effects, wouldn’t that belong in GLU (unless they are trying to offer ISVs the option to optimize data changing/flow in those as well)? Either way, I really look forward to it :slight_smile:.

Oh, and a question… Are they actually going to call this OpenGL 2.0? I think that would be a shame, because it really is GL 1.6 or something: just normal GL plus a bunch of programmable stuff… oh, for a nice and clean solution (still cleaner in some respects than DX :wink:).