OpenGL 3 announced

Format objects describe internal formats and the corresponding capabilities, including dimensional limits, allowable usage (e.g. texture, render buffer) and other caps such as filtering and blending.
Will these new format objects allow for reinterpretation of type, like say from D32 to R32, RGBA16X to RGBA16F, or other such “casts”?

The GLSL updates look sweet.

I just hope that, at least on Windows, there will be a new (standard) non-Microsoft-controlled library that contains all the new exports and that can simply be installed (and updated with newer functions), instead of having to get every single >= 3.0 function via GetProcAddress.
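
For anyone who hasn’t had the pleasure, this is roughly what the dance looks like today for every entry point newer than GL 1.1 (just a sketch; the typedefs come from glext.h and the two functions are only examples):

#include <windows.h>
#include <GL/gl.h>
#include <GL/glext.h>

/* Every post-1.1 function has to be fetched by hand, with a GL context
   already current on the calling thread. */
PFNGLGENBUFFERSPROC glGenBuffers = NULL;
PFNGLBINDBUFFERPROC glBindBuffer = NULL;

void load_gl_entry_points(void)
{
    glGenBuffers = (PFNGLGENBUFFERSPROC)wglGetProcAddress("glGenBuffers");
    glBindBuffer = (PFNGLBINDBUFFERPROC)wglGetProcAddress("glBindBuffer");
}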

Even if it’s not directly in ‘Longs Peak’, I think it would be strange to have the ‘clean new API’ that only runs on DX10-class hardware (Mount Evans) intermixed with all the old legacy stuff.

Perhaps you can optionally pass an array, and NULL to use the indices in the VAO?
That would be a really ugly solution, especially the “optionally” part…

That would be a really ugly solution, especially the “optionally” part…
I imagine that the way it works is just like for glDrawElements now.

If the VAO has an index array bound, then the “sizeiptr” is an offset into that index array to start from. So you don’t pass NULL so much as 0. If the VAO has no index array bound, then it reads the indices from the client, and the value is a pointer.
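
In other words, the same two cases glDrawElements already has in 2.x. A quick sketch (assuming the usual headers and loaded entry points; the names are illustrative):

void draw_indexed(GLuint indexBufferObject, GLsizei indexCount,
                  const GLushort *clientIndices)
{
    /* Case 1: an index buffer is bound, so the last argument is a byte
       offset into that buffer (here 0), not a pointer. */
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBufferObject);
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, (const GLvoid *)0);

    /* Case 2: no index buffer is bound, so the last argument really is a
       pointer to indices in client memory. */
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, clientIndices);
}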

{edit}

I just noticed something. Where are the objects that store the scissor and viewport boxes? Or is that in the context?

Originally posted by elFarto:
in/out/inout, much clearer than attribute/varying

Hmmm, I’m not sure I like this. Currently the varying declaration is the same for both vertex and fragment shaders, so you can quickly copy’n’paste between the shaders, or better yet, like I just implemented in my framework, declare the varyings in a separate shared section of the shader file so I don’t need to type them twice. With the new syntax that sharing would no longer be possible, plus I can imagine the number of copy’n’paste errors going up quite a lot: you forget to change “in” to “out” and vice versa when copying between vertex and fragment shaders.
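
To spell out the concern with a made-up example (the variable names are just for illustration): the shared declaration compiles unchanged in both stages today, while with the proposed keywords the two copies have to differ:

/* GLSL 1.10 style: one shared snippet prepended to both shader sources. */
const char *shared_varyings =
    "varying vec3 normal;\n"
    "varying vec2 texCoord;\n";

/* Proposed in/out style: the direction is part of the declaration, so the
   vertex and fragment copies can no longer be identical. */
const char *vs_outputs =
    "out vec3 normal;\n"
    "out vec2 texCoord;\n";
const char *fs_inputs =
    "in vec3 normal;\n"
    "in vec2 texCoord;\n";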

Another thing I noticed:

common blocks - uniform buffers
common myPerContextData {
    uniform mat4 MVP;
    uniform mat3 MVIT;
    uniform vec4 LightPos[3];
    // ONLY uniforms, but…
    // no samplers
    // no int types
    // no bool types
};

I understand the “no samplers” part, but what’s up with the “no int types” and “no bool types”? Are we not supposed to be able to store integer and bool uniforms in a uniform buffer? Either I’m misunderstanding this or the ARB has made a huge mistake.

Originally posted by Korval:
I just noticed something. Where are the objects that store the scissor and viewport boxes? Or is that in the context?
I would expect it to work like in DX10, and thus be in the context.

Are we not supposed to be able to store integer and bool uniforms in a uniform buffer? Either I’m misunderstanding this or the ARB has made a huge mistake.
I understand this.

The presumption with buffer uniforms is that the driver can just copy them directly into the place where they go when the program object gets used.

However, for hardware that does not directly support integers and bools, there would need to be a translation step to convert integers/bools into floats. Since part of the point of uniform buffers is to make uploading fast, there’s no point in allowing uniform buffers to do this.

Well, until Mt. Evans, which will relax this restriction.

Basically, you’ll have to do all the int/bool-to-float conversion yourself.
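
In other words, something along these lines on the application side (the struct layout and names are purely illustrative, not from any spec):

/* The buffer only holds float-typed uniforms; a flag is stored as 0.0/1.0,
   and the shader declares it as a float and tests it against 0.5. */
struct MyPerContextData {
    float mvp[16];          /* mat4 MVP          */
    float lightPos[3][4];   /* vec4 LightPos[3]  */
    float useShadows;       /* logically a bool  */
};

void set_use_shadows(struct MyPerContextData *block, int enabled)
{
    block->useShadows = enabled ? 1.0f : 0.0f;   /* bool-to-float by hand */
}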

Korval is right, and this is the price of supporting GPUs that predate the GeForce 8800.

Originally posted by Korval:
However, for hardware that does not directly support integers and bools, there would need to be a translation step to convert integers/bools into floats. Since part of the point of uniform buffers is to make uploading fast, there’s no point in allowing uniform buffers to do this.

I thought this has been supported since SM 2 in the VS, and SM 3 can do it in both the VS and FS.

So what’s the difference between dx10 and ogl3? Will there be an advantage (besides cross platform) of using ogl3? Will it still have extensions?

Jeremy

So what’s the difference between dx10 and ogl3?
DX10 exposes more features than 3.0. Indeed, 3.0 doesn’t expose features at all; it’s just an API change. Stated otherwise, anything you can do in 3.0 you could do in 2.1.

Mt Evans is where DX10 features will show up.

Will there be an advantage (besides cross platform) of using ogl3?
What, isn’t cross platform enough?

Will it still have extensions?
Almost assuredly.

So what’s the difference between dx10 and ogl3? Will there be an advantage (besides cross platform) of using ogl3? Will it still have extensions?

Jeremy

Sorry, to clarify: what besides being a cross-platform architecture would be an incentive for someone to choose ogl over d3d? Will ogl3 feature things like hardware-accelerated lines, which d3d9/10 do not offer?

Originally posted by Korval:
Stated otherwise, anything you can do in 3.0 you could do in 2.1.

You can do render-to-VBO directly in 2.1?

what besides being a cross platform architecture would be an incentive for someone to choose ogl over d3d?
Like I said, isn’t that enough?

There’s also the fact that it works on all versions of Windows without having to deal with the DX9 limit on XP and the DX10-only Vista.

My feeling is that if you have to ask, you probably wouldn’t understand the answer. That is to say, if you have to ask how much the Ferrari is, you probably can’t afford it.

Originally posted by Korval:
what besides being a cross platform architecture would be an incentive for someone to choose ogl over d3d?
Like I said, isn’t that enough?

There’s also the fact that it works on all versions of Windows without having to deal with the DX9 limit on XP and the DX10-only Vista.

Well, in the past some of the reasons have been:

  1. Cross platform
  2. Backwards compatible
  3. Extensions (nifty new features)
  4. HW accelerated lines (d3d9 does not have this)

So with OpenGL 3.0 I’m only seeing 1 as an answer (maybe 3 as well?).

Originally posted by jkolb:
So what’s the difference between dx10 and ogl3? Will there be an advantage (besides cross platform) of using ogl3? Will it still have extensions?

The hardware is still being driven by DirectX. As long as that is true, OpenGL improvements are basically exposing DirectX functionality to OpenGL. Unless OpenGL becomes the driving force behind the hardware, it will not have major features that DirectX does not, except for being cross platform. Which is more than enough.

OpenGL is still backwards compatible. I believe the new OpenGL 3 stuff will require the program to request an OpenGL 3 context at creation, while it can still request an old-fashioned OpenGL context and keep the OpenGL 2.1 features.
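
Something along these lines is what I would expect at setup time; to be clear, the attrib-list entry point and the two constants below are my own guess at how it might be exposed, not anything from a published spec:

#include <windows.h>
#include <GL/gl.h>

/* Guessed entry point for requesting a specific context version. */
typedef HGLRC (WINAPI *CREATECONTEXTATTRIBSPROC)(HDC, HGLRC, const int *);

HGLRC create_context(HDC dc, int wantGL3)
{
    /* Old-fashioned context creation still works and gives you 2.1. */
    HGLRC legacy = wglCreateContext(dc);
    wglMakeCurrent(dc, legacy);
    if (!wantGL3)
        return legacy;

    CREATECONTEXTATTRIBSPROC createContextAttribs = (CREATECONTEXTATTRIBSPROC)
        wglGetProcAddress("wglCreateContextAttribsARB");   /* guessed name */
    if (createContextAttribs == NULL)
        return legacy;                 /* driver only knows the old way */

    /* Guessed attribute tokens: major version 3, minor version 0. */
    const int attribs[] = { 0x2091, 3, 0x2092, 0, 0 };
    HGLRC gl3 = createContextAttribs(dc, NULL, attribs);
    if (gl3 == NULL)
        return legacy;

    wglMakeCurrent(dc, gl3);
    wglDeleteContext(legacy);
    return gl3;
}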

OpenGL will always have extensions. They are just too useful to drop.

Certainly 3 as well. It’s one of the biggest advantages of OpenGL.

For the next generation of hardware (and with this I mean the next after DX10/Mt.Evans class hardware), there will certainly be extensions. With DX10, it just wouldn’t be possible to use these features, you’d have to wait for the next revision of DX.

And with cross-platform recently also meaning Win2k/XP support, this point gains in importance, too :wink:

And while being only a minor point, I don’t think they will drop hardware accelerated lines in GL3…

The hardware is still being driven by DirectX.
I don’t think that’s actually true. It’s just that DX10 exposed the latest hardware features sooner than OpenGL did. But it was still DX that exposed features of the hardware, not the other way round :wink:

The GF8 was out way before DX10 was released…

it will not have major features that DirectX does not
This will never be the case. The only thing either API can hope for is having the features sooner, but you can pretty much count on the fact that any hardware feature that’s ever going to exist is going to be usable with both APIs eventually.