Commentary on the ARB/OpenGL Newsletter

I’d imagine it’d be pretty simple to create Python bindings for OpenGL extensions with boost::python too. It definitely made embedding and extending our app relatively easy.
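As a rough illustration (the module name and wrappers here are made up, nothing official), exposing a couple of GL calls through boost::python is only a few lines:

// Illustrative sketch: exposing two OpenGL entry points to Python with
// boost::python. The module name "gl_sketch" and the wrappers are invented.
#include <boost/python.hpp>
#include <GL/gl.h>

// Thin wrappers so boost::python sees plain functions with simple types.
void py_clear_color(float r, float g, float b, float a)
{
    glClearColor(r, g, b, a);
}

void py_clear(unsigned int mask)
{
    glClear(mask);
}

BOOST_PYTHON_MODULE(gl_sketch)
{
    using namespace boost::python;
    def("clear_color", &py_clear_color);
    def("clear", &py_clear);
}

From Python that would just be import gl_sketch; gl_sketch.clear_color(0, 0, 0, 1) alongside whatever windowing binding you already use.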

IMHO it should be the responsibility of the scripting language itself to provide GL bindings.

As you said, providing bindings for a single language is no problem. So it should be no problem for the developers of scripting language XY to provide GL bindings if they want their language to be usable for 3D graphics.

On the other hand, the ARB (or Khronos) would have a lot of work providing and maintaining bindings for many different scripting languages.

Originally posted by Overmind:
IMHO it should be the responsibility of the scripting language itself to provide GL bindings.

As you said, providing bindings for a single language is no problem. So it should be no problem for the developers of scripting language XY to provide GL bindings if they want their language to be usable for 3D graphics.

On the other hand, the ARB (or Khronos) would have a lot of work providing and maintaining bindings for many different scripting languages.
It’s always good to push the responsibility to someone else… The developers of the various languages have plenty of work themselves without taking on wrapping all the APIs out there.

Khronos members, too, have plenty of work to do just on their core APIs and implementations.

So who should be responsible? Well, if Khronos wants an SDK that supports the developer ecosystem then supporting non-C languages is one place to start. However, I don’t believe the SDK should be the sole responsibility of the Khronos members; we, the OpenGL developer community, should help out. Developing the SDK as an open source project would fit the bill.

One benefit of bringing other languages into the mix is testing: it’s very easy to write and maintain test scripts.

Note, I say this as a once die-hard C++ advocate.

If you take Python as an example, you’ll see that it provides a lot of bindings for native C libraries. They are supplied either by the core Python package or by some independent project, but not by the C library itself. Why should it be different for OpenGL?

Of course it would be convenient to just put the load on Khronos, but I’d rather see them invest their time in more important things and leave the trivial things to someone else. If the Python community wants OpenGL, let the Python community provide bindings.

By the way, there are actually Python GL bindings, and I’m sure there are plenty of GL bindings for other scripting languages. I really don’t see the need for an “official” binding supported by OpenGL; bindings supported by the respective languages are more than enough.

For those of us not able to come to SIGGRAPH, it would be nice to have a webcast of the OpenGL BOF.

Originally posted by Overmind:
If you take for example Python, you’ll see that it provides a lot of bindings for native c libraries. They are supplied either by the core Python package, or by some independant project, but not by the c library itself. Why should it be different for OpenGL?
I for one would like to see alpha releases of OpenGL 3.0 out early, following the “release early, release often” approach that works well for open source projects.

For outside projects it would be difficult to track this, and one shouldn’t expect them to. However, if the support is automated, which is easy to do, then it’s not a great burden to run a script to build the bindings and then publish them; this could be done by Khronos or by those closely following OpenGL 3.0 development.

I believe the SDK should encompass this, perhaps even including the scripts for building the wrappers. Again, the SDK shouldn’t just be Khronos members developing; it should encourage wider contributions. It should be an open source project.

If Khronos wants to promote OpenGL and its other APIs then it needs to work on end users’ terms; it should serve and work closely with the developer community. C and C++ developers are now in the minority; one has to sell to users of the other languages too.

I am willing to pitch in and help out with an OpenGL SDK and to test out OpenGL 3.0, and I’m sure others would be more than willing to as well. I don’t have a huge amount of free play left in my schedule, but with important things you just have to go the extra mile. With my own project, the OpenSceneGraph, being a pure OpenGL based library, the success of the OpenSceneGraph in the future depends upon the success of OpenGL.

Robert.

The GLSL object model was a relatively incremental change over the legacy model and did not change enough to justify carrying the two models. The GL3 model is dramatically different.
Something I meant to ask about this. Exactly what is it about the old glslang object model that is different from the GL 3.0 model? What does the new one do that the old one couldn’t?

The GameDev article mentioned these four points:

* Object ID is a pointer instead of a uint (typedef void* GLobject)
* Object IDs are always generated by implementation
* Object manipulators take an explicit parameter (no more binding)
* Objects are immutable

All of these were served by the old glslang object model.
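For reference, the style being referred to is the ARB_shader_objects API, where the implementation hands back a handle and every manipulator names it explicitly (whether that really satisfies all four points is disputed just below):

// Legacy glslang (ARB_shader_objects) usage: handles come from the
// implementation and each call takes the handle as an explicit parameter.
GLhandleARB vs = glCreateShaderObjectARB(GL_VERTEX_SHADER_ARB);
const char* src = "void main() { gl_Position = ftransform(); }";
glShaderSourceARB(vs, 1, &src, NULL);   // explicit handle, no bind-to-edit
glCompileShaderARB(vs);

GLhandleARB prog = glCreateProgramObjectARB();
glAttachObjectARB(prog, vs);
glLinkProgramARB(prog);
// Note: GLhandleARB is an unsigned int in the standard glext.h,
// not an opaque pointer - one of the points disputed below.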

I think the key points here are:

  1. Everything will be in an object, not a mix between state based and object based.
  2. The draw commands will have an object parameter, too. There will be no “current” object for drawing.
  1. Everything will be in an object, not a mix between state based and object based.
  2. The draw commands will have an object parameter, too. There will be no “current” object for drawing.
That doesn’t explain the need for a new model; that’s more about how the API uses the object model. You could just as easily use GLhandleARB objects for such APIs as this new GLobject. Indeed, you could use the old texture-object model with such an API (not that I’m suggesting that it would be a good idea). The two concepts are separate from one another.
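Concretely, the contrast is between today’s bind-to-edit pattern and a call that names its object directly; the explicit-parameter call below is an invented name, not a real GL 2.x entry point:

// Today's texture-object model: the ID is a GLuint generated by the
// implementation, but editing goes through a "current" binding.
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

// A hypothetical explicit-parameter version (invented for illustration,
// not part of GL 2.x) could take the same GLuint name directly:
//   glTextureParameteri(tex, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
// i.e. "no more binding" doesn't by itself require a new handle type.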

Originally posted by Korval:
The GameDev article mentioned these four points:

* Object ID is a pointer instead of a uint (typedef void* GLobject)
* Object IDs are always generated by implementation
* Object manipulators take an explicit parameter (no more binding)
* Objects are immutable

All of these were served by the old glslang object model.
Your claim is inaccurate.

GLSL “handles” are uint, which means they cannot be opaque pointers (think 64-bit).

GLSL objects may be modified arbitrarily. A key property of the new object model is immutability.

The GLSL object model did nothing to modify the sharing mechanism.

GLSL “handles” are uint, which means they cannot be opaque pointers (think 64-bit).
They were? I could have sworn they were void*'s. Or maybe I was just thinking that they should have been…

Oh well.

Speaking of 64-bit, how is OpenGL going to handle that? I mean, GLints are ints, which are defined by ANSI C to be the native word size, but defined by Microsoft to be 32 bits (for good reason, mind you). Is OpenGL 3 or some new GL header or something going to have some way of specifying a 32-bit integer?

GLSL objects may be modified arbitrarily. A key property of the new object model is immutability.
What does it mean for the objects to be immutable?

The GLSL object model did nothing to modify the sharing mechanism.
Fair enough. Does the new object model make some effort to differentiate between shareable objects and non-shareable objects without having to look up which is which (like having unshared objects be of type GLUnsharedObject rather than GLObject)?

Originally posted by Korval:
Speaking of 64-bit, how is OpenGL going to handle that? I mean, GLints are ints, which are defined by ANSI C to be the native word size, but defined by Microsoft to be 32 bits (for good reason, mind you). Is OpenGL 3 or some new GL header or something going to have some way of specifying a 32-bit integer?

Table 2.2 of the OpenGL 2.0 specification defines GLint as a minimum of 32 bits. An implementation is free to use a larger native size, but in practice I imagine most vendors will define types to match the specified precision. Microsoft’s gl.h defines GLint as int, so as long as the compiler considers int 32-bit, that’s what it will be on Windows.

GLint is likely to remain 32-bit for the foreseeable future.

By the way, the EXT_timer_query extension introduces new types: GLint64EXT and GLuint64EXT. These are guaranteed to be 64-bit minimum.
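If you want to be explicit about those width assumptions in your own code, a cheap compile-time check documents them (a small sketch, nothing GL3-specific; the typedef trick is just a pre-C++11 static assert):

#include <GL/gl.h>
#include <GL/glext.h>   // GLint64EXT / GLuint64EXT come from EXT_timer_query

// Fails to compile if the headers don't provide the expected minimum widths.
typedef char assert_glint_at_least_32bit[sizeof(GLint) * 8 >= 32 ? 1 : -1];
typedef char assert_glint64_at_least_64bit[sizeof(GLint64EXT) * 8 >= 64 ? 1 : -1];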

What does it mean for the objects to be immutable?
It means the structural properties of the object may not be modified after creation. For example, the current model allows you to redefine the dimensions of an existing texture object by calling TexImage or CopyTexImage. There is no good reason to allow this and it’s a burden on the implementation, so we’re going to disallow it in the new model.
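Concretely, the kind of redefinition being ruled out is something like this, which is perfectly legal in GL 2.x today:

// Legal under the current model: the same texture object is redefined
// with new dimensions by a second TexImage call.
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 256, 256, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);    // 256x256
// ... later ...
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 1024, 1024, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);    // same object, now 1024x1024
// Under the immutable model the size/format are fixed at creation;
// only the contents (TexSubImage-style updates) would remain mutable.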

Fair enough. Does the new object model make some effort to differentiate between shareable objects and non-shareable objects without having to look up which is which (like having unshared objects be of type GLUnsharedObject rather than GLObject)?
No; shareability is a per-object property. You cannot distinguish between a shared and a non-shared object without querying this property, or remembering that you set it in the first place. No objects are shared by default; you must specify this immutable property when you create the object.
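For contrast, the only sharing control GL 2.x gives you today is per context pair through the window system, with no per-object choice at all; on Windows, for example:

#include <windows.h>
#include <GL/gl.h>

// GL 2.x: sharing is decided for whole contexts at setup time via WGL.
void createSharedContexts(HDC hdc, HGLRC& ctx1, HGLRC& ctx2)
{
    ctx1 = wglCreateContext(hdc);
    ctx2 = wglCreateContext(hdc);
    wglShareLists(ctx1, ctx2);   // every shareable object is now shared
}
// The new model instead attaches the shared/non-shared choice to each
// object when it is created, and that choice can never change afterwards.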

Hi,
at first I have to say how much I like the new newsletter idea. I hope it holds to the quarterly timescale.

Even if there was nothing totally new in there, it is good to see that OpenGL is alive and that many things are being worked on.

On the concern with the new object model: sure, it will be a big extension, but I have to bring up my thoughts again about doing a split development of the API: a ‘pure’ OpenGL 3.0 LM profile developed in parallel with the OpenGL 3.0 standard profile (OpenGL 2.x), maybe leaving out the parts that would be too hard to make backwards compatible. This way it could be made sure that no compatibility concerns screw up the new API.

So now keep up the good work, OpenGL… ARB… or Khronos… :wink:

No objects are shared by default; you must specify this immutable property when you create the object.
I see. The GameDev article said that large objects (textures, buffer objects, etc) would be shared but small objects wouldn’t. I’m glad to see you just decided to let us decide at creation time if they would be shared.

Is there an expected performance penalty or memory overhead to shared objects? Like, do shared objects (because the user asked for them to be shared) have mutexes on them that allow them to be safely used in multithreading cases, so you should only share when you mean it?

Originally posted by Korval:
Is there an expected performance penalty or memory overhead to shared objects? Like, do shared objects (because the user asked for them to be shared) have mutexes on them that allow them to be safely used in multithreading cases, so you should only share when you mean it?
This is an implementation detail but there are at least two possible costs to sharing objects:

  • Mutex overhead
  • Keeping object state in sync if sharing requires multiple internal copies

The need to share objects is relatively infrequent but everyone has to pay for the fact that objects could be shared. We like this model where the sharing granularity is per-object because it gives you the flexibility you require while allowing us to avoid the overhead when no sharing is required.
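To make the “everyone pays” point concrete, here is a toy sketch (plain C++, not GL) of why a per-object, creation-time flag helps: only objects that declared themselves shared ever need to take a lock:

#include <mutex>

// Toy illustration only: an object created as "shared" carries a mutex and
// pays a lock on every state change; a non-shared object never does.
struct SharedCapableObject {
    const bool shared;      // immutable property, fixed at creation
    std::mutex lock;        // only ever taken if 'shared' is true
    int refCount;

    explicit SharedCapableObject(bool isShared) : shared(isShared), refCount(0) {}

    void addRef()
    {
        if (shared) {
            std::lock_guard<std::mutex> guard(lock);  // cross-context safety
            ++refCount;
        } else {
            ++refCount;                               // no synchronization cost
        }
    }
};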

I’d like to see some sample programs written in this proposed API.

Patience, Grasshopper. :slight_smile:

gold, is the spec going to attempt to define the exact behavior of shared objects (beyond the idea that ‘they can be shared’)? My understanding is that the existing spec mostly leaves this open to the implementation.

Specific behavior questions, assuming that ctx1 edited a shared object, and ctx2 wants to use it:

  • What synchronization is required on ctx1? I assume the common answer is glFinish, although this isn’t true on all platforms today.

  • What operations, if any, may be required on ctx2 for the changes to become visible?

  • Will it be unsafe to modify an object in ctx1 while simultaneously using it from another context (beyond the specific order of execution being undefined)? I.e., will the implementation have to internally synchronize on every read/write, or can this be considered a potentially fatal App Error, to preserve performance?
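For reference, here is the pattern as it is usually written against GL 2.x today; whether GL3 will require exactly these steps is the question above (the consumer-side re-bind is needed on some implementations):

#include <GL/gl.h>

// Producer side: ctx1 is assumed current on this thread.
void updateOnCtx1(GLuint sharedTex, const void* newPixels)
{
    glBindTexture(GL_TEXTURE_2D, sharedTex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 256, 256,
                    GL_RGBA, GL_UNSIGNED_BYTE, newPixels);
    glFinish();   // commonly assumed to make the edit visible to other contexts
    // ...then signal the consumer thread with your own sync primitive...
}

// Consumer side: ctx2 is assumed current on this thread, signal received.
void useOnCtx2(GLuint sharedTex)
{
    glBindTexture(GL_TEXTURE_2D, sharedTex);   // re-bind to pick up the changes
    // ...draw with the updated texture...
}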