A couple of suggestions

OK, here are a few suggestions for later revisions of the API. These are mostly things that I think are long overdue, or that could be handled in a better way.

1) All objects (buffers, textures, etc.) should be handled, initialized and configured without binding semantics.

A bit of clarification: I understand the optimizations achieved with first-bind setup, so it should remain an option with creation. However, the state of the buffers/textures should be handled on a per-object basis, not on a per-binding basis.

I suggest this change for flexibility, especially with texture sampling states, which are handled really inadequately in my opinion. This can be done relatively easily by changing the “target” arguments on various state functions to object names (IDs).
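As a sketch of what that change would look like (this is essentially the direction the EXT/ARB direct state access extensions later took; glTextureParameteri below is the GL 4.5 form, with tex an existing texture name):

/* Bind-to-edit (classic GL): the state call acts on whatever object
   happens to be bound to the target, which is hidden global state. */
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

/* Per-object (DSA style): the object name replaces the target
   argument, so no binding is required just to configure the object. */
glTextureParameteri(tex, GL_TEXTURE_MIN_FILTER, GL_LINEAR);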

2) The default framebuffer should NOT exist at all.

Clarification: The concept of a unique, preset framebuffer that is required to render to the screen is inefficient for many effect techniques. A better approach would be to allow any framebuffer to designate ONE of its color buffers, regardless of whether it is a texture or a renderbuffer, as the screen output. I am fairly confident that this can be handled more efficiently in the driver than by having the application rebind and blit between framebuffers.

Of course, given this change, the default framebuffer could still exist, just as long as it is not the only method of screen rendering.
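For comparison, here is the round trip the current model forces on off-screen effects, sketched with glBlitFramebuffer (core since GL 3.0); scene_fbo, draw_scene, width and height are assumptions for the sake of the example:

/* Render into an off-screen FBO, then copy the result into the
   default framebuffer, since only the latter reaches the screen. */
glBindFramebuffer(GL_FRAMEBUFFER, scene_fbo);
draw_scene();                               /* application-defined */
glBindFramebuffer(GL_READ_FRAMEBUFFER, scene_fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);  /* 0 = default framebuffer */
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);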

3) There should be better integration between the OpenGL and OpenCL pipelines, preferably by implementing OpenGL as a "subset" of OpenCL (could be in the driver only, just as long as there are no conflicts/flushes when switching APIs).

This could lead to some interesting ideas, such as something like the D3D “compute shaders”. Pretty useful stuff, it would be nice to see it done in a completely integrated manner.

If I am suggesting something impossible/stupid, please explain the issue. I do think, however, that these changes are possible and could lead to a more modern and powerful OpenGL.

The target argument is actually pretty useless in most cases now, and I agree that first-bind setup is not the best solution, though the former does not introduce that many issues, and the latter is already being addressed by the gradually integrated DSA semantics.

As for texture sampling states, those are already part of sampler objects rather than texture objects, so that was a poor example.
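For reference, sampler objects (core since GL 3.3) already decouple sampling state from the texture object; a minimal sketch:

/* A sampler object carries filtering/wrapping state on its own;
   binding it to a texture unit overrides the texture's built-in state. */
GLuint sampler;
glGenSamplers(1, &sampler);
glSamplerParameteri(sampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glSamplerParameteri(sampler, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glBindSampler(0, sampler);  /* applies to texture unit 0 */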

I agree that these issues have to be targeted but I don’t share your vision about how to do it.

The problem with totally removing the concept of a default framebuffer is the windowing systems. WGL, as an example, requires a default framebuffer AFAIK, and I don't see this being improved, considering Microsoft has no reason to improve the WGL API.

Again, I agree that the default framebuffer should not be handled much differently from user-created FBOs; as an example, it would be crucial to be able to use the default framebuffer's attachments as textures or as attachments to user FBOs. But your approach is still not the best way to handle this issue.

OpenCL, at least in theory, is already something similar to D3D compute shaders, and you can interoperate with OpenGL pretty well, considering you can share resources and synchronize between OpenGL and OpenCL operations. The problem is rather with the implementations, as AFAIK the synchronization between the APIs is currently not as efficient as it could be, because they don't use the same underlying scheduler to dispatch the workload to the GPU. This is something to be addressed by vendors, not by the specification.
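A sketch of the sharing path mentioned above, using cl_khr_gl_sharing; error handling is omitted, and context, queue and gl_vbo (a CL context created against the GL context, a command queue, and an existing GL buffer name) are assumed to exist:

/* Wrap an existing OpenGL buffer as an OpenCL memory object, then hand
   it to OpenCL between explicit acquire/release synchronization points. */
cl_int err;
cl_mem clbuf = clCreateFromGLBuffer(context, CL_MEM_READ_WRITE, gl_vbo, &err);

glFinish();  /* make sure pending GL work on the buffer is done */
clEnqueueAcquireGLObjects(queue, 1, &clbuf, 0, NULL, NULL);
/* ... enqueue kernels that read/write clbuf ... */
clEnqueueReleaseGLObjects(queue, 1, &clbuf, 0, NULL, NULL);
clFinish(queue);  /* make the results visible to OpenGL again */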

I think having OpenGL as part of OpenCL, or the other way around, would be not just redundant but also awful from a design point of view. Personally, I don't like this design decision in D3D: why does one need to create a graphics context in order to perform general-purpose computing on the GPU?

Your suggestions are good, but you should have rather focused on what you want to solve instead of how.

Yeah, other people agree with you; that's why there are extensions such as http://www.opengl.org/registry/specs/EXT/direct_state_access.txt. But integrating that kind of extension into core OpenGL will be quite a major change, and so far they've only done it for new objects rather than existing ones.

The wglCreateContextAttribsARB function already supports this. However, as I understand it from the issues listed in the extension document, the hDC passed to it is used to identify the driver: if it were passed a DC relating to an NVIDIA card, then NVIDIA's drivers would be used to create the context; if it were a DC relating to an ATI card, then ATI's drivers would be used.

If you were to try to create a context without specifying the hDC, then you'd run into problems if NVIDIA's drivers were used to create the context and you then started trying to draw to a DC owned by ATI's drivers.

To get around this, a new extension, WGL_EXT_platform, could be added, which would have a few entry points for querying platform abilities etc.

int wglGetPlatformIDFromDC(HDC hDC);

// then either functions similar to what is provided in OpenCL:
int glGetPlatformIDs(uint num_entries, int *platforms, uint *num_platforms);
int glGetPlatformInfo(int platform, enum param_name, sizei param_value_size,
                      void *param_value, sizei *param_value_size_ret);

// or query platform info using existing functionality (glGetIntegerv/glGetString etc), and just define a couple of new enums:

// would also need a new attribute defined for xxxCreateContextAttribsARB

To create a context without a default framebuffer, you would get a platform ID, or provide some special value (e.g. -1 or 0) if you are only ever going to be rendering off-screen, then plug this value in as an attribute to xxxCreateContextAttribsARB.

Querying platform:

// from existing DC
platform_id = wglGetPlatformIDFromDC(hdc);


// using OpenCL style queries
GLuint num_platforms;
GLint platform_ids[16];
glGetPlatformIDs(0, NULL, &num_platforms);
glGetPlatformIDs(16, platform_ids, NULL);
int i;
for (i = 0; i < num_platforms; i++) {
  glGetPlatformInfo(platform_ids[i], GL_PLATFORM_NAME, 0, NULL,
                    &platform.name_length);
  /* make sure enough space is allocated for platform.name */
  glGetPlatformInfo(platform_ids[i], GL_PLATFORM_NAME, platform.name_length,
                    platform.name, NULL);
  if (suitable(platform))
    platform_id = platform_ids[i];
}


// GL style queries
GLint num_platforms;
GLint platform_ids[16];
glGetIntegerv(GL_NUM_PLATFORMS, &num_platforms);
/* make sure enough space is allocated for platform_ids */
glGetIntegerv(GL_PLATFORMS, platform_ids);
int i;
for (i = 0; i < num_platforms; i++) {
  platform.name = glGetStringi(GL_PLATFORM_NAME, i);
  if (suitable(platform))
    platform_id = platform_ids[i];
}

Then plugging this value into xxxCreateContextAttribsARB:

attribs[0] = {W}GL_PLATFORM;
attribs[1] = platform_id;
attribs[...] = ...;
attribs[n] = 0;
wglCreateContextAttribsARB(0, 0, attribs);

This would give you a context created on whatever platform you chose. You would, however, need an initial OpenGL context to query whether the extension exists in the first place, and some functions for querying devices, similar to OpenCL's, might be handy too.

When I looked at how to interoperate between OpenCL and OpenGL, I thought that if buffer objects had been extracted out of both specs to create a base buffer object specification, it might have made the interactions a bit neater, but I haven't given it much further thought.

Textures still have their sampling states; they are overridden by a sampler object only when you have one bound.

Normally the sampler states are best kept with the textures, because they have a usage relation: most often one uses a given texture with the same sampler states all the time.
It's only in some fairly unusual algorithms that one may need to use a single texture with different sampling states at the same time. Drivers can do a better job than the app at switching the sampler states if/when needed for their hardware.