Stateless texture specification and access

Default values are not a good idea. It is lazy programming, IMO.

All programming is lazy. The drive towards laziness is why programming languages other than assembly exist. It’s why scripting languages exist. It’s why garbage collection exists.

Having sensible default parameters, particularly for a function that effectively takes dozens of parameters, is entirely reasonable. It saves unnecessary keystrokes and prevents a class of runtime errors (namely, getting those values wrong when typing them all in by hand).

What do you mean by “long” and 32-bit and 64-bit?

If memory serves correctly, and that is a BIG FREAKING IF, the keyword “long” means a 32-bit integer under some 64-bit compilers (LLP64, e.g. 64-bit Windows) and a 64-bit integer under others (LP64, e.g. most 64-bit Unix systems).

A structure is a structure in C and C++. Ditto for VB and Java.

All a structure is, is a bunch of bytes, right? The ugly part is the freaking padding. Admittedly, for a fixed hardware platform, how something should be padded is pretty well fixed, but alas… I am paranoid. The main icky I have with a struct is extending it later… though extending is not really possible either way if one is not using structs to begin with :whistle: Also, if memory serves correctly, a long, long time ago, like when GL was at version 1.1, some C compilers did not let one pass structs to a function (though at this day and age that is a moot point entirely, since the C language spec mandates that structs can be passed to functions). At any rate, the tipping point from using several arguments to using a struct is a matter of taste… and for my taste, I freely admit that the jazz I wrote at the start of this post does not get there, but it is awful close.

On the other hand, if someone really wanted to use structs, one could make a macro or an inline function that takes the values from a struct and makes the GL call.
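Something along these lines, say (a minimal sketch; the struct and the wrapper name are made up, and the GL itself never sees the struct):


typedef struct {
    GLenum      target;
    GLint       level;
    GLint       internal_format;
    GLsizei     width, height;
    GLint       border;
    GLenum      format, type;
    const void *pixels;
} GLTexDesc2D;

/* unpack the struct into the real entry point */
static inline void texImage2Dv(const GLTexDesc2D *d)
{
    glTexImage2D(d->target, d->level, d->internal_format,
                 d->width, d->height, d->border,
                 d->format, d->type, d->pixels);
}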

Default values are not a good idea. It is lazy programming, IMO.

I am on the fence on this; it just depends on the situation to me… often enough, a fair number of parameters are going to be the same almost every time… so those are good candidates for default values (witness much of the GLX and EGL APIs, for example).

Why would you care about “long” in OpenGL when it defines its own types?

Why would you care about “long” in OpenGL when it defines its own types?

You are right: the GL header files have a system of macro magic to pick the right underlying type for 64-bit integers, and even at that there is no “GLlong” type. GLint is specified to be a 32-bit integer, the 64-bit integer types are labeled GLint64/GLuint64, and GLsizeiptr is (correctly) typedefed as ptrdiff_t. So, yes, epic idiot post on my part on the 64-bit thing…
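For reference, the relevant typedefs resolve to roughly the following (a sketch; the exact macro plumbing varies between platform headers):


typedef int              GLint;      /* always 32-bit, no matter what "long" is */
typedef unsigned int     GLuint;
typedef khronos_int64_t  GLint64;    /* the explicit 64-bit types; no "GLlong" */
typedef khronos_uint64_t GLuint64;
typedef ptrdiff_t        GLsizeiptr; /* pointer-sized, for buffer offsets/sizes */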

Naked structs are evil because they are nigh impossible to port across architectures, compilers, programming languages and operating systems. The issues are very real and impossible to solve in a general manner - which is why every programming manual worth its salt will explicitly warn against naked fields in structures.

The only sane solution would be to define a new object type with get/set functions to hold the necessary data:


GLTexObject* texture = glGenTextureObject(); // or a plain int for symmetry with older APIs
glSetTextureType(texture, GL_TEXTURE_2D);
glSetTextureWidth(texture, 256);
glSetTextureHeight(texture, 256);
...
glCreateTexture(texture);

This translates perfectly to most object-oriented languages in use today. E.g. C#:


var texture = new Texture
{
    Type = TextureType.Texture2d,
    Width = 256,
    Height = 256
};

I wholeheartedly agree. ALL object types should be opaque pointers. It’s not just object sizes and portability, but also future extension support, guaranteeing users can’t attempt to construct objects manually, and implementation flexibility.

I mean, NVIDIA’s idea of a texture object will probably look different than AMD’s, which will look different than Mesa 3D’s, which will look different than Intel’s, which will look different than Imagination Technologies’, etc. etc.

If you try to define the object as some “obvious” common fields then you’re forcing them to be near-useless proxy objects with a ton of overhead, and you’re right back to having the retarded GLint-based object ids, except now you’re stuck with a single definition of the object’s properties forever. Using accessor functions gives the implementation the ability to structure and supplement the core object members however it needs to.

Opaque pointer types and accessor functions is the only way to go.

The only big issue with the opaque pointer types versus object ids is that in C there is no way to represent inheritance of object types. It would be great if you could have GLtexture* variables automatically convert to GLresource* variables so you could have a single set of ref/unref/lock/unlock/delete functions rather than needing GLTextureDelete, GLBufferDelete, etc. Especially for textures it’d be nice to have GLtexture2D that converts automatically to GLtexture. But that’s just not possible in C, and at best you need macros/functions that convert from one type to another, e.g. GL_TEXTURE2D_TO_RESOURCE() and the like. Or you just need to duplicate functions. However, since OpenGL already forces you to duplicate functions because it uses absolutely zero of the advantages that the GLint id system could have bought, it’s kind of a moot point. Get rid of the disadvantages of the GLint ids and just use opaque pointers. Please.
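For what it’s worth, the closest C gets is embedding the base object as the first member, so the “upcast” is just a field access. A hypothetical sketch (no real driver exposes anything like this):


typedef struct GLresource { int refcount; /* common state */ } GLresource;
typedef struct GLtexture  { GLresource base; /* texture-specific state */ } GLtexture;

/* "convert" by taking the address of the embedded base */
#define GL_TEXTURE_TO_RESOURCE(tex) (&(tex)->base)

/* one set of functions then serves every resource type */
void glResourceRef(GLresource *r)   { ++r->refcount; }
void glResourceUnref(GLresource *r) { if (--r->refcount == 0) { /* free it */ } }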

No, it isn’t.

This reminds me of the argument I once heard from an idiot who claimed that object inheritance was a useless feature because it was just language bloat to work around cut-n-paste.

The problem is, we’re far less concerned about “saving programmer time and effort” and far more concerned about “increasing code quality and efficiency.”

The more crap the programmer has to type in, the more room there is for mistakes and bugs. The more code that is duplicated, the larger the code size of the application and the slower it runs on modern systems. The more duplication of logic, the more likely that a duplicated bug gets fixed in some places and not all of them. Etc.

Same goes for techniques like garbage collection. The idea that garbage collection does nothing more than remove the need to manually manage memory is completely incorrect. It can be used in places where manual memory management is feasible, but it can also be used in places where manual memory management just isn’t possible, or is so error-prone as to be idiotic to attempt. Most applications written in C/C++ have some form of garbage collection anyway, although note that “automatic garbage collection” does not mean “mark and sweep garbage collector built into the language.” Reference counting smart pointers in C++ are an automatic garbage collection implementation, after all.
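To make the reference-counting claim concrete: even a bare-bones reference count takes the “when do I free this?” decision away from the user. A minimal C sketch (not any particular library):


#include <stdlib.h>

typedef struct { int refs; /* payload goes here */ } Obj;

Obj *obj_new(void)     { Obj *o = calloc(1, sizeof *o); if (o) o->refs = 1; return o; }
Obj *obj_ref(Obj *o)   { ++o->refs; return o; }         /* share ownership */
void obj_unref(Obj *o) { if (--o->refs == 0) free(o); } /* last owner frees */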

That said, I agree that defaults are good and make sense. I don’t like default arguments to functions in languages like C++, but then I don’t like functions that have enough arguments to need defaults. If we’re talking about constructing objects via multi-function APIs that configure values before construction, then every value that can possibly have a default should have one.

Naked structs are evil because they are nigh impossible to port across architectures, compilers, programming languages and operating systems. The issues are very real and impossible to solve in a general manner - which is why every programming manual worth its salt will explicitly warn against naked fields in structures.

I’ve never seen this advice in a book before.

Oh, they’ll tell you not to write them to disk or send them over the Internet. But that’s the only portability problem with structs; using “naked fields” in a struct within a codebase is perfectly functional.

What you may be confusing is encapsulation, which has nothing to do with portability. It has to do with maintenance (which is at issue here). It allows you to change the representation of an object without changing its interface. You shouldn’t expose members of real objects because at some later date you might want to change those fields. And then everyone using them would be screwed.

Also, OpenGL has not been ported to non-C languages. Oh, the API has adapters for many languages, but they’re all just layers on top of the actual C API. There is no native Python OpenGL API; IHVs don’t have hooks for getting at a C# implementation of OpenGL.

If OpenGL had structs, it would work no differently. Other languages would simply find ways to adapt. JavaScript had to have new objects (typed arrays) created for WebGL to be used as source arrays for buffer objects.

The only sane solution would be to define a new object type with get/set functions to hold the necessary data:

And now, finally, we’ve come full circle. Because this exact sort of thing, these attribute objects, is what was going to be a key feature of Longs Peak.

Before the ARB put a bullet into it.

Or, to put it another way, it’s not going to happen. The ARB will not rewrite the texture creation and allocation API. They’ve tried it twice, and both times it died.

Wanting it to happen will not change that. Writing large posts that detail specific ideas will not change that. Writing your own OpenGL 5.0 specification will not change that.

We all want a better API. But the ARB has made it plainly evident that we’re not getting one. All we will ever get are incremental improvements to functionality. The absolute most we might get is DSA, but even that’s pushing it.

The problem is, we’re far less concerned about “saving programmer time and effort” and far more concerned about “increasing code quality and efficiency.”

Except that we’re not. Code isn’t getting more efficient; it’s getting less efficient. Things like Java/C#, scripting languages, etc. are all less efficient than C/C++. But people use them. Because it’s easier.

Why does “code quality” matter? Because someone has to maintain that code, and it is easier to maintain clean code than ugly, difficult-to-understand code. It’s easier to write and debug clean code as well.

In short, laziness: wanting to make things easier on ourselves.

Now perhaps you have a problem with the term “laziness,” equating it with a negative. But it is still accurate: we want to have to do as little work as possible, so we use the languages that allow us to do as little work as possible.

Reference counting smart pointers in C++ are an automatic garbage collection implementation, after all.

No, they really aren’t. I’ve never heard anyone equate reference-counted smart-pointers to actual garbage collection before. They are not the same thing.

Garbage collection is not a generic term for any automatic memory management system. It refers to a specific scheme for dealing with memory automatically, one that is not intrusive and generally does not require writing extra code (with the exception of possibly having weak references).

And now, finally, we’ve come full circle. Because this exact sort of thing, these attribute objects, is what was going to be a key feature of Longs Peak.

Before the ARB put a bullet into it.

Or, to put it another way, it’s not going to happen. The ARB will not rewrite the texture creation and allocation API. They’ve tried it twice, and both times it died.

Wanting it to happen will not change that. Writing large posts that detail specific ideas will not change that. Writing your own OpenGL 5.0 specification will not change that.

We all want a better API. But the ARB has made it plainly evident that we’re not getting one. All we will ever get are incremental improvements to functionality. The absolute most we might get is DSA, but even that’s pushing it.

OUCH. Might be true too; that is what makes the OUCH that much more painful. Though the only ones that know what happened to Longs Peak are the ARB/Khronos. Maybe it was shot down because it was too much too soon, or maybe something else. We can only guess :sick:

Oh well, I will still (eventually) clean the spec up… just so horribly busy now.
Sighs.

I wouldn’t take it so hard, kRogue. There will always be two camps of developers: those that are happy to flush the API down the toilet every few years in search of “the next great thing” (just because Microsoft says so), and those that are too busy adding useful features and perf enhancements for the latest GPUs to their company’s product lines to waste time/money/effort on that nonsense and the expensive rewrites and maintenance costs that result (not to mention orphaning customers on older hardware).

Direct3D is perfect for the former group. OpenGL is perfect for the latter.

For the latter to put up with a full API flush and restart, it has to offer something really revolutionary (different and compelling, worthy of a totally new model). Besides OpenCL/CUDA/Compute (which are their own new APIs), GPU tech has pretty much just been evolutionary since SM4/GL3.

(Re compelling: Don’t just tell me, but show me that this new model halves my frame times [or better] without a hardware change, letting me push a bunch more and/or more realistic content to our users, and you’ve got my attention. …Hmmm… reminds me of NVidia bindless…)

Why would users construct objects manually?

No, they won’t look different. We aren’t going to access some objects that live behind the scenes inside the driver; that’s not the point of having a “struct” for textures. You missed the point entirely.

Overhead? Well, we certainly would not want our programs to run at 1 FPS just because we introduced the concept of a “struct”.
Accessor functions? Look at Alfonse’s post.

You are overcomplicating it.

Also, like I said in my post, we would have different versions of the texture struct if a new feature is to be introduced with a new GL version. See my older post for details.

As for defaults, they would disappear if the struct were introduced; you would be forced to set up each member. IMO, that is a good thing: there isn’t a huge number of variables associated with a texture, but there is a good enough number of them to warrant grouping them together and making a single call to a glCreateTexture function.

I have actually been involved with designing a C rendering API. When we started, the first slides on Longs Peak had just come out, introducing Templates plus Get/Set functions. We embraced this mechanism and use it with great success.

Also, like I said in my post, we would have different versions of the texture struct if a new feature is to be introduced with a new GL version. See my older post for details.

A “versioned” struct would be pure hell. How do you mix and match extensions with versioned structs? Mind you, a specific version of the struct would refer to a specific (binary) layout, so each combination of mixed extensions would need its own struct version. The binary layout of the struct becomes a contract that both the driver and the application have to match; otherwise you get crashes. You certainly don’t want that.
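To illustrate what “binary layout as contract” means, consider some hypothetical versioned layouts:


/* driver and application must agree on which layout a given pointer
   refers to, or somebody reads the wrong bytes */
typedef struct { GLenum target; GLsizei width, height; } GLtexDesc_v1;
typedef struct { GLenum target; GLsizei width, height;
                 GLsizei depth; } GLtexDesc_v2;           /* GL 5.1 adds depth */
typedef struct { GLenum target; GLsizei width, height;
                 GLsizei depth, samples; } GLtexDesc_v3;  /* GL 5.2 adds samples */
/* ...and every mix of extensions would need yet another layout */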

Extensibility is one of the key features that opaque structs offer. OpenGL is all about using extensions and mixing them, sometimes even configured at runtime. Binary layouts of structs are not runtime-configurable.

The Get/Set mechanism also allows the driver to check for valid values while you are filling in the Template, which greatly helps with error detection.
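For example, with the setter-style calls sketched earlier in the thread (the names are still hypothetical), a bad value can be flagged the moment it is set:


glSetTextureWidth(texture, -5);       /* hypothetical setter from the proposal above */
if (glGetError() == GL_INVALID_VALUE)
{
    /* the mistake is caught here, not later at creation time */
}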

As for defaults, they would disappear if the struct were introduced; you would be forced to set up each member. IMO, that is a good thing: there isn’t a huge number of variables associated with a texture, but there is a good enough number of them to warrant grouping them together and making a single call to a glCreateTexture function.

Defaults are A Good Thing. Don’t force the user to write needless boilerplate code just to fill in default values. An API needs to be easy to use but hard to misuse; you propose the other way around.

There will always be two camps of developers: those that are happy to flush the API down the toilet every few years in search of “the next great thing” (just because Microsoft says so), and those that are too busy adding useful features and perf enhancements for the latest GPUs to their company’s product lines to waste time/money/effort on that nonsense and the expensive rewrites and maintenance costs that result (not to mention orphaning customers on older hardware).

This is a gross mischaracterization of reality.

First, code maintenance has “time/money/effort” costs too. And if your codebase is a 20-year-old pile of hacks built on top of an API that looks like it came out of someone’s colon, then throwing it out or rewriting major parts of it will likely be cheaper in the long run.

Second, Microsoft did not frequently change the API in D3D versions out of capriciousness. They did it out of necessity. Before the general stabilization of D3D 8/9, D3D was something of a mess. D3D 3.0 was utter garbage that Microsoft bought from the programming equivalent of a hobo living in an alley. D3D 5 actually looked like a rendering API, but it provided no mechanism for developers to take advantage of hardware T&L. They therefore had to alter the API to allow for that: hence D3D 7 and the first vertex buffers. Shaders came along, so once again they had to make room: thus D3D 8.

Since then, Microsoft has been fairly consistent with things. The 8-9 era lasted a rather long time, and the API differences were less changes and more additions. D3D 10 was a big change certainly, as they left behind all the legacy cruft and embraced uniform buffers wholeheartedly. But D3D 11 is just more, rather than different. Not just in terms of functionality, but API as well.

Third, the biggest problem with your statements is your binary thinking: either the API changes “every few years”, or it never changes. This is a strawman: a deliberate simplification of reality designed to promote one’s own viewpoint while simultaneously denigrating the opposition as being clearly deranged.

Asking for an API revision is not the same thing as asking for constant API revisions. These aren’t even in the same ballpark.

Wanting the API to be reasonably easy to understand is not the same thing as wanting to “flush the API down the toilet every few years”. All it would have taken was the success of one of the API revision proposals in OpenGL’s history: the original 3DLabs OpenGL 2.0, or the Longs Peak proposal. Either one could have fixed this.

There would not have been a prolonged string of constant API breakages. There would be a single compatibility gap. And if we had used the original 3DLabs proposal (obviously improved and modified), that compatibility gap would have passed into irrelevance by now for pretty much everyone. In 3 years, nobody would care about the Longs Peak compatibility gap either, especially when DX11-class hardware can at this very moment be found on CPUs.

Or to put it another way, short-term thinking is short-term.

Which extension? Or should I ask, what’s new?
There is nothing new.
A 2D texture needs the following parameters:
width
height
format
border
and whether to have mipmaps (a full mipmap chain or some range like 0 to 4)
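Bundled into a struct, that list might look something like this (hypothetical, just mirroring the parameters above):


/* hypothetical "struct version 1" for the parameter list above */
typedef struct {
    GLsizei width, height;
    GLenum  internal_format;
    GLint   border;
    GLint   mip_base, mip_max; /* mipmap range, e.g. 0 to 4; or the full chain */
} GLTexture2DDesc;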


// Assuming something new came along for GL 5.1 and 5.2;
// gl_version is the GL version detected at runtime
if(gl_version >= GL_VERSION_5_2)
{
  use_struct_version_3();
}
else if(gl_version >= GL_VERSION_5_1)
{
  use_struct_version_2();
}
else if(gl_version >= GL_VERSION_5_0)
{
  use_struct_version_1();
}
else
{
  use_old_style();
}

or perhaps


// Since I don't know that GL 5.1 and 5.2 exist
if(gl_version >= GL_VERSION_5_0)
{
  use_struct_version_1();
}
else
{
  use_old_style();
}

Yeah, version checks will be there.

As I see it, the main advantage of using a struct is implicit immutability of the object right after its creation.

With separate functions specifying each piece of object state, we have what we have with textures in GL now. Still, it wouldn’t hurt much to have an entry point that causes the object to become immutable (and performs some sanity checks at the same time).
This I would see as ‘almost good’ (if the user doesn’t provide data for the texture right away).
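Something like this, say (the sealing call is hypothetical):


/* the usual mutable setup, without providing data right away... */
glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 256, 256, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
/* ...then one call that sanity-checks the object and makes it immutable */
glMakeTextureImmutable(texture);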

Realistically speaking, I think this is the most we can get out of GL; structs will never make it into the spec, and I think that adding a specialized texture creation function may have its problems too.

Don’t know if there’s much to win on the driver side. If it’s smart enough, it can probably avoid validating texture state too much, so the main difference is ease of API use, which will be hidden deep in the application anyway.

// Assuming something new came along for GL 5.1 and 5.2;
// gl_version is the GL version detected at runtime
if(gl_version >= GL_VERSION_5_2)
{
  use_struct_version_3();
}
else if(gl_version >= GL_VERSION_5_1)
{
  use_struct_version_2();
}
else if(gl_version >= GL_VERSION_5_0)
{
  use_struct_version_1();
}
else
{
  use_old_style();
}

And this is only for core versions; now think of extensions:


// the combined case must be checked first, or the wrong variant runs
if(gl_version >= GL_VERSION_5_2 && has_EXT_A && has_EXT_B)
{
  use_struct_version_3_extA_B();
}
else if(gl_version >= GL_VERSION_5_2 && has_EXT_A)
{
  use_struct_version_3_extA();
}
else if(gl_version >= GL_VERSION_5_2 && has_EXT_B)
{
  use_struct_version_3_extB();
}
else if(gl_version >= GL_VERSION_5_2)
{
  use_struct_version_3();
}
else if(gl_version >= GL_VERSION_5_1)
{
  use_struct_version_2();
}
else if(gl_version >= GL_VERSION_5_0)
{
  use_struct_version_1();
}
else
{
  use_old_style();
}

See my point? And that is not even covering the code inside use_struct_version_X() (and all that default-value setting code).

Mind you, I am not only thinking about creating textures here, but about all sorts of objects using this mechanism.