It would be good to have an extension that binds texture state to a name, similar to vertex array objects, but for texture state.

It would allow binding the min filter, mag filter, anisotropy level, etc. to a name for later use.



When you create a texture, you get a handle associated with it, so you can easily build a table that maps this number to a string name and/or a more complex struct that saves the state of that texture, for example.
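The table described above can be sketched in plain C. This is only an illustration: all names (`TexState`, `texStateSave`, `texStateFind`) and the stand-in filter values are hypothetical, not part of any GL API, and real code would store actual GL enums.

```c
/* Minimal sketch of a table associating a texture handle with a
 * string name and a struct saving its filtering state. */
#include <assert.h>
#include <string.h>

typedef struct {
    unsigned handle;        /* from glGenTextures */
    char     name[64];      /* human-readable name */
    int      minFilter;     /* would hold e.g. GL_LINEAR_MIPMAP_LINEAR */
    int      magFilter;     /* would hold e.g. GL_LINEAR */
    float    maxAnisotropy;
} TexState;

#define MAX_TEX 64
TexState g_table[MAX_TEX];
int g_count = 0;

/* Save (or overwrite) the state of a texture under a name. */
void texStateSave(unsigned handle, const char *name,
                  int minF, int magF, float aniso)
{
    int i;
    for (i = 0; i < g_count; ++i)
        if (g_table[i].handle == handle) break;
    if (i == g_count && g_count < MAX_TEX) g_count++;
    g_table[i].handle = handle;
    strncpy(g_table[i].name, name, sizeof g_table[i].name - 1);
    g_table[i].name[sizeof g_table[i].name - 1] = '\0';
    g_table[i].minFilter = minF;
    g_table[i].magFilter = magF;
    g_table[i].maxAnisotropy = aniso;
}

/* Query a saved state by its name; returns NULL if not found. */
const TexState *texStateFind(const char *name)
{
    for (int i = 0; i < g_count; ++i)
        if (strcmp(g_table[i].name, name) == 0) return &g_table[i];
    return 0;
}
```

A real version would apply the found state with glTexParameteri/glTexParameterf after binding the handle.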

Because if you wait for a basic thing such as this to be integrated into the GL core one day, I think you can wait a very long time … :slight_smile: for nothing … :frowning:

Personally, I daydream about an OpenGL 2.x that really uses an object-oriented design but remains cross-platform and vendor-independent …

That’s why, now at the end of 2009 and after many years, I am looking at making my own API that really uses and offers a true object-oriented design (but that uses the good old but really fast OpenGL 1.x API in its internal implementation, for compatibility and platform independence of course).

Only because OpenGL 2.x is becoming everything you could want, but certainly not the cross-platform, independent API it was before …


By independent, I mean a lot of companies really working together on the same idea: making the best, simplest and most stable open graphics language possible.


For example, when an extension is proposed, systematically providing a software emulation with it, in order to be allowed to name it glSomething (EXT or not), seems to me the minimum (so that it can easily be added to the GL core if it turns out to be a good thing …).



I’ll give you two functions in a day or two: one to create/save the state of a texture under a name, and another to load/query the state of a texture by its name.



I just read all “4” of your consecutive replies, and I didn’t understand a bit of what you’re trying to say. Your meaning is getting lost in the language barrier. When you reply, please try to reply “once”, and sum up your point in a sentence or two.

You do know you can edit your replies for a while, right?

Yes, but I don’t always have the possibility to do it :frowning:

Right now, for example, I only have Reply, Quote, Quick Reply, Quick Note, Notify and Email Post … but I see nothing to modify/merge my 4 replies into only one (you are perfectly right, it’s better to have one response than a lot of responses where each one doesn’t say much).

(After posting, I sometimes have the Edit button, but not always. I have edited this post to add this parenthesis, for example.) (And sorry for my very bad English, but I haven’t found an equivalent thread in French …)

What I wanted to say is that we can indeed easily bind texture state to the handles generated by glGenTextures … but this method isn’t the most logical/easy one we could imagine.

If we had the possibility to really name a texture with an actual name such as “drapeau” or “ma texture a moi”, this would be much more usable for everyone than a simple number that systematically changes on the next loading.

Everyone in this world has a first name and a last name; they aren’t numbers.
If things have worked like this for a very, very long time, it’s certainly not for nothing …

With texture handles that are real names, we could more easily share a texture between independent OpenGL processes and computers, exactly as we can easily share a file by its name on a network, for example.

The other thing I said is that, for me, OpenGL is now really far too slow to integrate anything into the core. In my view, this is only because two or three companies fight it out in hardware before starting to work on something generic implemented in software, which could then more easily be implemented in hardware afterwards by other OpenGL hardware competitors.

Direct3D does this really fast … but that API is really a nightmare and not cross-platform, on the other hand :slight_smile:

What I want to find is the beautiful simplicity of OpenGL 1.x but with the latest and best techniques for handling vertex arrays, multiple framebuffers and shaders.

And a pile of incompatible EXT versions is certainly not what will help OpenGL have a unified API … :frowning:

PS: AdrianPi, I have written little functions that can handle texture state loading/saving by name (and not by number). If you have an email address, send it to me so that I can pass them on to you.


Yann, what you’re saying is OFF TOPIC and you totally misunderstood Adrian’s proposal.

If we had the possibility to really name a texture with an actual name such as “drapeau” or “ma texture a moi”, this would be much more usable for everyone than a simple number that systematically changes on the next loading.

When Adrian said “texture names”, he was talking about the texture handles. That’s just an alternate term for the same thing. He was not talking about using strings.

That’s also why the link I provided went to a thread where this had already been proposed. And, according to NVIDIA at least, it is being considered by the ARB for the next revision of the spec.

A bit more on-topic: exactly what state from the texture object should not be separated into a different object? I was thinking of the state that specifies which mipmap levels are active on a texture. Then again, it might be useful for ease of LOD-ing to have different state objects. However, it occurs to me that separating the mipmap state would only ever matter if you wanted different mipmap settings in different places within the same scene.

It just seems to me that the mipmap state is something intrinsic to the texture, like its size and internal format.

Note that I’m not talking about the filtering modes. I mean the state that says which mipmap levels are active.

OK, if I understand correctly, it’s something like having new functions that take the texture unit directly as a parameter in the texture-parameter functions, so you don’t have to call glActiveTexture and rebind for each texture unit when you want to set/query parameters of multiple textures on multiple texture units. Is that it?

On the other side, what about a texture handle (one that is not a constant) that could easily be shared between multiple computers/instances (e.g. one computer does all the texture computations, and a lot of other computers can reuse this texture without having to redo exactly the same work)?

This could speed things up when, for example, we compute a relatively complex video/animation and a farm of 50 computers can each compute only one frame per second (but share a lot of texture info/data with the others), for a total of 50 fps at the output of the farm.


Is that it?

No. This is about separating the filtering state of a texture object from the texture object itself. So a texture object is just the images and a few settings, while the filter object would have the filtering modes, anisotropic filtering, LOD bias, etc. That way, you could have a texture use one filtering mode in one place and a different one in another.
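The separation described above can be sketched as a client-side model. This is only an illustration of the idea; the struct and function names (`Texture`, `FilterState`, `bindTextureWithFilter`) are hypothetical, not a real GL API, and the integer filter values merely stand in for GL enums.

```c
/* Hypothetical sketch: a texture object is just the images, while a
 * separate filter-state object holds the filtering modes. The same
 * texture can then be bound to two units with different filters. */
#include <assert.h>

typedef struct {
    unsigned handle;        /* the image data lives here */
    int width, height;
} Texture;

typedef struct {
    int   minFilter, magFilter;  /* stand-ins for GL filter enums */
    float maxAnisotropy;
    float lodBias;
} FilterState;

typedef struct {
    const Texture     *tex;
    const FilterState *filter;
} TextureUnit;

TextureUnit units[4];

/* Pair any texture with any filter object on a given unit. */
void bindTextureWithFilter(int unit, const Texture *t, const FilterState *f)
{
    units[unit].tex = t;
    units[unit].filter = f;
}
```

The point of the design is that the filter objects are reusable and independent: one trilinear-anisotropic filter object could serve every texture in the scene, while a nearest-neighbor one is swapped in where needed.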

On the other side, what about a texture handle (one that is not a constant) that could easily be shared between multiple computers/instances

Texture objects can already be shared between OpenGL contexts. Where those contexts are and what they mean is entirely irrelevant to the OpenGL specification.

What you’re talking about is an implementation detail. Someone would be writing an OpenGL implementation that works across multiple computers, and you would use it just like a normal OpenGL implementation. That it happens to be spread across multiple computers is something the OpenGL spec is not aware of.

I now clearly understand the separation between the image and the filtering modes.

I too find it a good thing that the filtering modes are dissociated, because the same texture can be used on various computers that don’t all have the same computing power (so I can play with the texture quality/filtering to lighten the load on computers that aren’t really very fast, such as an eeePC).

My first tests with fragment and vertex shaders have helped me a lot to understand this precisely (I still have some minor problems understanding exactly how textures are mixed/blended/filtered when external values are bound to uniforms in fragment/vertex shaders, but I don’t think it will take me long to understand precisely how this works).

Thank you very much for your very useful information, Alfonse; it helps me a lot.

I work with GLX on Linux, so can’t OpenGL images and textures be easily shared between computers via the X11 protocol?

What I’m working on at the moment is a “big” computer that decompresses as many video files as possible and serves them via multicast to a variable number of smaller computers, which use X11 to display/mix/blend a number of these audio/video streams in one or more OpenGL windows (doing the “compressed” YUV to uncompressed ABGR conversion at the last moment, with the possibility of displaying multiple streams, each on one of the six faces of a cube that can spin).

At the moment, the server/slave part isn’t implemented, but I want the slave part to need as little computing power as possible (so it can be adapted to a Pocket PC, iPhone, PSP or Archos for example, though certainly with the loss of X11 support :frowning: ).


Are you sure you understood? What is this talk about different computers?

Perhaps I can show an example.
Here is the current way OpenGL works.
glBindTexture(GL_TEXTURE_2D, thething);

That is nice, but what if I want to bind “thething” to texture unit 0 and texture unit 1, and on unit 0 I want mipmapping while on unit 1 I don’t want mipmapping?

What if on unit 0 I want anisotropy to be 16.0, and on unit 1 I don’t want anisotropy for “thething”?

What if I want the depth-compare mode to be OFF for unit 0 and ON for unit 1?

With Direct3D, you can do all that.
With GL today, you have to duplicate the texture.
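The cost of that duplication can be sketched with a plain C model of today's behavior, where the filter state is baked into the texture object. This is only an illustration, not real GL calls; `TexObject` and `makeTexture` are hypothetical names standing in for the glGenTextures/glTexImage2D/glTexParameteri sequence.

```c
/* Sketch of the current workaround: because filter state lives inside
 * the texture object, using different filtering per unit means creating
 * a second object, which duplicates the image memory too. */
#include <assert.h>
#include <stdlib.h>
#include <string.h>

typedef struct {
    unsigned char *pixels;   /* image memory: duplicated as well! */
    size_t size;
    int minFilter;           /* filter state baked into the object */
    float maxAnisotropy;
} TexObject;

/* "glGenTextures + glTexImage2D + glTexParameteri" rolled into one. */
TexObject makeTexture(const unsigned char *src, size_t size,
                      int minFilter, float aniso)
{
    TexObject t;
    t.pixels = malloc(size);
    memcpy(t.pixels, src, size);
    t.size = size;
    t.minFilter = minFilter;
    t.maxAnisotropy = aniso;
    return t;
}
```

With a separate filter object, the two units could share one copy of the pixels and differ only in a few bytes of filter state.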

Yes, we perhaps have more “freedom” with Direct3D,
but with ten more lines of source code to change at each DirectX release :slight_smile:
(only to end up, in the end, with something that looks more or less like the original OpenGL code …)

And GL programs can be very easily recompiled from the same source code (with only some minor modifications to the makefile) on various platforms, such as a Macintosh on MacOS and an EEEPC on Linux, for example. This is a really good thing …

About the computers: I want something that can work on a lot of “simple little workstations” such as eeePCs, but where all the computing is done on a “big” computer (like displaying, via X11 on a small workstation, a program that runs on a big UN*X server, for example).

In fact, I want to handle the mixing of multiple video textures (on each of the six faces of a 3D cube), and I want this to be shareable across multiple eeePC 701 platforms …

This is certainly a dream … but I love to dream :slight_smile:
(and sometimes, though admittedly with a lot of time to do it, I make my dreams come true …)

On another note, decoding a video takes something like 10% CPU with libavcodec and SDL on a recent PC or an iMac, and OpenGL display of a rotating textured cube is about 1% on my iMac, so I really think this can easily be adapted to work directly on the EEEPC platform as-is … but I always want better and faster, so I can dream of doing it on the Pocket PC platform :slight_smile: