Multipass-Multitexture ability

I am not sure if I can fully describe what I mean, but the idea is simple:

Today nobody knows how many texture units will be available on the user’s computer.
Some older graphics cards may have only two of them. So to implement really flexible
multipass multitexturing, we need to be able to render a triangle without
rasterizing its color.
It would probably be best to implement this as glEnable/glDisable( GL_something ), or with
a function glSomething() that takes an argument. The argument could even be an alpha value,
not only a boolean, but that might make it too complicated.

If you don’t understand what I mean, let’s say that:

glEnable( GL_something );  // would be enabled by default
// here bind texture 0 to unit 0 and texture 1 to unit 1
glBegin( GL_TRIANGLES );
for (int i = 0; i < 3; ++i) {
    glMultiTexCoord4fv( GL_TEXTURE0, texture_coord_0[i] );
    glMultiTexCoord4fv( GL_TEXTURE1, texture_coord_1[i] );
    glVertex4fv( vertex_coord[i] );
}
glEnd();

glDisable( GL_something );  // now disabled for the second pass
// here bind texture 2 to unit 0 and texture 3 to unit 1
glBegin( GL_TRIANGLES );
for (int i = 0; i < 3; ++i) {
    glMultiTexCoord4fv( GL_TEXTURE0, texture_coord_2[i] );
    glMultiTexCoord4fv( GL_TEXTURE1, texture_coord_3[i] );
    glVertex4fv( vertex_coord[i] );
}
glEnd();

would be rasterized exactly as if this had been rendered:

glEnable( GL_something );  // would be enabled by default
// bind the four textures to the corresponding units
glBegin( GL_TRIANGLES );
for (int i = 0; i < 3; ++i) {
    glMultiTexCoord4fv( GL_TEXTURE0, texture_coord_0[i] );
    glMultiTexCoord4fv( GL_TEXTURE1, texture_coord_1[i] );
    glMultiTexCoord4fv( GL_TEXTURE2, texture_coord_2[i] );
    glMultiTexCoord4fv( GL_TEXTURE3, texture_coord_3[i] );
    glVertex4fv( vertex_coord[i] );
}
glEnd();

I know that a similar effect can be achieved with blending, but this would give exactly
the same result, especially when texture combiners are used.

I propose a GL_RASTERIZED_COLOR token, or something like it.
I hope you understand. Perhaps this could be an extension. And I don’t think it would be
too difficult for driver developers to implement.

If you want to tell me something about this, mail to: Tringi at Mx-3 do cz.

So you are basically saying you want virtual texture units (the virtual ones just being multipassed)? This seems like something that really should be left out of the API and something that your graphics engine should do. You can query the number of texture units and adjust your pipeline accordingly.

The reason people use OpenGL instead of Direct3D (aside from portability), is it isn’t bloated with proprietary features like .x mesh format, pre-fab water effects, and virtual texture units.

No, not like this.
I want to be able to perform multipass multitexturing with all the advantages of hardware multitexturing.

Well, let’s say that I have only 2 texture units (I really do :) ), but I need to perform an effect with 3 combiners. Right now, there is no way to do this.

I think this really isn’t a proprietary feature. It would only make multipass multitexturing much easier; the current blending approach is really limited and can emulate only a subset of multitexturing.

What is so difficult about taking the rasterized fragment as a source for texturing? This could even be implemented as just another combiner operation, a single token.

When glslang was in the early design stages, 3DLabs (and Carmack, incidentally) believed strongly in the idea of virtualizing resources for programs to the degree that the user did not know or care about basic hardware like number of texture units, number of interpolants, number of per-vertex attributes, etc. Under such a system, a glslang implementation that couldn’t handle a shader in a single pass in hardware would be forced to multipass on it, and do so transparently to the user on a per-primitive level.

There are numerous performance issues with such a thing. For example, the user now has no idea if the shader is going to multipass, and they have no idea if the shader is going to run particularly well.

This is all well and good in an environment where performance is secondary, but high-end gaming isn’t such an environment. Games need to be able to use hardware fast, and they can’t rely on the user tweaking some slider whose meaning is either completely abstract (a simple quality slider) or far beyond his knowledge (actually turn on/off specific effects, possibly even in specific circumstances).

Also, look at modern glslang compilers. They’re serviceable, but even after 6 months (or more; I forget how long it’s been), these compilers aren’t that great. They’ll get better, but writing one is difficult enough without the added requirement of building multipass logic into the driver — with render-to-texture for each pass, since you cannot just use blending and get the same effect: you have to bring the results into the fragment program itself, and you have to render into a 32-bit float buffer to preserve precision from one pass to the next. That’s a good way to make sure that nobody even considers writing a GL driver. And without GL drivers, GL doesn’t exist.

Ultimately, glslang does not require this “feature” (though, thanks to the vertex/fragment program attachment, it is entirely possible for an implementation to do it).

@James Dolan

I don’t see any bloat in the D3D API. All the stuff you are talking about is in the D3DX library, just a set of tools you can link against. It’s like saying that because of the glut teapot, GL is a bunch of bloated stuff.
D3DX is no more than glut. It has some more features, but that’s all.