Multi-Textures

I’m trying to texture-map a single piece of geometry from multiple textures. I have concluded that indexed vertex arrays (using glDrawElements) are the most efficient way to draw my geometry, but it seems that I can’t use this method, because the only way I’ve found to apply multiple textures to one geometry is to switch the texture binding with glBindTexture before drawing each vertex. Is that the only way to do it, drawing each vertex of my geometry with glVertex, or can I use indexed vertex arrays to map multiple textures onto my geometry? Thanks for the help!

I am not sure if this will answer your question, but did you look at this thread?

http://www.opengl.org/discussion_boards/cgi_directory/ultimatebb.cgi?ubb=get_topic;f=2;t=017597

Hope this helps…

With straight multitexture you specify texture coordinates per vertex using the glClientActiveTexture call. But this only lets you specify texture coordinates for each vertex, not enable or disable a texture unit.
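For example, the array setup looks roughly like this (just a sketch, assuming ARB_multitexture and headers that expose the ARB tokens; the array names are placeholders):

    /* one texcoord array per texture unit */
    glClientActiveTextureARB(GL_TEXTURE0_ARB);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glTexCoordPointer(2, GL_FLOAT, 0, texCoordsUnit0);   /* UVs sampled by unit 0 */

    glClientActiveTextureARB(GL_TEXTURE1_ARB);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glTexCoordPointer(2, GL_FLOAT, 0, texCoordsUnit1);   /* UVs sampled by unit 1 */

    /* enabling/disabling a unit is done separately, via
       glActiveTextureARB(...) + glEnable(GL_TEXTURE_2D) */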

You are confusing multitexture and multiple textures. It seems like you want to transition between two textures on an object rather than simply use two textures simultaneously.

You can use a per vertex attribute and multitexture to blend between two texture units.

Using per-vertex alpha with the crossbar extension and multitexture, you could specify how much of one texture vs. another you want per vertex.

Me6 - thanks, but that’s not exactly what I’m looking for.

Dorbie - You’re right, I am confusing multitexturing with multiple textures. I DON’T want to use multiple texture units to overlay two textures over the entirety of a model. Instead, I have two textures and, for example, I want to apply half of texture A to the top of my geometry and half of texture B to the bottom of my geometry. Is there a way to do this without using the alpha channel of the texture?

As I said, you can use per-vertex color attributes and set up the texture environments accordingly. You should look at the combiner and crossbar extensions.

http://oss.sgi.com/projects/ogl-sample/registry/ARB/texture_env_crossbar.txt
http://oss.sgi.com/projects/ogl-sample/registry/ARB/texture_env_combine.txt

The GL_INTERPOLATE_ARB texenv should interest you.
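Roughly, the texture environment setup would look like this (a sketch, assuming ARB_texture_env_combine is available; textureA and textureB are placeholder texture object names, and with GL_PREVIOUS_ARB as a source on unit 1 you don’t strictly need crossbar for this particular case):

    /* unit 0: just output texture A */
    glActiveTextureARB(GL_TEXTURE0_ARB);
    glBindTexture(GL_TEXTURE_2D, textureA);
    glEnable(GL_TEXTURE_2D);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);

    /* unit 1: blend texture B with unit 0's result using the vertex alpha */
    glActiveTextureARB(GL_TEXTURE1_ARB);
    glBindTexture(GL_TEXTURE_2D, textureB);
    glEnable(GL_TEXTURE_2D);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);
    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB,  GL_INTERPOLATE_ARB);
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_ARB,  GL_PREVIOUS_ARB);       /* texture A result */
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_ARB,  GL_TEXTURE);            /* texture B */
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE2_RGB_ARB,  GL_PRIMARY_COLOR_ARB);  /* vertex color */
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB_ARB, GL_SRC_COLOR);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB_ARB, GL_SRC_COLOR);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND2_RGB_ARB, GL_SRC_ALPHA);
    /* result = texA * vertexAlpha + texB * (1 - vertexAlpha) */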

Dorbie -

I don’t know what a per-vertex color attribute or per-vertex alpha is, or how to use them. Are you just talking about setting the current color using glColor, or about the alpha channel of the actual texture? I did look into the combiner and crossbar extensions and see how they work in general, but I don’t see how they can be used to do what I want. I also see how to set multiple UV arrays, one per texture unit, using the glClientActiveTexture function. Could you elaborate on how to properly set up my texture environment to tell OpenGL which texture unit I want applied where? Thanks

You can specify a color array, just like the normal and position arrays; these are called per-vertex attributes.

Then you could, for example, give the upper vertices an alpha of 0 and the lower vertices an alpha of 1.

With combiners and crossbar you could bind your two textures and, using an “interpolate” mode, you would get

out = tex0 * (primarycolor.alpha) + tex1 * (1-primarycolor.alpha)

where primarycolor is your per-vertex color.
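The per-vertex part is just an ordinary color array (a sketch; colors is a placeholder RGBA float array where only the alpha matters here):

    /* alpha = 1 where tex0 should show, 0 where tex1 should show */
    glEnableClientState(GL_COLOR_ARRAY);
    glColorPointer(4, GL_FLOAT, 0, colors);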

The other way is just drawing the mesh with two calls, e.g. one time vertices 0-100 and the other time 100-…, and changing textures in between.
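That second approach is literally just something like this (a sketch; the counts and the index array are placeholders):

    glBindTexture(GL_TEXTURE_2D, textureA);
    glDrawElements(GL_TRIANGLES, countA, GL_UNSIGNED_INT, indices);           /* first group of triangles */

    glBindTexture(GL_TEXTURE_2D, textureB);
    glDrawElements(GL_TRIANGLES, countB, GL_UNSIGNED_INT, indices + countA);  /* remaining triangles */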

CrazyButcher - Thanks for the explanation. It sounds like using the per-vertex color attribute only works with two textures? Is that right?

I thought about making several calls to glDrawElements and switching textures in between, but I thought that vertices belonging to triangles in both calls would show undesired behaviour, because the UVs in the TexCoordPointer would apply to only one of the textures while the vertex would be drawn with both textures. Is there a way to get around this and specify which texture to use when the same vertex is drawn with different textures but the same UVs? Thanks -

I cannot really follow your description of the problem.

The per-vertex attributes, such as RGB color and alpha, can be used however you want…

The UV coords are whatever you tell them to be. If you want to use two textures at the same time, i.e. a single draw call with two textures bound using multitexturing, you would provide texcoords for both units (normally the same coords).

If you want to switch textures between draw calls, then it is the same as if you were rendering two different objects: you only need a single texture bound and a single UV coordinate per vertex.

Let me try to rephrase my concern. You have two triangles, 1 and 2, and four vertices, A, B, C and D. Triangle 1 has vertices A, B and C, and triangle 2 has vertices B, C and D. Triangle 1 and vertices A, B and C have UV coordinates that map into texture 1. Triangle 2’s vertex D has UV coordinates that map into texture 2. I would make two calls to glDrawElements: the first with indices specifying vertices A, B and C to draw triangle 1, with texture 1 bound. Then I would bind texture 2 and call glDrawElements again, specifying vertices B, C and D to create triangle 2. In this case vertices B and C were drawn both in the first and the second call to glDrawElements. In both calls the texcoords associated with vertices B and C were the same, but the bound texture was different, thus producing different texels. So for vertices B and C, OpenGL was given the texel values from the first texture in the first call to glDrawElements and the texel values from the second texture in the second call.

Would OpenGL display vertices B and C with the texel values from texture 1 or 2?

How could I make sure that vertices B and C receive the texels from texture 1?

Thanks for your help!

GL doesn’t “reuse” anything between draw calls, so if your UV coords come out wrong, it is simply because you provided them wrong.

You could make two texcoord arrays, one for tex0 and one for tex1, and change the glTexCoordPointer accordingly when you change texture binds between the draw calls.
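Something along these lines (a sketch; the arrays, counts and index lists are placeholders):

    glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    /* triangles that use tex0, with UVs mapped into tex0 */
    glBindTexture(GL_TEXTURE_2D, tex0);
    glTexCoordPointer(2, GL_FLOAT, 0, uvsForTex0);
    glDrawElements(GL_TRIANGLES, count0, GL_UNSIGNED_INT, indicesTex0);

    /* triangles that use tex1, with UVs mapped into tex1 */
    glBindTexture(GL_TEXTURE_2D, tex1);
    glTexCoordPointer(2, GL_FLOAT, 0, uvsForTex1);
    glDrawElements(GL_TRIANGLES, count1, GL_UNSIGNED_INT, indicesTex1);

The shared vertices B and C can appear in both index lists; each draw call simply uses whichever UV array is current at that moment, so there is no conflict between the two.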

If you mean the actual pixel at the position where vertices B and C sit, then that is not predictable, I think. Because vertices are infinitely “small”, they don’t stand for a surface; it’s the polygons that get shaded according to your display resolution.