texture question

Hi,

I’m writing a program that uses texturing; specifically, I want the env mode to be GL_REPLACE, so I set it using glTexEnvi(…). However, when I render the scene, it appears as if the incoming texture color is being multiplied by the glColor3f(…) color, giving incorrect results. I have numerous programs that use GL_REPLACE mode and they all work correctly, so I’m wondering if anyone else has had this problem of GL_REPLACE not turning on?

Thanks, I can post code if anyone wants to see it.
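Roughly, the relevant calls look like this (a simplified sketch rather than my actual code; myTexture and DrawTexturedGeometry() are placeholders):

void init(void)
{
   glEnable(GL_TEXTURE_2D);
   // ask for GL_REPLACE: the texel should replace the fragment color
   glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
}

void display(void)
{
   glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
   glColor3f(1.0f, 0.0f, 0.0f);   // should be ignored with GL_REPLACE...
   glBindTexture(GL_TEXTURE_2D, myTexture);
   DrawTexturedGeometry();        // ...but the output looks like texel * color
   glutSwapBuffers();
}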

Are you sure that you set the GL_REPLACE mode correctly (for the correct texture unit, and after OpenGL has been initialized)?

hi,

i’m only using texture unit 0, so i shouldn’t need to specify it explicitly, and i’m setting it after all the glut initialization calls but before the glutMainLoop() call. this is how i’ve done it in other projects and it works there. i’m sure it’s just an oversight on my behalf.

thanks

So try calling glTexEnvi after binding the texture in your rendering code.

You can use GLIntercept to record the OpenGL calls issued by your application, so you can see what happens with the replace mode.

Originally posted by jide:
So try calling glTexEnvi after binding the texture in your rendering code.
TexEnv modes are not related to textures; they are part of the texture unit state. You can call them at any time, and you don’t even need a texture bound to the unit.
GLIntercept is a good idea.
Also, are you running this program on the same system as your other projects? That could indicate a driver issue. What’s your card + driver?
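To illustrate (just a sketch; the units and modes are arbitrary examples), this is perfectly legal before any texture has ever been bound:

// TexEnv state lives on the unit, not on a texture object
glActiveTexture(GL_TEXTURE0);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);

glActiveTexture(GL_TEXTURE1);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

// later texture binds on either unit leave these modes untouched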

You do need to bind the texture first; otherwise it applies to the previously bound one.

You do need to bind the texture first; otherwise it applies to the previously bound one.
The bound texture matters for the glTexParameter* family of functions. The glTexEnv* functions operate on the state of the active texture unit, regardless of whether any texture is bound.

Sorry if I’m wrong, but I can only read this in my OpenGL 1.2 reference book (quite old, I know):

For assigning texture information to texture units, the function glActiveTextureARB selects the active texture unit to modify. Then, calls to glTexImage, glTexParameter, glTexEnv, glTexGen, glBindTexture affect only the active texture.

But this doesn’t help with what we’re talking about. On whether TexEnv and/or TexParameter depends on the bound texture or on the texture unit, I have absolutely nothing.

What I know is that, with a single texture unit and a scene with multiple textures, calling TexEnv after each texture bind, before rendering some primitives, works well for me.

hi,

i’m running this on ubuntu linux with an nvidia 6600gt (76.67 driver). my other projects that work correctly run on the same system, and GL_REPLACE mode works in them. i have also tried rendering geometry to the screen, copying the framebuffer to a texture, and using that texture, and the same problem occurs: it looks like the GL color is multiplied by the texel. the geometry renders correctly to the framebuffer, so the problem is either in the copy or in the application of the texture. it seems like the state is always GL_MODULATE or GL_DECAL. i have also tried setting the env mode right before rendering the texture-mapped geometry.

GLIntercept sounded like a good idea, but i’m doing this on linux and it didn’t look like there was a linux version yet. i could move it over to windows and try that, though.

this is such a frustrating bug. thanks, though.
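in case it helps, the copy step is essentially this (a rough sketch, not my exact code; tex, width and height are placeholders):

glBindTexture(GL_TEXTURE_2D, tex);
// copy the lower-left width x height block of the framebuffer
// into level 0 of the currently bound texture
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 0, 0, width, height, 0);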

For assigning texture information to texture units, the function glActiveTextureARB selects the active texture unit to modify. Then, calls to glTexImage, glTexParameter, glTexEnv, glTexGen, glBindTexture affect only the active texture.
That’s correct, but I think you are confusing glActiveTextureARB (or glActiveTexture these days) with glBindTexture.

A texture object has the texture image and its mipmaps associated with it. It also holds the wrapping modes and texture filters, the generate-mipmap flag, and the LOD settings. Perhaps some other settings I’m forgetting.

TexEnv is bound to the texture unit, not the texture object.

If you want to program that way, it’s up to you.
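As a quick summary (my own sketch, not spec wording):

// per texture object: set after binding, stored with the object
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

// per texture unit: independent of whatever texture is bound
glActiveTexture(GL_TEXTURE0);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);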

brtnrdr,
sorry, no idea why it does that.

V-man, okay on the point you made. I understand now that TexEnv applies to the texture unit, not the texture object. I was indeed thinking it applied to the bound texture, which was wrong. Thanks to my book.

However, I still don’t understand why I’m wrong in the case where there is a single texture unit and a different texture function is needed for each texture object. From my point of view, that approach is still right. Or did I miss something again?

brtnrdr, I don’t see why it’s wrong. Can you post some code?

You are not necessarily wrong, but you can optimize.
Usually, it’s a good idea not to waste the GL driver’s time. In some cases, avoiding redundant calls such as glBindTexture and glUseProgram (for shaders) becomes very important for boosting FPS.

if (previousTexEnv != MyTexture[i].TexEnv)
{
   previousTexEnv = MyTexture[i].TexEnv;
   glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, previousTexEnv);
}

Perhaps you have MyTexture[0] through MyTexture[5] all with TexEnv == GL_MODULATE, so you avoid 5 calls to GL.

So what do you mean, exactly? To switch from one texture unit to another instead of binding textures? If that’s what you mean, I think I can agree with you. But this depends on many things, e.g. what the program’s policy is regarding texture units.

No, the same texture unit.

struct MYTEXTURE
{
   GLuint TextureID;
   GLint  TexEnv;
};
....................

MYTEXTURE MyTexture[100];
...................

glActiveTexture(GL_TEXTURE0);

for (i = 0; i < 100; i++)
{
   glBindTexture(GL_TEXTURE_2D, MyTexture[i].TextureID);

   // I want this ENV to be applied
   if (previousTexEnv != MyTexture[i].TexEnv)
   {
      previousTexEnv = MyTexture[i].TexEnv;
      glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, previousTexEnv);
   }

   DrawMyModel();
}

If you are using multiple texture units, you can use the same optimization for each unit.

You can even do the “if” for the TextureID:

if (previousTextureID != MyTexture[i].TextureID)
{
   previousTextureID = MyTexture[i].TextureID;
   glBindTexture(GL_TEXTURE_2D, previousTextureID);
}

It depends on your project. If you have a general-purpose renderer, you can avoid plenty of redundant GL calls.

I imagine everyone has a unique engine design, but everyone should try to avoid making redundant GL calls.

I imagine everyone has a unique engine design, but everyone should try to avoid making redundant GL calls.
The fastest call is the call that is never made :) However, it is a good idea to have a way to disable those optimizations (fully, or temporarily after some operation) for some states, because there are bugs in some drivers.

EDIT: To be more specific, good candidates for this disabling are glUseProgram and glBindProgram, especially if access to fixed-function state (e.g. the gl_Fog parameters) is used. In that case it is a good idea to call the binding function before rendering if the state has been changed since the last call to it, because otherwise the state in the shader may not be updated. Depending on the state in question, I have seen this on both nVidia and ATI cards. On Volari cards (if I remember the brand correctly), even texture binds were not applied correctly to fragment programs unless the program binding function was called after the texture binding changed. With some ATI drivers I had issues when redundant GL_MODULATE/GL_REPLACE mode changes were removed.
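Something like this is what I mean (a sketch only; forceStateUpdate and previousProgram are illustrative names, not from any real code):

GLuint previousProgram = 0;
GLboolean forceStateUpdate = GL_FALSE;  // set after operations that trigger the driver bug

void UseProgramCached(GLuint program)
{
   if (forceStateUpdate || program != previousProgram)
   {
      glUseProgram(program);            // re-issued even if redundant
      previousProgram = program;
      forceStateUpdate = GL_FALSE;
   }
}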

Yeah, nVidia had that issue with glUseProgram, and I’ve had shaders that don’t compile quite correctly on ATI. You need to let the company know about the bugs. I remember that on ATI, it would not change the texture filter state once I had created the texture.

You need to let the company know about the bugs.
That would be ideal; however, when I hit some bug I usually need an immediate workaround, and I also want my program to work even on drivers with that bug, if possible. If I find a simple workaround, I put it in permanently, and fixing the bug becomes low priority for me (it is selfish, I know), so I will report it once I do not have more important things to do.
Some time ago I sent an email to ATI mentioning some problems I had with GLSL and offering to send them examples of the failing shaders. I got no reply, nor a delivery failure.

Here’s something interesting: I rewrote my program, and setting the texture environment to GL_REPLACE at initialization still does not work; however, if I set it in my rendering code, it does work. This is strange because I never switch the active texture unit. I do, however, unbind the texture (bind 0). Could this maybe reset the texture environment?
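For clarity, the version that works looks roughly like this (a sketch; myTexture and DrawTexturedGeometry() are placeholders):

void display(void)
{
   glBindTexture(GL_TEXTURE_2D, myTexture);
   // re-asserting GL_REPLACE here every frame is what fixes it
   glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
   DrawTexturedGeometry();
   glBindTexture(GL_TEXTURE_2D, 0);    // the unbind mentioned above
   glutSwapBuffers();
}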

brtnrdr, isn’t that what I meant in my first reply here?

I’ve had such issues with nVidia cards, generally when using several textures, whether using a single texture unit or not.
And as others said (like V-man), TexEnv does not work on the bound texture. So setting it only at initialization isn’t good, because the driver will likely use the last mode you set. This is why setting it in the rendering loop, each time you need to change it, is best, regardless of the other bugs that could arise. So the answer to your last question should be no.