multitexturing+blending - does it work together?

Hello all,

I have this problem on which I have spent many hours already.

I do bump mapping using multitexturing (the parallax mapping mentioned here some time ago…), but my colour texture is produced using blending operations (interpolation of several texture images). Both the bump mapping (using multitexturing) and the interpolation (using blending) work nicely when run separately, but when I try to put them together the interpolation doesn’t work.
Is it my mistake, or can this not work in principle?

Please do you have any idea or hint?

Thank you for your time!

You probably need to do it in two passes (which I assume you have already tried). Do the multitexture pass first, then do the bump-mapping pass using additive blending (glBlendFunc(GL_ONE, GL_ONE)).

This should give you what you require.

Have you written your vertex and fragment programs so that alpha is passed through from the vertex values (or wherever you expect it to come from)?

Make sure the interpolated fragment alpha is being output correctly and it should work; there’s really no reason this can’t be made to work.

Multiple passes with transparency can cause more problems than they solve, so try the simple approach first.

thanks all,
but I’m not sure about this alpha. To show you my problem more clearly, here is a piece of my code (actually the display() function):

void display()
{
  static float angle0 = 0.5f;
  angle0 += 0.01111f;

  // GL_POSITION expects four floats; w = 1 makes this a positional light
  float position0[] = { sin(angle0), cos(angle0), 1.0f, 1.0f };
  glLightfv(GL_LIGHT0, GL_POSITION, position0);

  // each texture must be bound on its own texture unit
  // (normal map on unit 1, height map on unit 2, as the fragment program expects)
  glActiveTextureARB(GL_TEXTURE1_ARB);
  glBindTexture(GL_TEXTURE_2D, bumpTex[0]);
  glActiveTextureARB(GL_TEXTURE2_ARB);
  glBindTexture(GL_TEXTURE_2D, bumpTex[1]);

  // parallax mapping
  glEnable(GL_VERTEX_PROGRAM_ARB);
  glBindProgramARB(GL_VERTEX_PROGRAM_ARB, vpid);
  glEnable(GL_FRAGMENT_PROGRAM_ARB);
  glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, fpid);

  // here the interpolation between different images using blending is done
}

Finally I need to attach the result of the texture interpolation (produced by blending) to texture unit 0 (but how do I do that???) and run the following standard fragment program for parallax mapping. If I do it as above, the parallax mapping works well but the interpolation does not :-(. If I set “bump = 0”, the interpolation works well.

And may I ask:
is the result of the fragment program written directly into the framebuffer, or into texture memory?

Thank you again for your patience, I’m quite desperate already…

Fragment program:


!!ARBfp1.0

PARAM light0color = state.light[0].diffuse;
PARAM light1color = state.light[1].diffuse;
PARAM ambient = state.lightmodel.ambient;

TEMP eyevects;
TEMP rgb, normal, height, temp, bump, total;
TEMP light0tsvec, light1tsvec;
TEMP newtexcoord;

# normalize tangent space eye vector
DP3 temp, fragment.texcoord[3], fragment.texcoord[3];
RSQ temp, temp.x;
MUL eyevects, fragment.texcoord[3], temp;

# calculate offset and new texture coordinate
TEX height, fragment.texcoord[0], texture[2], 2D;
MAD height, height, 0.04, -0.02;  # scale and bias
MAD newtexcoord, height, eyevects, fragment.texcoord[0];

# get texture data
TEX rgb, newtexcoord, texture[0], 2D;
TEX normal, newtexcoord, texture[1], 2D;

# remove scale and bias from the normal map
MAD normal, normal, 2.0, -1.0;

# normalize the normal map
DP3 temp, normal, normal;
RSQ temp, temp.r;
MUL normal, normal, temp;

# normalize the light0 vector
DP3 temp, fragment.texcoord[1], fragment.texcoord[1];
RSQ temp, temp.x;
MUL light0tsvec, fragment.texcoord[1], temp;

# normal dot lightdir
DP3 bump, normal, light0tsvec;

# add light0 color
MUL_SAT total, bump, light0color;

# normalize the light1 vector

# add ambient lighting
ADD_SAT total, total, ambient;

# multiply by regular texture map color
MUL_SAT result.color, rgb, total;
END


Your question makes it seem like you’re just not sure about what a fragment program actually is.

I suggest you read the specification for fragment programs. They replace the texture environments (and fragment ops like color sum). They do not replace texture reads, texture updates, framebuffer blending, or anything else.

Basically, the “incoming fragment color” that used to flow through a number of texture environments and get summed with the “secondary color” at the end is replaced by a program that you write. The output of your program gets written to the framebuffer. Your program can sample from various texture samplers, using various texture coordinate sets or calculated texture coordinates, and you can also use other parameters, including unused texture coordinate sets, which can pass arbitrary interpolated values from the vertex program to the fragment program.

If you want a fragment program that does X, then does Y, then just write a longer program that does first X, then Y. The inputs are still the same, and the output is still the same (the fragment for blending into the framebuffer).
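For instance, ARB_fragment_program can do a two-texture interpolation itself with the LRP instruction before the lighting math; a sketch, assuming the blend factor arrives in a local program parameter:

```
# sketch: interpolate two colour textures, then light the result
TEMP c0, c1, rgb;
PARAM factor = program.local[0];
TEX c0, fragment.texcoord[0], texture[0], 2D;
TEX c1, fragment.texcoord[0], texture[3], 2D;
# rgb = factor*c0 + (1 - factor)*c1
LRP rgb, factor, c0, c1;
# ...then continue with the normal/height lookups and lighting as before
```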

In brief:

Vertex data -> [vertex program transformation] -> vertex program outputs -> interpolated across a triangle -> fragment program inputs -> [fragment program] -> output color -> framebuffer blending -> framebuffer memory.
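To follow up on the earlier question about attaching the blended result to texture unit 0: since the fragment output lands in the framebuffer, one common approach is to copy the framebuffer into a texture and bind that for the next pass. A rough sketch (resultTex, winWidth/winHeight and the draw helpers are placeholders, not from the original code):

```c
/* pass 1: render the interpolated colour image with blending, no programs */
drawInterpolationPass();

/* copy the framebuffer into a texture; resultTex must already exist
   (created once with glTexImage2D at window size) */
glActiveTextureARB(GL_TEXTURE0_ARB);
glBindTexture(GL_TEXTURE_2D, resultTex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, winWidth, winHeight);

/* pass 2: clear and redraw with the parallax fragment program bound;
   texture unit 0 now holds the blended colour result */
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_FRAGMENT_PROGRAM_ARB);
glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, fpid);
drawScene();
```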

really thanks jwatte,

actually I can’t write a longer shader program because of HW limitations. I have 4 texture units (TUs) on my graphics card. For parallax bump mapping I need 2 of them, and I would need another 9 TUs for the interpolation; therefore I do the interpolation by blending only. But I have finally done it using two passes: in the first, the interpolation using blending is done, and then the final result is blended with the bump-map result. It takes some time, but it already works :slight_smile:
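For anyone finding this later, the two-pass structure described above amounts to something like the following (the draw helpers are placeholders; whether the two results are added or modulated depends on the lighting model you want):

```c
/* pass 1: build the interpolated colour image with framebuffer blending,
   fragment program disabled */
glDisable(GL_FRAGMENT_PROGRAM_ARB);
glEnable(GL_BLEND);
drawInterpolationPass();

/* pass 2: draw the parallax/bump result on top; GL_DST_COLOR modulates it
   with the colour already in the framebuffer (use GL_ONE, GL_ONE to add
   instead) */
glBlendFunc(GL_DST_COLOR, GL_ZERO);
glEnable(GL_FRAGMENT_PROGRAM_ARB);
glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, fpid);
drawGeometry();

glDisable(GL_BLEND);
```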