replacing a single channel with glTexSubImage2D

hey there,
so i have a question about glTexSubImage2D. say, for example, i create an RGBA texture and initialize the whole thing to 0s (in all 4 channels). then, say i use glTexSubImage2D to replace JUST the red channel (by passing in GL_RED as the format). the data i pass in has some high values, but mostly 0s. then, i draw the texture over the entire screen, and what i see is that the screen is red where i specified, and it’s black everywhere else. this happens even if i have drawn lots of geometry behind this quad. since i have blending enabled (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), and the alpha value of the texture was initially all 0s, shouldn’t the texture not show up at all? it seems that by calling glTexSubImage2D, it is also changing my alpha channel so that it’s no longer all zeros. does this make sense? hmmm.
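
A minimal sketch of the setup being described (placeholder names; a 64x64 size is assumed, and tex is assumed to come from glGenTextures):

    GLubyte zeros[64 * 64 * 4] = {0};    /* RGBA, every channel 0                   */
    GLubyte red[64 * 64] = {0};          /* mostly 0s, a few high values go in here */

    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 64, 64, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, zeros);

    /* replace "just" the red channel... */
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 64, 64,
                    GL_RED, GL_UNSIGNED_BYTE, red);

    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    /* ...draw the fullscreen quad: the subloaded region still shows up */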

GL_RED is the incoming format. And this is what the manual says about the conversion to the internalFormat:

“Each element is a single red component. It is converted to floating point and assembled into an RGBA element by attaching 0.0 for green and blue, and 1.0 for alpha. Each component is then multiplied by the signed scale factor GL_c_SCALE, added to the signed bias GL_c_BIAS, and clamped to the range [0,1] (see glPixelTransfer).”

[This message has been edited by Relic (edited 07-02-2003).]

Quick guess: glColorMask(1,0,0,0);
But that will not work for subimaging.

[This message has been edited by M/\dm/ (edited 07-02-2003).]

I thought the solution was obvious from the manual: set the pixel transfer scale and bias to zero for all components that should be zero. Probably not very fast.
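
A minimal, untested sketch of that idea (tex, w, h and redData are placeholder names). Green and blue are attached as 0.0 anyway, so only the attached 1.0 alpha needs forcing back to zero:

    glPixelTransferf(GL_ALPHA_SCALE, 0.0f);   /* incoming alpha * 0 ...       */
    glPixelTransferf(GL_ALPHA_BIAS,  0.0f);   /* ... + 0, so alpha ends up 0  */

    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                    GL_RED, GL_UNSIGNED_BYTE, redData);

    /* restore the defaults so later pixel transfers are unaffected */
    glPixelTransferf(GL_ALPHA_SCALE, 1.0f);
    glPixelTransferf(GL_ALPHA_BIAS,  0.0f);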

hey, thanks guys. i guess the part i suspected, but wasn’t certain about, was that it attaches 0 for green, 0 for blue, and 1 for alpha. it’s a bit unfortunate that those channels get overwritten; just changing the bias and scale parameters gives you no way to retain your original blue/green/alpha channel values (unless they all happened to be the same number, in which case you could do it with the bias). as for glColorMask(1,0,0,0), you’re right, that won’t work for subimaging – it only masks what’s written to the color buffer.

The fastest way would be to write the texture’s G and B back to the region you’re subloading.

Dunno exactly how you’re making your data, but perhaps you could draw R to the framebuffer, then G and B from the texture, then glCopyTexSubImage2D the combined result back into the texture. Ideally you’re already doing half of this, and a simple textured quad to fill G and B will give you what you need.
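
A rough sketch of that sequence (assumed names tex, x, y, w, h; the drawing itself is elided):

    /* 1. draw the red data into the framebuffer, then a textured quad
          supplying the texture's existing G and B for the same region */

    /* 2. copy the combined framebuffer region back into the texture */
    glBindTexture(GL_TEXTURE_2D, tex);
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0,   /* target, mip level           */
                        x, y,               /* offset within the texture   */
                        x, y, w, h);        /* framebuffer region to copy  */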

There are other options, like multitexturing and combining the channels during rendering. So it might be worth restructuring and coming at the problem from the other direction, i.e. don’t have a single texture. It all depends on your broader goals.
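
For instance, one sketch of the two-texture route (GL 1.3-style multitexturing; baseTex and redTex are assumed names):

    glActiveTexture(GL_TEXTURE0);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, baseTex);   /* carries the other channels */
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

    glActiveTexture(GL_TEXTURE1);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, redTex);    /* the single-channel data    */
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_ADD);

    /* ...draw with texture coordinates for both units... */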

My gut feeling tells me that maintaining a texture shadow in system memory, making the changes there and always uploading all channels is going to be much faster than playing with pixel transfer scale and bias.
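
(A sketch of that approach, with assumed names: keep an RGBA copy in system memory, change the red values there, and re-upload all four channels.)

    /* sysCopy is the system-memory RGBA shadow of the texture */
    void setRed(GLubyte *sysCopy, int w, int x, int y, GLubyte value)
    {
        sysCopy[(y * w + x) * 4 + 0] = value;   /* touch red only; G, B, A stay intact */
    }

    void uploadAll(GLuint tex, const GLubyte *sysCopy, int w, int h)
    {
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                        GL_RGBA, GL_UNSIGNED_BYTE, sysCopy);
    }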

Do you ‘create’ these changes on the host processor, or are they the result of some fancy render to texture stuff?

Originally posted by zeckensack:
My gut feeling tells me that maintaining a texture shadow in system memory, making the changes there and always uploading all channels is going to be much faster than playing with pixel transfer scale and bias.

Mine too.

My comment was based on the hope that the red component is already generated on the card. Aside from that, there is very little off-card copying with my proposal. The main overhead (and it may not be an overhead) is copying the luminance data to the intermediate data space.

In general you could both be right; it very much depends on the whys and wherefores of the single-component data.

well, the actual process is rather convoluted. i first use the stencil buffer to render shadow volumes for my object. however, because my object is not a watertight mesh (it’s being generated on the fly, and thus is not perfectly sealed), you can see speckles in the shadow. to get around this, i’m copying the stencil buffer to the cpu, performing a gaussian blur on it, and then copying it back to the stencil buffer to render the (now solid) shadows. this is slow (as expected), so i thought rather than copying the data back to the stencil buffer, i would just have a shadow texture that gets updated each frame: the blurred stencil buffer becomes the alpha channel, and the texture’s color is black. that way i even get some pseudo soft shadows, thanks to the blurring. this worked fine (though it’s still slow), but in the process of debugging i decided to try copying the blurred data to the red channel instead of the alpha, and that’s when i noticed the other channels were being overwritten. so in actuality, i’m writing to the alpha channel of the texture – which works fine for me, because it always overwrites the other channels with RGB=0,0,0 (just what shadows should be). hmm… i hope that made sense
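
A rough sketch of that last step (assumed names shadowTex, w, h; blurStencil is a hypothetical CPU Gaussian-blur helper; <stdlib.h> and the GL header are assumed). Uploading with GL_ALPHA attaches R=G=B=0, which is exactly what a black shadow texture wants:

    GLubyte *stencil = (GLubyte *)malloc(w * h);

    /* 1. read the stencil buffer back to the cpu */
    glReadPixels(0, 0, w, h, GL_STENCIL_INDEX, GL_UNSIGNED_BYTE, stencil);

    /* 2. blur it to fill in the speckles (hypothetical helper) */
    blurStencil(stencil, w, h);

    /* 3. upload it as the texture's alpha; RGB are attached as 0,0,0 */
    glBindTexture(GL_TEXTURE_2D, shadowTex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                    GL_ALPHA, GL_UNSIGNED_BYTE, stencil);

    free(stencil);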