Switching to grayscale

I’m trying to find a fast method to switch from normal color rendering to grayscale, fast meaning both fast grayscale rendering AND fast switching between color and grayscale. For the moment the only method I’ve come up with is a (lousy) one: a glReadPixels with GL_RGB of the color rendering, followed by a glDrawPixels with GL_INTENSITY back to the frame buffer. Pretty inefficient rendering but fast switching, no?

Another method would be to convert all the colors (including textures) to grayscale, which would make switching extremely slow but rendering fast.

Any ideas ?

It might be faster to read back the framebuffer, do the greyscale conversion yourself (the quickest, though not optically correct, way is to add the r, g and b values of each pixel together, divide by 3, and write the result back to r, g and b), and then copy the result back to the frame buffer. That saves you the conversion OpenGL does when copying with GL_INTENSITY, but you still have two copy operations, which will definitely slow it down.
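As a sketch, that in-memory conversion step could look like this (plain C; the function name and the tightly packed RGB8 buffer layout are my assumptions, not something from the post):

```c
/* Average-based greyscale conversion of a tightly packed RGB8 buffer,
   done in place: grey = (r + g + b) / 3, the quick-but-not-optically-
   correct version described above. */
void rgb_to_grey_avg(unsigned char *pixels, int pixel_count)
{
    int i;
    for (i = 0; i < pixel_count; i++) {
        unsigned char *p = pixels + i * 3;
        unsigned char grey = (unsigned char)((p[0] + p[1] + p[2]) / 3);
        p[0] = p[1] = p[2] = grey;
    }
}
```

You would call this on the buffer returned by glReadPixels(..., GL_RGB, GL_UNSIGNED_BYTE, ...) before handing it back to glDrawPixels.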

Depending on how many textures you have, using this method might make it reasonably fast to convert all textures to greyscale when switching. Do something like this:

typedef unsigned char ubyte;  /* assuming ubyte is a typedef like this */

void tex2grey(ubyte *texture, int width, int height)
{
    ubyte r, g, b, gr;
    int pixels = width * height, count;

    for (count = 0; count < pixels; count++) {
        r = *texture;
        g = *(texture + 1);
        b = *(texture + 2);
        gr = (r + g + b) / 3;
        *texture++ = gr;
        *texture++ = gr;
        *texture++ = gr;
    }
}

This should work for GL_RGB8 textures (24-bit) and should be reasonably quick.

A possibility to do both rendering and switching really fast would be to hold all textures in color and greyscale versions and to switch between the different textures (and object colors of course) which will eat up a bunch of memory depending on how many textures you have.

Well, I’ve tried my first method (glReadPixels and glDrawPixels) and, not such a surprise, it completely trashed my frame rate :) ! So that is completely out of the question, as is the accumulation buffer on my GeForce… So is any post-processing method that involves moving frame buffer data around.

I’m begging for a hardware accelerated implementation of the OpenGL 1.2 Imaging Subset !!!

Ummm, looks like a job for the NV_vertex_program extension, when NVIDIA makes it available.

Taking the average of R, G and B is not the correct way to do greyscale conversion. R, G and B contribute differently to perceived intensity: G contributes the most, R the second most and B the least. I forget the exact numbers, but it’s something like

Color = R*0.3 + G*0.6 + B*0.1

Something like that. Look up the exact #'s in the Colorspace FAQ.
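For reference, the usual weights (ITU-R BT.601) are about 0.299, 0.587 and 0.114. An integer-only version, sketched here with a made-up function name, avoids per-pixel floating-point math:

```c
/* Weighted luminance with fixed-point coefficients:
   77/256 ~ 0.299, 150/256 ~ 0.587, 29/256 ~ 0.114.
   They sum to exactly 256, so white (255,255,255) maps to 255. */
unsigned char luminance601(unsigned char r, unsigned char g, unsigned char b)
{
    return (unsigned char)((77 * r + 150 * g + 29 * b) >> 8);
}
```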


Here are a couple of ideas that might work, but I haven’t tried either one, so no guarantees.

Set up a texture map the size of your screen and set its format to GL_LUMINANCE. Instead of using glReadPixels(), which copies the data to main memory, you then use glCopyTexSubImage2D() to copy your rendered scene from the color buffer into this texture map. The conversion and any memory movement should be done on the graphics card, which should speed it up a lot compared to ReadPixels. Finally, render one big quad onto your screen with this texture map on it.
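Roughly, the sequence might look like this (a fragment only, since it needs a live GL context; greyTex, texW/texH and winW/winH are placeholder names):

```c
/* One-time setup: a luminance texture at least as big as the window
   (power-of-two dimensions on most hardware of this era). */
glBindTexture(GL_TEXTURE_2D, greyTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, texW, texH, 0,
             GL_LUMINANCE, GL_UNSIGNED_BYTE, NULL);

/* Per frame, after rendering the scene in color: */
glBindTexture(GL_TEXTURE_2D, greyTex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, winW, winH);
/* ...then draw one window-sized quad textured with greyTex. */
```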


I am just now learning about the register combiner extension from Nvidia. I don’t know yet how flexible it is, but I do know that it has all of the color information available for you at the pixel level just before it is drawn to the screen. It MAY be possible (anyone?) to mix them into a single grayscale color at this stage of the pipe. If this could work, I think it would be by far your best bet.

Hope some of this helps,

You could use multiple sets of paletted textures, one for the color view and one for the grayscale view. This requires everything in your scene to be textured. Your OpenGL also needs to support the paletted texture extension.
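A sketch of the palette flip, assuming GL_EXT_paletted_texture and 256-entry RGB palettes (palette_to_grey is a hypothetical helper; the weights are the usual luminance ones). The texture’s index data never changes, only the color table does:

```c
/* Build a greyscale copy of a 256-entry RGB8 palette. Switching views
   then means re-uploading one of the two palettes, e.g.
       glColorTableEXT(GL_TEXTURE_2D, GL_RGB8, 256, GL_RGB,
                       GL_UNSIGNED_BYTE, grey_or_color_palette);
   while the GL_COLOR_INDEX8_EXT texel data stays untouched. */
void palette_to_grey(const unsigned char *rgb, unsigned char *grey)
{
    int i;
    for (i = 0; i < 256; i++) {
        unsigned char v = (unsigned char)
            ((77 * rgb[i * 3] + 150 * rgb[i * 3 + 1] + 29 * rgb[i * 3 + 2]) >> 8);
        grey[i * 3] = grey[i * 3 + 1] = grey[i * 3 + 2] = v;
    }
}
```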

I’ve tried it with a texture slightly bigger than the window and with glCopyTexSubImage2D as you suggested, and it’s damn slow: even slower than glReadPixels / glDrawPixels!

I was wondering: games like Quake let the user adjust the luminance and contrast of the display on the fly. Any idea how this is done, whether it works only in fullscreen mode, and whether there is a saturation control (I’m dreaming)?

About register combiners: yes, I think it is the best solution, but I’d like the app I’m working on to run on the greatest possible number of 3D chips. For example, I don’t think the Radeon supports the GL_NV_register_combiners extension, but it has EXT_env_dotproduct3, which looks nice. Looks like I’ll have to code a specific method for each card…
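For the dot3 route, the texture-environment setup might look roughly like this (a fragment, untested, using the EXT_texture_env_combine / EXT_texture_env_dot3 token names). One caveat worth knowing: DOT3 computes 4*(s0 - 0.5) . (s1 - 0.5), so packing the luminance weights into the constant color as 0.5 + w/4 yields luminance minus 0.5, which still needs a +0.5 bias from somewhere (e.g. a second texture unit set to ADD):

```c
/* Fragment only: needs a GL context and the dot3 extension. */
const GLfloat weights[4] = {
    0.5f + 0.30f / 4.0f,   /* red   */
    0.5f + 0.59f / 4.0f,   /* green */
    0.5f + 0.11f / 4.0f,   /* blue  */
    1.0f
};
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_EXT);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_EXT, GL_DOT3_RGB_EXT);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_EXT, GL_TEXTURE);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_EXT, GL_CONSTANT_EXT);
glTexEnvfv(GL_TEXTURE_ENV, GL_TEXTURE_ENV_COLOR, weights);
/* Result is (luminance - 0.5) replicated to RGB; bias it back up
   with a second combine stage before it reaches the framebuffer. */
```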