Can't set glClearColor alpha to 0

I set the default color buffer values with:

glClearColor(0.0, 0.0, 0.0, 0.0);

If I then clear the framebuffer with glClear(GL_COLOR_BUFFER_BIT) and immediately read a pixel anywhere from the screen with glReadPixels(X, Y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, @Pixel), the pixel’s alpha value is always 255 instead of zero.

The Pixel array is just an array [0..3] of byte (this is Pascal, by the way). On Windows XP, I correctly get these values for the pixel array:

Pixel[0] = 0
Pixel[1] = 0
Pixel[2] = 0
Pixel[3] = 0

But in Ubuntu, the pixel values are different:

Pixel[0] = 0
Pixel[1] = 0
Pixel[2] = 0
Pixel[3] = 255

I.e. the alpha isn’t zero. I can’t figure out why. Help?

Well, reading up on the subject a bit, it seems you're not always guaranteed to have destination alpha, so I've found a work-around that doesn't rely on it.

You can have destination alpha if you allocate it on the render target.

For instance if using GLUT:

  glutInitDisplayMode    ( GLUT_RGBA | GLUT_DEPTH | GLUT_STENCIL | ... );

Note the RGBA, not RGB. Or if creating a window via raw X in a visual chosen with glXChooseVisual, include these attributes:

  attribs [n++] = GLX_RGBA ;
  attribs [n++] = GLX_RED_SIZE   ; attribs [n++] = 8 ;
  attribs [n++] = GLX_GREEN_SIZE ; attribs [n++] = 8 ;
  attribs [n++] = GLX_BLUE_SIZE  ; attribs [n++] = 8 ;
  attribs [n++] = GLX_ALPHA_SIZE ; attribs [n++] = 8 ;

or if creating a window via raw X in a visual chosen with glXChooseFBConfig, include these attributes:

      GLX_X_RENDERABLE    , True,
      GLX_DRAWABLE_TYPE   , GLX_WINDOW_BIT,
      GLX_RENDER_TYPE     , GLX_RGBA_BIT,
      GLX_X_VISUAL_TYPE   , GLX_TRUE_COLOR,
      GLX_RED_SIZE        , 8,
      GLX_GREEN_SIZE      , 8,
      GLX_BLUE_SIZE       , 8,
      GLX_ALPHA_SIZE      , 8,
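
A minimal sketch of how that selection (and a quick sanity check of what you were actually granted) might look; this is my own example, the attribute list has to be None-terminated, and the helper name is mine:

  #include <stdio.h>
  #include <GL/glx.h>

  static const int attribs[] = {
      GLX_X_RENDERABLE    , True,
      GLX_DRAWABLE_TYPE   , GLX_WINDOW_BIT,
      GLX_RENDER_TYPE     , GLX_RGBA_BIT,
      GLX_X_VISUAL_TYPE   , GLX_TRUE_COLOR,
      GLX_RED_SIZE        , 8,
      GLX_GREEN_SIZE      , 8,
      GLX_BLUE_SIZE       , 8,
      GLX_ALPHA_SIZE      , 8,
      None                               /* the list must be None-terminated */
  };

  /* sketch: pick a matching FBConfig and report the alpha depth it really has */
  static GLXFBConfig pick_config (Display *dpy)
  {
      int count = 0, alpha = 0;
      GLXFBConfig chosen;
      GLXFBConfig *configs = glXChooseFBConfig(dpy, DefaultScreen(dpy), attribs, &count);

      if (configs == NULL || count == 0)
          return NULL;                   /* no config with destination alpha available */

      chosen = configs[0];
      XFree(configs);                    /* the config handle stays valid after freeing the array */

      glXGetFBConfigAttrib(dpy, chosen, GLX_ALPHA_SIZE, &alpha);
      printf("alpha bits actually granted: %d\n", alpha);
      return chosen;
  }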

Then once you’ve got an RGBA framebuffer, all you need to do is clear it (including alpha channel) to the value you want:


  glClearColor( 0,0,0,0 );
  glClear( GL_COLOR_BUFFER_BIT );

or write to it with your drawing so that the alpha becomes what you want.
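
For instance, with blending disabled the alpha you draw with is stored directly, and glColorMask lets you touch only the alpha channel; a rough immediate-mode sketch (my own example, the values are arbitrary):

  /* sketch: write a specific alpha value into the framebuffer by drawing;
     with blending disabled, the fragment's alpha is stored as-is */
  glDisable(GL_BLEND);
  glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE);   /* touch only destination alpha */
  glColor4f(0.0f, 0.0f, 0.0f, 0.25f);                   /* the alpha we want stored */
  glBegin(GL_QUADS);                                    /* full viewport with the default projection */
    glVertex2f(-1.0f, -1.0f);
    glVertex2f( 1.0f, -1.0f);
    glVertex2f( 1.0f,  1.0f);
    glVertex2f(-1.0f,  1.0f);
  glEnd();
  glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);      /* restore the write mask */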

I’m already setting up a 32-bit render target with an alpha size of 8. And it works fine in Windows XP. But not in Ubuntu.

http://www.opengl.org/resources/faq/technical/transparency.htm

15.060 I want to use blending but can’t get destination alpha to work. Can I blend or create a transparency effect without destination alpha?

Many OpenGL devices don’t support destination alpha.

So destination alpha doesn’t seem to be guaranteed.
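
For reference, plain source-alpha blending only ever reads the incoming fragment's alpha, so it keeps working even without a destination alpha channel. A minimal sketch:

  /* sketch: this blend function uses only the source (fragment) alpha,
     so it doesn't need destination alpha in the framebuffer */
  glEnable(GL_BLEND);
  glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
  glColor4f(1.0f, 0.0f, 0.0f, 0.5f);   /* 50% transparent red */
  /* ... draw geometry here ... */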

Did you try calling glFinish before reading the pixel values, or swapping the buffers (if you're using double buffering)? I think the driver may not have finished the GL commands yet, which would explain why you don't get the correct values.
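
Something like this, roughly (the function name is just mine):

  #include <GL/gl.h>

  /* sketch: make sure the clear has really executed before reading back */
  static void read_one_pixel (int x, int y, GLubyte pixel[4])
  {
      glClear(GL_COLOR_BUFFER_BIT);
      glFinish();   /* block until every queued GL command has completed,
                       or swap the buffers first if you're double buffered */
      glReadPixels(x, y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);
  }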

Yes, I’m doing that. Besides, if that was the case, why would it work in Windows XP and not Ubuntu?

Even further reading has revealed that Linux generally doesn’t support 32-bit display settings. They’re generally 24-bit. If the display is 24-bit, wouldn’t that explain why I’m not getting alpha on the window render target?

This can be due to different driver defaults, etc.
Are you comparing the same hardware and the same driver version, with only the OS changed? Please provide details.

Even further reading has revealed that Linux generally doesn’t support 32-bit display settings. They’re generally 24-bit.

I strongly doubt that. Can you provide links or references?
Even if alpha is not used, RGBX is way faster because pixels are aligned to 32 bits.
Are you sure you requested a double-buffered framebuffer?
glutInitDisplayMode ( GLUT_RGBA | GLUT_DOUBLE | GLUT_DEPTH | GLUT_STENCIL | ... );

I’m not sure the problem is about the “bit per pixel” thing, because you use gl function to get the values (so each value will be set in a byte - in your example -, after beeing correctly converted by GL). So whatever type you ask to get the pixel values, you’ll always have the good values stored in the buffer, and correctly “clamped” to the type you choose.

So, I think you’re doing something not in the right way, giving you “lucky” results under windows. Are you sure you read the pixels from the correct buffer (front or back) ?

PS: even if under Linux you're stuck with a 24-bit display, that only applies to RGB; it's not true for RGBA under GL.

As someone else said, post a short GLUT test program and we’ll help you find the problem.

I can tell you that you can allocate RGBA8 framebuffers, both for the window framebuffer and for offscreen FBOs. I have rendered to alpha textures to be composited in a separate pass via both means, with proper results, which means the alpha was written to and preserved properly.
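
If the window framebuffer won't give you alpha, the offscreen route looks roughly like the sketch below (my own example; it assumes the GL 3.0 / ARB_framebuffer_object entry points are loaded, e.g. via GLEW, and the helper name is mine):

  /* sketch: render into an offscreen RGBA8 texture so destination alpha is
     available no matter what the window's visual provides */
  static GLuint make_rgba8_target (int width, int height)
  {
      GLuint fbo = 0, tex = 0;

      glGenTextures(1, &tex);
      glBindTexture(GL_TEXTURE_2D, tex);
      glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                   GL_RGBA, GL_UNSIGNED_BYTE, NULL);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

      glGenFramebuffers(1, &fbo);
      glBindFramebuffer(GL_FRAMEBUFFER, fbo);
      glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                             GL_TEXTURE_2D, tex, 0);

      if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
          return 0;                     /* FBOs not usable on this driver */

      glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
      glClear(GL_COLOR_BUFFER_BIT);     /* here the alpha really is cleared to 0 */
      return fbo;                       /* bind 0 again to get back to the window framebuffer */
  }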

Yeah, if there were no alpha in the buffers, blending wouldn't work so well, I think. The claim that you can't have RGBA buffers under Linux sounds rather exaggerated to me.

It seems I misunderstood the Linux 32-bit thing. It’s just that Linux 24-bit = Windows 32-bit.

Anyway, I’m programming in Pascal with SDL, so I’m not sure how helpful this code is, but I’ve whipped together a quick test program. It sets up the render context, clears the color buffer, reads a single pixel and writes the pixel data to a file, then swaps the buffer and does it again.


PROGRAM test;

USES gl, sdl;

VAR outputfile : text;
    pixel : ARRAY [0..3] OF byte;
    renderwindow : PSDL_Surface;

BEGIN
   { SDL has to be initialised before the GL attributes and video mode are set }
   IF SDL_Init(SDL_INIT_VIDEO) < 0 THEN EXIT;

   { request a double-buffered framebuffer with 8 bits per channel, alpha included }
   SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
   SDL_GL_SetAttribute(SDL_GL_RED_SIZE, 8);
   SDL_GL_SetAttribute(SDL_GL_GREEN_SIZE, 8);
   SDL_GL_SetAttribute(SDL_GL_BLUE_SIZE, 8);
   SDL_GL_SetAttribute(SDL_GL_ALPHA_SIZE, 8);
   renderwindow := SDL_SetVideoMode(800, 600, 32, SDL_OPENGL);
   IF (renderwindow = NIL) THEN BEGIN
      SDL_Quit;
      EXIT;
   END;

   glEnable(GL_TEXTURE_2D);
   glClearColor(0.0, 0.0, 0.0, 0.0);
   glViewport(0, 0, 800, 600);
   glClear(GL_COLOR_BUFFER_BIT);
   glMatrixMode(GL_PROJECTION);
   glLoadIdentity();
   glOrtho(0.0, 800, 600, 0.0, -1.0, 1.0);
   glMatrixMode(GL_MODELVIEW);
   glLoadIdentity();
   glEnable(GL_BLEND);
   glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

   assign(outputfile, 'output.txt');
   rewrite(outputfile);

   glClear(GL_COLOR_BUFFER_BIT);

   pixel[0] := 0;
   pixel[1] := 0;
   pixel[2] := 0;
   pixel[3] := 0;
   glReadPixels(1, 1, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, @pixel);
   writeln(outputfile, pixel[0]);
   writeln(outputfile, pixel[1]);
   writeln(outputfile, pixel[2]);
   writeln(outputfile, pixel[3]);

   SDL_GL_SwapBuffers();

   glClear(GL_COLOR_BUFFER_BIT);

   pixel[0] := 0;
   pixel[1] := 0;
   pixel[2] := 0;
   pixel[3] := 0;
   glReadPixels(1, 1, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, @pixel);
   writeln(outputfile, pixel[0]);
   writeln(outputfile, pixel[1]);
   writeln(outputfile, pixel[2]);
   writeln(outputfile, pixel[3]);

   close(outputfile);

   { the surface returned by SDL_SetVideoMode is freed by SDL_Quit; don't free it manually }
   SDL_Quit;
END.

The results in my output.txt file are:

0
0
0
255
0
0
0
255

Meaning that both times it reads the pixel, the alpha is not set to zero, while the RGB values are. (The color values are read correctly if an image is drawn on the buffer before reading the pixel values.)

Try setting a fullscreen mode and see if that gives you alpha. Also, are you running your desktop in 16bpp?
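
You could also ask the context directly what it actually gave you; here's a rough sketch in C against SDL 1.2 (the Pascal calls are the same, and the function name is just mine):

  #include <stdio.h>
  #include <SDL/SDL.h>
  #include <GL/gl.h>

  /* sketch: report how many alpha bits the context really has;
     call it after SDL_SetVideoMode has created the GL context */
  static void report_alpha_bits (void)
  {
      int   sdl_alpha = 0;
      GLint gl_alpha  = 0;

      SDL_GL_GetAttribute(SDL_GL_ALPHA_SIZE, &sdl_alpha);  /* what SDL negotiated      */
      glGetIntegerv(GL_ALPHA_BITS, &gl_alpha);              /* what the context reports */
      printf("SDL alpha size: %d, GL alpha bits: %d\n", sdl_alpha, (int) gl_alpha);
  }

If both come back as 0, the visual simply has no destination alpha, which would explain the 255s.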

Yeah, and details about your video chip, please.

Fullscreen doesn’t change anything and my desktop is running in 24bpp.

Video card:
Intel Corporation Mobile GM965/GL960 Integrated Graphics Controller (rev 0c)

Do you have "real" drivers for your card?

I tried to check it. On my 32-bit NVIDIA Linux machines I have a 24-bit X display, but I could still get the alpha value. Reading NVIDIA's driver documentation (I know, NVIDIA and not Intel) also confirmed what I said in the previous post: 24-bit color under Linux is the "same" as 32-bit color under Windows, since most vendors stick with 8 bits per color channel even when you ask for 32 bits covering only 3 channels. My NVIDIA X server settings say I'm running a 24-bit desktop, yet all my color buffer channels are 8 bits, and I can read alpha values.

Note: I remember one of my programs running under Linux and getting transparency without GLUT being set up with GLUT_RGBA...

PS: Can you get transparency at all? From what I've read, I'm not sure about it.

If not, it seems you're stuck with 24 bits for everything (RGBA).

Anyway, the most annoying thing is that you can't get at the alpha channel...