GL_R11F_G11F_B10F and FrameBuffers

Can anyone tell me what happens when a framebuffer colour attachment has the internal format GL_R11F_G11F_B10F?
My shaders are generic and designed to work with RGBA framebuffers, and they output different values into the RGB and alpha channels (it's a deferred shader, and each component has a different meaning).

My question is: how does gl_FragData[], with its four-component RGBA output, map to R11F_G11F_B10F? They are both 32 bits in total, but the format does not have an alpha channel, so what happens in my shader when I set the alpha component?

Actually, the output of your fragment shader is 128 bits (four 32-bit floats). In that case the alpha is discarded, and the first three components are converted to the lower-precision packed float format: R and G each get a 5-bit exponent and a 6-bit mantissa, B gets a 5-bit exponent and a 5-bit mantissa, and there is no sign bit, so negative values clamp to zero.
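To get a feel for how much precision survives that conversion, here is a simplified Python sketch of encoding a non-negative float into the unsigned 11-bit/10-bit float layout and decoding it back. It truncates the mantissa and ignores denormals, Inf, and NaN, so it only approximates what a real implementation does (the exact rounding is implementation-defined), but it shows the order of magnitude of the error:

```python
import struct

def float_to_uf(value, mantissa_bits):
    """Round-trip a non-negative float through an unsigned float with a
    5-bit exponent (bias 15) and the given mantissa width. Mirrors the
    R11F_G11F_B10F layout: 6 mantissa bits for R/G, 5 for B, no sign bit.
    Simplified sketch: mantissa is truncated, denormals flush to zero,
    Inf/NaN are not handled."""
    if value <= 0.0:
        return 0.0                             # no sign bit: clamp negatives
    bits = struct.unpack('>I', struct.pack('>f', value))[0]
    exp = ((bits >> 23) & 0xFF) - 127          # unbiased float32 exponent
    mant = bits & 0x7FFFFF                     # 23-bit float32 mantissa
    if exp < -14:                              # below normal range: flush
        return 0.0
    if exp > 15:                               # above range: clamp to max
        exp, mant = 15, 0x7FFFFF
    mant >>= (23 - mantissa_bits)              # truncate to target width
    return (1.0 + mant / (1 << mantissa_bits)) * 2.0 ** exp

print(float_to_uf(0.7, 6))   # R/G channel: 0.6953125
print(float_to_uf(0.7, 5))   # B channel:   0.6875
print(float_to_uf(-0.5, 6))  # negative:    0.0
```

So an output of 0.7 lands within about 1-2% of its value, which is usually fine for HDR colour but worth knowing if you store non-colour data in those channels.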

To be precise, the alpha component written by the shader is passed on to raster operations, where it can still modify the RGB components (alpha test, alpha-to-coverage, blending). Only when the pixel is finally written to the framebuffer is the alpha discarded.
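A minimal sketch of that point, assuming standard GL_SRC_ALPHA / GL_ONE_MINUS_SRC_ALPHA blending: the shader's alpha scales the stored RGB even though the R11F_G11F_B10F attachment never keeps the alpha itself.

```python
def blend_over(src_rgb, src_a, dst_rgb):
    """Fixed-function 'over' blend: src * alpha + dst * (1 - alpha).
    The alpha value itself is consumed here and then discarded when the
    result is written to an alpha-less framebuffer format."""
    return tuple(s * src_a + d * (1.0 - src_a)
                 for s, d in zip(src_rgb, dst_rgb))

# Shader writes (1.0, 0.5, 0.0) with alpha 0.25 over a black background:
print(blend_over((1.0, 0.5, 0.0), 0.25, (0.0, 0.0, 0.0)))
# (0.25, 0.125, 0.0)
```

So if your deferred shader packs non-alpha data into the fourth component, make sure blending, alpha test, and alpha-to-coverage are disabled for that pass, or that data will silently scale your RGB outputs.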

This is no different from using a GL_RGB8 framebuffer.