Fading (Video Card Problem?)

I am fading a window to black in OpenGL simply by: (1) setting the color to black and (2) drawing a quad with the same dimensions as the window in a loop where I start the alpha value at zero and increase it to 1 in small steps. This approach worked great on ATI video cards. However, I noticed when I moved to an NVIDIA card, the resolution of alpha appeared to change. On the ATI card I started alpha at 0 and increased it in steps of 0.01. This worked great. On the NVIDIA card, using a step of 0.01 had no effect at all (the image didn’t fade to black as it did on the ATI card). If I bump the alpha step up to a larger value (I think I had to use 0.1), it does fade, but much faster than I want. Does anyone know what could be causing this? Is there a way around it?
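
A minimal sketch of that fade loop, assuming an immediate-mode OpenGL context is already current; the window dimensions and the presentation call are placeholders for whatever the application already uses:

```c
/* Sketch of the fade described above: repeatedly draw a full-window black
   quad, stepping its alpha from 0 toward 1, over the existing frame.
   Assumes a current immediate-mode GL context; w, h and the present call
   are placeholders. */
#include <GL/gl.h>

void fade_to_black(float w, float h, float alphaStep)
{
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    for (float a = 0.0f; a <= 1.0f; a += alphaStep) {
        glColor4f(0.0f, 0.0f, 0.0f, a);   /* black quad, alpha grows each pass */
        glBegin(GL_QUADS);
            glVertex2f(0.0f, 0.0f);
            glVertex2f(w,    0.0f);
            glVertex2f(w,    h);
            glVertex2f(0.0f, h);
        glEnd();
        glFlush();                        /* present/swap here in the real application */
    }
}
```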

Thanks,

Josh

You mean you draw a black rectangle with 1/100th alpha over and over again until all is black?
That won’t work if the color resolution is coarser than that 1/100th; i.e., if one board were running in high color it would show no effect, while true color would have worked.
You need to use a step that is distinguishable at the number of color bits per channel; in high color that would be 1/32nd.
Add a timer if the fade has to take a specific amount of time, or add a polygon stipple pattern for intermediate partial fades.
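
To make the resolution point concrete, here is a small standalone check; the bit depths are just the common per-channel values for 16-bit and 32-bit modes:

```c
/* Smallest per-channel change a framebuffer can represent: 1 / (2^bits - 1).
   An alpha step much smaller than this can round away to no change at all. */
#include <stdio.h>

int main(void)
{
    int bits[] = { 5, 6, 8 };   /* 16-bit modes use 5 or 6 bits per channel, 32-bit uses 8 */
    for (int i = 0; i < 3; i++) {
        double smallest = 1.0 / ((1 << bits[i]) - 1);
        printf("%d bits/channel -> smallest step ~ %.4f\n", bits[i], smallest);
    }
    return 0;   /* prints ~0.0323, ~0.0159, ~0.0039 */
}
```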

“You mean you draw a black rectangle with 1/100th alpha over and over again until all is black?”

  • Yes, that’s exactly what I’m doing.

Both machines are set to HighColor (32-bit); the only difference is that one machine has an NVIDIA card while the others have ATI. The code I have works on all of the ATI machines (five different machines), but it doesn’t work on the NVIDIA machine.

My issue isn’t necessarily a timing issue; it’s more of a resolution issue. That is, the exact time it takes to fade is irrelevant, but the way it looks while it fades is very important. Using 0.01 for the alpha steps (i.e., glColor4f(0.0f, 0.0f, 0.0f, 0.01f)) works really well, except that the NVIDIA machine doesn’t appear to recognize 0.01 as a valid alpha value. When using 0.01 on the NVIDIA machine, it appears to draw nothing, no matter how long I loop.

Thanks for the first reply!

With HighColor I meant 16 bits per pixel. 32 bits is TrueColor.
Ok, sounds like a driver problem then.
One way that would always work, even in high color, would be to grab the image you want to fade out as a texture and loop over this:

  • clear the screen to the final color
  • draw that as an RGB textured quad with decreasing alpha in the color.
    It might be slower, but it can be used for all sorts of other effects once you have the image as a texture.

I have downloaded and installed the latest version of the NVIDIA driver.

Also, wouldn’t doing as you suggested still give me the same problem? I understand what you are suggesting, but it seems that I’m still going to have discrepancies between the ATI machines and the NVIDIA machines, due to the fact that the NVIDIA machines refuse to recognize small values of alpha.

Thanks for the help!

No, you misunderstood.
I meant: clear with black, then draw the textured quad and let the glColor4f that modulates the texture go from (1.0, 1.0, 1.0, 1.0), white with opaque alpha, down to (1.0, 1.0, 1.0, 0.0), fully transparent, i.e. the black background.
Same effect, different implementation; no accumulation of small color values is used in my proposal.

Right, I understood what you meant.

However, what values are you proposing that I use for alpha? Just 1.0 and 0.0? I would need to loop over them, right?

Yes. This:
“draw the textured quad and let the glColor4f that modulates the texture go from (1.0, 1.0, 1.0, 1.0), white with opaque alpha, down to (1.0, 1.0, 1.0, 0.0)”
means: interpolate the white color’s alpha in as many steps as you want.
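
A sketch of that textured-quad fade, assuming the scene has already been captured into a texture; sceneTex, the window dimensions, and the presentation call are placeholders:

```c
/* Sketch of the suggestion above: the scene is held in an RGB texture
   (sceneTex). Each frame, clear to the final color (black) and draw the
   texture with glColor4f modulating it from opaque white down to fully
   transparent, so no tiny increments accumulate in the framebuffer. */
#include <GL/gl.h>

void fade_with_texture(GLuint sceneTex, float w, float h, int steps)
{
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, sceneTex);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    for (int i = steps; i >= 0; i--) {
        float a = (float)i / (float)steps;     /* interpolates 1.0 -> 0.0 */

        glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);          /* start each frame from black */

        glColor4f(1.0f, 1.0f, 1.0f, a);        /* white with fading alpha modulates the texture */
        glBegin(GL_QUADS);
            glTexCoord2f(0.0f, 0.0f); glVertex2f(0.0f, 0.0f);
            glTexCoord2f(1.0f, 0.0f); glVertex2f(w,    0.0f);
            glTexCoord2f(1.0f, 1.0f); glVertex2f(w,    h);
            glTexCoord2f(0.0f, 1.0f); glVertex2f(0.0f, h);
        glEnd();
        glFlush();                             /* present/swap here in the real application */
    }
}
```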

I completely understand what you are suggesting. But therein lies my problem: I can’t get as “smooth” a fade as I want unless I use alpha values like 0.01, and values such as 0.01 are ignored by NVIDIA cards yet work perfectly on ATI cards.

ATI and NVIDIA cards work differently.

Your method works for the ATI card only. For the NVIDIA card you should not increase the alpha value in the loop.

However, I don’t think the way the ATI cards work is the correct way. ATI and NVIDIA cards worked the same way until ATI released the Catalyst driver, which over-corrected a bug in the previous version. This change causes some OpenGL-based applications to work differently than before.

Every time you start a draw session, ATI clears the color buffer, so your previous drawing is completely erased. Therefore, you cannot accumulate color by repeated drawing; that is why you have to increase the alpha at each step.

But an NVIDIA card clears the color buffer only when you create a new OpenGL drawing context. When you start a draw session, NVIDIA does not clear the color buffer, so your current drawing is blended with the color already in the buffer.

In early versions of ATI’s driver, the color buffer was never cleared, even when you created a new drawing context. Even when you tried to clear the color buffer, the garbage left in it could sometimes still show. This was clearly not correct, but ATI overreacted to this bug and disabled the color-buffer accumulation effect altogether, which is commonly used for effects such as motion blur.

However, whether the color buffer should be cleared or not is not specified in the OpenGL spec, which leaves the issue open for each concrete implementation. I think the OpenGL spec should cover this point.
