I’ve been tweaking this for a month now, and I’m sure an expert can tell me what’s wrong at a quick glance. What am I doing wrong? I have a PNG with transparency. It loads correctly, and I can clearly see the alpha values when I inspect the loaded data in memory. I’ve tried every combination of blending functions and texture environment settings I can think of, but it never displays correctly: the transparent parts usually render as black. Attached is a minimal example. It’s an Xcode project, but it should compile and run on any platform as long as you link against OpenGL, GLUT, and libpng. I excluded the PNG from the zip to save space; any PNG with transparency should work.
See this line in your main function:
gluBuild2DMipmaps(GL_TEXTURE_2D, 3, ...
Quick fix: change the “3” to a “4” (or GL_RGBA, whichever you prefer).
Deeper explanation: that “3” is the internalformat argument, and it tells OpenGL to build a 3-component (RGB) texture. An RGB texture stores no alpha, so OpenGL takes the R, G and B components from your data as they are, throws away the alpha channel, and returns a constant alpha of 1.0 (i.e. 255) whenever the texture is sampled. This behaviour is spelled out in the glTexImage2D documentation and the spec.
So you’re effectively discarding the alpha channel in the image and telling OpenGL to make the texture fully opaque. Specifying “4” instead tells OpenGL to keep the alpha channel from your image. Passing a bare component count is a legacy convention, but gluBuild2DMipmaps is itself legacy GLU, so it’s a safe bet there; GL_RGBA says the same thing more explicitly.
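For reference, here’s a minimal sketch of the corrected upload together with the blend state you need to actually see the transparency. This is not your exact code: it assumes an active OpenGL context, and `pixels`, `width`, and `height` are placeholders for the RGBA data you already decoded with libpng.

```c
/* Sketch only: assumes an active OpenGL context and that `pixels`
 * points to width*height RGBA bytes decoded from the PNG by libpng. */
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);

/* GL_RGBA (not 3) as the internalformat keeps the source alpha channel. */
gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGBA, width, height,
                  GL_RGBA, GL_UNSIGNED_BYTE, pixels);

/* Standard alpha blending, so transparent texels let the background through. */
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_TEXTURE_2D);
```

Note that blending alone would never have fixed this: with an RGB internal format, every sample comes back with alpha = 1.0, so glBlendFunc has nothing to work with.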
Thanks, that fixed it!