Give alpha to an object in OpenGL ES

I am new to OpenGL, and this is my first post, so if anything is wrong please correct me. What I want to achieve is to fade a texture's alpha from 1.0 to 0.0 (OpenGL ES 1.0).

I have been searching and only found "how to load alpha in images", but I can't find how to apply alpha to an object.

I have tried:

    alpha += 0.002f;
    if(alpha > 1) alpha = 1f;
    gl.glAlphaFunc(GL10.GL_EQUAL, alpha);

But it doesn’t work.

How can I give an alpha value (to make a fade effect) to an object/texture?

This is my class:

public class Palabra {

    public float posX = 0f;
    public float posY = 0f;

    public float scaleX = 2f;
    public float scaleY = 2f;

    public float alpha = 0.5f;

    public State estado;

    private FloatBuffer vertexBuffer, texBuffer; // Buffers for vertex-array and tex-coords

    private float[] vertices = { // Vertices for a face
        0.0f, 0.0f, 0.2f,  // 0. left-bottom-front
        2.1f, 0.0f, 0.2f,  // 1. right-bottom-front
        0.0f, 0.35f, 0.2f, // 2. left-top-front
        2.1f, 0.35f, 0.2f  // 3. right-top-front
    };

    private float[] texCoords = { // Texture coords for the above face
        0.00f, 0.0f, 1.0f, // A. left-bottom
        1.00f, 0.0f, 1.0f, // B. right-bottom
        0.00f, 0.2f, 1.0f, // C. left-top
        1.00f, 0.2f, 1.0f  // D. right-top
    };

    public enum State {
        AWESOME, GREAT, NORMAL, MISS, PERFECT
    }

    public Palabra(int a) {
        // Setup vertex-array buffer. Vertices in float. A float has 4 bytes
        ByteBuffer vbb = ByteBuffer.allocateDirect(vertices.length * 4);
        vbb.order(ByteOrder.nativeOrder()); // Use native byte order
        vertexBuffer = vbb.asFloatBuffer(); // Convert from byte to float
        vertexBuffer.put(vertices);         // Copy data into buffer
        vertexBuffer.position(0);           // Rewind

        // Setup texture-coords-array buffer, in float. A float has 4 bytes (NEW)
        ByteBuffer tbb = ByteBuffer.allocateDirect(texCoords.length * 4);
        tbb.order(ByteOrder.nativeOrder());
        texBuffer = tbb.asFloatBuffer();
        texBuffer.put(texCoords);
        texBuffer.position(0);

        posX = 0.00f;
        switch (a) {
            case 0: posY = 1f;   break;
            case 1: posY = 1.5f; break;
            case 2: posY = 2f;   break;
            case 3: posY = 2.5f; break;
            case 4: posY = 3f;   break;
        }
    }

    public void draw(GL10 gl) {
        gl.glTranslatef(posX, posY, 0f);
        gl.glScalef(scaleX, scaleY, 0f);
        scaleX -= 0.02f;
        scaleY -= 0.02f;
        posX += 0.02f;
        if (scaleX < 1) {
            scaleX = 1f;
            posX -= 0.02f;
        }
        if (scaleY < 1) scaleY = 1f;

        switch (estado) {
            case AWESOME: gl.glTranslatef(0.0f, 0.2f, 0f); break;
            case GREAT:   gl.glTranslatef(0.0f, 0.4f, 0f); break;
            case NORMAL:  gl.glTranslatef(0.0f, 0.6f, 0f); break;
            case MISS:    gl.glTranslatef(0.0f, 0.8f, 0f); break;
            case PERFECT: break;
        }

        gl.glFrontFace(GL10.GL_CCW);    // Front face in counter-clockwise orientation
        gl.glEnable(GL10.GL_CULL_FACE); // Enable cull face
        gl.glCullFace(GL10.GL_BACK);    // Cull the back face (don't display)

        gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer);
        gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);  // Enable texture-coords-array (NEW)
        gl.glTexCoordPointer(3, GL10.GL_FLOAT, 0, texBuffer); // Define texture-coords buffer (NEW)

        // front
        gl.glBindTexture(GL10.GL_TEXTURE_2D, TextureLoader.palabrasIDs[0]);

        gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 4);

        gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY); // Disable texture-coords-array (NEW)
    }
}
I'm not very familiar with OpenGL ES, so I don't know the differences. I will assume that most of the stuff works like in desktop GL.

Your fragment shader has to output a color value with an alpha channel. In case you somehow don't use shaders, either your texture has to have an alpha channel, or maybe you already saw a demo program where a color value is assigned per vertex; you can add an alpha channel to that.
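If you stay on the fixed-function path, the per-vertex color route means building a color array with an alpha component. A minimal sketch of preparing such a buffer (`VertexColors` and `buildColorBuffer` are illustrative names, not part of any API; the buffer would be handed to `gl.glColorPointer` after `gl.glEnableClientState(GL10.GL_COLOR_ARRAY)`):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

public class VertexColors {
    // One RGBA color per vertex; the 4th component is the alpha
    static FloatBuffer buildColorBuffer(int vertexCount, float alpha) {
        float[] colors = new float[vertexCount * 4];
        for (int i = 0; i < vertexCount; i++) {
            colors[i * 4]     = 1f;    // R
            colors[i * 4 + 1] = 1f;    // G
            colors[i * 4 + 2] = 1f;    // B
            colors[i * 4 + 3] = alpha; // A
        }
        // Direct buffer in native byte order, as GL client arrays require
        ByteBuffer bb = ByteBuffer.allocateDirect(colors.length * 4);
        bb.order(ByteOrder.nativeOrder());
        FloatBuffer fb = bb.asFloatBuffer();
        fb.put(colors);
        fb.position(0); // Rewind
        return fb;
    }

    public static void main(String[] args) {
        FloatBuffer colors = buildColorBuffer(4, 0.5f);
        // In GL10 code this would then be used as:
        //   gl.glEnableClientState(GL10.GL_COLOR_ARRAY);
        //   gl.glColorPointer(4, GL10.GL_FLOAT, 0, colors);
        System.out.println(colors.get(3)); // alpha of the first vertex
    }
}
```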

This (`glAlphaFunc`) is definitely NOT what you want. Alpha testing discards fragments depending on their alpha value; it doesn't fade anything.

What you want to do to set up alpha blending is something like this:

    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
Thanks for your reply. It is very similar to the desktop; I am using OpenGL ES 1.0 (which doesn't support shaders until 2.0).

I solved it with:

    gl.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA);
    gl.glColor4f(1f, 1f, 1f, alpha);
    alpha -= 0.002f;
    if (alpha < 0) alpha = 0f;

That does the trick, but I don't understand exactly how it works. Am I giving that color to every pixel?
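For reference, the fade logic above pulled out as a self-contained, per-frame step (`Fade` is just an illustrative helper, not GL code; the returned value is what gets passed to `gl.glColor4f`):

```java
public class Fade {
    // Decrease alpha by delta each frame, clamping at fully transparent
    static float step(float alpha, float delta) {
        alpha -= delta;
        if (alpha < 0f) alpha = 0f; // clamp so the object stays invisible once faded
        return alpha;
    }

    public static void main(String[] args) {
        float a = 1.0f;
        // At 0.002 per frame, alpha reaches 0 after 500 frames and stays there
        for (int frame = 0; frame < 600; frame++) {
            a = step(a, 0.002f);
        }
        System.out.println(a);
    }
}
```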

The alpha value is part of the output color for each pixel. If you use a texture as in your code snippet above, every fragment gets a color value from the texture (including alpha, if your texture
has an alpha channel). In your code, where you use glColor4f, you assign a color value with an alpha value to all vertices. The color value gets interpolated across the triangles and combined
with the color from the texture.
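Under ES 1.0's default texture environment (GL_MODULATE), "combined" means a per-component multiply of the interpolated vertex color and the texel. A sketch of just that arithmetic (`Modulate` is a hypothetical helper, not a GL call):

```java
public class Modulate {
    // GL_MODULATE: out = vertexColor * texel, component by component (RGBA)
    static float[] modulate(float[] vertexColor, float[] texel) {
        float[] out = new float[4];
        for (int i = 0; i < 4; i++) {
            out[i] = vertexColor[i] * texel[i];
        }
        return out;
    }

    public static void main(String[] args) {
        // glColor4f(1, 1, 1, 0.5f) over an opaque texel: RGB passes through
        // unchanged, alpha is scaled to 0.5
        float[] result = modulate(new float[]{1f, 1f, 1f, 0.5f},
                                  new float[]{0.8f, 0.2f, 0.1f, 1f});
        // result = {0.8, 0.2, 0.1, 0.5}
    }
}
```

This is why setting the color to white with a varying alpha fades the texture without tinting it.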

glEnable(GL_BLEND) enables blending, so that color values of new pixels are combined with the ones in the frame buffer instead of overwriting them.

glBlendFunc specifies how the pixel colors are to be combined. The first argument is the source factor: GL_SRC_ALPHA means that the source color is multiplied with the source alpha value. The second argument is the destination factor: GL_ONE_MINUS_SRC_ALPHA means that the destination color value is multiplied with the inverse (one minus) of the source alpha value.
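In other words, the combination with GL_SRC_ALPHA / GL_ONE_MINUS_SRC_ALPHA is out = src * srcAlpha + dst * (1 - srcAlpha). A sketch of that arithmetic for a single pixel (`Blend` is an illustrative helper; real blending happens in the framebuffer, not in your code):

```java
public class Blend {
    // src = incoming fragment RGBA, dst = RGB already in the framebuffer
    static float[] blend(float[] src, float[] dst) {
        float a = src[3]; // source alpha
        float[] out = new float[3];
        for (int i = 0; i < 3; i++) {
            // GL_SRC_ALPHA * src + GL_ONE_MINUS_SRC_ALPHA * dst
            out[i] = src[i] * a + dst[i] * (1f - a);
        }
        return out;
    }

    public static void main(String[] args) {
        // Half-transparent red drawn over an opaque blue background:
        float[] out = blend(new float[]{1f, 0f, 0f, 0.5f},
                            new float[]{0f, 0f, 1f});
        // out = {0.5, 0.0, 0.5}: an even mix of the two colors
    }
}
```

As alpha approaches 0, the source contribution vanishes and the framebuffer color wins, which is exactly the fade-out effect.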

Alpha test is for testing fragment alpha values against an alpha reference value. It is separate functionality, and in OpenGL ES 2.0 it is not even deprecated; it's simply gone. Instead, use discard in your shaders: place the discard last and test alpha against a fixed value or uniform to leverage fixed-function hardware (YMMV). Blend func describes how the color output by your shader is combined with the color already in the pixel. The arguments supplied should always be tokens that specify simple multipliers built from the constants ZERO and ONE, or from color, alpha, 1-color, or 1-alpha (taken from the source (shader) or the destination (framebuffer)).

The glEnable for each of these is quite different: alpha test is not the same enable as alpha blend, and shader discard requires no enable at all.