Odd Texture

I’ve made a map in OpenGL, and one of the features I want to add is a pre-programmed camera angle that makes it look like you’re looking through an aeroplane window. Something like this:

Obviously instead of viewing the sky you will see my map! Is there any way I could use a similar image as a texture, basically with a hole in it so the map can be seen through it?

I have only ever used textures on a flat 2D square. I’m currently loading images and creating the texture using DevIL.

Thanks for any advice.

You could set the alpha value of the transparent portion of the window to 0, and then set up a fragment shader to discard fragments with 0 alpha.

This would allow you to do two render passes: the first with the window image, and the second with the map. If you arrange the depth buffer so that the window is “closer” than the map, then the map will only be drawn where the window fragments were discarded.
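As a sketch of that discard approach (fixed-function-era GLSL; the sampler name is illustrative, not from the original post):

```glsl
// Hypothetical fragment shader: sample the window texture and throw away
// fully transparent fragments so the map behind remains visible there.
uniform sampler2D windowTex;

void main() {
    vec4 color = texture2D(windowTex, gl_TexCoord[0].st);
    if (color.a < 0.01)   // treat (near-)zero alpha as the cut-out pane
        discard;          // nothing is written, so the map shows through
    gl_FragColor = color;
}
```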

Of course, you could use glBlendFunc to achieve the same result without bringing shaders into the equation. But shaders are more flexible, so that’s how I’d do it.

Yes, of course you can do that.

You need a mask that tells OpenGL which parts of the texture are supposed to be “solid” and which “see-through”. Then you need to set up the alpha test properly (google for it, there are plenty of tutorials about it).

Then you can render a polygon with your texture and the mask in the alpha channel, and OpenGL will discard the pixels you don’t want, so they no longer block the view.

The functions you are looking for are:
glEnable(GL_ALPHA_TEST);       // to enable the test
glAlphaFunc(GL_GREATER, 0.5f); // to configure the test, see some tutorial about the details

Hope that helps you,
Jan.

Damn, you were just a bit faster.

What Lindley said is correct. However, you do not need to use shaders at all. If you have never used them before, it is easier to do it with the alpha test: that’s two lines of code to add (plus a modification of your texture). Of course, if you already use shaders, it’s probably straightforward (and more powerful/flexible) to use them right away.

Jan.

I think what you’re talking about is something called portals.
It’s possible to do with two passes using only standard OpenGL 1.1 stuff.

  1. Render the map normally.
  2. Render the portal surface to the stencil buffer.
  3. While using the stencil mask, render what’s on the other side; remember to use a clip plane to remove everything behind the camera, and keep the second camera in sync with your main one.
  4. Cover the portal edges with whatever effect you like using alpha blending.
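A hedged sketch of those four steps in GL-1.x-style calls (the gl* functions are standard OpenGL; drawMap(), drawPortalSurface(), drawOtherSide() and drawPortalEdges() are placeholder callbacks, not from the post):

```cpp
// Step 1: render the map normally.
drawMap();

// Step 2: render the portal surface into the stencil buffer only.
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_ALWAYS, 1, 0xFF);                   // every portal fragment writes 1
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); // no colour writes in this pass
drawPortalSurface();
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

// Step 3: draw the far side only where the stencil was marked.
glStencilFunc(GL_EQUAL, 1, 0xFF);                    // pass only inside the portal
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
glEnable(GL_CLIP_PLANE0);                            // clip geometry behind the portal
// ... glClipPlane(GL_CLIP_PLANE0, planeEq); with the portal's plane equation
drawOtherSide();                                     // second camera, kept in sync
glDisable(GL_CLIP_PLANE0);
glDisable(GL_STENCIL_TEST);

// Step 4: blend a frame/edge effect over the portal border.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
drawPortalEdges();
glDisable(GL_BLEND);
```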

Thanks everyone. I think I like the sound of Jan’s suggestion as it seems the simplest. I’ve now created the texture I want to use: it consists of a border with the centre cut out (in the shape of a rectangle). Can anyone point me in the direction of some sample code that implements this using the alpha channel? I can only seem to find complex examples and I can’t really get my head around them.

Thanks again.

Well, I found out that you can apply the alpha channel to the image outside of the program, which makes things a lot easier for me, so I have created the alpha channel in Photoshop.

I’ve then loaded the image using DevIL and applied it as a texture. This is the code I have used:


void loadImage(){
	
	glEnable(GL_TEXTURE_2D); //Enable Texture Mapping
	ilInit();
	
	//Load world texture
	ilGenImages(1, &imageName);					 //Generate image name 
	ilBindImage(imageName);						 //Bind image name 
	ilLoadImage((const ILstring)"E:\\BlankWindow256.bmp"); //Note the escaped backslash in the path
	ilConvertImage(IL_RGB, IL_UNSIGNED_BYTE);    //Convert image into RGB colour format and check its in unsigned byte format

	glGenTextures(1, &textureName);			
	glBindTexture(GL_TEXTURE_2D, textureName); //Bind texture name 
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); //Use linear interpolation for the magnification and minification filters 
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);   
	glTexImage2D(GL_TEXTURE_2D, 0, ilGetInteger(IL_IMAGE_BPP), ilGetInteger(IL_IMAGE_WIDTH),
		ilGetInteger(IL_IMAGE_HEIGHT), 0, ilGetInteger(IL_IMAGE_FORMAT), GL_UNSIGNED_BYTE,
		ilGetData()); //Define all texture parameters


	glEnable(GL_ALPHA_TEST);
	glAlphaFunc(GL_GREATER, 0);

	glColor4f(1.0, 1.0, 1.0, 1.0);
	glBindTexture(GL_TEXTURE_2D, textureName);

	glBegin(GL_QUADS);
	glTexCoord2f(0.0, 0.0); glVertex3f(200, -280, -350); 
	glTexCoord2f(0.0, 1.0); glVertex3f(200, -130, -350); 
	glTexCoord2f(1.0, 1.0); glVertex3f(340, -130, -350); 
	glTexCoord2f(1.0, 0.0); glVertex3f(340, -280, -350);
	glEnd();

	glDisable(GL_TEXTURE_2D);
	glDisable(GL_ALPHA_TEST);
		
}

This displays the image on screen as a texture as expected, but no part of it is transparent. I’ve tried making the file a 32-bit .bmp, a .tga and a .png, but I get the same result with each of them, so I don’t think the problem lies with the DevIL library; I think either my code above has something wrong with it, or I haven’t created the alpha channel correctly.

I’ve also tried various different parameters in glAlphaFunc() but still nothing. I either get the entire image opaque, or the entire image transparent so I can’t see it at all.

Here’s a screenshot of the image in photoshop if it helps:
http://img209.imageshack.us/img209/3521/photoshopsnapqw5.jpg

Any ideas?
Thanks very much

I’m getting there but I’m stuck again. Instead of using glAlphaFunc() I’m using glBlendFunc() after reading a thread from someone with a similar problem.

My code is:


	glEnable(GL_BLEND);
	glBlendFunc(GL_SRC_ALPHA, 1.0);

	glBindTexture(GL_TEXTURE_2D, textureName);
	glColor3f(1.0f, 1.0f, 1.0f);

	glBegin(GL_QUADS);
	glTexCoord2f(0.0, 0.0); glVertex3f(200, -280, -350); 
	glTexCoord2f(0.0, 1.0); glVertex3f(200, -130, -350); 
	glTexCoord2f(1.0, 1.0); glVertex3f(340, -130, -350); 
	glTexCoord2f(1.0, 0.0); glVertex3f(340, -280, -350);
	glEnd();

	glDisable(GL_TEXTURE_2D);
	glDisable(GL_BLEND);


This code makes part of it transparent, but not the part I want: it makes the window frame partly transparent rather than the window! Does this mean my code is wrong, or that the way I have created the alpha channel is wrong?

Also, the window frame is only partly transparent, not completely see-through. Why is this?

Thanks for any advice.

Try
glBlendFunc(GL_SRC_ALPHA,GL_ONE_MINUS_SRC_ALPHA);
or
glBlendFunc(GL_ONE_MINUS_SRC_ALPHA,GL_SRC_ALPHA);
if your alpha channel is inverted

I’ve tried that as well, NiCo, but it has no effect. The only combinations of glBlendFunc that make anything transparent are
glBlendFunc(GL_DST_ALPHA, 1.0); and glBlendFunc(GL_SRC_ALPHA, 1.0);. Both of these make the window frame transparent but not the window itself, as you can see in this screenshot:

My window is in the bottom right of my OpenGL screen; you can just about see that the window is still plain white, while the window frame is translucent. Also in the image above are the two different images I’m using in my program. In both I have tried to add the alpha channel, but I’ve done it differently in each image: one makes the alpha channel cover the frame and the other covers the window, yet both give the same result :confused:

Any ideas what’s going on?

  1. Create the texture so that alpha = 1.0f (= 255 in unsigned byte) for the window frame and alpha = 0 for the window.

  2. Draw the scene.

  3. Set the texture environment to replace (or modulate with a white color)
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);

  4. glEnable(GL_BLEND) with glBlendFunc(GL_SRC_ALPHA,GL_ONE_MINUS_SRC_ALPHA);

  5. Draw the window texture.

Thanks again NiCo, but it’s still not working. I’m beginning to think it might be the actual image rather than the code now, but I’ve tried so many formats and options I don’t know what else to do. Is there anything else I can try?

From the code it looks like you are loading a Windows bitmap. Most of the time, Windows BMP files are RGB only. That means in your texture all alpha data would be replaced with 1.0 (unsigned byte 0xFF).

You must use an image file format which contains all four channels, or this is never going to work. Try the simple TGA format with 32-bit uncompressed data, which is really easy to load. It even has the bitmap origin oriented the same way as OpenGL by default.

And you’re using an image library to load that, so you might not know what happens in there under the hood.
Have a look at the texture image’s fields before you convert it with ilConvertImage() to see what format you actually loaded. Look for an alpha channel!

If this function
ilConvertImage(IL_RGB, IL_UNSIGNED_BYTE); //Convert image into RGB colour format and check its in unsigned byte format
actually does what the comment says, check if there is an IL_RGBA option and use that instead.

Check if in this call the parameters to glTexImage2D actually contain alpha:
glTexImage2D(GL_TEXTURE_2D, 0, ilGetInteger(IL_IMAGE_BPP), ilGetInteger(IL_IMAGE_WIDTH), ilGetInteger(IL_IMAGE_HEIGHT), 0, ilGetInteger(IL_IMAGE_FORMAT), GL_UNSIGNED_BYTE, ilGetData()); //Define all texture parameters

That is, ilGetInteger(IL_IMAGE_BPP) must be 4; better still, simply use GL_RGBA8 instead.
And check whether ilGetInteger(IL_IMAGE_FORMAT) reports GL_RGBA or something else with alpha in it.
If either of those two parameters does not include an alpha channel, this ain’t going to work.

Then take a step back and use the earlier code with GL_ALPHA_TEST; just add the GL_REPLACE TexEnv mode to get pure texture colors.
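Putting those points together, here is a hedged sketch of the corrected load path (it assumes the same DevIL setup as the original loadImage() and is untested here; only the marked lines differ from the posted code):

```cpp
// Keep the alpha channel end-to-end: convert to RGBA instead of RGB...
ilConvertImage(IL_RGBA, IL_UNSIGNED_BYTE);  // was IL_RGB, which drops alpha

// ...and request an RGBA internal format from OpenGL explicitly.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8,    // was ilGetInteger(IL_IMAGE_BPP)
             ilGetInteger(IL_IMAGE_WIDTH), ilGetInteger(IL_IMAGE_HEIGHT), 0,
             GL_RGBA, GL_UNSIGNED_BYTE, ilGetData());

// Then the original alpha-test approach should work as intended.
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE); // pure texture colours
glEnable(GL_ALPHA_TEST);
glAlphaFunc(GL_GREATER, 0.5f);              // discard the transparent pane
```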