slow 2D-texturing in GLUT

Hi!
I have a problem with GLUT when drawing 2D textures. I’m using an orthographic projection with double buffering and depth testing, and I use LodePNG to load PNG images with an alpha channel.
Drawing and texturing a quad works fine, but it takes about 2 seconds. Starting my program with about 5 images on the main screen takes about 4-5 seconds.
I can’t imagine why… I was able to trace the resource-intensive calls. I thought they would be somewhere in the texture loading, because I use a lot of vectors, but the problem is the drawing itself.

void DrawImageQuad(LodePNG::Decoder B_INFO, int xps, int yps, int imgnr)
{
	glBindTexture(GL_TEXTURE_2D, texture[0][imgnr]);
	/*glBegin(GL_QUADS);
		glTexCoord2f(0.0f, 0.0f); glVertex2f(xps, yps); // Left top
		glTexCoord2f(0.0f, 1.0f); glVertex2f(xps, yps + B_INFO.getHeight()); // Left bottom
		glTexCoord2f(1.0f, 1.0f); glVertex2f(xps + B_INFO.getWidth(), yps + B_INFO.getHeight()); // Right bottom
		glTexCoord2f(1.0f, 0.0f); glVertex2f(xps + B_INFO.getWidth(), yps); // Right top
	glEnd();*/
}

Just by commenting out these lines, everything works without any delay. But of course it no longer draws the images. ^^
Is there a faster way to draw them? Or do I have to set or unset some option I don’t know about?
By the way: drawing in a 3D world with a normal (perspective) projection is just as slow. And when I switch to single buffering and use glFinish() instead of glFlush(), I can actually watch the program draw every image, line by line.

Thanks in advance.

Seems to me you have no hardware acceleration. Make sure you have the latest OpenGL drivers installed, etc. What platform are you working on?

No hardware acceleration? Hmm… I can play games that require hardware acceleration, I think.
I’m running Ubuntu 7.10 with an ATI X1300 and the restricted graphics driver.
Where can I get the latest OpenGL driver? Google is just giving me confusing results referring to forums.

Edit: fglrxinfo should tell you the version of your OpenGL driver (if there is a driver)
Here’s what it tells me:

display: :0.0 screen: 0
OpenGL vendor string: ATI Technologies Inc.
OpenGL renderer string: Radeon X1300 / X1550 Series
OpenGL version string: 2.0.6473 (8.37.6)

Seems fine to me. Maybe I should try the open-source ATI driver?

Ok, now see if GLUT actually gets an accelerated context. You can query it with glGetString(GL_RENDERER) once the window has been properly initialized, etc.
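
For reference, a minimal sketch of that check (assuming a plain GLUT program; the window title is arbitrary):

/* Query the renderer string once a GLUT window (and hence a GL context)
   exists. "Mesa GLX Indirect" or anything mentioning "software" means
   no hardware acceleration. */
#include <GL/glut.h>
#include <stdio.h>

int main(int argc, char **argv)
{
	glutInit(&argc, argv);
	glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH);
	glutCreateWindow("renderer check");

	printf("GL_RENDERER: %s\n", (const char *)glGetString(GL_RENDERER));
	printf("GL_VENDOR:   %s\n", (const char *)glGetString(GL_VENDOR));
	printf("GL_VERSION:  %s\n", (const char *)glGetString(GL_VERSION));
	return 0;
}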

The Radeon X1000 series does not support non-power-of-two mipmapping.

And the GL_REPEAT wrap mode (which is the default for new textures) is also not accelerated for NPOT textures.
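
If you do need NPOT sizes, a possible workaround (a sketch, assuming a texture object named tex that was created elsewhere) is to move off the GL_REPEAT default and stay on non-mipmapped filters:

glBindTexture(GL_TEXTURE_2D, tex);
// Avoid the unaccelerated GL_REPEAT default for NPOT textures:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
// And no mipmapped minification filter, per the point above:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);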

Ok…
I can’t work only with power-of-two images. That makes the whole program pretty inefficient. :-/

I tried using Ubuntu’s open-source graphics driver. Now I get the following message on the command line when I run the program:

freeglut (/home/extinction/workspace/Glut/Debug/Glut): Unable to create direct context rendering for window 'Extinction'
This may hurt performance.

So you may guess…
printf("%s
",glGetString(GL_RENDERER));

Mesa GLX Indirect

No direct rendering. :-/
Terminal says with open driver:

glxinfo | grep "direct rendering"
direct rendering: No (If you want to find out why, try setting LIBGL_DEBUG=verbose)

The image drawing rate is “ok” with the open driver, but it takes >5 seconds for any input to be processed.

printf("%s
",glGetString(GL_RENDERER)); + restriced driver

Radeon X1300 / X1550 Series

Terminal says with the restricted driver:

glxinfo | grep "direct rendering"
direct rendering: Yes

So use the TEXTURE_RECTANGLE_* target if you don’t need mipmaps or wrapping.
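
Roughly like this (a sketch, untested; reusing the names from your DrawImageQuad above, with w/h as the image’s pixel size, pixels as the decoded RGBA data, and GL_RGBA8 as an assumed internal format):

glEnable(GL_TEXTURE_RECTANGLE_ARB);
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, texture[0][imgnr]);
glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

// Important: rectangle textures are addressed in pixels, not normalized [0..1].
glBegin(GL_QUADS);
	glTexCoord2f(0.0f, 0.0f);         glVertex2f(xps, yps);
	glTexCoord2f(0.0f, (float)h);     glVertex2f(xps, yps + h);
	glTexCoord2f((float)w, (float)h); glVertex2f(xps + w, yps + h);
	glTexCoord2f((float)w, 0.0f);     glVertex2f(xps + w, yps);
glEnd();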

Is it just TEXTURE_RECTANGLE_ARB or GL_TEXTURE_RECTANGLE_ARB?
I’ve taken a look at GL_TEXTURE_RECTANGLE_ARB, and from what I’ve read it should work with ATI graphics cards. And I don’t need mipmaps or wrapping, just simple image display.
So I initialized the texture rectangle with:

glEnable(GL_TEXTURE_RECTANGLE_ARB);
glActiveTextureARB(GL_TEXTURE0_ARB);

And replaced every GL_TEXTURE_2D with GL_TEXTURE_RECTANGLE_ARB:

glBindTexture(GL_TEXTURE_RECTANGLE_ARB, texture[0][imgnr]);
glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGBA_FLOAT32_ATI, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, &image[0]);
glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // linear filtering
glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MAG_FILTER, GL_LINEAR); // linear filtering

But I’m just getting a black screen. Sorry, but there doesn’t seem to be a good tutorial or good sample code for this. :(

I suggest the GL_ARB_texture_non_power_of_two extension instead of GL_ARB_texture_rectangle, even though ATI has only limited support for it. (That’s why ATI does not include this extension in its extension string yet.)

I have tested GL_ARB_texture_non_power_of_two on Windows and Linux with an ATI card. NPOT works on ATI cards (with no performance penalty) as long as you don’t use mipmapping.

The advantages of GL_ARB_texture_non_power_of_two over GL_ARB_texture_rectangle are (a short usage sketch follows the list):

  1. Mipmap filtering is supported.
  2. Texture border is supported.
  3. All wrap modes are supported (including GL_REPEAT).
  4. Paletted textures are supported.
  5. Texture coords are addressed by normalized values in [0..1].
  6. No additional texture target is required for texture functions.
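
A sketch of what that looks like in practice (tex, w, h, and pixels are placeholder names, and GL_RGBA8 is an assumption; note that, per the above, ATI may not advertise the extension in its extension string even where it works):

// No new target: just pass NPOT sizes to the ordinary GL_TEXTURE_2D calls.
glBindTexture(GL_TEXTURE_2D, tex);
// Per the posts above: avoid mipmapped filters and GL_REPEAT on ATI.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels); // w, h may be NPOT
// Texture coordinates stay normalized [0..1], unlike with texture rectangles.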

I think I’m fine with TEXTURE_RECTANGLE. I had just set up the texture coordinates wrong, so every quad showed a single pixel of the texture stretched out - which was black. The framerate is >6000 fps now (my first thought was: wow), and input processing is fine, too. :D
I also tried GL_ARB_texture_non_power_of_two, but according to this demo (run under Wine):
http://www.ozone3d.net/demos_projects/mandelbrot_set.php
my graphics card doesn’t support it.
Well, thanks for all the help. :)