Why are software OpenGL textures sometimes slow?

Hello,

I am drawing a rectangle with a texture stretched over it. The texture is a 512x512 RGBA image (4 bytes per pixel). I draw it with an orthographic projection, and I “zoom in/out” by changing the size of the box passed to glOrtho. I’m drawing using software OpenGL.
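For reference, the draw is essentially the following (a simplified sketch, not my exact code; the zoom variable and the quad size are placeholders):

    #include <GL/gl.h>

    /* Draw a unit quad with the texture stretched over it.
     * A larger zoom value means a larger glOrtho box, i.e. zoomed out. */
    void draw_scene(GLuint tex, double zoom)
    {
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(-zoom, zoom, -zoom, zoom, -1.0, 1.0);

        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();

        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, tex);

        glBegin(GL_QUADS);
            glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
            glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
            glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
            glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f,  1.0f);
        glEnd();
    }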

  1. When zoomed out so the entire texture consumes very few pixels on my monitor, drawing is fast.

  2. When zoomed in so that a small portion of the texture fills the entire monitor, drawing is slow.

  3. When zoomed in, but panned over, so that my monitor shows only a small sliver of the texture (still very zoomed in), drawing is fast.

Why is case 2 slow? Why is case 3 fast? What can I do to speed up case 2?

Thanks for your help!

Chris

PS: I am writing an app for a Panasonic Toughbook with an Atom processor. The same slowdown appears on my fast desktop when hardware acceleration is disabled.

I don’t think it is related to the texture mapping. It is just that software rendering is fill-rate limited.

Try drawing your rectangle without a texture in all 3 cases.
All 3 cases will probably be faster than the textured ones, but there is a good chance that case 2 without texturing will still be slow compared to cases 1 and 3 without texturing.

I think the performance variation across those 3 cases is probably due to texture filtering (interpolation), or fill rate as overlay mentioned.

If you want to know for sure, call glDisable(GL_TEXTURE_2D) to turn texturing off, then repeat your 3 cases and measure the performance.
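Something like this would do for the measurement (render_frame() stands in for whatever your draw call is; clock() is fine here because a software renderer burns CPU time):

    #include <GL/gl.h>
    #include <time.h>

    extern void render_frame(void);   /* your existing draw code */

    /* Time one frame with or without texturing enabled. */
    double time_frame(int textured)
    {
        clock_t t0, t1;

        if (textured)
            glEnable(GL_TEXTURE_2D);
        else
            glDisable(GL_TEXTURE_2D);

        t0 = clock();
        render_frame();
        glFinish();   /* wait for the rasterizer to actually finish */
        t1 = clock();

        return (double)(t1 - t0) / CLOCKS_PER_SEC;
    }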

There’s little you can do about it though. Perhaps using GL_NEAREST texture filtering will be faster, but if it struggles with GL_LINEAR it might not make much difference anyway.

Thanks for your help. With texturing disabled, all three of these cases are lightning fast with no noticeable difference.

I’m tempted to believe my GL_TEXTURE_MAG_FILTER is slow. I am using GL_NEAREST for GL_TEXTURE_MAG_FILTER.

I tried a new test:

I upscaled the .png file (the source of my texture) from 512x512 to 2048x2048 in an external image editing program (increasing the resolution without adding any content). Now, instead of drawing 1 rectangle with a 512x512 texture mapped to it, I draw 16 rectangles, each with its own 512x512 texture mapped to it (viewed together, they look like the original picture). With this setup, OpenGL performs GL_TEXTURE_MAG_FILTER less frequently and GL_TEXTURE_MIN_FILTER more frequently.
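The tiled draw looks roughly like this (a sketch; tex[] holds the 16 separate 512x512 textures cut from the upscaled image):

    #include <GL/gl.h>

    /* Draw a 4x4 grid of unit quads, each with its own 512x512 texture. */
    void draw_tiled(const GLuint tex[16])
    {
        int i, j;
        for (j = 0; j < 4; j++) {
            for (i = 0; i < 4; i++) {
                glBindTexture(GL_TEXTURE_2D, tex[j * 4 + i]);
                glBegin(GL_QUADS);
                    glTexCoord2f(0.0f, 0.0f); glVertex2f((float)i,       (float)j);
                    glTexCoord2f(1.0f, 0.0f); glVertex2f((float)(i + 1), (float)j);
                    glTexCoord2f(1.0f, 1.0f); glVertex2f((float)(i + 1), (float)(j + 1));
                    glTexCoord2f(0.0f, 1.0f); glVertex2f((float)i,       (float)(j + 1));
                glEnd();
            }
        }
    }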

Should this test run slower or faster or the same speed as the original cases?

Chris

PS: I tried this test, and indeed it runs at the same speed as the original case. Is this evidence that GL_TEXTURE_MAG_FILTER is not my problem?

Thanks again,

Chris

For magnification there is really only nearest or bilinear filtering, with bilinear being a bit more computationally expensive, but not by much. For minification there are several additional filters (trilinear + anisotropic), which are orders of magnitude more expensive than bilinear filtering.

Texturing is in general quite expensive if there is no dedicated hardware acceleration. I would suggest just sticking with nearest filtering for both minification and magnification, or maybe bilinear for magnification. Personally, I would not increase texture sizes to prevent magnification from happening; 512 is quite big already, and with software rendering the performance bottlenecks are quite different, so you need to try for yourself what gives you the best results. If you do increase the texture sizes, make sure to also add detail, otherwise it’s a complete waste.

Jan.

For best texturing performance on a software implementation, use:

GL_TEXTURE_MAG_FILTER : GL_NEAREST
GL_TEXTURE_MIN_FILTER : GL_NEAREST_MIPMAP_NEAREST (don’t forget to build mipmaps)
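In code, that setup looks something like this (a sketch; gluBuild2DMipmaps from GLU builds the full mipmap chain, and pixels stands for your 512x512 RGBA image data):

    #include <GL/gl.h>
    #include <GL/glu.h>

    void setup_texture(GLuint tex, const void *pixels)
    {
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                        GL_NEAREST_MIPMAP_NEAREST);
        /* Builds and uploads all mipmap levels from the base image. */
        gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGBA, 512, 512,
                          GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    }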

Which software implementation do you use? The Microsoft one, or the Mesa one?
Some performance tips for software rendering, some specific to Mesa:
http://www.mesa3d.org/perf.html

The slowdown is directly correlated with the total number of textured pixels to be drawn.