I found the secret of OpenGL textures. Am I right?

Absolutely positive, DFrey; I'm using texture sizes like these and following those guidelines in my current project. Just to let you know, this is not using the AUX functions that resize textures to the nearest power of 2, and it's not using any extensions, only glTexImage2D(). It's quite possible that the driver is converting the texture size, but if so it is unknown to me. Maybe someone from NVIDIA could verify this.


If you are not using gluBuild2DMipmaps or gluBuild1DMipmaps to implicitly resize the textures, then this is news to me. How did you learn of this? Did you discover it by trial and error, or did you read of it in some documentation? If it isn’t in an official NVIDIA document I’d stay away from this “feature” as it may disappear in a future driver revision.


Yes, it was trial and error, or more like "wouldn't it be really cool if I could use this texture size instead?" I tried it and it worked. I have seen no documentation stating this as fact.

We absolutely do not support these 2^n+2^m-sized textures. They should produce GL errors, and there is no way that our driver logic could conceivably work with such a texture. Remember that gluBuild2DMipmaps will rescale to a power of 2 automatically, making it look as though any size is supported.

  • Matt

I've gone back and looked at my texture wrapper, and it seems the textures I've been creating with 2^m + 2^n sizes have the gluBuild2DMipmaps flag set. I tested without the flag set and they DO NOT work. Sorry, my error.

Somebody mixed up the + with the *, I would guess.

How about GL_NV_texture_rectangle?

OK, even though this thread might be dead already, I want to put forward an assumption.

To retrieve an arbitrary texel of a given texture with width w and height h, at coordinate x,y, the renderer would have to do

addr = startAddr + y*w + x

if you use 2^n textures, it can be

addr = startAddr + (y << n) + x

Now, I'm anything but sure about hardware implementations, but in my world a bitshift is still faster than a multiplication.
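To make the comparison concrete, here is a minimal sketch of the two addressing schemes; the function names are made up for illustration, and real hardware or drivers need not work this way:

#include <stdint.h>

/* Arbitrary width w: every texel fetch pays for a multiply. */
static uint32_t texel_addr_any(uint32_t startAddr, uint32_t x, uint32_t y,
                               uint32_t w)
{
    return startAddr + y * w + x;
}

/* Power-of-two width 2^n: the multiply becomes a shift. */
static uint32_t texel_addr_pow2(uint32_t startAddr, uint32_t x, uint32_t y,
                                uint32_t n)
{
    return startAddr + (y << n) + x;
}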

Funny thing is, it should be implementation-dependent. Mesa3D, for example (and, being a hardcore coder for fun, I mean the software renderers here), builds a table with one longword entry for every y, containing the start address of every row of any texture, when glTexImage2D() is called… so for Mesa it's always one memory access, no matter what dimensions the texture has. I'm not sure why Brian did this, because Mesa still spits out an error when you try to use a texture that doesn't have 2^n dimensions.

But, is it possible that the bitshift assumption is correct?
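For what it's worth, the row-table trick described above could look roughly like this; this is an illustrative sketch with invented names, not Mesa's actual code:

#include <stdlib.h>

typedef struct {
    unsigned char  *data;   /* texel storage                    */
    unsigned char **rows;   /* rows[y] = start address of row y */
    int             w, h;
} SwTexture;

/* Build the table once, at glTexImage2D() time. */
static int build_row_table(SwTexture *t)
{
    t->rows = malloc(t->h * sizeof *t->rows);
    if (!t->rows)
        return 0;
    for (int y = 0; y < t->h; y++)
        t->rows[y] = t->data + (size_t)y * t->w;  /* the multiply happens here, once per row */
    return 1;
}

/* Per-fetch cost: one table lookup plus an add, for any width. */
static unsigned char fetch_texel(const SwTexture *t, int x, int y)
{
    return t->rows[y][x];
}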

> Now, I'm anything but sure about hardware implementations, but in my world a bitshift is still faster than a multiplication.

Multiplication is not a big deal.
It's enough to have a low-precision multiplier (11-bit x 11-bit) for a texture up to 2048x2048, since 2^11 = 2048.
With a tiled internal representation for textures (for example, 4x4 tiles) the requirements are even lower, 9-bit x 9-bit, since 2048/4 = 512 = 2^9 tiles per side.

The real problem is GL_REPEAT.
For a 2^n texture it's the easiest thing: wrapping is just a bitwise AND with (size - 1). For other sizes it's a pain…
I guess GL_NV_texture_rectangle allows only clamping.
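A small sketch of why the wrap is cheap only in the power-of-two case (invented helper names; real hardware wraps before filtering, which this glosses over):

/* size == 2^n: GL_REPEAT is a single AND, and in two's complement it
   even handles negative coordinates for free (e.g. -1 & 7 == 7). */
static int wrap_pow2(int coord, int size)
{
    return coord & (size - 1);
}

/* Arbitrary size: a true modulo is needed, plus a fix-up because C's %
   operator can return a negative result for negative coordinates. */
static int wrap_any(int coord, int size)
{
    int m = coord % size;
    return m < 0 ? m + size : m;
}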

>Multiplication is not a big deal. It's
>enough to have a low-precision multiplier
>(11-bit x 11-bit) for a texture up to 2048x2048

As long as you don't need subpixel accuracy. But you're right, texture interpolation would work with single-pixel addressing too.

>The real problem is GL_REPEAT.
>For a 2^n texture it's the easiest thing.
>But for other sizes it's a pain…

True, I didn't even think of that, or of overflows… shame on me :P

Okay, so am I to understand that textures cannot be something like 128x256? These are each powers of 2. But do textures have to have the same dimensions or can the x-dimension be different from the y-dimension as long as they both are powers of 2?

They CAN be ANY 2^n * 2^m where both 2^n and 2^m <= GL_MAX_TEXTURE_SIZE.
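For example, a 128x256 upload is legal, assuming a current GL context; the function name and pixel data here are hypothetical:

#include <GL/gl.h>

/* Non-square is fine: each side just has to be a power of two and fit
   within the implementation's GL_MAX_TEXTURE_SIZE. */
void upload_128x256(const void *pixels)  /* pixels: 128*256 RGBA texels */
{
    GLint maxSize;
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxSize);

    if (maxSize >= 256)  /* both 128 and 256 must fit */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 128, 256, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels);
}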

Originally posted by Dodger:
As long as you don't need subpixel accuracy. But you're right, texture interpolation would work with single-pixel addressing too.


You cannot access a texture array with “subpixel accuracy”. You have to use integer values for the row and column.
(In the case of GL_NEAREST, these values are the rounded texture coordinates.)

NeHe explains this in his tutorials. He also shows how to use gluBuild2DMipmaps to work around the power-of-2 limitation.
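For reference, the workaround looks roughly like this; the function name and image data are hypothetical, and it assumes a current GL context with the target texture already bound:

#include <GL/gl.h>
#include <GL/glu.h>

/* gluBuild2DMipmaps rescales the image to the nearest power of two and
   builds the whole mipmap chain, so any w x h input appears to work. */
void upload_any_size(const void *pixels, int w, int h)
{
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                    GL_LINEAR_MIPMAP_LINEAR);
    gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, w, h,
                      GL_RGB, GL_UNSIGNED_BYTE, pixels);
}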