Ok, it goes like this: would it be faster to have one big texture (hardware usually supports 4096x4096 RGB) created at the beginning of the application, and then use glTexSubImage2D for every texture used, using the 4096 grid to “fill in” the textures?
Then, in the rendering loop, bind that one texture and use glMatrixMode(GL_TEXTURE) to scale and translate the necessary UVs (using push/pop).
Would that be faster than using multiple glBindTexture calls?
I don’t know what kind of application you’re making that can live comfortably in a single 4096x4096 texture.
But if yours reasonably can, then yes, it would be a performance win.
Then again, if your application uses so few textures as to be able to fit them all into a single 4096x4096 texture, you probably weren’t running into hardware problems to begin with.
Oh, and most ATi hardware doesn’t handle 4kx4k textures.
It’s probably better to sort all your objects by texture: set a texture up, render every object that uses it, then move on to the next one. That will make a huge difference in your rendering speed.
Be warned: 4k textures are very slow on current NVIDIA hardware; there’s some kind of threshold crossed at 1024^2 that kicks you down a slower path. It’s not a major slowdown, but it’s significant if you’re trying to claw back fps like I am.
Are you saying that 4096x4096 textures are more than 16 times slower than a 1024x1024 texture? Perhaps the slowdown you are referring to is simply the 16x increase in texel count.
I think the O/P is asking “is it faster to use a ‘texture atlas’ than to bind a large number of small textures?” If I’m correct, I think we all know the answer, and I would advise the O/P to abstract the texture coordinates so they can be remapped into the atlas.
bobGL: Was my guess close?
I would just sort by texture usage, and not use the texture atlas approach.
That’s a 50MB texture, and you’re going to force GL to make it all resident at once. Once you subtract the framebuffer sizes, and any other overhead, you might get bumped out of VRAM on a 64MB system, and read the texture over AGP. You’re forcing GL to make an all-or-nothing decision.
If you don’t use texture atlases, then GL has a chance to try to establish a working set and keep as much of that as possible resident in faster memory. Finally, as others have noted, with a very large texture, the strides will be huge, and might cause slower rendering by blowing out caches. At the same time, some platforms may be faster (YMMV).
But really, I don’t like atlases just because they clutter up otherwise clean code.
Using an atlas is gonna bugger up your ability to use mipmaps, hence it may in fact be slower (it’s certainly a lot more work).
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP_SGIS, GL_TRUE);
looks fine… to me… Am I missing something or?
In the lower mipmap levels, the colors from one sub-texture will bleed into its neighbours. You need to build the mipmaps yourself if you want good results.
Well, that depends. Mipmapping usually just averages 4 texels into one for each level, so if all the sub-textures in the atlas are powers of two, and the grid is a power of two in each dimension (like a 16x16 grid of 64x64 textures), they won’t start to sample together until a few levels down; in this case, not until past the level where the mipmap is 16x16 and each sub-texture has shrunk to a single texel. That’s just fine when you are rendering the interface or something like that.
In the future I would use the GL_EXT_texture_array extension; that one could boost rendering times by a lot. I will have to try it out when I can.