I consider myself an OpenGL beginner, though my topic may be advanced…
My application needs to create many different texture objects. Our previous solution was to wait for an out-of-memory error to occur and then free some textures based on an LRU policy. We believe this implementation causes memory fragmentation.
So, to prevent fragmentation of the OpenGL memory, I created a pool of textures — maybe what the Red Book (6th ed., Chapter 9: Texture Objects) calls a working set. These textures are created at startup; afterwards, (basically) no glGenTextures or glDeleteTextures is called. (The only exception: a new texture is created when the window framebuffer exceeds one of the pool texture sizes.)
I use GPU-Z to monitor VRAM usage. At startup it is, e.g., 50 MB. Even after allocating all the textures it stays at that amount, which is OK AFAIK, since the textures are not yet needed on the card and are kept by the driver in system RAM.
The pool I allocate is roughly 250 MB, holding RGBA8 and RGBA16F_ARB textures of varying power-of-two sizes. This is an estimate of the required memory, assuming 4 bytes per RGBA8 pixel, 8 bytes per RGBA16F_ARB pixel, and no mipmaps (although some are created). However, at some point OpenGL reports out-of-memory when I set texture data with glTexSubImage2D.
GPU-Z shows the allocated VRAM to be under 300 MB. With 512 MB of VRAM (NVIDIA 9400 GT), there should be space left. GPU-Z might report a wrong number, but at the moment I don’t think so.
1 - Is fragmentation the only explanation for such an error?
2 - Is it usually a “bad idea” to upload float data to a texture object with internal format GL_RGBA8 (leaving aside the overhead of the float-to-RGBA8 conversion done by the driver)?
3 - Same as 2, but uploading BGRA-formatted data to an RGBA texture?
Thanks for the help, and sorry for the long post.