Originally posted by Madoc:
If you have too many textures to fit in video memory they should just get swapped out to system ram when they are not in use. Seems unlikely you’re running out of system ram, too.
That’s a valid point, but there’s a gotcha: what if the texture cannot fully fit in vidmem? 2048x2048 is 16MB for a single RGBA texture, plus roughly a third of that if it’s mipmapped (or if the hw requires all textures to be mipmapped). Add the z-buffer, the color buffer plus backbuffer, some space for the GDI cache and a bit of vidmem fragmentation here and there, and that texture may never be able to be resident at all (and that’s without counting FSAA or dualview setups). Note he’s talking about a GF3, which I guess shipped with something around 32-64MB of memory?
In that case, the spec’s wording applies: “The error INVALID_VALUE is generated if the specified image is too large to be stored under any conditions.”
Okay, there’s hardware that can do AGP texturing, and there’s fully virtual memory hardware as well, but I wouldn’t be so quick to rule out that the texture simply doesn’t fit in memory.
The max texture size returned by glGet is not supposed to be an absolute value; it depends on the format too. I can’t remember the exact details, have a look at the documentation.
Hummm that’s not how I read the spec:
The maximum allowable width, height, or depth of a three-dimensional texture image is an implementation dependent function of the level-of-detail and internal format of the resulting image array. It must be at least 2^(k−lod) + 2*bt for image arrays of level-of-detail 0 through k, where k is the log base 2 of MAX_3D_TEXTURE_SIZE, lod is the level-of-detail of the image array, and bt is the maximum border width.
I read that as “the maximum […] must be at least MAX_TEXTURE_SIZE”, so the value glGet returns is actually the minimum of all the maximums over all internal formats.
AFAIK AreTexturesResident is the way to go about checking for a valid texture size.
To check for valid texture sizes at runtime, you actually have to use the texture proxy mechanism.
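A minimal sketch of that proxy check, not from the thread itself (assumes a current GL context; GL_RGBA8 is just an example internal format):

```c
#include <GL/gl.h>

/* Ask the driver whether a w x h RGBA8 texture is supported,
   without actually allocating anything. */
int texture_size_supported(GLsizei w, GLsizei h)
{
    GLint probed_w = 0;

    /* The proxy target runs the full argument and resource check
       but never creates texture storage. */
    glTexImage2D(GL_PROXY_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    /* On failure the proxy state is zeroed out, so a queried
       width of 0 means "this combination is not supported". */
    glGetTexLevelParameteriv(GL_PROXY_TEXTURE_2D, 0,
                             GL_TEXTURE_WIDTH, &probed_w);
    return probed_w != 0;
}
```

Unlike the glGet limit, this takes the internal format into account, which is exactly the format-dependence mentioned above.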
Still, 2048 should work as long as you’re not using anything bigger than ubyte RGBA (3096 won’t work without the texture rectangle extension, or whatever it’s called); with a bigger format it might not.
Heh, everybody picks on that 3096, my guess is that he meant 4096.
I suggest you use AreTexturesResident to find out what’s happening with your larger textures.
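For what it’s worth, a sketch of that call (the texture names are assumed to come from an earlier glGenTextures, and a live GL context is required):

```c
#include <GL/gl.h>

/* Query residency for n already-created textures.
   tex[] is assumed to hold valid names; n <= 16 for this sketch. */
void report_residency(const GLuint *tex, GLsizei n)
{
    GLboolean resident[16];

    if (glAreTexturesResident(n, tex, resident)) {
        /* all n textures are currently resident in texture memory */
    } else {
        /* resident[i] tells you per texture which ones are not */
    }
}
```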
glAreTexturesResident won’t tell you much; in fact I find that function especially ill-designed (to me it looks like some kind of rnd() over the [true, false] domain). People use it for the wrong reasons and end up believing that they can manage texture memory better than the driver can (go figure! ).