Displaying large textures

Hi

I’m currently developing a retinal image registration application on Windows using consumer-level GeForce 3 graphics accelerators.

I’m trying to display large images as texture maps to speed up user interaction with the system. I’m currently using glDrawPixels to display the images without problems; however, the performance drops rapidly as the number of images increases.

Images of 1024*1024 display without problem as textures, but if I up the resolution to 2048*2048 or 3096*3096 nothing displays.

I have queried OpenGL, which informed me that the maximum texture size is 4096*4096.

Are there any solutions to this problem or will I have to tile the images?

Thanks Ian

Well, 3096 is not a power of two, or am I wrong? But 2048x2048 textures should work. Maybe you could post some code of your texture initialization, or how you render the quad with the applied texture.

Originally posted by Ian2004:
[b]
[…]
Images of 1024*1024 display without problem as textures, but if I up the resolution to 2048*2048 or 3096*3096 nothing displays.

I have queried OpenGL, which informed me that the maximum texture size is 4096*4096.

Are there any solutions to this problem or will I have to tile the images?

Thanks Ian[/b]
You may be running out of memory in the graphics card. The maximum dimensions only give you a static range constraint, not the runtime constraint due to available memory. Check glGetError() after you download your texture.
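
For example, something along these lines right after the upload (the `pixels`, `width`, `height` and `texId` names are placeholders for whatever you already have; needs <stdio.h> and <GL/gl.h>):

[code]
/* Upload the image and immediately check whether the GL accepted it. */
glBindTexture(GL_TEXTURE_2D, texId);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);

GLenum err = glGetError();
if (err != GL_NO_ERROR)
    fprintf(stderr, "glTexImage2D failed: GL error 0x%04X\n", err);
[/code]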

Techniques to alleviate the memory consumption are:

  • Use as few components as possible in your textures (use luminance format if you only need grayscale display, for example; see the sketch just after this list).
  • Do not request a z-buffer/backbuffer if you don’t need it.
  • Create a pool of textures and download only the textures you will have in view.
  • Do not use full-scene antialiasing.
  • Reduce the resolution of your display.
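
On the first point, for instance, a grayscale retinal image can go up as luminance rather than RGBA, cutting the footprint to a quarter. A minimal sketch, assuming your source data really is 8-bit single-channel (`grayPixels` is a placeholder):

[code]
/* 1 byte per texel instead of 4: a 2048x2048 image drops from
   16 MB to 4 MB (plus ~1/3 more if you build mipmaps). */
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE8, 2048, 2048, 0,
             GL_LUMINANCE, GL_UNSIGNED_BYTE, grayPixels);
[/code]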

If you have too many textures to fit in video memory, they should just get swapped out to system RAM when they are not in use. It seems unlikely you’re running out of system RAM, too.

The max texture size returned by glGet is not supposed to be an absolute value; it depends on the format too. I can’t remember the exact details; have a look at the documentation. AFAIK AreTexturesResident is the way to go about checking for a valid texture size.

Still, 2048 should work (3096 won’t work without the texture-rectangle extension, or whatever it’s called), so long as you’re not using anything bigger than ubyte RGBA; if you are, then it might not.

I suggest you use AreTexturesResident to find out what’s happening with your larger textures.

Sounds like pixel precision is a requirement, but if not, you might consider using DXTC.
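
If the artifacts are acceptable, the lazy way is to let the driver compress on upload. A rough sketch, assuming the GL_EXT_texture_compression_s3tc extension is exposed (check the extension string first) and that the token comes from <GL/glext.h>; `pixels`, `width` and `height` are placeholders:

[code]
/* DXT5 stores 1 byte per texel vs. 4 for RGBA8, so a 2048x2048
   image shrinks from 16 MB to about 4 MB (DXT1 would be ~2 MB). */
glTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA_S3TC_DXT5_EXT,
             width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
[/code]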

Originally posted by Madoc:
If you have too many textures to fit in video memory, they should just get swapped out to system RAM when they are not in use. It seems unlikely you’re running out of system RAM, too.

That’s a valid point, but there’s a gotcha: what if the texture cannot fully fit in vidmem? 2048x2048 is 16MB for a single RGBA texture, maybe plus a third of that if it’s mipmapped (or if the hardware needs all textures to be mipmapped). Add to that the z-buffer, the colorbuffer + backbuffer, some space for the GDI cache and some vidmem fragmentation here and there, and that texture may never be able to be resident at all (and that’s without counting FSAA or dual-view environments). Note he’s talking about a GF3, which I guess had memory sizes around 32-64MB?
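
Back-of-the-envelope, assuming an RGBA8 texture and a 1024x768 32bpp double-buffered desktop with a 24/8 depth-stencil buffer (all rough figures):

[code]
2048 x 2048 x 4 bytes               = 16 MB    base level RGBA8
+ ~1/3 of that for a full mip chain = ~21 MB   per texture
1024 x 768 x 4 bytes x 2 buffers    = ~6 MB    front + back buffer
1024 x 768 x 4 bytes                = ~3 MB    depth + stencil
                                      -------
                                      ~30 MB   before GDI caches, FSAA
                                               or fragmentation
[/code]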

In that case, the error INVALID_VALUE is generated “if the specified image is too large to be stored under any conditions”.

Okay, there’s hardware that can do AGP texturing, and then you have fully virtual memory hardware as well, but I wouldn’t rule out so quickly that the texture simply doesn’t fit in memory.


The max texture size returned by glGet is not supposed to be an absolute value; it depends on the format too. I can’t remember the exact details; have a look at the documentation.

Hummm that’s not how I read the spec:

The maximum allowable width, height, or depth of a three-dimensional texture image is an implementation-dependent function of the level-of-detail and internal format of the resulting image array. It must be at least 2^(k−lod) + 2*b_t for image arrays of level-of-detail 0 through k, where k is the log base 2 of MAX_3D_TEXTURE_SIZE, lod is the level-of-detail of the image array, and b_t is the maximum border width.

I read that as “the maximum […] must be at least MAX_TEXTURE_SIZE” (the 2D analogue), so MAX_TEXTURE_SIZE is actually the minimum of all the maximums across internal formats.


AFAIK AreTexturesResident is the way to go about checking for a valid texture size.

To check for valid texture sizes at runtime, you actually have to use the texture proxy approach.
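
Roughly like this, for a 2D RGBA8 target (swap in whatever internal format you actually use):

[code]
/* Ask the GL whether it could accept this size/format at all,
   without actually allocating anything. */
GLint proxyWidth = 0;
glTexImage2D(GL_PROXY_TEXTURE_2D, 0, GL_RGBA8, 2048, 2048, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glGetTexLevelParameteriv(GL_PROXY_TEXTURE_2D, 0,
                         GL_TEXTURE_WIDTH, &proxyWidth);

if (proxyWidth == 0) {
    /* This size/format combination is not supported. */
}
[/code]

Note the proxy only tells you whether the size/format combination is supported at all; it says nothing about whether the texture will fit in the memory that happens to be free right now.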


Still, 2048 should work (3096 won’t work without the texture-rectangle extension, or whatever it’s called), so long as you’re not using anything bigger than ubyte RGBA; if you are, then it might not.

Heh, everybody picks on that 3096, my guess is that he meant 4096.


I suggest you use AreTexturesResident to find out what’s happening with your larger textures.

glAreTexturesResident won’t tell you much; in fact I find that function especially ill-designed (to me it looks like some kind of rnd() over the [true, false] domain). People use it for the wrong reasons and they end up believing that they can manage texture memory better than the driver can (go figure! :wink: ).

Originally posted by evanGLizr:
That’s a valid point, but there’s a gotcha: what if the texture cannot fully fit in vidmem? 2048x2048 is 16MB for a single RGBA texture, maybe plus a third of that if it’s mipmapped (or if the hardware needs all textures to be mipmapped). Add to that the z-buffer, the colorbuffer + backbuffer, some space for the GDI cache and some vidmem fragmentation here and there, and that texture may never be able to be resident at all (and that’s without counting FSAA or dual-view environments). Note he’s talking about a GF3, which I guess had memory sizes around 32-64MB?
I was merely pointing out that the issue should be whether you can fit a single texture that large in vid mem and not a number of them.

[quote]
The max texture size returned by glGet is not supposed to be an absolute value; it depends on the format too. I can’t remember the exact details; have a look at the documentation.

Hummm that’s not how I read the spec:

The maximum allowable width, height, or depth of a three-dimensional texture image is an implementation-dependent function of the level-of-detail and internal format of the resulting image array. It must be at least 2^(k−lod) + 2*b_t for image arrays of level-of-detail 0 through k, where k is the log base 2 of MAX_3D_TEXTURE_SIZE, lod is the level-of-detail of the image array, and b_t is the maximum border width.

I read that as “the maximum […] must be at least MAX_TEXTURE_SIZE” (the 2D analogue), so MAX_TEXTURE_SIZE is actually the minimum of all the maximums across internal formats.
[/quote]
That would make most sense, but I’m sure I’ve seen 128MB cards report a 4096 max texture size when a mipmapped RGBA16 4096x4096 texture should take up about 170MB. Perhaps they can do something clever with swapping mips out? Or maybe I’m just wrong.
I have to say it’s also been a long time since I last had a look at the spec.

[quote]
AFAIK AreTexturesResident is the way to go about checking for a valid texture size.

To check for valid texture sizes at runtime, you actually have to use the texture proxy approach.
[/quote]
Yeah, that’s right. I have some confused memory of reading about using AreTexturesResident to verify that the supposedly uploaded texture fit fine in memory. Almost definitely rubbish or a misinterpretation of mine. I personally just lazily rely on glGet.

[quote]
I suggest you use AreTexturesResident to find out what’s happening with your larger textures.

glAreTexturesResident won’t tell you much; in fact I find that function especially ill-designed (to me it looks like some kind of rnd() over the [true, false] domain). People use it for the wrong reasons and they end up believing that they can manage texture memory better than the driver can (go figure! :wink: ).
[/quote]
Generally true, but I think there could be some (very rare) cases where the application knows enough to improve on the driver’s management. Not that I’ve ever done anything with it, nor would I recommend it.

Ian’s problem remains a bit of a mystery. Have you tried glGetError, Ian? I suggest you litter your debug code with a few calls to glGetError as general good practice.
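
Something like this is cheap to sprinkle around a debug build (GL_CHECK is just a name I made up for the sketch):

[code]
#include <stdio.h>
#include <GL/gl.h>

/* Hypothetical helper: drop GL_CHECK("label") after any suspicious call. */
#define GL_CHECK(label)                                               \
    do {                                                              \
        GLenum e;                                                     \
        while ((e = glGetError()) != GL_NO_ERROR)                     \
            fprintf(stderr, "%s: GL error 0x%04X\n", (label), e);     \
    } while (0)

/* Usage:
     glTexImage2D(...);
     GL_CHECK("after glTexImage2D");                                  */
[/code]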