MAX_3D_TEXTURE_SIZE value origin

Trying to track down a bug reported by users running 3D Slicer under Windows to visualize CT/MR volume data, I came to understand that this value can differ on the same AMD GPU between Windows (=2048) and Linux (=8192). On NVIDIA cards this value seems to always be 16384.

Regardless of whether the problem lies in the AMD driver or not, I’m interested in understanding what rule, in general, governs how this value is set.

Reading the OpenGL 4.6 specs, I discovered only:

The maximum allowable width, height, or depth of a texture image for a three-dimensional texture is determined by equation 8.3, where k is log2 of the value of MAX_3D_TEXTURE_SIZE

The equation being: maxsize ≥ 2^(k−level).
So if MAX_3D_TEXTURE_SIZE is 2048, then for level of detail 0 and k = log2(2048) = 11, this means a maximum size of 2^11 = 2048.
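
For completeness, the value I’m referring to is the one returned by glGetIntegerv; a minimal sketch of the query, assuming an OpenGL context is already current (the function name is just for illustration):

```c
#include <stdio.h>
#include <GL/gl.h>

/* Assumes an OpenGL context is current on the calling thread. */
static void print_max_3d_texture_size(void)
{
    GLint max3d = 0;
    glGetIntegerv(GL_MAX_3D_TEXTURE_SIZE, &max3d);
    /* Prints e.g. 2048, 8192 or 16384 depending on the implementation. */
    printf("GL_MAX_3D_TEXTURE_SIZE = %d\n", max3d);
}
```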

What I didn’t get is how MAX_3D_TEXTURE_SIZE itself is determined.

It seems it’s not read from the hardware; otherwise the value would be the same under Windows and Linux for the same GPU model (setting aside driver bugs, since this behavior is consistent across new and old architectures, even if I found a few cases on the OpenGL GPU Info DB where e.g. the old GCN 1.0 FirePro W8000 reports 8192 under Windows, while all the other “Pro” series cases still report 2048).

Is it chosen by the vendor? Computed according to some rule? Segmented between retail/pro products based on the fact that many Pro users run Linux (??)?

The value is determined by the driver, based upon hardware limitations.

AFAICT, the only situations where that value provides a hard constraint are the following (a rough sketch of the checks follows the list):

  • You cannot attach a layer greater than MAX_3D_TEXTURE_SIZE-1 to a framebuffer.

  • You cannot attach a mipmap level greater than log2(MAX_3D_TEXTURE_SIZE) to a framebuffer.

  • You cannot use glGetTexLevelParameter* with a level greater than log2(MAX_3D_TEXTURE_SIZE).
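
A rough sketch of those limits expressed as explicit checks; this is only an illustration of the rules above, not a complete validation routine, and the helper name is hypothetical:

```c
#include <math.h>
#include <GL/gl.h>

/* Hypothetical helper: returns nonzero if a given mipmap level and layer
 * of a 3D texture stay within the limits described above. */
static int within_3d_attachment_limits(GLint level, GLint layer)
{
    GLint max3d = 0;
    glGetIntegerv(GL_MAX_3D_TEXTURE_SIZE, &max3d);

    /* Layers must be in [0, MAX_3D_TEXTURE_SIZE - 1]. */
    if (layer < 0 || layer > max3d - 1)
        return 0;

    /* Mipmap levels must be in [0, log2(MAX_3D_TEXTURE_SIZE)]. */
    if (level < 0 || level > (GLint)log2((double)max3d))
        return 0;

    return 1;
}
```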

As for the actual dimensions, the constraint you’re referring to is “≥”, i.e. any implementation-dependent limit must be at least that value. If all dimensions are less than or equal to that value, the texture cannot be rejected based upon dimensions alone (although it can fail with GL_OUT_OF_MEMORY). A texture with some or all dimensions exceeding that limit may be accepted.

If you want to determine whether a particular set of dimensions is supported, use glTexImage3D(GL_PROXY_TEXTURE_3D) then query the level’s parameters with glGetTexLevelParameter. If the requested dimensions are too large, the queried dimensions will be zero.
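
A minimal sketch of that proxy-texture check, assuming a current context and, depending on the platform, a loader that exposes glTexImage3D; the internal format and helper name are placeholders:

```c
#include <GL/gl.h>

/* Ask the implementation whether a 3D texture of the given dimensions and
 * a placeholder internal format (GL_R16) would be accepted, without
 * actually allocating storage for it. */
static int is_3d_texture_supported(GLsizei w, GLsizei h, GLsizei d)
{
    GLint got_width = 0;

    glTexImage3D(GL_PROXY_TEXTURE_3D, 0, GL_R16, w, h, d, 0,
                 GL_RED, GL_UNSIGNED_SHORT, NULL);

    /* If the request cannot be satisfied, the proxy's dimensions read back as 0. */
    glGetTexLevelParameteriv(GL_PROXY_TEXTURE_3D, 0, GL_TEXTURE_WIDTH,
                             &got_width);
    return got_width != 0;
}
```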

Another possible consideration is: some hardware has a different maximum limit for the number of 3D depth slices, as opposed to the width/height of a slice. But the GL spec only allows reporting a single value for all three dimensions. So, what should a driver report? Different driver teams might make different choices for the same hardware.

Some vendors have exposed the separate limits via extension.
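
If I recall correctly, one such extension is NV_deep_texture3D, which reports a separate width/height limit and depth limit. A hedged sketch of querying it, assuming a GL 3.0+ context and headers or a loader that expose glGetStringi and the NV tokens:

```c
#include <string.h>
#include <GL/gl.h>
#include <GL/glext.h>   /* defines the NV_deep_texture3D tokens, if present */

/* Query the separate width/height and depth limits from NV_deep_texture3D,
 * if the driver advertises it; returns 0 if the extension is absent. */
static int query_deep_3d_limits(GLint *max_wh, GLint *max_depth)
{
    GLint i, num_ext = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &num_ext);
    for (i = 0; i < num_ext; ++i) {
        const char *ext = (const char *)glGetStringi(GL_EXTENSIONS, (GLuint)i);
        if (ext && strcmp(ext, "GL_NV_deep_texture3D") == 0) {
            glGetIntegerv(GL_MAX_DEEP_3D_TEXTURE_WIDTH_HEIGHT_NV, max_wh);
            glGetIntegerv(GL_MAX_DEEP_3D_TEXTURE_DEPTH_NV, max_depth);
            return 1;
        }
    }
    return 0;
}
```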

It is per-implementation. And the Windows and Linux OpenGL implementations for AMD hardware aren’t the same. Most Linux AMD drivers are open-source and not predominantly made by AMD; AMD’s Windows drivers, however, are. NVIDIA doesn’t work this way; they make the drivers for both platforms themselves.

There could be any number of reasons for the discrepancy. But the reason for the discrepancy doesn’t really matter: what matters is that it exists.
