While trying to track down a bug reported by users running 3D Slicer under Windows to visualize CT/MR volume data, I noticed that this value (MAX_3D_TEXTURE_SIZE) differs on the same AMD GPU between Windows (2048) and Linux (8192). On NVIDIA cards the value always seems to be 16384.
Regardless of whether the problem lies in the AMD driver or not, I’d like to understand what rules govern this value in general.
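For reference, this is how I’m reading the value in question; a minimal sketch, assuming a current OpenGL context already exists (and, on Windows, a loader such as GLAD or GLEW for anything beyond GL 1.1):

```c
#include <GL/gl.h>
#include <stdio.h>

/* Query the implementation-defined 3D texture size limit.
 * Assumes a current OpenGL context. */
void print_max_3d_texture_size(void)
{
    GLint max3d = 0;
    glGetIntegerv(GL_MAX_3D_TEXTURE_SIZE, &max3d);
    /* Observed values: 2048 (AMD/Windows), 8192 (AMD/Linux), 16384 (NVIDIA) */
    printf("GL_MAX_3D_TEXTURE_SIZE = %d\n", max3d);
}
```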
Reading the OpenGL 4.6 spec, the only thing I discovered is this:
> The maximum allowable width, height, or depth of a texture image for a three-dimensional texture is determined by equation 8.3, where k is log2 of the value of MAX_3D_TEXTURE_SIZE.
The equation being: maxsize ≥ 2^(k−level).
So if MAX_3D_TEXTURE_SIZE is 2048, then k = log2(2048) = 11, and for level of detail 0 the limit is 2^(11−0) = 2048, as expected.
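As a side note, the effective limit can also be probed at runtime through the proxy texture mechanism, independently of the formula; a sketch below, with GL_RGBA8 picked arbitrarily as the internal format (what matters in practice is the format the application actually uses):

```c
#include <GL/gl.h>

/* Ask the driver whether a specific 3D texture allocation would be
 * accepted, using GL_PROXY_TEXTURE_3D. Returns 1 if supported. */
int is_3d_size_supported(GLsizei w, GLsizei h, GLsizei d)
{
    GLint got_width = 0;
    glTexImage3D(GL_PROXY_TEXTURE_3D, 0, GL_RGBA8,
                 w, h, d, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    /* On a proxy failure the driver sets the level's dimensions to 0
     * instead of generating an error. */
    glGetTexLevelParameteriv(GL_PROXY_TEXTURE_3D, 0,
                             GL_TEXTURE_WIDTH, &got_width);
    return got_width != 0;
}
```

That’s roughly what a volume renderer has to do anyway before deciding whether to upload the data in one piece or split it into bricks.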
What I don’t get is where the value of MAX_3D_TEXTURE_SIZE itself comes from.
It doesn’t seem to be read from the hardware, otherwise the value would be the same under Windows and Linux for the same GPU model. I’m not putting this down to driver bugs, since the behavior is consistent across new and old architectures; that said, I did find a few exceptions in the OpenGL GPU Info DB, e.g. the old GCN 1.0 FirePro W8000 reports 8192 under Windows, even though all the other “Pro” series cards still report 2048.
Is it chosen by the vendor? Computed according to some rule? Segmented between retail and pro products, based on the assumption that many Pro users run Linux (??).