3D textures: size, performance, palettes

I have been working with 3D textures for several years, first on SGI hardware and now on nVidia cards, and have run into several problems I hope an nVidia contributor can address.

Although I have tried a few other chipsets with similar results, everything below was observed on a Quadro4 700 GoGL in a Dell laptop running the latest drivers.

I commonly divide a volume into power-of-two bricks, using 3D proxy textures to check whether the GL can handle each texture (the check is sketched below).
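The check is roughly the following minimal sketch (the helper name is my own; it assumes <GL/gl.h> and <GL/glext.h> are included and that the GL 1.2 entry point glTexImage3D has been resolved, e.g. via wglGetProcAddress on Windows):

    /* Issue the upload against GL_PROXY_TEXTURE_3D with no data, then read
       back a level parameter: the GL reports a width of 0 if it cannot
       handle this combination of size and formats. */
    static int gl_can_handle_3d(GLenum internalFormat,
                                GLsizei w, GLsizei h, GLsizei d,
                                GLenum format, GLenum type)
    {
        GLint proxyWidth = 0;
        glTexImage3D(GL_PROXY_TEXTURE_3D, 0, internalFormat,
                     w, h, d, 0, format, type, NULL);
        glGetTexLevelParameteriv(GL_PROXY_TEXTURE_3D, 0,
                                 GL_TEXTURE_WIDTH, &proxyWidth);
        return proxyWidth != 0;
    }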

  1. When I upload a 3D brick larger than 4MB with an internal format of GL_COLOR_INDEX8_EXT (to implement transfer-function palettes), the proxy query reports success, so the upload proceeds, but no data is displayed. I have occasionally seen the threshold at 8MB or larger rather than 4MB, but am not sure why (same internal format, different dimensions). Is this a driver bug or a hardware limitation? My current work-around is to restrict bricks to no more than 4MB (in the external format) when paletted textures are enabled. (My upload path is sketched after this list.)

  2. While on the subject of paletted textures, I noticed this feature has been deprecated for NV30 and higher, leaving only dependent texture fetches to achieve similar functionality. Can anyone comment on the expected performance of doing a dependent texture fetch for each filtered texel in a typical volume-rendered image, which may contain several hundred blended polygons, each covering up to several hundred thousand pixels? (A fragment-program sketch of what I mean follows the list.)

  3. I have experienced some strange performance behavior with 3D textures that I believe may be cache-related; I would appreciate any comment.
    For a fixed-size viewport, I can render a 256x256x256 luminance-alpha volume (as 8 bricks) with 100 slices (no more than 800 polygons) at about 20 fps. The GL reports I am using 32MB of texture memory.
    For the same viewport but a 1024x768x11 RGB volume (8 bricks, 100 slices), I can only manage about 3 fps. The GL reports I am using 27MB of texture memory.
    Why is less data rendering more slowly? (The raw-size arithmetic is worked out after this list.)

  4. My software crashes on workstations with NV17 chips (according to the .NET debugger, the crash is inside the nVidia driver) when I query the GL, via glGetTexLevelParameteriv, for how much texture memory is in use (the query is sketched below). This is a very minor problem that I can work around; I was just wondering whether it is a known bug.
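For question 1, my upload path looks roughly like this sketch (it assumes EXT_paletted_texture is present; palette, indices, w, h, and d are placeholders filled in elsewhere):

    /* Set the 256-entry RGBA palette (the transfer function), then upload
       the 8-bit index volume with a paletted internal format. */
    GLubyte palette[256][4];   /* transfer function, filled elsewhere */
    GLubyte *indices;          /* w*h*d bytes of voxel indices */

    glColorTableEXT(GL_TEXTURE_3D, GL_RGBA8, 256,
                    GL_RGBA, GL_UNSIGNED_BYTE, palette);
    glTexImage3D(GL_TEXTURE_3D, 0, GL_COLOR_INDEX8_EXT,
                 w, h, d, 0, GL_COLOR_INDEX, GL_UNSIGNED_BYTE, indices);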
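For question 2, the replacement I have in mind is a dependent fetch in an ARB_fragment_program, along these lines (a sketch only; the texture-unit assignments are my assumption, and loading/binding the program through the ARB_fragment_program entry points is omitted):

    /* texture[0]: the 3D index volume (e.g. GL_LUMINANCE8);
       texture[1]: a 1D RGBA palette holding the transfer function. */
    static const char *palette_fp =
        "!!ARBfp1.0\n"
        "TEMP index;\n"
        "TEX index, fragment.texcoord[0], texture[0], 3D;\n"
        "TEX result.color, index, texture[1], 1D;\n"
        "END\n";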
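To put numbers on question 3 (raw voxel payloads only; the driver may pad or swizzle bricks, so the GL's reported totals need not match these exactly):

    256 x 256 x 256 voxels x 2 bytes (luminance-alpha) = 33,554,432 bytes = 32.0 MB
    1024 x 768 x 11 voxels x 3 bytes (RGB)             = 25,952,256 bytes ~ 24.8 MB

So the RGB volume carries less raw data, yet renders roughly seven times more slowly.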
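For question 4, the query that triggers the crash has this general shape (a sketch with a name of my own; it sums the per-component bit depths for the base level of the currently bound texture):

    static GLint texture_level0_bytes(GLenum target)   /* e.g. GL_TEXTURE_2D */
    {
        GLint w = 0, h = 0, r = 0, g = 0, b = 0, a = 0;
        glGetTexLevelParameteriv(target, 0, GL_TEXTURE_WIDTH,      &w);
        glGetTexLevelParameteriv(target, 0, GL_TEXTURE_HEIGHT,     &h);
        glGetTexLevelParameteriv(target, 0, GL_TEXTURE_RED_SIZE,   &r);
        glGetTexLevelParameteriv(target, 0, GL_TEXTURE_GREEN_SIZE, &g);
        glGetTexLevelParameteriv(target, 0, GL_TEXTURE_BLUE_SIZE,  &b);
        glGetTexLevelParameteriv(target, 0, GL_TEXTURE_ALPHA_SIZE, &a);
        return w * h * ((r + g + b + a) / 8);   /* bits per texel -> bytes */
    }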

Thanks!

  1. There is some bug with proxy textures. See here: http://www.opengl.org/discussion_boards/ubb/Forum3/HTML/009367.html

  2. Long discussion here: http://www.opengl.org/discussion_boards/ubb/Forum3/HTML/009234.html

Not sure what the issue is with #3, but I don’t think NV17 supports 3D textures, so…

– Zeno


My bad:

Regarding #4: that is with 2D texture queries, not 3D ones. I know NV17 does not support 3D textures.