Hi all, I'm having a problem with glTexImage3D. I'm using a GeForce 6800 Ultra PCIe x16 with 512 MB. Every time I load a 3D texture and calculate how much video memory it should use, it seems to be using twice as much as it should.
Here’s my code for loading a 3D texture:
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_CLAMP);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage3D(GL_TEXTURE_3D,    // target
             0,                // mipmap level
             GL_RGBA,          // internal format
             xiSize,           // width (variable name assumed)
             yiSize,           // height (variable name assumed)
             ziSize,           // depth
             0,                // border
             GL_RGBA,          // texture format
             GL_UNSIGNED_BYTE, // texture type
             vgh_data);        // the texture
I’m confused - how are you querying the amount of video memory it’s actually using?
Hi knackered, it’s simply a matter of installing the NVIDIA instrumented drivers and using Microsoft Perfmon in conjunction with them.
For compressed textures you can query how much space they’re taking up. I don’t think 3D textures support compression, but if they do, you could try compressing it and see if the value is still too big; otherwise it might indicate that the instrumented size reporting is off.
Well, it seems that NVIDIA Cg had something to do with our double memory consumption. After various other problems with Cg, we decided to switch over to GLSL, and to our surprise the memory issue was gone, not to mention a 4-fold increase in FPS.
1.4 RC something or other. We tried moving up to the latest version; no difference!
OK. Can you send me a repro demo that shows this? Send it to email@example.com. I’ll be sure to check into this first thing Monday.