How much video card memory does a 256 KB texture take?

Hi, I wanted to know if there is a way to find out how much texture space an uncompressed TGA takes up in video card
memory. Is there a way to know how much video card memory is being used by your app? Is there an OpenGL call to retrieve
this? Thanks in advance.

Regards.

as long as it is uncompressed, it's simple:

width * height * bytes_per_pixel = total_bytes…

the width and height are simply the ones you pass to glTexImage2D, and they are the same as the width and height of the image (i think)… bytes_per_pixel depends on whether you use alpha and on the memory type (16-bit, 24-bit or 32-bit texture)…
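
in code, that calculation is just something like this (only a sketch; the 256x256 size and 4 bytes per pixel for RGBA8 are made-up example values, and the driver may still pad or convert the texture internally):

// Rough size of an uncompressed texture: width * height * bytes per pixel.
long texWidth = 256;
long texHeight = 256;
long texBytesPerPixel = 4;   // e.g. RGBA with 8 bits per channel

long texBytes = texWidth * texHeight * texBytesPerPixel;   // 262144 bytes = 256 KB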

LOL, that's a question: how much memory does a 256 KB image take?
It's like asking how much time 1 minute takes.

Now back to the problem: as far as I know there is no way to determine how much memory a texture uses. It depends on two things: the size of the texture (you know what size your texture is) and the internal format; a 128x128 texture uses more memory with the GL_RGBA16 internal format than with GL_RGB5_A1. The problem is that you don't know what internal format the driver is using, since the implementation need not support all 32 (or however many) possible formats you can specify for the internal format in glTexImageXD. An implementation chooses the format that best matches the one you specified in glTexImageXD, and that's all you know. So as far as I know there is no way to determine which format the driver is actually using; please correct me if I'm wrong, I would be grateful for that.

-Lev

It just came to my mind when I was reading davepermen's posting, which is 2 minutes "younger" than mine. I'm correcting myself:

Call glGetTexLevelParameter with GL_TEXTURE_INTERNAL_FORMAT as pname and it should give you the internal format in use; then you can calculate the size as width * height * bytes_per_pixel, as davepermen said.
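
Something along these lines should work (just a sketch: it assumes the texture is currently bound to GL_TEXTURE_2D and only queries level 0, and the driver may still pad things internally):

// Ask the driver which internal format it actually chose, plus the
// per-component bit depths, for the texture bound to GL_TEXTURE_2D.
GLint internalFormat, texWidth, texHeight;
GLint redBits, greenBits, blueBits, alphaBits;

glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT, &internalFormat);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &texWidth);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &texHeight);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_RED_SIZE, &redBits);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_GREEN_SIZE, &greenBits);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_BLUE_SIZE, &blueBits);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_ALPHA_SIZE, &alphaBits);

// Bits to bytes; treat this as an estimate, the driver may add padding.
long levelBytes = (long)texWidth * texHeight *
	(redBits + greenBits + blueBits + alphaBits) / 8;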

For compressed textures, it's much simpler. Just call glGetTexLevelParameter with GL_TEXTURE_COMPRESSED_IMAGE_SIZE_ARB (assuming one uses ARB_texture_compression) and it will return the image size.
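
For example (again just a sketch; it assumes the ARB_texture_compression tokens are available and the texture is bound to GL_TEXTURE_2D):

// First check whether the texture actually ended up compressed,
// then query the compressed size of level 0 in bytes.
GLint isCompressed = 0, compressedBytes = 0;

glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_COMPRESSED_ARB, &isCompressed);
if (isCompressed)
	glGetTexLevelParameteriv(GL_TEXTURE_2D, 0,
		GL_TEXTURE_COMPRESSED_IMAGE_SIZE_ARB, &compressedBytes);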

Originally posted by Lev:
LOL, that's a question: how much memory does a 256 KB image take?
It's like asking how much time 1 minute takes.

Yes, I know… I just wanted to know if it can be calculated in a straightforward way.

Edit: Tell me another thing. Are display lists stored in video card memory too? Or do they take up system RAM?


one more addition: when using mipmapping you must sum the sizes of all mipmap levels.
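
A quick sketch of that sum (assuming a power-of-two base size and a fixed bytes-per-pixel; a full mipmap chain comes out to roughly 4/3 of the base level):

// Sum the sizes of all mipmap levels, halving each dimension per level.
long mipWidth = 256, mipHeight = 256, mipBytesPerPixel = 4;
long mipTotalBytes = 0;

for (;;)
{
	mipTotalBytes += mipWidth * mipHeight * mipBytesPerPixel;
	if (mipWidth == 1 && mipHeight == 1)
		break;
	if (mipWidth > 1) mipWidth /= 2;
	if (mipHeight > 1) mipHeight /= 2;
}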

-Lev

Don't know about other implementations, but with the NVIDIA driver on a GeForce 1, display lists seem to use video memory as well as a huge amount of system memory (I mean really HUGE). I've also heard that they take up a lot of memory on other implementations too. With NVIDIA, use the vertex array range extension: it gives you the speed of display lists, you control how much memory is allocated and when, and the data is not static, which is a big improvement. On implementations that do not support vertex array range you could (should) use normal vertex arrays (glDrawRangeElements and similar); a rough example is below. I'm using vertex arrays myself and they're just fine. I'm not using NV_vertex_array_range yet, but after incorporating some other stuff into my project I'll use it too, because it can give a very nice speedup in geometry-limited apps.
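
For reference, the "normal vertex arrays" path looks roughly like this (not actual code from my project; the arrays and counts are made-up placeholders):

// Plain vertex arrays with glDrawRangeElements (OpenGL 1.2 /
// EXT_draw_range_elements). NUM_VERTS/NUM_INDICES and the arrays
// are placeholders, filled in elsewhere.
#define NUM_VERTS   1000
#define NUM_INDICES 3000

GLfloat  verts[3 * NUM_VERTS];     // xyz per vertex
GLushort indices[NUM_INDICES];     // triangle indices

glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, verts);

// The min/max vertex range tells the driver which vertices the indices touch.
glDrawRangeElements(GL_TRIANGLES, 0, NUM_VERTS - 1,
	NUM_INDICES, GL_UNSIGNED_SHORT, indices);

glDisableClientState(GL_VERTEX_ARRAY);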

-Lev

// Code for computing the memory requirements of the frame buffers.

// View width in pixels
long width = 1280;

// View height in pixels
long height = 1024;

// Front color buffer bit depth
long frontColorBufferDepth = 32;

// Back color buffer bit depth
long backColorBufferDepth = 32;

// Z-buffer bit depth
long zBufferDepth = 24;

// Stencil buffer bit depth
long stencilBufferDepth = 8;

// Pixel count
long pixelCount = height*width;

// Front buffer bytes
long frontBufferBytes = pixelCount*frontColorBufferDepth/8;

// Back buffer bytes
long backBufferBytes = pixelCount*backColorBufferDepth/8;

// Z buffer bytes
long zBufferBytes = pixelCount*zBufferDepth/8;

// Stencil buffer bytes
long stencilBufferBytes = pixelCount*stencilBufferDepth/8;

// Total used video RAM (bytes)
long totalBytes = frontBufferBytes + backBufferBytes +
	zBufferBytes + stencilBufferBytes;

// Total used video RAM (MBytes)
long totalMegaBytes = totalBytes/1024000;

//************** END

BTW: This came from an Excel spreadsheet I use. Email me and I will send you the spreadsheet.


hum… I read your second post, lev, and after that went back to the OpenGL forum… and what do I see? you're asking about the very thing you were thinking about in your post. just funny to see… not important in any way…

davepermen: yeah, I know it's kinda strange, but anyway what I said was true: VAR is the way to go for GF cards

-Lev

Originally posted by pleopard:
long totalMegaBytes = totalBytes/1024000;

Huh? I’d rather divide by 1048576…
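
In code that would simply be:

// 1 MB = 1024 * 1024 = 1048576 bytes
long totalMegaBytes = totalBytes / (1024 * 1024);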

If I remember correctly, some cards with dedicated texture memory (I think it was Intense3D or SGI Octanes) had a minimum memory block size, in which case texture occupation would be rounded up to a multiple of that block size. I wonder if there is still a constraint of that kind on NVIDIA or ATI cards.
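
If a card does allocate in fixed-size blocks, the footprint would round up like this (the 2 KB block size here is purely hypothetical):

// Round a raw texture size up to the next multiple of a (hypothetical)
// minimum allocation block.
long blockBytes = 2048;
long rawTexBytes = 100000;
long allocatedBytes = ((rawTexBytes + blockBytes - 1) / blockBytes) * blockBytes;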