8800 series texture formats?

Is there an updated version of this document anywhere?

OpenGL texture formats

The one on the web site is two (well, one and a half) generations behind now.

Or, alternatively, a way for me to query how much memory a given texture is taking? It’s important for me to make optimal usage of texture memory for a volume visualization app.

Or, alternatively, a way for me to query how much memory a given texture is taking?
No. And, by inference, I would not assume that a texture takes up (size of texture format) * (number of texels) either, nor would I try to assume the size of the framebuffer in bytes.

It’s generally a good idea to just use as little texture memory as possible and let the driver do what it can.

Originally posted by Korval:
It’s generally a good idea to just use as little texture memory as possible and let the driver do what it can.
Ahh, an idealist :slight_smile: . In the real world I may need to be able to tell the client something like this: “Yes, I am confident that we are using memory as efficiently as possible and the only reason you can’t load that dataset is because of hardware limitations”.

At the moment I’m storing the actual data values in a GL_ALPHA_FLOAT16_ATI texture (yes, I need that precision) and the lighting data in a separate GL_LUMINANCE8 texture.

If, internally, the GL_ALPHA_FLOAT16_ATI texture is really luminance alpha and is taking up 32 bits per voxel, then I may as well make a luminance alpha texture to start with and store my lighting values in there instead of using a separate volume.

So, for my case, not knowing what’s going on behind the scenes can cause a significant amount of wasted memory.
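
To make the comparison concrete, here’s a minimal sketch of the two layouts I’m weighing, assuming ATI_texture_float and 3D textures; dataTex, lightTex, combinedTex, the data pointers, and width/height/depth are all placeholders, and the source arrays are plain floats/bytes that the driver converts on upload:

// Option A: what I do now -- two separate volumes.
glBindTexture(GL_TEXTURE_3D, dataTex);
glTexImage3D(GL_TEXTURE_3D, 0, GL_ALPHA_FLOAT16_ATI,        // float16 data values
             width, height, depth, 0, GL_ALPHA, GL_FLOAT, dataPtr);
glBindTexture(GL_TEXTURE_3D, lightTex);
glTexImage3D(GL_TEXTURE_3D, 0, GL_LUMINANCE8,               // 8-bit lighting
             width, height, depth, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, lightPtr);

// Option B: one combined volume -- data in alpha, lighting in luminance.
// Only worth it if Option A's alpha texture is silently padded to 32 bits per voxel anyway.
glBindTexture(GL_TEXTURE_3D, combinedTex);
glTexImage3D(GL_TEXTURE_3D, 0, GL_LUMINANCE_ALPHA_FLOAT16_ATI,
             width, height, depth, 0, GL_LUMINANCE_ALPHA, GL_FLOAT, combinedPtr);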

NVIDIA even realizes that it’s important for people to have this information, or they wouldn’t have published that chart in the first place.

In the real world I may need to be able to tell the client something like this: “Yes, I am confident that we are using memory as efficiently as possible and the only reason you can’t load that dataset is because of hardware limitations”.
Simple: Lie

If they’re going to believe you when you’re telling the truth, they’d believe your lie. And if they don’t believe your lie, they probably weren’t going to believe the truth either.

It’s not (only) a question of ideals; it’s a question of fact. You can’t ask the video card how much memory it’s using. And any estimate you might make could easily be substantially wrong, and you’d never be able to tell.

In short, you have no tools to be able to ensure what you want. So there’s no point in trying.

You can get this information by using glGetTexLevelParameter.

Create the texture as normal, upload the image data via glTexImage*, then do something like this:

GLint red_size, green_size, blue_size, alpha_size, luminance_size;
GLenum target = GL_TEXTURE_2D; // change as needed
glGetTexLevelParameteriv(target, 0, GL_TEXTURE_RED_SIZE, &red_size);
glGetTexLevelParameteriv(target, 0, GL_TEXTURE_GREEN_SIZE, &green_size);
glGetTexLevelParameteriv(target, 0, GL_TEXTURE_BLUE_SIZE, &blue_size);
glGetTexLevelParameteriv(target, 0, GL_TEXTURE_ALPHA_SIZE, &alpha_size);
glGetTexLevelParameteriv(target, 0, GL_TEXTURE_LUMINANCE_SIZE, &luminance_size);
For GL_ALPHA_FLOAT16_ATI you should get 0/0/0/16/0 (R/G/B/A/L), and for GL_LUMINANCE8 you should get 0/0/0/0/8.

This is pretty much how that NVIDIA texture format spreadsheet was created.
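
If you want a ballpark number from those queries, you can just sum the reported sizes; keep in mind the bit counts say nothing about padding or alignment in VRAM, so treat it strictly as a lower bound (width/height/depth are your texture’s dimensions, and this sketch ignores mipmaps):

// Lower-bound estimate only: the driver may still pad or rearrange the format in VRAM.
GLint bitsPerTexel = red_size + green_size + blue_size + alpha_size + luminance_size;
size_t estimatedBytes = (size_t)width * (size_t)height * (size_t)depth * bitsPerTexel / 8;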

Originally posted by jra101: You can get this information by using glGetTexLevelParameter.
Thanks! This is exactly what I needed :slight_smile:

Originally posted by Korval:
If they’re going to believe you when you’re telling the truth, they’d believe your lie. And if they don’t believe your lie, they probably weren’t going to believe the truth either.
That’s a fantastic piece of logic!
I shall have to remember that one.
It does sound like the mantra of a bull****ter though.

Companies have been ruined because people started to lie about their software.

Who’s to say those same companies wouldn’t have been ruined if they didn’t lie about their software?

Originally posted by Jan:
Companies have been ruined because people started to lie about their software.
In the English language, “because” implies a causal relationship.

Though there are certainly many companies where you can’t tell what exactly the reason was. However, lying only rarely helps. A German saying goes, “Lies have short legs,” meaning you don’t get very far with them.

Anyway, this is off-topic, and an unpleasant discussion too, so we might want to stop talking about it.

Jan.

Originally posted by jra101:
You can get this information by using glGetTexLevelParameter.
This is a good idea, but you don’t have any guarantee that the driver isn’t lying to you.

For example, when you upload a 24 bit RGB texture, and query the internal format, it might tell you 0 bits of alpha. But you still don’t know if the layout in VRAM is 24 bit RGB or 32 bit XRGB. You just know that there are zero bits allocated for alpha.

Originally posted by arekkusu:
This is a good idea, but you don’t have any guarantee that the driver isn’t lying to you. For example, when you upload a 24 bit RGB texture, and query the internal format, it might tell you 0 bits of alpha. But you still don’t know if the layout in VRAM is 24 bit RGB or 32 bit XRGB. You just know that there are zero bits allocated for alpha.
The reason for that may be to avoid confusion.
For example, people tend to ask for an RGB8 backbuffer, but the GPU uses RGBA8.

If you write to alpha and read it back, it just gives 1.0.

If it actually worked as though you had alpha, then some moron would rely on that behavior.

It’s better when companies decide to offer the same support: some may offer a true RGB8, while others may upgrade to RGBA8 when you ask for RGB8.
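
To illustrate the readback point, here’s a small sketch (legacy context assumed; the 0.5 clear value is arbitrary): query how many alpha bits the framebuffer actually has, and note that with zero destination alpha bits, whatever you “write” to alpha comes back as 1.0.

GLint alphaBits;
glGetIntegerv(GL_ALPHA_BITS, &alphaBits);   // 0 if you got plain RGB8 (or padded XRGB8)

glClearColor(0.0f, 0.0f, 0.0f, 0.5f);       // try to put 0.5 into alpha
glClear(GL_COLOR_BUFFER_BIT);

GLfloat a;
glReadPixels(0, 0, 1, 1, GL_ALPHA, GL_FLOAT, &a);
// With no destination alpha bits, a reads back as 1.0.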