Nvidia card texture setting problem

Hello everyone,

After a long time wondering why my app returns an empty frame buffer but a valid depth buffer, I figured out that the problem was exceeding the card's memory limit.

I have a GeForce 2 MX400 with 32 MB, and I use a
2k*2k 8-bit texture, working offscreen with a 2k*1.5k pbuffer with 8-bit color depth and a 24-bit depth buffer.

This totals 16 MB, but that is not including index arrays, vertex arrays and another small window I use.

But this is not really what it takes, as the advanced card settings for OpenGL say "always use texture as 16/32/desktop bits", which brings my memory consumption to at least 20/28 MB.

When I use 16-bit textures everything is OK,
but if I use 32 bit or desktop (which is 32), the frame buffer returns empty.

Two problems:

  1. I don't understand why I ask for an 8-bit texture and receive 16/32 bits. Isn't the OpenGL code supposed to override the card settings?
  2. Is there something that can be done, maybe using Windows APIs, to change those settings or disable them altogether?

This has really gotten on my nerves already. I plead for help. :confused:

Thanks in advance,

1.) The switch in the control panel works for textures which do not specify the internalFormat precisely (GL_RGBA8 stays 32 bit; GL_RGBA might be reduced to 16 bit in 16-bit color resolutions).
Specify your internalFormat precisely!
2.) Does a GeForce 2 GTS support 8 bit textures natively? Don’t know, maybe paletted textures.
3.) How do you specify an 8 bit color pbuffer? The OpenGL hardware acceleration only works in 16 or 32 bit color depth normally.
4.) Memory usage:
1024*768, RGBA 32 bit, double buffered, depth buffered needs about
1024 * 768 * (front + back + depth) * 4 bytes = 9 MB
A 2k*2k double buffered, depth buffered pbuffer would be 48 MB in the same settings => Out of memory.
Leave away everything you don’t need (back, depth?) and try again.

Just a suggestion for problem 1:
Do you use GL_RGB for the internalformat-parameter in glTexImage2D? If you do, use GL_R3_G3_B2 (or another sized format) instead. GL_RGB makes the system choose depth at will.
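For what it's worth, the difference might look like this (a fragment only — it assumes a valid GL context, a bound 2D texture, and a hypothetical `pixels` buffer, so it is not runnable as-is):

```c
/* Unsized format: the driver (and the control-panel switch)
 * is free to pick the actual bit depth. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 2048, 2048, 0,
             GL_RGB, GL_UNSIGNED_BYTE, pixels);

/* Sized format: explicitly request 8 bits per texel (3-3-2). */
glTexImage2D(GL_TEXTURE_2D, 0, GL_R3_G3_B2, 2048, 2048, 0,
             GL_RGB, GL_UNSIGNED_BYTE, pixels);
```

Note the internalFormat (third parameter) is the only difference; the last three parameters just describe the source data you upload.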

Oops, Relic was faster :slight_smile:

You’re welcome.

Originally posted by Relic:
A 2k*2k double buffered, depth buffered pbuffer would be 48 MB in the same settings => Out of memory.

The texture was 2k*2k, the pbuffer was 2k*1.5k.

Why would you have a double buffered pbuffer?

Hi all,
thanks for the quick replies.

1) I do use the GL_R3_G3_B2 internal format.
2) I do not use a double-buffered pbuffer.
3) I get a valid index for a PFD with an 8-bit color pbuffer, so I guess it is HW accelerated.

How about an API to change the card settings from code? Does anyone know?