Texture objects memory duplication

hi guys,

i’m new to OpenGL and i’ve been trying to do volume tiling and i’ve got a problem.

i’m using texture objects. so, for all the tiles in the volume i call glTexImage3D. i noticed that OpenGL makes its own copy of the texels, so i end up with 2 versions of the volume: one maintained by OpenGL and one that’s mine, which i use for other stuff.

is there any way to make OpenGL just read the data whenever needed and not copy it? any thoughts are welcome :D…
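For context, here is a minimal sketch of the per-tile upload described above. The `Tile` struct, the `GL_LUMINANCE8` format, and the function name are my own assumptions, not something from the original post:

```cpp
// Sketch of per-tile 3D texture upload. Tile layout, dimensions and
// the luminance format are assumed for illustration; glTexImage3D
// needs OpenGL 1.2+ (you may need glext.h or an extension loader).
#include <GL/gl.h>

struct Tile {
    GLuint tex;                   // texture object name
    int w, h, d;                  // tile dimensions in texels
    const unsigned char* texels;  // application-owned copy of the data
};

void uploadTile(Tile& t)
{
    glGenTextures(1, &t.tex);
    glBindTexture(GL_TEXTURE_3D, t.tex);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    // GL copies 'texels' at this call, so the driver now holds its own
    // version of the tile in addition to the application's pointer.
    glTexImage3D(GL_TEXTURE_3D, 0, GL_LUMINANCE8,
                 t.w, t.h, t.d, 0,
                 GL_LUMINANCE, GL_UNSIGNED_BYTE, t.texels);
}
```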

thanks in advance.

You could kill your application’s performance by deleting the texture when you’re done using it and calling glTexImage* every time you want to use the texture. That would avoid having the texture lingering on your graphics card.

That seems like a pretty bad idea though.

Generally, going over the bus to send data to your graphics card should be avoided wherever possible. This is why your texture data gets stored on the card.

No, i only call glTexImage once per tile, in the initialization phase, then i let OpenGL handle what’s resident on the card and what’s not. it is very fast.
The problem is: can i tell OpenGL “do not copy this array, just use this pointer if you need to access the data”? or is there some other way to do this?

Not AFAIK.
OpenGL needs its own copy of the object because it must re-send the data to the video card if video memory runs out and some textures need to be swapped out. The driver can’t use your pointer because that could cause texture corruption: the driver runs in protected mode and probably on another thread, so there is no way to synchronize your code with it. You could end up updating your texture while the driver is sending it to the video card.

Apple suggests this extension to solve your problem:
http://www.opengl.org/registry/specs/APPLE/client_storage.txt
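Roughly, usage looks like the sketch below (`tex`, `w`, `h`, `d` and `texels` are assumed to match the original upload code; this extension is Mac-only):

```cpp
// Sketch using GL_APPLE_client_storage: with client storage enabled,
// GL is allowed to keep fetching texels from the application's pointer
// instead of making its own copy.
glBindTexture(GL_TEXTURE_3D, tex);
glPixelStorei(GL_UNPACK_CLIENT_STORAGE_APPLE, GL_TRUE);
glTexImage3D(GL_TEXTURE_3D, 0, GL_LUMINANCE8, w, h, d, 0,
             GL_LUMINANCE, GL_UNSIGNED_BYTE, texels);
glPixelStorei(GL_UNPACK_CLIENT_STORAGE_APPLE, GL_FALSE);
// The catch: 'texels' must stay valid and unmodified for the lifetime
// of the texture, since the driver may read from it at any time.
```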

If you are sending all your data to the driver, can’t you simply free your “client” memory?

The memory duplication you describe is pretty common with vertex data as well. It might even be worse than you think: it is likely that the driver has a copy stored in video memory and another copy in system memory (at least on Windows).

Many applications are ok with this because the data structure used for rendering, e.g. the vertex buffer, is organized differently than the data structure used in their application for collision detection, etc.

Look into DXT compression to reduce the amount of texture memory. It is widely used with 2D textures, but I’m not sure if it is compatible with 3D textures.

Regards,
Patrick

Thanks guys… i’ll look into it further, maybe i’ll figure something out. i’ll let you know if i get something.

@ Rosario Leonardi

no, i still need the data for segmentation and other stuff.

Having GL store its own copy is very good for both layer isolation and performance.

If your need is more about memory usage reduction, maybe doing glMapBuffer/glUnmapBuffer on a pixel buffer object in GL_READ_WRITE mode will suit you.
In this case, both GL driver and your application share the same block of data. However while you read/write on it, it will not be available for rendering, so some synchronization and map/unmap is needed on your side.
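A rough sketch of the idea (`w`, `h`, `d` and `volumeBytes` are assumed names, and error checking is omitted):

```cpp
// Sketch: share one block of data between the app and GL via a
// pixel buffer object (ARB_pixel_buffer_object).
GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
glBufferData(GL_PIXEL_UNPACK_BUFFER, volumeBytes, NULL, GL_STREAM_DRAW);

// CPU side: map the buffer, run segmentation etc. on it, then unmap.
// While mapped, the buffer is not usable as a texture source.
void* p = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_READ_WRITE);
// ... read/write the volume data through 'p' ...
glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);

// GL side: while a PBO is bound as the unpack source, the last
// argument of glTexImage3D is an offset into the PBO, not a client
// pointer -- so no extra application-side copy is involved.
glTexImage3D(GL_TEXTURE_3D, 0, GL_LUMINANCE8, w, h, d, 0,
             GL_LUMINANCE, GL_UNSIGNED_BYTE, (const void*)0);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
```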

Spec :
http://www.opengl.org/registry/specs/ARB/pixel_buffer_object.txt

Tutorials (beware, your use case is not covered, but you should be able to adapt) :
http://www.songho.ca/opengl/gl_pbo.html
http://www.mathematik.uni-dortmund.de/~goeddeke/gpgpu/tutorial3.html

Let us know if the benefit of the memory reduction overcomes the performance cost of read-write mapping.