I’m doing large-scale volume rendering using 3D textures. It’s not unusual for these textures to exceed 300 MB and be loaded onto several GPUs. The problem is this: when I create a texture, the driver creates a copy that persists in system memory. This duplication is taking a toll on how much system memory I have available for the application. Is there any way around this?
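For reference, here is a minimal sketch of the upload path I’m describing (assumes a current GL context and GL 1.2+ for `glTexImage3D`; `volumeData`, `width`, `height`, and `depth` are placeholders for my actual data):

```c
/* Typical 3D texture upload. After glTexImage3D returns, the driver
 * keeps its own system-memory mirror of the data in addition to the
 * copy in VRAM, so freeing our buffer does not reclaim the duplicate. */
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_3D, tex);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage3D(GL_TEXTURE_3D, 0, GL_LUMINANCE8,
             width, height, depth, 0,
             GL_LUMINANCE, GL_UNSIGNED_BYTE, volumeData);
free(volumeData); /* our copy is gone, but the driver's mirror remains */
```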
I’m using NVIDIA Quadro cards (two of them, not in SLI) on Windows XP with the latest drivers.
There is no way around it yet, but there will be.
The upcoming release of OpenGL will allow you to disable the memory backup. It should be out this summer if there are no delays, although you will have to rewrite at least part of your application.
APPLE_client_storage and APPLE_texture_range can eliminate the framework and driver copies, leaving only the application’s original data. They have been shipping on Mac OS X for several years.
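A minimal sketch of how those extensions are used (Mac OS X only; assumes a current GL context, and `tex`, `volumeData`, `dataSize`, `width`, `height`, and `depth` are placeholders). Note that client storage requires your buffer to stay valid for the texture’s lifetime:

```c
glBindTexture(GL_TEXTURE_3D, tex);
/* Tell GL that our pointer IS the texture storage -- no driver copy. */
glPixelStorei(GL_UNPACK_CLIENT_STORAGE_APPLE, GL_TRUE);
/* Hint the range so the driver can map/DMA it instead of mirroring it. */
glTextureRangeAPPLE(GL_TEXTURE_3D, dataSize, volumeData);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_STORAGE_HINT_APPLE,
                GL_STORAGE_SHARED_APPLE);
glTexImage3D(GL_TEXTURE_3D, 0, GL_LUMINANCE8,
             width, height, depth, 0,
             GL_LUMINANCE, GL_UNSIGNED_BYTE, volumeData);
/* volumeData must NOT be freed while the texture exists. */
```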
Thank you for your replies!
Where might I find the draft standard for the version due out this summer? I’ve been searching around OpenGL.org and Khronos, but without any luck.
Thanks in advance!