I’m very tired of the old-school “prealloc all your textures on startup” model for avoiding run-time performance hiccups from allocating and deallocating textures. I typically have “way” more textures than will fit on the GPU, and I don’t know a priori how many textures of specific internal formats and resolutions I’ll need during any given execution. I also can’t resort to the standard “loading screen” cheat to hide this. And I’d like to keep using hardware-accelerated texture sampling/filtering (including clamping, auto-LOD, and anisotropic filtering), not reimplement all of that to the detriment of performance.
For those who watch the extensions more closely than I do: do we yet have the capability to dynamically carve up GPU memory on the app side and dole it out as needed to textures of dynamically discovered internal formats and resolutions, without introducing alloc/free performance hiccups?
When no longer needed, these textures should be individually freeable, and the app should be able to garbage-collect the resulting memory holes for future reallocation to textures of different internal formats and resolutions, again without introducing performance hiccups (a sketch of the bookkeeping I have in mind follows).
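To be concrete about the app-side half of this, here is a minimal sketch (plain C, all names mine) of the kind of sub-allocator I’d run over such an arena: a first-fit free list sorted by byte offset, with adjacent holes coalesced on free. Nothing here is GL; it’s just the bookkeeping the app would do if the driver let it place texture storage at offsets it chooses.

    #include <stdlib.h>

    /* One free hole in the GPU arena; the list is sorted by offset. */
    typedef struct Block {
        size_t        offset;   /* byte offset into the arena */
        size_t        size;     /* byte length of this hole   */
        struct Block *next;
    } Block;

    static Block *free_list = NULL;

    /* Start with the whole arena as one hole. */
    void arena_init(size_t arena_size)
    {
        free_list = malloc(sizeof *free_list);
        free_list->offset = 0;
        free_list->size   = arena_size;
        free_list->next   = NULL;
    }

    /* First-fit allocation; align must be a power of two.
     * Returns (size_t)-1 when no hole fits -- time to garbage-collect. */
    size_t arena_alloc(size_t size, size_t align)
    {
        for (Block **bp = &free_list; *bp; bp = &(*bp)->next) {
            Block  *b     = *bp;
            size_t  start = (b->offset + align - 1) & ~(align - 1);
            size_t  pad   = start - b->offset;
            if (b->size < pad + size)
                continue;
            size_t tail = b->size - pad - size;
            if (tail) {                      /* keep the remainder as a hole */
                Block *t = malloc(sizeof *t);
                t->offset = start + size;
                t->size   = tail;
                t->next   = b->next;
                b->next   = t;
            }
            if (pad) b->size = pad;          /* alignment padding stays free */
            else { *bp = b->next; free(b); }
            return start;
        }
        return (size_t)-1;
    }

    /* Return a region to the free list, merging adjacent holes. */
    void arena_free(size_t offset, size_t size)
    {
        Block *prev = NULL, *cur = free_list;
        while (cur && cur->offset < offset) { prev = cur; cur = cur->next; }

        Block *n = malloc(sizeof *n);
        n->offset = offset; n->size = size; n->next = cur;
        if (prev) prev->next = n; else free_list = n;

        if (cur && n->offset + n->size == cur->offset) {      /* merge right */
            n->size += cur->size; n->next = cur->next; free(cur);
        }
        if (prev && prev->offset + prev->size == n->offset) { /* merge left  */
            prev->size += n->size; prev->next = n->next; free(n);
        }
    }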
Core/ARB functionality is preferable, but if that doesn’t exist and I can get this with NV extensions, that’s acceptable. I’m looking for the ability to treat GPU memory as one big untyped memory arena (e.g. gpumalloc( HUGE_SIZE )) and then lay out textures on it as I see fit.
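To illustrate the wish, the call sequence I’d love to write looks something like the following. To be clear, gpuMallocXXX and glTexStorage2DAtOffsetXXX are invented names; as far as I know no such entry points exist in core or any extension, and arena_alloc/arena_free are the app-side helpers sketched above.

    /* HYPOTHETICAL -- gpuMallocXXX / glTexStorage2DAtOffsetXXX do not
     * exist; they're made-up names to illustrate the desired model.   */
    GLuint64 arena = gpuMallocXXX(2ull << 30);   /* one big untyped arena */

    /* Run time: a texture's internal format/resolution is discovered,
     * so place its storage at an app-chosen offset in the arena.      */
    size_t off = arena_alloc(tex_bytes, tex_align);
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexStorage2DAtOffsetXXX(GL_TEXTURE_2D, levels, GL_RGBA8,
                              width, height, arena, off);

    /* Sampling/filtering/clamping/aniso still done by the hardware,
     * exactly as for a conventionally allocated texture.              */

    /* Teardown touches only app-side bookkeeping: no driver hitch.    */
    glDeleteTextures(1, &tex);
    arena_free(off, tex_bytes);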
Thanks for any insight!
P.S. Additionally, do we yet have a way to create 2D texture array “views” whose slices are individual 2D textures scattered across GPU memory?
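For contrast, the closest existing mechanism I’m aware of is ARB_texture_view (core since GL 4.3), and it doesn’t do this: a view can only re-slice a contiguous range of levels/layers of one immutable-storage texture (src_array below is assumed to be an existing immutable 2D array texture).

    GLuint view;
    glGenTextures(1, &view);      /* name generated, never bound */
    glTextureView(view, GL_TEXTURE_2D_ARRAY,
                  src_array,      /* ONE existing immutable texture   */
                  GL_RGBA8,
                  0, 1,           /* minlevel, numlevels              */
                  4, 4);          /* minlayer, numlayers: slices 4..7 */
    /* No way (that I know of) to gather separately allocated
     * GL_TEXTURE_2D objects into a single array view like this.      */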