Has anyone else noticed that older graphics apps are now using double the dedicated memory?

Hi!
One or two years ago my app used around 1 GB of dedicated GPU memory. These last few days I was surprised to notice that it now uses 2.2 GB of dedicated memory … but all the resources and buffers are the same!

In fact, I just did a test: I allocated a big buffer at startup of my app, and the VRAM consumption stayed the same! What am I missing?

Also, I am surprised that compiling shaders takes 300 MB of VRAM; that is strange :confused:

I am using a GTX 1060 with the latest drivers.

That is interesting.

What numbers are you looking at?

With NVX_gpu_memory_info…:

  • Total GPU Mem = GL_GPU_MEMORY_INFO_DEDICATED_VIDMEM_NVX
  • Unused GPU Mem = GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX
  • Used GPU Mem = Total - Unused
  • Kicked out of GPU Mem = GL_GPU_MEMORY_INFO_EVICTED_MEMORY_NVX

All in KB.
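
If it helps, here's a minimal sketch of how you'd read those counters (it assumes a current GL context on a driver that advertises GL_NVX_gpu_memory_info; the #define values are the tokens from the extension spec, and all results are in KB):

    #include <GL/gl.h>
    #include <cstdio>

    // Tokens from the NVX_gpu_memory_info extension spec, in case your
    // GL headers don't define them.
    #ifndef GL_GPU_MEMORY_INFO_DEDICATED_VIDMEM_NVX
    #define GL_GPU_MEMORY_INFO_DEDICATED_VIDMEM_NVX          0x9047
    #define GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX  0x9049
    #define GL_GPU_MEMORY_INFO_EVICTED_MEMORY_NVX            0x904B
    #endif

    void printGpuMemInfo()
    {
        GLint totalKB = 0, unusedKB = 0, evictedKB = 0;
        glGetIntegerv(GL_GPU_MEMORY_INFO_DEDICATED_VIDMEM_NVX,         &totalKB);
        glGetIntegerv(GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX, &unusedKB);
        glGetIntegerv(GL_GPU_MEMORY_INFO_EVICTED_MEMORY_NVX,           &evictedKB);
        printf("GPU mem: total %d KB, used %d KB, evicted %d KB\n",
               totalKB, totalKB - unusedKB, evictedKB);
    }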

If you just issued the commands to allocate it, but didn’t do anything to force its creation on the GPU by the back-end driver, then it was likely never actually allocated GPU memory.

This is the classic issue with startup loading in OpenGL. Merely submitting the commands doesn’t get things done. It’s actually doing rendering that demands the creation of those resources that gets things done.
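
For example, something like this after your upload (an untested sketch; it assumes a program and a VAO sourcing its vertex attributes from the buffer are already bound, and sizeBytes/data are your own):

    // Upload the data; at this point the driver may only have queued the work.
    glBufferData(GL_ARRAY_BUFFER, sizeBytes, data, GL_STATIC_DRAW);

    // A throwaway draw that references the buffer forces the driver to
    // actually materialize it; rasterizer discard keeps it side-effect free.
    glEnable(GL_RASTERIZER_DISCARD);
    glDrawArrays(GL_POINTS, 0, 1);
    glDisable(GL_RASTERIZER_DISCARD);

    glFinish();   // block until the driver has really executed the above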

Another possibility: In NVIDIA GL drivers, buffer objects can exist either in GPU memory or in CPU pinned (page-locked) memory, either one accessible to the GPU during rendering. By default, the driver seems to prefer GPU memory. But if you do certain operations like MapBuffer, then by default the driver moves the buffer object out of GPU memory (“VIDEO memory”) into CPU memory (“HOST memory”) where it’s cheaper to access from the CPU. If you enable a GL debug message callback (LINK1, LINK2), then the NVIDIA driver will tell you all about when it does this and to which buffer object.
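
Enabling it is short (a sketch assuming a GL 4.3+ / KHR_debug context, ideally one created with the debug flag):

    // Called by the driver for each message; NVIDIA's buffer-placement notes
    // ("... will use VIDEO memory ...") arrive here as notifications.
    static void GLAPIENTRY onGlDebug(GLenum source, GLenum type, GLuint id,
                                     GLenum severity, GLsizei length,
                                     const GLchar* message, const void* user)
    {
        fprintf(stderr, "GL: %s\n", message);
    }

    void enableGlDebugOutput()
    {
        glEnable(GL_DEBUG_OUTPUT);
        glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);  // deliver on the calling thread
        glDebugMessageCallback(onGlDebug, nullptr);
        // Don't filter anything; the placement messages come in with
        // severity GL_DEBUG_SEVERITY_NOTIFICATION.
        glDebugMessageControl(GL_DONT_CARE, GL_DONT_CARE, GL_DONT_CARE,
                              0, nullptr, GL_TRUE);
    }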

Are you sure we’re still talking GPU memory and not CPU memory? How are you observing this 300 MB mem consumption?

I wonder if it has something to do with preloading all the NVIDIA shader assembly IR and ISA for all GL shader programs that it’s ever seen your application use before. Try removing your NVIDIA driver shader cache. On Windows, I think that’s by default stored at: C:/Users/${USER}/AppData/Local/NVIDIA/GLCache/. On Linux, ${HOME}/.nv/GLCache/. But there are env vars to enable/disable/configure its use, relocate it someplace else, etc. Websearch __GL_SHADER_DISK_CACHE, __GL_SHADER_DISK_CACHE_PATH, __GL_SHADER_DISK_CACHE_SKIP_CLEANUP, etc.
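
On Linux you can also set those variables from inside the app, as long as it happens before the GL driver is loaded (a sketch; the paths and values here are just examples):

    #include <cstdlib>

    void configureNvShaderCache()
    {
        // Disable the on-disk shader cache entirely...
        setenv("__GL_SHADER_DISK_CACHE", "0", /*overwrite=*/1);
        // ...or relocate it and keep the driver from trimming it:
        // setenv("__GL_SHADER_DISK_CACHE_PATH", "/tmp/my_glcache", 1);
        // setenv("__GL_SHADER_DISK_CACHE_SKIP_CLEANUP", "1", 1);
    }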

Hi Dark_photon, thanks for the reply. I know that this is not part of the OpenGL specification.

I am using GPU-Z, and when I talk about dedicated GPU memory I mean the memory-in-use value that GPU-Z shows for the selected GPU (in my case just the main one, a GTX 1060 3GB).

Also, in Resource Monitor (Windows) I noticed that the shared memory doesn't change much while the app is running (±50 MB), so that can't explain the increase.

In fact, the app only takes around 500 MB of RAM on the CPU side (using the Visual Studio profiler).

I am wondering if Windows is the problem here.

If you just issued the commands to allocate it, but didn’t do anything to force its creation on the GPU by the back-end driver, then it was likely never actually allocated GPU memory.

Well, I am creating a new VAO and filling it with glBufferData (I mean, the whole process: glGenVertexArrays, glBindVertexArray, glBindBuffer, glBufferData, etc.), but it seems that, as you said, I am doing it wrong; I need to issue draw commands as well.

Are you sure we’re still talking GPU memory and not CPU memory? How are you observing this 300 MB mem consumption?

Yes, using GPU-Z and creating a profile, I can pause my app to see how much VRAM it is taking at each step. When I force a Sleep(500000) before and after launching the shader loads (glShaderSource, glCompileShader), I can clearly see around 300 MB of extra VRAM.

I am loading vertex, geometry, and fragment shaders; I am not using anything NVIDIA-specific (in fact I am still coding in GLSL).
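
In case it helps, this is roughly what my instrumented compile step looks like (a sketch, Windows-only because of Sleep; `source` is whatever GLSL string gets loaded):

    GLuint compileWithPause(GLenum stage, const char* source)
    {
        Sleep(500000);   // pause here to read the "before" VRAM value in GPU-Z

        GLuint shader = glCreateShader(stage);
        glShaderSource(shader, 1, &source, nullptr);
        glCompileShader(shader);

        GLint ok = GL_FALSE;
        glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
        if (!ok)
            fprintf(stderr, "shader compile failed\n");

        Sleep(500000);   // pause here to read the "after" VRAM value in GPU-Z
        return shader;
    }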

I wonder if it has something to do with preloading all the NVIDIA shader assembly IR and ISA for all GL shader programs that it’s ever seen your application use before. Try removing your NVIDIA driver shader cache. On Windows, I think that’s by default stored at: C:/Users/${USER}/AppData/Local/NVIDIA/GLCache/. On Linux, ${HOME}/.nv/GLCache/. But there are env vars to enable/disable/configure its use, relocate it someplace else, etc. Websearch __GL_SHADER_DISK_CACHE, __GL_SHADER_DISK_CACHE_PATH, __GL_SHADER_DISK_CACHE_SKIP_CLEANUP, etc.

Thanks, very useful, I will test it now!
