Reasons for the lack of an API to query memory related info?

Hello everyone.

Currently I find myself wanting to know what I can expect when allocating memory for buffers and textures. What will go into VRAM? What will be put into system memory, and when? How do implementations handle swapping data between the two? How efficient is that swapping mechanism for a particular implementation? I know: the basic question has been asked many, many times, and the answers are usually either “you can’t get info on what memory, and how much of it, is currently in use by the implementation” or “if you have an AMD or NVIDIA GPU, you can use their proprietary extension to a certain degree.” So far, so good.

What I’d like to know is why exactly, ideally with some driver-dev insight, this seems to be next to impossible to standardize. As of yet, we have two proprietary extensions, namely GL_NVX_gpu_memory_info and GL_ATI_meminfo. Obviously vendors at least understand the desire to query information about memory usage. I can understand that different hardware does things differently. I can understand that different platforms do things differently. What I cannot understand is why implementations, which have to use the appropriate info anyway to do all the memory-management magic under the covers, can’t expose that info to the developer in a uniform manner. The only piece of information we get is GL_OUT_OF_MEMORY - and we don’t necessarily get that when there’s no VRAM left - we get this error when the implementation cannot get any memory at all, system memory for the current process included. Case in point: I can easily allocate 8 GB (I stopped at this size) of VBO memory on an HD 7970 with 3 GB of VRAM, in chunks of 1 MB per VBO, on Linux.
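For reference, the two extensions are queried through plain glGetIntegerv with extension-specific tokens (values below are copied from the published extension specs; both report sizes in KiB). A minimal sketch, with the glGetIntegerv entry point injected as a function pointer so the logic can be shown without a live context; the stub reporting 2 GiB free is a stand-in, not real driver behavior:

```c
typedef unsigned int GLenum;
typedef int GLint;

/* Tokens from the two extension specs; all sizes are reported in KiB. */
#define GL_GPU_MEMORY_INFO_DEDICATED_VIDMEM_NVX          0x9047
#define GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX  0x9049
#define GL_VBO_FREE_MEMORY_ATI                           0x87FB

/* In a real program `get` is the context's glGetIntegerv. */
typedef void (*GetIntegervFn)(GLenum pname, GLint *params);

/* Returns currently free VRAM in KiB, or -1 if neither extension is
   advertised in the extension string. */
static GLint free_vram_kib(GetIntegervFn get, int has_nvx, int has_ati)
{
    GLint v[4] = {0, 0, 0, 0};
    if (has_nvx) {
        get(GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX, v);
        return v[0];
    }
    if (has_ati) {
        /* GL_ATI_meminfo fills four ints; v[0] is the total free
           memory in the queried pool. */
        get(GL_VBO_FREE_MEMORY_ATI, v);
        return v[0];
    }
    return -1;
}

/* Stand-in for a real context, reporting a hypothetical 2 GiB free. */
static void fake_get(GLenum pname, GLint *params)
{
    if (pname == GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX)
        params[0] = 2097152;
}
```

The `has_nvx`/`has_ati` flags would come from scanning the extension string; the two vendors don’t even agree on whether the number means “free now” or “free in this pool”, which is part of the standardization problem.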

One could simply rely on the implementation to do the right thing and handle memory transfers according to usage at runtime. Fine, I can live with that in principle. But what if I don’t want to? Or, at the very least, what if I want to know what the driver does under the hood so I can choose my usage patterns accordingly? Especially in volume rendering, 1-3 GB of high-speed memory is nothing unless you employ compression algorithms like COVRA. In my daily business, 16 GB of volume data is considered standard, so streaming will most likely occur even with compression, unless one is able or willing to sacrifice an unreasonable amount of precision. In that case, knowing the performance characteristics of what the driver does internally is crucial.

Another thing: If it’s possible to get some memory info in a cross-platform way in OpenCL (even cache and cache line sizes), why not in OpenGL?
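For comparison, this is what the OpenCL side looks like: clGetDeviceInfo hands back global memory, cache, and cache-line sizes through documented param_name tokens (values below copied from the Khronos CL/cl.h header). A sketch with the entry point mirrored as a function pointer and a stand-in driver reporting a hypothetical 3 GiB device, so it runs without an OpenCL runtime:

```c
#include <stddef.h>
#include <string.h>

/* param_name tokens from the Khronos CL/cl.h header. */
#define CL_DEVICE_GLOBAL_MEM_CACHELINE_SIZE 0x101D
#define CL_DEVICE_GLOBAL_MEM_CACHE_SIZE     0x101E
#define CL_DEVICE_GLOBAL_MEM_SIZE           0x101F

/* Mirrors the clGetDeviceInfo signature; in a real program the function
   comes from the OpenCL runtime and `dev` from clGetDeviceIDs. */
typedef int (*DeviceInfoFn)(void *dev, unsigned int param_name,
                            size_t size, void *value, size_t *size_ret);

static unsigned long long global_mem_bytes(void *dev, DeviceInfoFn query)
{
    unsigned long long bytes = 0;               /* cl_ulong */
    query(dev, CL_DEVICE_GLOBAL_MEM_SIZE, sizeof bytes, &bytes, NULL);
    return bytes;
}

/* Stand-in for a real driver, reporting a hypothetical 3 GiB device. */
static int fake_device_info(void *dev, unsigned int param_name,
                            size_t size, void *value, size_t *size_ret)
{
    (void)dev; (void)size_ret;
    if (param_name == CL_DEVICE_GLOBAL_MEM_SIZE) {
        unsigned long long mem = 3ULL << 30;
        memcpy(value, &mem, size);
    }
    return 0;                                   /* CL_SUCCESS */
}
```

Note that even OpenCL only reports static device properties, not what is resident right now - but at least those properties are standardized.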

I guess this all boils down to the following questions:

[ul][li]What differences in hardware and OSs prevent a standardized API from being conceived?[/li][li]If we already have two proprietary extensions, is determining a least common denominator really impossible?[/li][li]Is it possible to get some more info on what implementations do when eviction occurs? Maybe get some performance info through ARB_debug_output/KHR_debug?[/li][li]What about performance counters? At least AMD’s GPUPerfAPI provides a host of memory-related counters.[/li][/ul]
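On the KHR_debug point: where a driver reports migrations or evictions at all, the messages arrive through the debug callback, typically tagged GL_DEBUG_TYPE_PERFORMANCE. A sketch of a filtering callback (the signature and token match KHR_debug; the callback is invoked directly here since the message text is entirely implementation-specific and this is only an assumption about what a driver might emit):

```c
#include <stdio.h>

typedef unsigned int GLenum;
typedef unsigned int GLuint;
typedef int GLsizei;
typedef char GLchar;

#define GL_DEBUG_TYPE_PERFORMANCE 0x8250

static int perf_msgs = 0;   /* how many performance messages we saw */

/* Matches the GLDEBUGPROC shape from KHR_debug; in a real program it
   is registered via glDebugMessageCallback(debug_cb, NULL). */
static void debug_cb(GLenum source, GLenum type, GLuint id, GLenum severity,
                     GLsizei length, const GLchar *message, const void *user)
{
    (void)source; (void)id; (void)severity; (void)user;
    if (type == GL_DEBUG_TYPE_PERFORMANCE) {
        /* Some drivers report buffer migrations/evictions here; the
           text is free-form and not specified anywhere. */
        perf_msgs++;
        fprintf(stderr, "perf: %.*s\n", (int)length, message);
    }
}
```

Whether anything useful shows up depends completely on the driver - the spec only standardizes the channel, not the content.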

Exactly! That’s the only way to go right now.

[QUOTE=thokra;1251945]I can understand that different hardware does things differently.[/QUOTE]
Nope! Different drivers have different policies!

Once again, it is up to the drivers alone. That wouldn’t work on NV. For NV, as far as I have experienced, performance is the paramount goal. On drivers I tested a year or two ago, I couldn’t allocate objects larger than the dedicated memory size (minus the space initially allocated by the system).
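One way to check such a policy empirically is a probe loop: keep allocating fixed-size buffers until glGetError() returns GL_OUT_OF_MEMORY. A sketch of just the loop, with the per-chunk allocation (glGenBuffers + glBufferData + glGetError in a real program) injected as a callback; the stand-in driver that refuses overcommit after 3072 MiB is hypothetical, chosen to mimic the 3 GB card described above:

```c
/* GL_OUT_OF_MEMORY as defined by the core spec. */
#define GL_OUT_OF_MEMORY 0x0505

/* One probe step: in a real program this would create a 1 MiB VBO via
   glGenBuffers/glBufferData and return glGetError(). Injected here so
   the probing logic can be shown (and tested) without a context. */
typedef unsigned int (*AllocChunkFn)(void);

/* Returns how many 1 MiB chunks succeeded before the first
   GL_OUT_OF_MEMORY, capped at `max_chunks`. */
static int probe_mib(AllocChunkFn alloc_chunk, int max_chunks)
{
    int n = 0;
    while (n < max_chunks && alloc_chunk() != GL_OUT_OF_MEMORY)
        n++;
    return n;
}

/* Stand-in driver that "runs out" after a hypothetical 3072 MiB,
   i.e. one that does not overcommit VRAM. */
static int fake_used = 0;
static unsigned int fake_alloc(void)
{
    return fake_used++ < 3072 ? 0u : (unsigned int)GL_OUT_OF_MEMORY;
}
```

On an overcommitting driver (like the Linux one in the original post), the same loop just never trips the cap - which is exactly why the probe tells you about the driver’s policy, not about the hardware.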

[QUOTE=thokra;1251945]One could simply rely on the implementation to do the right thing and handle memory transfers according to the usage at runtime.[/QUOTE] That’s the only way.

Simply, there is no choice.

That’s highly recommended, and that’s why the vendor-specific extensions exist.

Memory allocation is controlled by the drivers. I also don’t see any particular reason for standardizing such an API.

Not the least common denominator, but the union! Each extension has some advantages over the other.

What exactly do you want to know?

AMD has an interesting extension (GL_AMD_performance_monitor), but unfortunately the counter IDs are not publicly available. If I had an AMD card, I could discover the meanings of those IDs. I really don’t understand why they have buried such an extension in favor of a non-OpenGL API. Both vendors (NV and AMD) have their own PerfAPIs, but once again, although they are very alike, they are not part of OpenGL (and thus not standardized).
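The enumeration itself is at least discoverable: GL_AMD_performance_monitor lets you list the group and counter names at runtime even though their meanings are undocumented. A sketch of the two-call pattern for the groups (signatures as in the extension spec; the entry points, fetched via glXGetProcAddress/wglGetProcAddress in a real program, are injected here, and the stand-ins returning two groups named "G0"/"G1" are purely illustrative):

```c
#include <stdio.h>

typedef unsigned int GLuint;
typedef int GLint;
typedef int GLsizei;
typedef char GLchar;

/* Signatures from the GL_AMD_performance_monitor spec. */
typedef void (*GetGroupsFn)(GLint *numGroups, GLsizei groupsSize,
                            GLuint *groups);
typedef void (*GetGroupStringFn)(GLuint group, GLsizei bufSize,
                                 GLsizei *length, GLchar *groupString);

/* Prints every monitor group name; returns the group count. */
static int list_groups(GetGroupsFn get_groups, GetGroupStringFn get_string)
{
    GLint num = 0;
    get_groups(&num, 0, NULL);       /* first call: just the count */
    GLuint groups[64];
    if (num > 64) num = 64;
    get_groups(NULL, num, groups);   /* second call: the group IDs */
    for (GLint i = 0; i < num; i++) {
        GLchar name[256];
        GLsizei len = 0;
        get_string(groups[i], sizeof name, &len, name);
        printf("group %u: %.*s\n", groups[i], (int)len, name);
    }
    return num;
}

/* Stand-ins for the real entry points: two groups, "G0" and "G1". */
static void fake_get_groups(GLint *numGroups, GLsizei groupsSize,
                            GLuint *groups)
{
    if (numGroups) *numGroups = 2;
    if (groups && groupsSize >= 2) { groups[0] = 0; groups[1] = 1; }
}
static void fake_get_group_string(GLuint group, GLsizei bufSize,
                                  GLsizei *length, GLchar *name)
{
    (void)bufSize;
    name[0] = 'G'; name[1] = (GLchar)('0' + group); *length = 2;
}
```

The same count-then-fetch pattern repeats for the counters inside each group; it is mapping the resulting names to actual hardware events that is the undocumented part.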
