Is memory allocated with vkAllocateMemory compatible with different memory types (same heap)?


I have a question regarding memory management in Vulkan. Generally, it is recommended to allocate a large memory block and then suballocate from it for all the different resources. But one thing confuses me when I look at the vkAllocateMemory function. The information one needs to provide to this function requires a memoryTypeIndex as opposed to a heapIndex. Does this mean that the allocated memory block is only compatible with the one memory type provided, or is it also compatible with other memory types using the same heap? If it is compatible with other memory types using the same heap, why didn’t they use the heapIndex instead? That would be a lot clearer. Otherwise this would mean one needs to allocate a memory block for each memory type as opposed to for each heap.

Thanks in advance for any incoming answers. :wink:

Otherwise this would mean one need to allocate a memory block for each memory type as opposed to for each heap.

… why?

The memory type vs. heap system keeps tripping people up, so let’s explain it very simply.

Vulkan recognizes two distinct concepts when it comes to memory. There are the actual physical pieces of RAM that the device can talk to. Then there are ways to allocate memory from one of those pools of RAM.

A heap represents a specific piece of RAM. VkMemoryHeap is the object that describes one of the available heaps of RAM that the device can talk to. There really aren’t many things that define a particular heap. Just two: the size of that RAM’s storage in bytes and the storage’s location relative to the Vulkan device (local vs. non-local).

A memory type is a particular means of allocating memory from a specific heap. VkMemoryType is the object that describes a particular way of allocating memory. And there are a lot more descriptive flags for how you can allocate memory from a heap.

So, to be clear: heaps are “physical RAM”; types are about the style of memory allocation. Therefore, you allocate memory based on type; the heap that type is associated with is where the actual memory comes from.

That’s why memory types talk about access styles: cached CPU vs. uncached, coherent access vs. explicit flushing, etc.
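To make that relationship concrete, here’s a sketch using stand-in structs (so it compiles without the Vulkan headers; the flag values mirror VkMemoryPropertyFlagBits, and the layout in the comment below is just a typical discrete-GPU example, not any particular device):

```cpp
#include <cstdint>
#include <vector>

// Stand-ins for VkMemoryHeap and VkMemoryType from <vulkan/vulkan.h>.
struct MemoryHeap { uint64_t size; bool deviceLocal; };
struct MemoryType { uint32_t heapIndex; uint32_t propertyFlags; };

// Values mirror VkMemoryPropertyFlagBits.
constexpr uint32_t DEVICE_LOCAL_BIT  = 0x1;
constexpr uint32_t HOST_VISIBLE_BIT  = 0x2;
constexpr uint32_t HOST_COHERENT_BIT = 0x4;
constexpr uint32_t HOST_CACHED_BIT   = 0x8;

struct MemoryProperties {
    std::vector<MemoryType> types;  // ways of allocating
    std::vector<MemoryHeap> heaps;  // actual pieces of RAM
};

// Each type names the heap it draws from via heapIndex; several types can
// point at the same heap.
const MemoryHeap& heapOf(const MemoryProperties& props, uint32_t typeIndex) {
    return props.heaps[props.types[typeIndex].heapIndex];
}
```

On a typical discrete GPU you might see one device-local heap (VRAM) plus one host heap (system RAM), with two or three types pointing at that host heap (coherent vs. cached access).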

A particular resource has limitations as to which memory types can be used to back that resource’s data. Some GPUs may not be able to read from textures that you have cached CPU access to. That’s not a property of the heap itself; that’s a property of how you allocated memory from that heap. That’s a property of the memory type.

So, getting back to your statement. There is no reason to “allocate a memory block for each memory type as opposed to for each heap”. You allocate memory based on the access patterns you need (of course, noting the heaps you’re pulling memory from and the requirements of the resources). If you need cached CPU access to a resource, then you allocate memory for that resource using a memory type that has cached CPU access.

Of course, this can conflict with the requirements for resources. As previously stated, a GPU may not allow you to put an image in memory that you have cached CPU access to. But the Vulkan API gives you the tools needed to ask that question (the memory type bits returned by vkGetImageMemoryRequirements), and if the implementation can’t do it, then you’ll have to work something else out.
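The standard way to combine those memory type bits with the properties you want looks something like this (again with stand-in types so the sketch compiles on its own; in real code the type list comes from vkGetPhysicalDeviceMemoryProperties and memoryTypeBits from vkGetImageMemoryRequirements):

```cpp
#include <cstdint>
#include <stdexcept>
#include <vector>

struct MemoryType { uint32_t heapIndex; uint32_t propertyFlags; };

constexpr uint32_t DEVICE_LOCAL_BIT = 0x1;  // mirrors VkMemoryPropertyFlagBits
constexpr uint32_t HOST_VISIBLE_BIT = 0x2;
constexpr uint32_t HOST_CACHED_BIT  = 0x8;

// Bit i of memoryTypeBits set means memory type i may back the resource.
uint32_t findMemoryType(const std::vector<MemoryType>& types,
                        uint32_t memoryTypeBits,
                        uint32_t requiredFlags) {
    for (uint32_t i = 0; i < types.size(); ++i) {
        const bool allowed  = (memoryTypeBits & (1u << i)) != 0;
        const bool hasFlags = (types[i].propertyFlags & requiredFlags) == requiredFlags;
        if (allowed && hasFlags)
            return i;  // this index goes into VkMemoryAllocateInfo::memoryTypeIndex
    }
    throw std::runtime_error("no compatible memory type");
}
```

If this throws, the implementation can’t give you that combination, and you have to work out a different access strategy.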

Hey Alfonse,

that makes a lot of sense. Thanks for clarifying things. I think I didn’t ponder enough before writing the thread. :smiley:

So if I want to use memory arenas for my resources, I need to allocate one for each access type, ignoring the underlying heap (physical memory): a staging arena (host-visible & coherent), a device arena (device-local), a read-back arena (host-visible & cached), etc. Then, if a resource supports the arena, I can sub-allocate from it. That keeps allocations low.

No! This “allocate one for each access type, ignoring its underlying heap” is exactly what you don’t do.

You cannot ignore heaps. They are, after all, where the memory ultimately comes from. You cannot allocate more memory than exists. If you allocate 512MB from two different memory types that use the same heap, you have allocated 1GB of memory from that heap.

You’re thinking way too mechanically here. Vulkan is not the kind of API where you can mindlessly just do stuff, particularly as it concerns memory allocations. Your allocation pattern needs to be based on two things: what your application needs to do its job, and what the implementation provides.

Your application needs, for example, images of some particular formats. And your application wants to put those images in the best possible memory. However, that “best possible memory” may not be host accessible. So you have to be able to deal with a number of possibilities:

1: Host-visible, device-local memory: You still need staging, since you need to copy to the optimal image format, so you’ll need additional memory, but it can come from this same memory type. So you can cover both with a single allocation.

2: Device-local, but not host-visible: You now need a separate staging memory allocation. And you’ll need to decide whether you want that allocation to be non-coherent or coherent, cached or non-cached, as is appropriate for what the implementation provides and your use cases.

And that assumes that the desired memory (device-local) will work for your particular image formats. Which is something you have to query. If it doesn’t work, then you have to find the best memory which will work.
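One way to express that “best memory which will work” search is a preference-ordered fallback, sketched below (stand-in types again; the candidate list is an assumption about what you would prefer, not something Vulkan prescribes):

```cpp
#include <cstdint>
#include <optional>
#include <vector>

struct MemoryType { uint32_t propertyFlags; };

constexpr uint32_t DEVICE_LOCAL_BIT = 0x1;  // mirrors VkMemoryPropertyFlagBits
constexpr uint32_t HOST_VISIBLE_BIT = 0x2;

// Walk the candidate flag sets in order of preference; within each, only
// consider the types the resource allows (memoryTypeBits).
std::optional<uint32_t> pickMemoryType(const std::vector<MemoryType>& types,
                                       uint32_t memoryTypeBits,
                                       const std::vector<uint32_t>& candidates) {
    for (uint32_t wanted : candidates)
        for (uint32_t i = 0; i < types.size(); ++i)
            if ((memoryTypeBits & (1u << i)) != 0 &&
                (types[i].propertyFlags & wanted) == wanted)
                return i;
    return std::nullopt;  // nothing usable: rethink the strategy
}
```

With candidates {DEVICE_LOCAL|HOST_VISIBLE, DEVICE_LOCAL}, landing on a plain device-local type is exactly the signal that you are in case 2 and need a separate staging allocation.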

You cannot have a reasonable allocation strategy without first deciding what your needs are.

Okay, saying “ignoring the underlying heap” was dumb. You’re right. It is clear to me that the sum of all allocations from a specific heap needs to stay under its limit, with some headroom, since other applications might have already taken pieces from the GPU memory cake as well. Sorry for sometimes being dumb, but I am still a beginner in Vulkan and am currently learning and exploring the API in Visual Studio. And the spec is not always easy to understand (English is not my mother tongue).

So how would you approach memory allocations in Vulkan? Would you make an allocation for each single resource, or sub-allocate from a big pre-allocated memory block that matches the access pattern of your use case and is supported by the resource’s memory requirements (dictated by the driver/Vulkan implementation)? I am currently exploring a memory-pooling strategy to keep expensive allocations low (also recommended by Nvidia’s blog). Are there already examples of this?

Example: Say I have a game level and need to load 10 textures into device memory (leaving staging aside for now, i.e. how the data gets transferred over PCI-E; we are only talking about device-local memory). So my use case is to have them in device-local memory. And say all these images, with their formats, support memory type A (device-local), as queried via the memory requirements. Then I would allocate one big memory block (at least the sum of the required implementation-defined sizes plus the alignment requirements) using memory type A (considering the size of all past allocations and the limits of the underlying heap) and sub-allocate the level textures from this block. That saves me 9 further allocations. Am I wrong with this again? :stuck_out_tongue:
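For what it’s worth, the sub-allocation arithmetic in that plan can be sketched like this (pure offset math; in real code the sizes and alignments come from vkGetImageMemoryRequirements and each texture is bound with vkBindImageMemory at its offset — take it as an illustration of the idea, not a full allocator):

```cpp
#include <cstdint>
#include <vector>

struct Requirement { uint64_t size; uint64_t alignment; };  // per resource

// Round offset up to the next multiple of alignment (a power of two in Vulkan).
uint64_t alignUp(uint64_t offset, uint64_t alignment) {
    return (offset + alignment - 1) & ~(alignment - 1);
}

// Pack resources back-to-back into one block. Returns each resource's offset;
// totalSize is the single size to request from vkAllocateMemory.
std::vector<uint64_t> packOffsets(const std::vector<Requirement>& reqs,
                                  uint64_t& totalSize) {
    std::vector<uint64_t> offsets;
    uint64_t cursor = 0;
    for (const auto& r : reqs) {
        cursor = alignUp(cursor, r.alignment);
        offsets.push_back(cursor);  // -> vkBindImageMemory(..., block, offset)
        cursor += r.size;
    }
    totalSize = cursor;
    return offsets;
}
```

Ten textures then cost one vkAllocateMemory call plus ten binds, and totalSize is what you check against the heap’s remaining budget.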