Descriptor allocation from pools

Hi,
My understanding of the API is that we can only allocate descriptors as a full descriptor set from a specific pool, providing a descriptor set layout for the allocation.
So in common situations there would be several descriptor types within the layout/set. In that case, I guess the pool must have been created with all the requested types via the pPoolSizes array, or the allocation fails. Am I right?
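
To make this concrete, here is a rough sketch of what I mean; the types and counts are arbitrary placeholders, and `device` and `layout` are assumed to have been created elsewhere:

```c
#include <vulkan/vulkan.h>

/* Sketch: a pool created with several descriptor types via pPoolSizes.
   The counts below are arbitrary placeholders. */
static void createPoolAndAllocate(VkDevice device, VkDescriptorSetLayout layout)
{
    VkDescriptorPoolSize poolSizes[] = {
        { VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER,         16 },
        { VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER, 16 },
    };

    VkDescriptorPoolCreateInfo poolInfo = {0};
    poolInfo.sType         = VK_STRUCTURE_TYPE_DESCRIPTOR_POOL_CREATE_INFO;
    poolInfo.maxSets       = 8;
    poolInfo.poolSizeCount = 2;
    poolInfo.pPoolSizes    = poolSizes;

    VkDescriptorPool pool;
    vkCreateDescriptorPool(device, &poolInfo, NULL, &pool);

    /* Allocation always goes through a full set, described by a layout
       that may mix both descriptor types. */
    VkDescriptorSetAllocateInfo allocInfo = {0};
    allocInfo.sType              = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_ALLOCATE_INFO;
    allocInfo.descriptorPool     = pool;
    allocInfo.descriptorSetCount = 1;
    allocInfo.pSetLayouts        = &layout;

    VkDescriptorSet set;
    vkAllocateDescriptorSets(device, &allocInfo, &set);
}
```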

So my questions are:

  • When creating a pool with several types, will the driver immediately reserve memory for all of them, or are the provided sizes just a hint?
  • If I use a pattern where full pools are recycled (after GPU completion) using a reset: what happens when the pool is full for one specific descriptor type but not the others? Is the reserved memory wasted until the pool is reset and reused, or is the driver smart enough to reuse that memory for another pool?

If the memory is wasted, that means the allocation API design is not flexible enough, imho. Having only one descriptor type per pool would be a solution to avoid any waste, but since allocation works on full descriptor set layouts from a single pool, that would mean a separate descriptor set for each descriptor type, which sounds a little bit overkill on the performance side…
Again, it would have been great to get something more flexible, such as allocating partial descriptors within a set, or being able to provide several pools for the allocation of one single set.

I would like to hear your thoughts on this.

Thanks.

When creating a pool with several types, will the driver immediately reserve memory for all of them, or are the provided sizes just a hint?

They are not a “hint” in the sense that you cannot allocate past those limits. If you only put 10 sampler descriptors in a pool, then you cannot allocate more descriptor sets that use samplers if you’ve already spent those 10 samplers on other allocations.
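
For instance, here is a sketch of what "spending" that budget looks like; `samplerSetLayout` is a hypothetical layout holding exactly one sampler:

```c
#include <vulkan/vulkan.h>

/* Sketch: a pool whose only budget is 10 sampler descriptors. */
static void demoPoolExhaustion(VkDevice device, VkDescriptorSetLayout samplerSetLayout)
{
    VkDescriptorPoolSize size = { VK_DESCRIPTOR_TYPE_SAMPLER, 10 };

    VkDescriptorPoolCreateInfo poolInfo = {0};
    poolInfo.sType         = VK_STRUCTURE_TYPE_DESCRIPTOR_POOL_CREATE_INFO;
    poolInfo.maxSets       = 16; /* more sets than the sampler budget can cover */
    poolInfo.poolSizeCount = 1;
    poolInfo.pPoolSizes    = &size;

    VkDescriptorPool pool;
    vkCreateDescriptorPool(device, &poolInfo, NULL, &pool);

    for (int i = 0; i < 11; ++i) {
        VkDescriptorSetAllocateInfo allocInfo = {0};
        allocInfo.sType              = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_ALLOCATE_INFO;
        allocInfo.descriptorPool     = pool;
        allocInfo.descriptorSetCount = 1;
        allocInfo.pSetLayouts        = &samplerSetLayout;

        VkDescriptorSet set;
        VkResult result = vkAllocateDescriptorSets(device, &allocInfo, &set);
        /* The first 10 allocations can succeed; the 11th fails because the
           sampler budget is spent, even though maxSets is not yet reached.
           With Vulkan 1.1 / VK_KHR_maintenance1 this is reported as
           VK_ERROR_OUT_OF_POOL_MEMORY. */
        (void)result;
    }
}
```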

Whether that means the driver will reserve that memory immediately or not is an implementation detail. Also, I’m not sure how “for several types” matters one way or another.

If I use a pattern where full pools are recycled (after GPU completion) using a reset: what happens when the pool is full for one specific descriptor type but not the others? Is the reserved memory wasted until the pool is reset and reused, or is the driver smart enough to reuse that memory for another pool?

Those are implementation details. The whole point of pools, however, is to make such memory reuse possible, so I wouldn't be surprised if implementations reuse that memory rather than allocating more.
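
For what it's worth, the recycle step you describe is a single call once you know the GPU has finished with every set allocated from the pool (a sketch; `device` and `pool` are assumed to exist):

```c
/* Returns every descriptor in the pool in one go; all sets previously
   allocated from it become invalid and must not be used again. */
vkResetDescriptorPool(device, pool, 0);
/* The pool can now service fresh vkAllocateDescriptorSets calls. */
```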

Even so, I would try to avoid this usage pattern. Your pattern is based on saying how many of each descriptor type you want, and then hoping that your descriptor sets can be allocated from that pool.

The way I feel that you're supposed to use descriptor pools is that you decide up-front how many of each descriptor set you want. That is, you say, "I'm going to allocate up to 20 of these sets." You then count up the descriptors in those sets, multiply the numbers out, and build the pool so that there are enough of the right kind of descriptors to allocate exactly that many of those sets.
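
As a sketch, with hypothetical per-set counts (take the real numbers from the bindings of the layout you actually use):

```c
#include <vulkan/vulkan.h>

/* Sketch: size the pool for exactly kMaxSets instances of one set shape. */
enum { kMaxSets = 20 };

static VkDescriptorPool createPoolForSets(VkDevice device)
{
    VkDescriptorPoolSize poolSizes[] = {
        /* 2 uniform buffers per set * 20 sets = 40 */
        { VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER,         2 * kMaxSets },
        /* 3 combined image samplers per set * 20 sets = 60 */
        { VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER, 3 * kMaxSets },
    };

    VkDescriptorPoolCreateInfo poolInfo = {0};
    poolInfo.sType         = VK_STRUCTURE_TYPE_DESCRIPTOR_POOL_CREATE_INFO;
    poolInfo.maxSets       = kMaxSets;
    poolInfo.poolSizeCount = sizeof(poolSizes) / sizeof(poolSizes[0]);
    poolInfo.pPoolSizes    = poolSizes;

    VkDescriptorPool pool;
    vkCreateDescriptorPool(device, &poolInfo, NULL, &pool);
    /* Exactly kMaxSets sets of this shape can now be allocated: you run
       out of sets, never out of individual descriptors. */
    return pool;
}
```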

As such, you never run out of individual descriptors; you run out of sets to allocate from that pool. This also seems to be the design intent of descriptor pools, since that's what the fragmentation rules are about: the spec guarantees that fragmentation must not cause an allocation failure if all sets allocated from the pool since its creation or most recent reset use the same number of descriptors of each type, and the requested allocation uses that same number as well.

So if you do things the way I suggested, you will never encounter a fragmentation allocation failure.

You shouldn’t look at a descriptor pool as a bunch of random descriptors; you should look at each pool as the place a particular kind of descriptor set comes from.