Feature Matching with Vulkan

Hi,

I'm trying to match feature vectors extracted from images on the GPU with Vulkan. For each image pair I basically have to upload two feature vectors, each of which is 2500 × 48 bytes, and then compute the Hamming distance. Can someone give me a hint on how to transfer the features to and from the GPU most efficiently? As a first attempt I allocated one uniform buffer per image pair, but realized the effective size is bigger than the maximum allowed by my graphics card. Is there a more appropriate memory type to use, or do I somehow have to allocate multiple buffers and split my data?
Also, what if I don't know in advance that my feature vectors are exactly of length 2500, but they can be smaller as well as larger, with theoretically no upper bound?

/edit: I think I need to use a staging buffer, as I saw from the device limits that its maximum size is much, much bigger. Follow-up question: should I try to reuse these buffers, or is it cheap to allocate a new one for each image pair to match?

For such data you should use shader storage buffer objects (SSBOs), which allow you to store data of arbitrary size. If you have an upper bound and the actual length may vary, you can allocate an SSBO with the maximum size and pass the actual length (the number of valid vectors) to the shader, e.g. via a uniform, and then use that in your loop to iterate over the data stored in the SSBO.
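For illustration, a minimal GLSL compute shader along those lines could look like the sketch below. The binding layout and the use of a push constant (which plays the same role as a uniform here) are assumptions, as is the one-to-one comparison of vector i in A against vector i in B; an exhaustive all-pairs match would add an inner loop. Since each feature is 48 bytes, it packs into twelve 32-bit words:

```glsl
#version 450

// One invocation per feature vector; workgroup size is illustrative.
layout(local_size_x = 64) in;

// 48 bytes per feature = 12 packed uints.
layout(std430, binding = 0) readonly buffer FeaturesA { uint a[]; };
layout(std430, binding = 1) readonly buffer FeaturesB { uint b[]; };
layout(std430, binding = 2) writeonly buffer Distances { uint dist[]; };

// Actual number of valid feature vectors, set from the host per dispatch.
layout(push_constant) uniform Params { uint featureCount; } params;

void main() {
    uint i = gl_GlobalInvocationID.x;
    if (i >= params.featureCount) return; // skip padding past the valid range

    uint d = 0;
    for (uint w = 0; w < 12; ++w)
        d += bitCount(a[i * 12 + w] ^ b[i * 12 + w]); // Hamming distance per 32-bit word
    dist[i] = d;
}
```

You would then dispatch ceil(featureCount / 64) workgroups; the bounds check takes care of the padding in the oversized SSBO.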

And as you noted, you should use a staging buffer to copy your data from the host to VRAM for the best performance. Note that some implementations offer memory types that map VRAM directly into host address space, but if you need to support as many implementations as possible, staging is the way to go.
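A hedged sketch of that staging path, assuming the buffers and command buffer already exist: the staging buffer is HOST_VISIBLE | HOST_COHERENT, the SSBO is DEVICE_LOCAL with STORAGE and TRANSFER_DST usage, and buffer creation, memory-type selection, and proper synchronization (here simplified to vkQueueWaitIdle) are omitted:

```c
#include <vulkan/vulkan.h>
#include <string.h>

void uploadFeatures(VkDevice dev, VkQueue queue, VkCommandBuffer cmd,
                    const void* src, VkDeviceSize size,
                    VkBuffer staging, VkDeviceMemory stagingMem, /* HOST_VISIBLE | HOST_COHERENT */
                    VkBuffer ssbo)                               /* DEVICE_LOCAL, STORAGE | TRANSFER_DST */
{
    /* 1. Write the feature data into the mapped staging buffer. */
    void* mapped;
    vkMapMemory(dev, stagingMem, 0, size, 0, &mapped);
    memcpy(mapped, src, size);
    vkUnmapMemory(dev, stagingMem);

    /* 2. Record a copy from the staging buffer into the device-local SSBO. */
    VkCommandBufferBeginInfo begin = { VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO };
    begin.flags = VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT;
    vkBeginCommandBuffer(cmd, &begin);

    VkBufferCopy region = { 0, 0, size };
    vkCmdCopyBuffer(cmd, staging, ssbo, 1, &region);
    vkEndCommandBuffer(cmd);

    /* 3. Submit and wait (a real application would batch submissions
       and use fences instead of stalling the queue). */
    VkSubmitInfo submit = { VK_STRUCTURE_TYPE_SUBMIT_INFO };
    submit.commandBufferCount = 1;
    submit.pCommandBuffers = &cmd;
    vkQueueSubmit(queue, 1, &submit, VK_NULL_HANDLE);
    vkQueueWaitIdle(queue);
}
```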
