I am trying to match feature vectors extracted from images on the GPU with Vulkan. Basically, I have to upload two sets of feature vectors per image pair, each of which is 2500 × 48 bytes, and then compute the Hamming distance between them. Can someone give me a hint on how to transfer the features to and from the GPU most efficiently? As a first attempt I allocated one uniform buffer per image pair, but realized the effective size (2 × 120 000 bytes) exceeds the maximum uniform buffer range allowed by my graphics card. Is there a more appropriate memory type to use, or do I have to allocate multiple buffers and split my data?
Also, what if I don’t know in advance that my feature vectors are exactly 2500 elements long, but they can be smaller as well as larger, with theoretically no upper bound?
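One pattern I am considering for the variable length: keep a single buffer, grow it geometrically when a pair needs more room, and pass the actual feature count to the shader separately (e.g. via a push constant). A sketch of the sizing policy (the helper name and the minimum allocation are my own assumptions):

```c
#include <stddef.h>

#define DESC_BYTES 48u

/* Grow-only sizing policy (hypothetical): round the required size up to the
   next power of two, so a buffer, once grown, is reused for all subsequent
   pairs with the same or fewer features. */
static size_t pool_size_for(size_t feature_count)
{
    size_t need = feature_count * DESC_BYTES;
    size_t cap = 4096; /* assumed minimum allocation */
    while (cap < need)
        cap *= 2;
    return cap;
}
```

With this, 2500 features (120 000 bytes) would land in a 131 072-byte allocation, and most frames would reuse the existing buffer instead of reallocating.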
/edit: I think I need to use a storage buffer instead, as I saw from the device limits that its maximum range is much, much bigger than the uniform buffer one. Follow-up question: should I reuse these buffers, or is allocation cheap enough to create a new one for each image pair to match?