Feature Matching with Vulkan

Hi,

I'm trying to match feature vectors extracted from images on the GPU with Vulkan. I basically have to upload two feature vectors per image pair, each of which is 2500 × 48 bytes, and then compute the Hamming distance. Can someone give me a hint on how to transfer the features to and from the GPU most efficiently? As a first attempt I allocated one uniform buffer per image pair, but realized the effective size is bigger than the maximum allowed by my graphics card. Is there a more appropriate memory type to use, or do I somehow have to allocate multiple buffers and split my data?
Also, what if I don't know in advance that my feature vectors are exactly 2500 entries long, but they can be smaller as well as larger, with theoretically no upper bound?

/edit: I think I need to use a staging buffer, as I saw from the device limits that the size limit there is much, much bigger. Follow-up question: should I try to reuse these buffers, or is it cheap to allocate them, i.e. one for each image pair to match?

For such data you should use shader storage buffer objects (SSBOs), which allow you to store data of (almost) arbitrary size. If you have an upper bound and the actual length may change, you can allocate an SSBO with the maximum size, pass the actual length (the number of valid vectors) to the shader, e.g. via a uniform, and then use that in your loop to iterate over the data stored in the SSBO.
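
For example, here is a minimal compute-shader sketch of that pattern, which also computes a Hamming distance per index using bitCount(). The binding numbers, the packing of features into uints, and the pairing of the i-th vector of each set are assumptions purely for illustration:

#version 450

layout(local_size_x = 64) in;

// Both feature sets, packed as 32-bit words; allocated with the maximum size.
layout(std430, binding = 0) buffer FeaturesA { uint a[]; };
layout(std430, binding = 1) buffer FeaturesB { uint b[]; };
layout(std430, binding = 2) buffer Distances { uint dist[]; };

// Actual number of valid vectors, passed in via a uniform as described above.
layout(binding = 3) uniform Params { uint validCount; };

const uint WORDS_PER_VECTOR = 12u; // 48 bytes = 12 uints per feature vector

void main()
{
    uint i = gl_GlobalInvocationID.x;
    if (i >= validCount) // only touch the valid portion of the buffer
        return;

    // Hamming distance between the i-th vectors of both sets.
    uint d = 0u;
    for (uint w = 0u; w < WORDS_PER_VECTOR; ++w)
        d += uint(bitCount(a[i * WORDS_PER_VECTOR + w] ^ b[i * WORDS_PER_VECTOR + w]));
    dist[i] = d;
}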

And as you noted, you should use staging to copy your data from the host to VRAM for the best performance. Note that some implementations offer memory types that map VRAM directly into host-visible memory, but if you need to support as many implementations as possible, staging is the way to go.
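
A rough C++ sketch of that staging path, assuming the buffers, the command buffer, and a HOST_VISIBLE | HOST_COHERENT staging allocation already exist (all handles and the function name are placeholders):

#include <cstring>
#include <vulkan/vulkan.h>

// Copies 'size' bytes of feature data into a host-visible staging buffer,
// then records a transfer into a device-local SSBO. Assumes 'stagingMemory'
// was allocated from a HOST_VISIBLE | HOST_COHERENT memory type and that
// 'cmd' is a command buffer in the recording state.
void uploadFeatures(VkDevice device, VkDeviceMemory stagingMemory,
                    VkBuffer stagingBuffer, VkBuffer deviceLocalBuffer,
                    VkCommandBuffer cmd, const void* features, VkDeviceSize size)
{
    // 1. Write the feature bytes into the mapped staging memory.
    void* mapped = nullptr;
    vkMapMemory(device, stagingMemory, 0, size, 0, &mapped);
    std::memcpy(mapped, features, static_cast<size_t>(size));
    vkUnmapMemory(device, stagingMemory);

    // 2. Record the copy from the staging buffer into VRAM.
    VkBufferCopy region{};
    region.srcOffset = 0;
    region.dstOffset = 0;
    region.size = size;
    vkCmdCopyBuffer(cmd, stagingBuffer, deviceLocalBuffer, 1, &region);

    // A buffer memory barrier is still needed before the compute shader
    // reads from 'deviceLocalBuffer' (omitted here for brevity).
}

As for the follow-up question above: reusing one staging buffer (or a small ring of them) is generally preferable to allocating a fresh one per image pair, since vkAllocateMemory is comparatively expensive and the total number of allocations is limited by maxMemoryAllocationCount.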


Thanks! That already helps.
What is the best way to pass the features to the shader, assuming they are just byte vectors on the C++ side? In GLSL I only see fixed-size data types, or arrays that also seem to prefer a fixed upper bound. Am I better off using one of the sampler types for this?

Just use a shader storage buffer object, as noted above. They don't require you to set a fixed size:

// e.g. one 48-byte feature vector per element (std430 avoids std140's array padding)
struct Vector { uint data[12]; };

layout(std430, binding = 0) buffer Vectors
{
    Vector vectors[];
};

You can then use the length() function on the runtime-sized array to query its size, e.g. when iterating through it:

for (int i = 0; i < vectors.length(); i++)
{
    // process vectors[i]
}

I played around with this a little and it works pretty well so far. However, I realized that the maximum array size is capped by how large I allocated the storage buffer to begin with. I guess there is no way around this, and I still have to either determine an upper limit at buffer allocation time or reallocate and rebind a new buffer every time a new maximum size for the feature vectors comes in?!

To get the results back, I'm ideally looking for something like this:

layout(binding = 1) buffer Results
{
    uint size;
    uint x[];
    uint y[];
    float f[];
};

So size determines the length of all three arrays but is determined dynamically in the compute shader (worst case, I could bound it by the squared length of the feature vectors). However, I realized that apparently a runtime-sized array may only be the last member of the block, right? So how can I map the above to GLSL?

Well… what did you expect? You can’t have more memory than you ask for.

Well, that's never going to be possible. How could it know how big x is if there's an equally indeterminate y following it? And if it doesn't know how big x is, how could it know how to even find y?

If these arrays are intended to be the same size, then you should just have a struct containing a single x, y and f, then have the last member be an array of such structs.
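
In GLSL that could look like this (std430 and the binding number are assumptions, matching the post above):

struct Result
{
    uint  x;
    uint  y;
    float f;
};

layout(std430, binding = 1) buffer Results
{
    uint   size;      // written by the shader
    Result results[]; // the runtime-sized array must be the last member
};

The shader can then append a result with something like uint idx = atomicAdd(size, 1); followed by a write to results[idx].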
