I’m trying to find the fastest way to do an offscreen render and encode the output to JPEG. I’ve tried several JPEG encoding libraries, and so far the only one that fits my time budget is an open-source GPU-accelerated encoder. Unfortunately, it only accepts RGB input, so I need to either discard the alpha channel or modify the library. I’ve pretty much given up on the second option and am now trying to figure out how to discard the alpha channel.
I’m using Sascha Willems’ render headless example for testing purposes.
Copying with memcpy is far too expensive (~200 ms for a 1024x1024 image), so I figured I’d ask here to make sure I’m not approaching this the wrong way.
What I’ve tried:
- Simply changing the image format to R8G8B8 doesn’t work, as the format isn’t supported, at least as far as I can tell from calling vkGetPhysicalDeviceImageFormatProperties
- I hoped that VkImageCopy or VkImageBlit might support such a conversion, but apparently they don’t
I thought of adding a second render pass that runs a compute shader to sample the rendered image and write the RGB values to a buffer that I could later copy from and feed to the encoder. I’m not sure the second render pass is actually necessary, though: is there a way to sample/access the image from a compute shader within the same pass?
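For reference, the per-pixel work that compute shader (or any CPU fallback) has to do is just a 4-byte-to-3-byte repack. A minimal CPU sketch of that transform (tightly packed RGBA8 in, tightly packed RGB8 out; the function name and buffer layout are my assumptions, and doing this directly on mapped GPU memory would likely be as slow as the memcpy):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Repack tightly packed RGBA8 pixels into tightly packed RGB8,
// dropping the alpha byte of every pixel.
std::vector<uint8_t> stripAlpha(const std::vector<uint8_t>& rgba,
                                size_t width, size_t height)
{
    std::vector<uint8_t> rgb(width * height * 3);
    for (size_t px = 0; px < width * height; ++px) {
        rgb[px * 3 + 0] = rgba[px * 4 + 0]; // R
        rgb[px * 3 + 1] = rgba[px * 4 + 1]; // G
        rgb[px * 3 + 2] = rgba[px * 4 + 2]; // B
        // rgba[px * 4 + 3] (alpha) is discarded
    }
    return rgb;
}
```

A compute shader doing the same on the GPU would read the rendered image and write the three channel bytes into an SSBO (packed into uints, since std430 vec3 alignment would otherwise reintroduce padding), so only 3 bytes per pixel ever need to cross back to the host.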
If you can think of a better way to discard the alpha channel, or know of a better GPU-accelerated JPEG encoder, I’d appreciate any suggestions.