I am seeing weird results from vkGetImageSubresourceLayout (in the returned VkSubresourceLayout) in two cases. In both cases the implementation (Ubuntu 18.04 with the Intel open-source driver) reports that the features I want to use are supported with linear tiling.
In the first case, with VK_FORMAT_R8G8B8_UNORM, the values in the VkSubresourceLayout returned by vkGetImageSubresourceLayout are identical regardless of the mipLevel passed in the VkImageSubresource: offset is always 0, and size and the pitches never change.
In the second case, with VK_FORMAT_BC2_UNORM_BLOCK and an image with multiple layers, the VkSubresourceLayout returned for mipLevel = 0 contains values for an uncompressed texture with a 4-byte texel. The texture is 256x256: rowPitch is reported as 1024, and depthPitch and arrayPitch as 262144 (1024 * 256). size, though, is reported correctly as 458752, which matches the input data. There are 7 layers in this data, and obviously 7 * 262144 > 458752. If I ignore the pitch values and just copy the data into the mapped image, everything works.
I don’t know whether these are implementation bugs or my misunderstanding of the spec. If the former, I’d like to report them.
Please don’t waste your time telling me to use optimal tiling. I am working on a general texture loader that supports both.