Understanding per-object resources

OK, before going further I am trying to enable the validation layers, but there comes another problem: vkEnumerateInstanceLayerProperties returns only 2 layers - VK_LAYER_RENDERDOC_Capture and VK_LAYER_NV_optimus - but there is no VK_LAYER_LUNARG_standard_validation.
Digging around a bit, I found that the HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Khronos\Vulkan registry key had not been created. OK, so I created it manually:

[HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Khronos\Vulkan]

[HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Khronos\Vulkan\ExplicitLayers]
"D:\VulkanSDK\1.0.17.0\Bin\VkLayer_api_dump.json"=dword:00000000
"D:\VulkanSDK\1.0.17.0\Bin\VkLayer_device_limits.json"=dword:00000000
"D:\VulkanSDK\1.0.17.0\Bin\VkLayer_draw_state.json"=dword:00000000
"D:\VulkanSDK\1.0.17.0\Bin\VkLayer_image.json"=dword:00000000
"D:\VulkanSDK\1.0.17.0\Bin\VkLayer_mem_tracker.json"=dword:00000000
"D:\VulkanSDK\1.0.17.0\Bin\VkLayer_object_tracker.json"=dword:00000000
"D:\VulkanSDK\1.0.17.0\Bin\VkLayer_param_checker.json"=dword:00000000
"D:\VulkanSDK\1.0.17.0\Bin\VkLayer_screenshot.json"=dword:00000000
"D:\VulkanSDK\1.0.17.0\Bin\VkLayer_swapchain.json"=dword:00000000
"D:\VulkanSDK\1.0.17.0\Bin\VkLayer_threading.json"=dword:00000000
"D:\VulkanSDK\1.0.17.0\Bin\VkLayer_unique_objects.json"=dword:00000000
"D:\VulkanSDK\1.0.17.0\Bin\VkLayer_vktrace_layer.json"=dword:00000000

[HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Khronos\Vulkan\ImplicitLayers]
"C:\Program Files\RenderDoc\x86\renderdoc.json"=dword:00000000

But I still get only these 2 layers. I reinstalled the SDK with the latest driver and the latest version - still the same. It's a GTX 680 on Win7 64-bit. Seems like some installation bug?

Are you actually linking to and using the Vulkan SDK?

What architecture do you target? The HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Khronos\Vulkan registry key is only for 32-bit applications. If your application is targeting x64, check HKEY_LOCAL_MACHINE\SOFTWARE\Khronos\Vulkan.

Also check the VK_LAYER_PATH environment variable. The loader will look there for the validation layers. I use validation layers compiled from source and as such have set that variable to my custom layer path.
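Once the loader can actually see the layers, checking for and enabling the standard validation layer at instance creation looks roughly like this (just a sketch):

// Sketch: list what the loader actually sees, then enable the layer by name.
uint32_t layerCount = 0;
vkEnumerateInstanceLayerProperties(&layerCount, nullptr);
std::vector<VkLayerProperties> availableLayers(layerCount);
vkEnumerateInstanceLayerProperties(&layerCount, availableLayers.data());

const char* validationLayer = "VK_LAYER_LUNARG_standard_validation";
// ... make sure validationLayer shows up in availableLayers before enabling it

VkInstanceCreateInfo instanceInfo = {};
instanceInfo.sType               = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
instanceInfo.enabledLayerCount   = 1;
instanceInfo.ppEnabledLayerNames = &validationLayer;
// pApplicationInfo and extensions omitted here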

Oh, I found the x64 branch - there were no explicit layers there, so I added them manually. Now the layers are loaded.
They pointed out around 20 small errors; now both color attachments work, but there are still some validation errors. I assume the main error was the blend state, which was not extended for the new color attachment.

Yes, exactly. The number of blend attachment states must match the number of your color attachments. See the corresponding code in my deferred example.
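In essence it boils down to something like this (a minimal sketch with two color attachments assumed, not the verbatim sample code):

// One blend attachment state per color attachment - two are assumed here,
// matching the two MRT color targets.
VkPipelineColorBlendAttachmentState blendAttachmentStates[2] = {};
for (auto& state : blendAttachmentStates) {
    state.colorWriteMask = 0xF;       // write all RGBA channels
    state.blendEnable    = VK_FALSE;  // no blending in the geometry pass
}

VkPipelineColorBlendStateCreateInfo colorBlendState = {};
colorBlendState.sType           = VK_STRUCTURE_TYPE_PIPELINE_COLOR_BLEND_STATE_CREATE_INFO;
colorBlendState.attachmentCount = 2;  // must equal the subpass' color attachment count
colorBlendState.pAttachments    = blendAttachmentStates;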

So I guess it works now after fixing all the errors reported by the validation layers?

Yes, I was basing my code on your deferred shading sample, but I had overlooked this part.
Now both attachments are rendered, so I will implement MRT rendering with my engine's material support before going further.

By the way, in deferred shading there has always been a trend to minimize the G-buffer sampler count because of the cost of each additional texture lookup. Is that still true in Vulkan? Or can I, for example, make a distinct RGB sampler for normals only and not pack anything else into it?

In terms of lookups this should still apply to Vulkan. So if you're looking for performance, pack and unpack where possible.

I am facing a stupid problem with passing a uniform buffer object. As soon as it becomes complex, the passing fails - I assume due to some compile-time layout reordering/alignment.
I am trying to pass this buffer from my application:

struct material_uniform_t {
    float ambient[3];
    float diffuse[3];
    float opacity;
    float specPower, specExp;
};

to

layout(set = 0, binding = 2) uniform MaterialBlock
{
    vec3 ambient;
    vec3 diffuse;
    float opacity;
    float specPower, specExp;
} ubo;

But it seems like only part of it is passed correctly. My C++ structure is 4-byte aligned (36 bytes). On the shader side I assumed that vec3 = 3 * float, so the layouts should coincide. Maybe there are some rules for designing app-to-shader layouts, or a link where I can read about it?

That's an alignment problem. Try to stay away from using vec3s in your UBOs or SSBOs in general to avoid such problems, or do manual padding. With std140 (which is the default AFAIR) vec3s are aligned to 16 bytes, while on your C++ side a float[3] only occupies 12 bytes with 4-byte alignment. Note that you can actually "fix" this on the C++ side with C++11's alignas, but in general it's better to just go with vec4s.

Technically, even alignas/_Alignas isn't enough to fix it. C/C++ and GLSL simply have fundamentally incompatible alignment behavior. In C++, alignment affects the size of the object; in GLSL, it doesn't (necessarily). If you align the two float arrays in C++ to 16 bytes, then opacity will end up in the wrong place, because in GLSL the vec3 only takes up 12 bytes and opacity is 4-byte aligned. If you don't align them, then diffuse (and everything after it) will be in the wrong place.

The only way to fix it (besides not using vec3) is to manually insert padding into the C++ data structure.
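For illustration, a manually padded C++ struct matching the std140 layout of the MaterialBlock above could look like this (a sketch following the std140 rules):

// Manually padded to match std140: a vec3 is 16-byte aligned but only
// occupies 12 bytes, so a following float can reuse the rest of that slot.
struct material_uniform_t {
    float ambient[3];   // offset  0 (vec3 ambient)
    float pad0;         // offset 12, pads ambient to the next 16-byte boundary
    float diffuse[3];   // offset 16 (vec3 diffuse)
    float opacity;      // offset 28 (float opacity fits right after diffuse)
    float specPower;    // offset 32
    float specExp;      // offset 36
};                      // 40 bytes total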

OK, from now on I'll use vec4 with a corresponding float[4] on the application side. It seems to be working now.
Now I'm facing a design question about implementing deferred shading in terms of render passes.
My overall design assumes that after all rendering I must end up with an offscreen image containing the rendered scene, so it does not involve the swapchain or presenting an image.
I am not sure I can use your sample's design as a base, because it has fixed geometry/lights.
The geometry, which is the input for the geometry pass, can vary from frame to frame. The lights for the lighting pass can vary too. The final pass is always the same.
My first intention was to create one renderPass with 3 subpasses, but then I realized that I cannot rebind some buffers during a render pass. Now it seems like I need 3 render passes, which must be executed sequentially?

You can bind a different descriptor set that points to the proper data for each draw inside a render pass.
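Roughly like this, inside a single render pass (names such as perObjectSets and drawObject are hypothetical):

// Sketch: one render pass, a different descriptor set bound per draw.
vkCmdBeginRenderPass(cmd, &renderPassBeginInfo, VK_SUBPASS_CONTENTS_INLINE);
vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, pipeline);
for (size_t i = 0; i < objects.size(); ++i) {
    // Each set points at that object's uniform buffer / textures.
    vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, pipelineLayout,
                            0, 1, &perObjectSets[i], 0, nullptr);
    drawObject(cmd, objects[i]); // vkCmdBindVertexBuffers + vkCmdDrawIndexed etc.
}
vkCmdEndRenderPass(cmd);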

I have done it with 3 render passes (one for geometry, one for lighting, one for the final composition), and now I am facing new difficulties with the lighting pass. It works fine with 1 light, but I want it to work with a dynamic set of lights. I assume it should be done via instancing (at least that is what I would do in OpenGL). My planned steps are (see the sketch after the list):

  1. Clear both diffuse and specular accumulators
  2. Enable blending as simple addition
  3. Prepare light buffer of all the point lights
  4. Render with per-instance light data feeding
  5. Prepare light buffer of all the spot lights
  6. Render them with per-instance light data feeding
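For the per-instance light data feeding, something like this is what I have in mind (hypothetical names, just a sketch):

// Per-instance point light data, advanced once per instance (steps 3-4 above).
struct point_light_instance_t {
    float positionRadius[4];  // xyz = position, w = radius
    float colorIntensity[4];  // rgb = color,    w = intensity
};

VkVertexInputBindingDescription bindings[2] = {};
bindings[0].binding   = 0;                               // light volume mesh
bindings[0].stride    = sizeof(float) * 3;               // vec3 position
bindings[0].inputRate = VK_VERTEX_INPUT_RATE_VERTEX;
bindings[1].binding   = 1;                               // per-light data
bindings[1].stride    = sizeof(point_light_instance_t);
bindings[1].inputRate = VK_VERTEX_INPUT_RATE_INSTANCE;

// When recording the lighting pass:
VkBuffer     buffers[] = { lightVolumeVertexBuffer, pointLightInstanceBuffer };
VkDeviceSize offsets[] = { 0, 0 };
vkCmdBindVertexBuffers(cmd, 0, 2, buffers, offsets);
vkCmdDrawIndexed(cmd, lightVolumeIndexCount, pointLightCount, 0, 0, 0);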

But first of all, which render pass/subpass design should I use? Right now I am thinking of 3 pipelines with 3 render passes (clear, point lights, spot lights), but maybe that's not good?