Multiview deferred rendering help

I am trying to implement multiview rendering for VR on an existing deferred shading pipeline, and it's not clear to me how the textures should be connected between the two passes.

The G-buffer pass seems to be generating two-layer textures with the correct information (the objects are slightly offset between the left and right views/layers in RenderDoc), but in the final image after the lighting pass the right eye is wrong and the objects aren't properly aligned when viewed in VR.

What I don't understand is whether I need to change the texture samplers in the lighting pass from sampler2D to sampler2DArray and sample them using gl_ViewIndex as the layer coordinate, or whether Vulkan handles this internally in this case (so the existing sampler2D would be fine) and I'm looking in a totally wrong direction. Thanks in advance for any clarifications.
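
For reference, this is roughly the change I'm considering in the lighting-pass fragment shader. It's only a sketch: the binding layout, set/location numbers, and texture names are placeholders for my actual G-buffer, and it assumes the lighting pass is itself part of a multiview render pass so that gl_ViewIndex is available.

```glsl
#version 450
#extension GL_EXT_multiview : enable

// Placeholder G-buffer bindings -- my real pipeline uses different names/slots
layout(set = 0, binding = 0) uniform sampler2DArray gAlbedo;
layout(set = 0, binding = 1) uniform sampler2DArray gNormal;

layout(location = 0) in vec2 inUV;
layout(location = 0) out vec4 outColor;

void main()
{
    // Pick the G-buffer layer matching the eye currently being rendered
    vec3 albedo = texture(gAlbedo, vec3(inUV, gl_ViewIndex)).rgb;
    vec3 normal = normalize(texture(gNormal, vec3(inUV, gl_ViewIndex)).xyz * 2.0 - 1.0);

    // Dummy directional light just to keep the example self-contained
    float ndotl = max(dot(normal, normalize(vec3(0.3, 1.0, 0.2))), 0.0);
    outColor = vec4(albedo * ndotl, 1.0);
}
```

Is this the kind of change that's expected, or is the layer selection supposed to happen some other way?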