Where does each eye view actually get sent to the HMD?

I’m trying to find where the rendered texture actually gets submitted to become visible in the HMD. It would be easiest for me to just render to my own texture and submit or copy that image, rather than deal with the OpenXR swapchain. I’m not even sure the swapchain is needed? In SteamVR with OpenGL, I think there was a texture-copy step called SubmitView or something like that.

You must mean xrEndFrame(). You have to use an XrSwapchain; there is no way to “just submit any texture” the way OpenVR allowed.
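The flow is roughly: create an XrSwapchain per view, enumerate its VkImages, then render into (or copy into) whichever image you acquire each frame. A rough sketch only; the format, sizes, and variable names (session, viewConfigView) are placeholders, not your actual members:

// Sketch of the required swapchain path (names and format are placeholders).
XrSwapchainCreateInfo swapchainCreateInfo{ XR_TYPE_SWAPCHAIN_CREATE_INFO };
swapchainCreateInfo.usageFlags = XR_SWAPCHAIN_USAGE_COLOR_ATTACHMENT_BIT | XR_SWAPCHAIN_USAGE_TRANSFER_DST_BIT;
swapchainCreateInfo.format = VK_FORMAT_R8G8B8A8_SRGB;                  // pick one returned by xrEnumerateSwapchainFormats
swapchainCreateInfo.sampleCount = 1;
swapchainCreateInfo.width = viewConfigView.recommendedImageRectWidth;  // from xrEnumerateViewConfigurationViews
swapchainCreateInfo.height = viewConfigView.recommendedImageRectHeight;
swapchainCreateInfo.faceCount = 1;
swapchainCreateInfo.arraySize = 1;
swapchainCreateInfo.mipCount = 1;

XrSwapchain swapchain;
xrCreateSwapchain(session, &swapchainCreateInfo, &swapchain);

// The runtime owns these VkImages; each frame you acquire one, draw/copy into it, then release it.
uint32_t imageCount = 0;
xrEnumerateSwapchainImages(swapchain, 0, &imageCount, nullptr);
std::vector<XrSwapchainImageVulkanKHR> images(imageCount, { XR_TYPE_SWAPCHAIN_IMAGE_VULKAN_KHR });
xrEnumerateSwapchainImages(swapchain, imageCount, &imageCount,
    reinterpret_cast<XrSwapchainImageBaseHeader*>(images.data()));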

Why would you avoid drawing directly into the OpenXR swapchains and inflict an extra copy on yourself?

Because of the mountains of code it takes to get anything done with Vulkan.

I am hoping that by the end of the day I can just get a blue screen appearing in the headset.

Here is my code to handle the VR rendering. xrEndFrame is returning XR_ERROR_LAYER_INVALID. Note there are some missing properties like the FOV, which I haven’t tried to calculate yet, but I cannot see any reason this would produce the error.

XrResult r;

XrFrameWaitInfo frameWaitInfo{ XR_TYPE_FRAME_WAIT_INFO };
XrFrameState frameState{ XR_TYPE_FRAME_STATE };
r = xrWaitFrame(device->xrsystem.m_session, &frameWaitInfo, &frameState);

XrFrameBeginInfo begininfo{ XR_TYPE_FRAME_BEGIN_INFO };
r = xrBeginFrame(device->xrsystem.m_session, &begininfo);

std::vector<XrCompositionLayerBaseHeader*> layers;
XrCompositionLayerProjection layer{ XR_TYPE_COMPOSITION_LAYER_PROJECTION };
layer.space = device->xrsystem.space;

std::vector<XrCompositionLayerProjectionView> projectionLayerViews(2);
for (int i = 0; i < 2; ++i)
{
	projectionLayerViews[i] = { XR_TYPE_COMPOSITION_LAYER_PROJECTION_VIEW };
	projectionLayerViews[i].pose.orientation.w = 1.0f;
	//projectionLayerViews[i].pose.position;
	//projectionLayerViews[i].fov = ;
	projectionLayerViews[i].subImage.swapchain = device->xrsystem.swapchain[0];
	projectionLayerViews[i].subImage.imageRect.offset = { 0, 0 };
	projectionLayerViews[i].subImage.imageRect.extent = { device->xrsystem.renderbuffers[0][0]->size.x, device->xrsystem.renderbuffers[0][0]->size.y};
}

layer.viewCount = projectionLayerViews.size();
layer.views = projectionLayerViews.data();

layers.push_back(reinterpret_cast<XrCompositionLayerBaseHeader*>(&layer));
XrFrameEndInfo frameEndInfo{ XR_TYPE_FRAME_END_INFO };
frameEndInfo.displayTime = frameState.predictedDisplayTime;
frameEndInfo.environmentBlendMode = device->xrsystem.bmode;
frameEndInfo.layerCount = (uint32_t)layers.size();
frameEndInfo.layers = layers.data();
r = xrEndFrame(device->xrsystem.m_session, &frameEndInfo);

The error above can be caused by not calling xrAcquireSwapchainImage / xrReleaseSwapchainImage, so I added that functionality.

		//---------------------------------------------------------------------------
		// Submit VR Views
		//---------------------------------------------------------------------------

		XrResult r;

		XrFrameWaitInfo frameWaitInfo{ XR_TYPE_FRAME_WAIT_INFO };
		XrFrameState frameState{ XR_TYPE_FRAME_STATE };
		r = xrWaitFrame(device->xrsystem.m_session, &frameWaitInfo, &frameState);

		XrFrameBeginInfo begininfo{ XR_TYPE_FRAME_BEGIN_INFO };
		r = xrBeginFrame(device->xrsystem.m_session, &begininfo);

		// Render view to the appropriate part of the swapchain image.
		for (uint32_t i = 0; i < device->xrsystem.viewCount; i++)
		{
			// Each view has a separate swapchain which is acquired, rendered to, and released.
			XrSwapchainImageAcquireInfo acquireInfo{ XR_TYPE_SWAPCHAIN_IMAGE_ACQUIRE_INFO };

			uint32_t swapchainImageIndex;
			r = (xrAcquireSwapchainImage(device->xrsystem.swapchain[i], &acquireInfo, &swapchainImageIndex));

			XrSwapchainImageWaitInfo waitInfo{ XR_TYPE_SWAPCHAIN_IMAGE_WAIT_INFO };
			waitInfo.timeout = XR_INFINITE_DURATION;
			r = (xrWaitSwapchainImage(device->xrsystem.swapchain[i], &waitInfo));

			//projectionLayerViews[i] = { XR_TYPE_COMPOSITION_LAYER_PROJECTION_VIEW };
			//projectionLayerViews[i].pose = m_views[i].pose;
			//projectionLayerViews[i].fov = m_views[i].fov;
			//projectionLayerViews[i].subImage.swapchain = viewSwapchain.handle;
			//projectionLayerViews[i].subImage.imageRect.offset = { 0, 0 };
			//projectionLayerViews[i].subImage.imageRect.extent = { viewSwapchain.width, viewSwapchain.height };

			const XrSwapchainImageBaseHeader* const swapchainImage = (const XrSwapchainImageBaseHeader*)&device->xrsystem.swapchainImages[i][swapchainImageIndex];
			//m_graphicsPlugin->RenderView(projectionLayerViews[i], swapchainImage, m_colorSwapchainFormat, cubes);

			XrSwapchainImageReleaseInfo releaseInfo{ XR_TYPE_SWAPCHAIN_IMAGE_RELEASE_INFO };
			r = (xrReleaseSwapchainImage(device->xrsystem.swapchain[i], &releaseInfo));
		}

		std::vector<XrCompositionLayerBaseHeader*> layers;
		XrCompositionLayerProjection layer{ XR_TYPE_COMPOSITION_LAYER_PROJECTION };
		layer.space = device->xrsystem.space;

		std::vector<XrCompositionLayerProjectionView> projectionLayerViews(2);
		for (int i = 0; i < device->xrsystem.viewCount; ++i)
		{
			projectionLayerViews[i] = { XR_TYPE_COMPOSITION_LAYER_PROJECTION_VIEW };
			projectionLayerViews[i].pose.orientation.w = 1.0f;
			//projectionLayerViews[i].pose.position;
			//projectionLayerViews[i].fov = ;
			projectionLayerViews[i].subImage.swapchain = device->xrsystem.swapchain[i];
			projectionLayerViews[i].subImage.imageRect.offset = { 0, 0 };
			projectionLayerViews[i].subImage.imageRect.extent = { device->xrsystem.renderbuffers[0][0]->size.x, device->xrsystem.renderbuffers[0][0]->size.y};
		}

		layer.viewCount = projectionLayerViews.size();
		layer.views = projectionLayerViews.data();

		layers.push_back(reinterpret_cast<XrCompositionLayerBaseHeader*>(&layer));
		XrFrameEndInfo frameEndInfo{ XR_TYPE_FRAME_END_INFO };
		frameEndInfo.displayTime = frameState.predictedDisplayTime;
		frameEndInfo.environmentBlendMode = device->xrsystem.bmode;
		frameEndInfo.layerCount = (uint32_t)layers.size();
		frameEndInfo.layers = layers.data();
		r = xrEndFrame(device->xrsystem.m_session, &frameEndInfo);

I’ve gotten through the code up to the call to xrEndFrame, but then a Vulkan validation error occurs:

Validation Error: [ VUID-VkImageCreateInfo-pNext-01443 ] | MessageID = 0x18d987f6 | vkCreateImage: VkImageCreateInfo pNext chain includes VkExternalMemoryImageCreateInfo with handleTypes 16 but pCreateInfo->initialLayout is VK_IMAGE_LAYOUT_PREINITIALIZED. The Vulkan spec states: If the pNext chain includes a VkExternalMemoryImageCreateInfo or VkExternalMemoryImageCreateInfoNV structure whose handleTypes member is not 0, initialLayout must be VK_IMAGE_LAYOUT_UNDEFINED (Vulkan® 1.3.261 - A Specification (with all registered extensions))

This appears to be caused by code inside OpenXR.

xrEndFrame then returns the XR_ERROR_LAYER_INVALID error.

Every other OpenXR command is returning XR_SUCCESS.

It appears SteamVR itself might be responsible for the validation error, but that still does not solve the OpenXR error:

My debug callback throws an exception when a Vulkan validation error occurs. That was somehow causing xrEndFrame to return the error. If I disable the Vulkan error checking, xrEndFrame returns XR_SUCCESS.

So this was just caused by the Vulkan validation error originating in SteamVR. According to other people, the same error also occurs with OpenVR, so it looks like Valve has something they need to fix in there.
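For anyone else hitting this: the safer pattern is a debug callback that only logs and never throws, since unwinding an exception through the C callback back into the driver or runtime isn’t safe anyway. A sketch, assuming VK_EXT_debug_utils:

// Sketch: non-throwing VK_EXT_debug_utils callback. Log and continue instead of
// turning the runtime's internal validation noise into an application failure.
static VKAPI_ATTR VkBool32 VKAPI_CALL DebugCallback(
    VkDebugUtilsMessageSeverityFlagBitsEXT severity,
    VkDebugUtilsMessageTypeFlagsEXT types,
    const VkDebugUtilsMessengerCallbackDataEXT* data,
    void* userData)
{
    fprintf(stderr, "[vulkan] %s\n", data->pMessage);
    return VK_FALSE; // never abort the call that triggered the message
}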

I’m now doing a vkCmdCopyImage() of part of the framebuffer color image to the VR swapchain image. No XR or VK errors occur other than Valve’s error above, but I’m just standing in an empty black space, as if nothing is being drawn. At the same time, I can see that the window on my PC monitor is a solid blue color.

Make sure you set the poses and FOV of the submitted views to the data you rendered with. The runtime uses the pose information to reproject the views, and if they’re set to 0, it will reproject from the wrong pose.
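Roughly like this, filling the same projectionLayerViews you submit in xrEndFrame. This is only a sketch; I’m assuming your xrsystem member names from the code above:

// Locate the views for the predicted display time and copy pose/fov into the
// projection views that get submitted with xrEndFrame.
XrViewLocateInfo locateInfo{ XR_TYPE_VIEW_LOCATE_INFO };
locateInfo.viewConfigurationType = XR_VIEW_CONFIGURATION_TYPE_PRIMARY_STEREO;
locateInfo.displayTime = frameState.predictedDisplayTime;
locateInfo.space = device->xrsystem.space;

XrViewState viewState{ XR_TYPE_VIEW_STATE };
std::vector<XrView> views(device->xrsystem.viewCount, { XR_TYPE_VIEW });
uint32_t viewCountOutput = 0;
xrLocateViews(device->xrsystem.m_session, &locateInfo, &viewState,
    (uint32_t)views.size(), &viewCountOutput, views.data());

for (uint32_t i = 0; i < viewCountOutput; ++i)
{
    // Render each eye with these values, then submit the *same* pose and fov.
    projectionLayerViews[i].pose = views[i].pose;
    projectionLayerViews[i].fov = views[i].fov;
}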

At this point I am just trying to get a blank screen drawing on the headset, so I don’t think the pose information will matter yet.

I get a lot of Vulkan validation errors about the layout state in the ReleaseSwapchainImage command, expecting it to be COLOR_ATTACHMENT_OPTIMAL when instead it is PRESENT_SRC_KHR (or whatever it’s called).

In my own application I have to carefully track the current image layout of every mipmap in every image. How are we supposed to track the layouts of the swapchain images when the images are created and managed by a third-party black box? If I don’t know the current image layout, I can’t transition it to what it’s supposed to be.

You’re going to keep getting random API errors or unexpected behaviors with this philosophy.

There are details about expected layouts in the spec under the XR_KHR_vulkan_enable section.
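As I read XR_KHR_vulkan_enable, the color swapchain image is handed to you in a layout compatible with VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL when acquired, and the runtime treats it as being in that layout again when released, so a copy into it needs to be bracketed by transitions. Sketch only; cmd, srcImage, swapchainImage, width, and height are assumed, and srcImage is assumed to already be in TRANSFER_SRC_OPTIMAL:

// Sketch: copy the rendered color image into the acquired XR swapchain image,
// bracketed by layout transitions so the image is back in
// COLOR_ATTACHMENT_OPTIMAL before xrReleaseSwapchainImage.
VkImageMemoryBarrier toTransferDst{ VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER };
toTransferDst.srcAccessMask = 0;
toTransferDst.dstAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT;
toTransferDst.oldLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL; // layout at acquire, per XR_KHR_vulkan_enable
toTransferDst.newLayout = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL;
toTransferDst.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
toTransferDst.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
toTransferDst.image = swapchainImage; // VkImage from XrSwapchainImageVulkanKHR
toTransferDst.subresourceRange = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 };
vkCmdPipelineBarrier(cmd, VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT, VK_PIPELINE_STAGE_TRANSFER_BIT,
    0, 0, nullptr, 0, nullptr, 1, &toTransferDst);

VkImageCopy region{};
region.srcSubresource = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 0, 1 };
region.dstSubresource = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 0, 1 };
region.extent = { width, height, 1 };
vkCmdCopyImage(cmd, srcImage, VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL,
    swapchainImage, VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL, 1, &region);

VkImageMemoryBarrier backToAttachment = toTransferDst;
backToAttachment.srcAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT;
backToAttachment.dstAccessMask = 0;
backToAttachment.oldLayout = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL;
backToAttachment.newLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL; // required before release
vkCmdPipelineBarrier(cmd, VK_PIPELINE_STAGE_TRANSFER_BIT, VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT,
    0, 0, nullptr, 0, nullptr, 1, &backToAttachment);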

In any graphics application, the first step is typically just to get a clear screen working. I did the same in OpenVR, Metal, Vulkan, iOS, Android, and OpenGL.

On the fourth loop, with three swapchain images (so we are looping back to the first image), xrAcquireSwapchainImage causes a Vulkan validation error with SteamVR, saying the expected image layout is TRANSFER_SRC_OPTIMAL but instead it is COLOR_ATTACHMENT_OPTIMAL. According to the spec the layout is supposed to be VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL, so I don’t know why Vulkan is expecting the other.

Perhaps SteamVR is still doing an image copy like OpenVR does, and somewhere in that code this validation error is occurring.

I think SteamVR has had that (spec violating) layout issue for a long time. It shouldn’t prevent it from rendering though.

If the pose is wrong, the render may end up off-screen. Depending on a setting, SteamVR should either fade out to the compositor, or render mostly black to the screen (I think it ends up sampling memory outside of the swapchain image).

I suggest you listen to Rectus, since they are giving you the precise reason why the poses are needed.

You’re not drawing to a 2D screen; there are tons of additional considerations that are critical to getting things to work properly, even for just a blank screen or a first triangle.

You may be right.

You may be right, and I appreciate the advice. I get extremely frustrated if I don’t see tangible progress on a daily basis.

Started a new implementation using Valve’s OpenVR this morning, and I got a blue screen working in one day. Now that that point has been reached, the rest could be finished in short order using that API. Of course, I have to disable all Vulkan error checking during the SubmitView call, which makes me very nervous.

Both OpenXR and OpenVR are compiled in right now. I don’t plan to support both, but for now it is easy to bounce between them.