I'm missing some fundamental concept. How can I use multiple textures, models, and shaders?

I’ve gotten one of the basic Vulkan examples working and it’s performing very well. Highest marks.

I've been doing various forms of graphics programming for a couple of decades, but never worked in kernel pipeline code. The terminology your team takes for granted is virtually Sanskrit at times. It's taken me several weeks to understand that the 'swap chain' is the list of off-screen buffers my code draws into.

I'm completely blocked on how to do something like this, in pseudocode:

SceneNodeList {
    SceneNode0 { transform, mesh, vertex color, normals }
    SceneNode1 { transform, lines, vertex color }
    SceneNode2 { transform, obj mesh, texture image, mipmaps... }
    SceneNode3 { 2d overlay }
}

for (SceneNode : SceneNodeList)
    drawSceneNode(SceneNode);

In theory, such a list would draw a mix of 3D content, and then overwrite parts of it with a 2D UI frame. Inelegant, but fairly error-proof. No clipping regions, etc.

Each scene node could use a different shader, texture, etc., including no texture. They will definitely have their own transforms, which are concatenated with the view and root model transforms. That could result in hundreds or thousands of uniform blocks. Do they all go into one flat command stream?
For that matter, please define 'command.' I'm assuming it is equivalent to 'draw call' or 'draw instruction' and that the command list is like a script of OpenGL calls.

I'm blocked right now because the highest-priority case is the no-texture CAD image. I can get all of the game images to work great, but if I don't have a texture…
vkUpdateDescriptorSets throws an exception. Apparently setting descriptorCount = 0 isn't legal.

I can omit some of the textures, and a two-pass draw, where the textured models go first and the order is preserved, works, but it's an ugly solution.

My CAD image looks a bit weird when I have to keep an .obj-format potted plant in the scene just to make the code work.

The code is large and scattered; here are the most pertinent snippets.

class Model : public SceneNode { ... };

void Model::addCommands(VkCommandBuffer cmdBuff, bool hasImage, VkPipelineLayout pipelineLayout, VkDescriptorSet& descSet) {
	// hasImage selects the pass; a node records its draw only when its
	// texture state matches. The recorded commands are identical either way.
	if (hasImage != static_cast<bool>(_textureImage))
		return;

	VkBuffer vertexBuffers[] = { getVertexBuffer() };
	VkDeviceSize offsets[] = { 0 };
	vkCmdBindVertexBuffers(cmdBuff, 0, 1, vertexBuffers, offsets);

	vkCmdBindIndexBuffer(cmdBuff, getIndexBuffer(), 0, VK_INDEX_TYPE_UINT32);

	vkCmdBindDescriptorSets(cmdBuff, VK_PIPELINE_BIND_POINT_GRAPHICS, pipelineLayout, 0, 1, &descSet, 0, nullptr);

	vkCmdDrawIndexed(cmdBuff, numIndices(), 1, 0, 0, 0);
}

void Model::buildImageInfoList(std::vector<VkDescriptorImageInfo>& imageInfoList) {
	const auto& texture = getTexture();
	if (texture) {
		// The texture object converts to both VkImageView and VkSampler.
		VkDescriptorImageInfo info;
		info.imageLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
		info.imageView = *texture;
		info.sampler = *texture;
		imageInfoList.push_back(info);
	}
}
void VulkanApp::createDescriptorSets() {
	std::vector<VkDescriptorSetLayout> layouts(_swapChain._images.size(), descriptorSetLayout);
	VkDescriptorSetAllocateInfo allocInfo = {};
	allocInfo.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_ALLOCATE_INFO;
	allocInfo.descriptorPool = descriptorPool;
	allocInfo.descriptorSetCount = static_cast<uint32_t>(_swapChain._images.size());
	allocInfo.pSetLayouts = layouts.data();

	descriptorSets.resize(_swapChain._images.size());
	if (vkAllocateDescriptorSets(deviceContext.device_, &allocInfo, descriptorSets.data()) != VK_SUCCESS) {
		throw std::runtime_error("failed to allocate descriptor sets!");
	}

	// One set of writes per swap chain image; bufferInfo and imageInfoList
	// are populated per image (not shown).
	for (size_t i = 0; i < descriptorSets.size(); i++) {
		std::vector<VkWriteDescriptorSet> descriptorWrites;

		// Only include the sampler write when there is actually a texture;
		// descriptorCount = 0 is not legal.
		if (imageInfoList.empty())
			descriptorWrites.resize(1);
		else
			descriptorWrites.resize(2);

		descriptorWrites[0].sType = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET;
		descriptorWrites[0].dstSet = descriptorSets[i];
		descriptorWrites[0].dstBinding = 0;
		descriptorWrites[0].dstArrayElement = 0;
		descriptorWrites[0].descriptorType = VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER;
		descriptorWrites[0].descriptorCount = 1;
		descriptorWrites[0].pBufferInfo = &bufferInfo;

		if (!imageInfoList.empty()) {
			descriptorWrites[1].sType = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET;
			descriptorWrites[1].dstSet = descriptorSets[i];
			descriptorWrites[1].dstBinding = 1;
			descriptorWrites[1].dstArrayElement = 0;
			descriptorWrites[1].descriptorType = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER;
			descriptorWrites[1].descriptorCount = static_cast<uint32_t>(imageInfoList.size());
			descriptorWrites[1].pImageInfo = imageInfoList.data();
		}
		vkUpdateDescriptorSets(deviceContext.device_, static_cast<uint32_t>(descriptorWrites.size()), descriptorWrites.data(), 0, nullptr);
	}
}
It's pretty clear that I probably need multiple pipelines, and possibly more than that. The way the examples are structured, I can't tell which portions are 1:1 with the device, 1:1 with the surface, 1:1 with a scene node, etc.
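My best guess at that mapping so far, which may well be wrong:

#include <vulkan/vulkan.h>

// Guessed ownership, one handle per scope (the question marks are mine):
struct OwnershipGuess {
	VkInstance      instance;      // 1 per application
	VkDevice        device;        // 1 per logical device
	VkSurfaceKHR    surface;       // 1 per window
	VkSwapchainKHR  swapChain;     // 1 per surface, rebuilt on resize
	VkRenderPass    renderPass;    // shared by pipelines with the same attachments?
	VkPipeline      pipeline;      // 1 per shader + vertex-format combination?
	VkDescriptorSet descriptorSet; // 1 per scene node per swap chain image?
};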


Swapchain is pretty old, established terminology across APIs. It is the part that handles hand-offs and arbitrates access by the GPU, display, and OS, which all need to touch the rendering if it is to be shown to the user.
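A minimal sketch of one frame's hand-off; the semaphore names are my own placeholders:

#include <vulkan/vulkan.h>

// Borrow an image from the presentation engine, render into it, hand it back.
void presentOneFrame(VkDevice device, VkQueue presentQueue, VkSwapchainKHR swapChain,
                     VkSemaphore imageAvailable, VkSemaphore renderFinished) {
	uint32_t imageIndex = 0;
	vkAcquireNextImageKHR(device, swapChain, UINT64_MAX, imageAvailable,
	                      VK_NULL_HANDLE, &imageIndex);

	// ... submit rendering that waits on imageAvailable and signals renderFinished ...

	VkPresentInfoKHR presentInfo = {};
	presentInfo.sType = VK_STRUCTURE_TYPE_PRESENT_INFO_KHR;
	presentInfo.waitSemaphoreCount = 1;
	presentInfo.pWaitSemaphores = &renderFinished;
	presentInfo.swapchainCount = 1;
	presentInfo.pSwapchains = &swapChain;
	presentInfo.pImageIndices = &imageIndex;
	vkQueuePresentKHR(presentQueue, &presentInfo); // return the image to the display
}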

Not really. That would be the framebuffer, which need not necessarily be the swapchain images.
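Concretely, a framebuffer just binds image views to a render pass, and the view can as easily be an off-screen target as a swapchain image view; a sketch:

#include <vulkan/vulkan.h>

VkFramebuffer makeFramebuffer(VkDevice device, VkRenderPass renderPass,
                              VkImageView colorView, VkExtent2D extent) {
	VkFramebufferCreateInfo fbInfo = {};
	fbInfo.sType = VK_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO;
	fbInfo.renderPass = renderPass;
	fbInfo.attachmentCount = 1;
	fbInfo.pAttachments = &colorView; // swapchain view or your own off-screen image view
	fbInfo.width = extent.width;
	fbInfo.height = extent.height;
	fbInfo.layers = 1;

	VkFramebuffer framebuffer = VK_NULL_HANDLE;
	vkCreateFramebuffer(device, &fbInfo, nullptr, &framebuffer);
	return framebuffer;
}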

Yea, you do not typically want to draw a bag of random stuff with different needs. And opaque UI usually goes first as a cheap way to save on fragment shader invocations.

Sure, could be. There are also secondary command buffers in Vulkan, which allow you to parallelize this if you have a large number of nodes. But then you would record them into one primary command buffer as well.
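A sketch of that pattern, assuming the secondaries were already allocated with VK_COMMAND_BUFFER_LEVEL_SECONDARY:

#include <vulkan/vulkan.h>
#include <vector>

void recordWithSecondaries(VkCommandBuffer primary, VkRenderPass renderPass,
                           VkFramebuffer framebuffer,
                           const std::vector<VkCommandBuffer>& secondaries) {
	// Each secondary must be begun with RENDER_PASS_CONTINUE plus inheritance info.
	VkCommandBufferInheritanceInfo inherit = {};
	inherit.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_INHERITANCE_INFO;
	inherit.renderPass = renderPass;
	inherit.subpass = 0;
	inherit.framebuffer = framebuffer;

	for (VkCommandBuffer sec : secondaries) {
		VkCommandBufferBeginInfo begin = {};
		begin.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO;
		begin.flags = VK_COMMAND_BUFFER_USAGE_RENDER_PASS_CONTINUE_BIT;
		begin.pInheritanceInfo = &inherit;
		vkBeginCommandBuffer(sec, &begin);
		// ... record the draws for one batch of scene nodes (possibly on a worker thread) ...
		vkEndCommandBuffer(sec);
	}

	// The primary's render pass must be begun with
	// VK_SUBPASS_CONTENTS_SECONDARY_COMMAND_BUFFERS before this call.
	vkCmdExecuteCommands(primary, static_cast<uint32_t>(secondaries.size()),
	                     secondaries.data());
}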

“Command” is what the specification calls any function, to sound cool. Recorded commands are the vkCmd* ones, and they contribute to building the command buffer you vkBegan. There are state commands, synchronization commands, and action commands. State commands change the context of subsequent commands recorded to the command buffer. Sync commands establish an execution and memory dependency. Action commands introduce queue operations which get executed when the command buffer gets submitted.
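One of each kind, recorded back to back, might look like this (a toy sketch; the handles and counts are placeholders, and in real code the draw sits inside a render pass):

#include <vulkan/vulkan.h>

void recordOneOfEach(VkCommandBuffer cmd, VkPipeline pipeline, uint32_t vertexCount) {
	// State command: changes the context for subsequently recorded commands.
	vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, pipeline);

	// Synchronization command: establishes an execution and memory dependency.
	VkMemoryBarrier barrier = {};
	barrier.sType = VK_STRUCTURE_TYPE_MEMORY_BARRIER;
	barrier.srcAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT;
	barrier.dstAccessMask = VK_ACCESS_VERTEX_ATTRIBUTE_READ_BIT;
	vkCmdPipelineBarrier(cmd, VK_PIPELINE_STAGE_TRANSFER_BIT,
	                     VK_PIPELINE_STAGE_VERTEX_INPUT_BIT, 0,
	                     1, &barrier, 0, nullptr, 0, nullptr);

	// Action command: queue work that actually runs when the buffer is submitted.
	vkCmdDraw(cmd, vertexCount, 1, 0, 0);
}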

It is also semantically nonsense. Writing zero descriptors is the same as not calling vkUpdateDescriptorSets at all.
In your code, it seems descriptorCount is non-zero in either case though.

Thank you for your prompt and insightful response.

My needs, priorities, and use case differ somewhat from yours, so some issues, such as the time expended in writing pixels twice, aren't pertinent in my application - yet.

I got past my blockage.
I’m writing this primarily for others who may encounter the same obstacle in the future.

Based on the gist of your answers and work we did years ago on OGL optimization, my solution follows.

Vulkan’s architecture pressures the developer into doing things more efficiently. Perhaps there are ways to work around it, but the path of least resistance is to do it the right way.

If you are dealing with multiple content representations, each should have its own graphics pipeline within the main pipeline. My terminology would call this blocking the command stream.

I now have a Pipeline base class which is overridden for different characteristics (vertex, shader, image type, etc.).

My app keeps a list of these pipelines and a simple scene graph. Each scene node appears once in the scene graph and once in the entire list of pipelines. It’s certainly possible for a scene node to appear with different representations within the pipeline list, but I haven’t advanced that far yet.

The application then loops through the pipeline list, and each pipeline in turn loops through its scene nodes. This means each shader and its associated descriptor data is loaded once per drawn frame, and all scene nodes appropriate for that pipeline are rendered as a block.
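In sketch form, the frame loop looks roughly like this (the class and method names here are simplified stand-ins, not my exact API):

#include <vulkan/vulkan.h>
#include <memory>
#include <vector>

struct SceneNode {
	virtual ~SceneNode() = default;
	virtual void addCommands(VkCommandBuffer cmd, VkPipelineLayout layout) = 0;
};

// Each Pipeline subclass owns the VkPipeline for one content representation
// plus the scene nodes drawn with it.
struct Pipeline {
	VkPipeline vkPipeline = VK_NULL_HANDLE;
	VkPipelineLayout layout = VK_NULL_HANDLE;
	std::vector<std::shared_ptr<SceneNode>> sceneNodes;
};

void recordFrame(VkCommandBuffer cmdBuff,
                 const std::vector<std::shared_ptr<Pipeline>>& pipelines) {
	for (const auto& pipeline : pipelines) {
		// Bind the shader/state once per block of like content...
		vkCmdBindPipeline(cmdBuff, VK_PIPELINE_BIND_POINT_GRAPHICS,
		                  pipeline->vkPipeline);
		// ...then render every node that uses this representation.
		for (const auto& node : pipeline->sceneNodes)
			node->addCommands(cmdBuff, pipeline->layout);
	}
}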

This is what I referred to as “flattening” in the original question.

Thanks for your help.
BT

To a degree. Often, though, it also applies that more specific is more efficient (e.g. a specific layout vs. LAYOUT_GENERAL, a barrier vs. a semaphore, an extension vs. a core feature).
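For instance, a post-upload transition can name the exact layout instead of GENERAL; a sketch, with stages and accesses that depend on the actual use:

#include <vulkan/vulkan.h>

// Make a just-uploaded texture sampleable. A specific layout lets the driver
// optimize; VK_IMAGE_LAYOUT_GENERAL would also work, just potentially slower.
void makeSampleable(VkCommandBuffer cmd, VkImage image) {
	VkImageMemoryBarrier barrier = {};
	barrier.sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER;
	barrier.srcAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT;
	barrier.dstAccessMask = VK_ACCESS_SHADER_READ_BIT;
	barrier.oldLayout = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL;
	barrier.newLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
	barrier.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
	barrier.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
	barrier.image = image;
	barrier.subresourceRange = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 };
	vkCmdPipelineBarrier(cmd, VK_PIPELINE_STAGE_TRANSFER_BIT,
	                     VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT, 0,
	                     0, nullptr, 0, nullptr, 1, &barrier);
}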

It is new, so it is more like there is only one common-sense path to do things. It does not have any of the practically-deprecated features of OGL. It is also more transparent/explicit, which does provide some intuition of what is happening, which OGL would hide from the user.

I think the term you are looking for is context roll.

Sounds like you are halfway to an ECS architecture. “Characteristic” sounds pretty close to a component.
