Binding different vk descriptors for different models

Hi.
First time posting.
Quite new to graphics programming, and this is my first time with Vulkan. Right now I’m not trying to write the most optimal code; I actually plan on rewriting this entire project from scratch once it’s done, hence the bloated code.

I set up Vulkan as per Brendan Galea’s tutorial and then implemented textures following Lukasino’s tutorial.

Currently in my scene’s constructor I’m setting up a global pool:

globalPool = lve::LveDescriptorPool::Builder(lveDevice)
		.setMaxSets(lve::LveSwapChain::MAX_FRAMES_IN_FLIGHT)
		.addPoolSize(VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, lve::LveSwapChain::MAX_FRAMES_IN_FLIGHT)
		.addPoolSize(VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER,lve::LveSwapChain::MAX_FRAMES_IN_FLIGHT)
		.build();
	loadGameObjects();

And then when it runs, I set up a descriptor set layout and write my texture into a VkDescriptorSet.

auto globalSetLayout = lve::LveDescriptorSetLayout::Builder(lveDevice)
		.addBinding(0, VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, VK_SHADER_STAGE_ALL_GRAPHICS)
		.addBinding(1, VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER, VK_SHADER_STAGE_FRAGMENT_BIT)
		.build();

	lve::Texture texture = lve::Texture(lveDevice, "Images/Sprites/Blaze.png");

	VkDescriptorImageInfo imageInfo = {};
	imageInfo.sampler = texture.getSampler();
	imageInfo.imageView = texture.getImageView();
	imageInfo.imageLayout = texture.getImageLayout();

	std::vector<VkDescriptorSet> globalDescriptorSets(lve::LveSwapChain::MAX_FRAMES_IN_FLIGHT);
	for (int i = 0; i < globalDescriptorSets.size(); i++) {
		auto bufferInfo = uboBuffers[i]->descriptorInfo();
		lve::LveDescriptorWriter(*globalSetLayout, *globalPool)
			.writeBuffer(0, &bufferInfo)
			.writeImage(1, &imageInfo)
			.build(globalDescriptorSets[i]);
	}

Now this is where the tricky part comes in. I’m told I need to create a separate descriptor set for each texture-model combination. Or add all the textures to a texture array? Or re-write the descriptor with a new texture after each object has been rendered?

But with my setup, I use a FrameInfo struct that passes along all the game objects in my scene, as well as the descriptor set.

auto commandBuffer = lveRenderer.beginFrame();
if (!commandBuffer) continue;
lveRenderer.beginSwapChainRenderPass(commandBuffer);
int frameIndex = lveRenderer.getFrameIndex();
lve::FrameInfo frameInfo{
	frameIndex,
	frameTime,
	commandBuffer,
	camera,
	globalDescriptorSets[frameIndex],
	gameObjects
};

simpleRenderSystem.getlvePipelinePtr()->bind(commandBuffer);
lveRenderer.bindPipelineEssentials(commandBuffer, frameInfo);
simpleRenderSystem.bindDescriptorSets(frameInfo);

// update
lve::GlobalUbo ubo{};
ubo.projection = camera.getProjection();
ubo.view = camera.getView();
ubo.inverseView = camera.getInverseView();
pointLightSystem.update(frameInfo, ubo);
uboBuffers[frameIndex]->writeToBuffer(&ubo);
uboBuffers[frameIndex]->flush();

// render (order here matters)
simpleRenderSystem.renderGameObects(frameInfo);
pointLightSystem.render(frameInfo);

nrd::NrdDebugLinePipeline* debugPipelinePtr = simpleRenderSystem.getDebugPipelinePtr();
debugPipelinePtr->bind(commandBuffer);
lveRenderer.bindPipelineEssentials(commandBuffer, frameInfo);
simpleRenderSystem.bindDescriptorSets(frameInfo);
// update and render the debug line
simpleRenderSystem.renderGameObects(frameInfo);

lveRenderer.endSwapChainRenderPass(commandBuffer);
lveRenderer.endFrame();

So how do I update the frame info to use a different texture for each game object?

This is how my render function is set up btw:

void SimpleRenderSystem::renderGameObects(
		FrameInfo & frameInfo)
	{
		//lvePipeline->bind(frameInfo.commandBuffer);


		//update
		for (auto& obj : frameInfo.gameObjects) {
			if (obj->model == nullptr) continue; // skip game objects that don't have a model
			//obj->transform.rotation.y = glm::mod(obj->transform.rotation.y + 0.0001f, glm::two_pi<float>());
			//obj->transform.rotation.x = glm::mod(obj->transform.rotation.x + 0.0001f, glm::two_pi<float>());

			
			obj->bind(frameInfo);
			obj->draw(frameInfo, pipelineLayout);
		}
	}

I plan on animating the game objects by changing the UV coordinates; I don’t know if there’s anything I should consider at this stage for that to work. I’m trying to make a 2.5D beat ’em up. I know using Vulkan for it is overkill, but I really want to learn Vulkan while making something I like.

So how do I update the frame info to use a different texture for each game object?

There are a couple of ways of doing it. One way is to bind a different texture for each draw, which means you need a separate descriptor set for each object; that’s not generally recommended if you can instance the objects instead.
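Concretely, with the descriptor builder classes from your post, that first approach means writing one image-only descriptor set per texture instead of baking a single texture into the global set. A rough sketch reusing your own `LveDescriptorWriter`; it assumes a separate texture-only set layout (`textureSetLayout`), a `textures` vector, and a pool with enough combined-image-sampler slots, none of which exist in your code yet:

```cpp
// One descriptor set per texture; each set has a single combined image
// sampler at binding 0 of its own (assumed) texture-only layout.
std::vector<VkDescriptorSet> textureSets(textures.size());
for (size_t i = 0; i < textures.size(); i++) {
    VkDescriptorImageInfo imageInfo{};
    imageInfo.sampler     = textures[i].getSampler();
    imageInfo.imageView   = textures[i].getImageView();
    imageInfo.imageLayout = textures[i].getImageLayout();

    lve::LveDescriptorWriter(*textureSetLayout, *globalPool)
        .writeImage(0, &imageInfo)
        .build(textureSets[i]);
}
// At draw time, bind the matching set per object, e.g. as set 1
// (set index is an assumption; it must match your pipeline layout):
// vkCmdBindDescriptorSets(..., /*firstSet=*/1, 1, &textureSets[objTextureIndex], ...);
```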

What you could do instead is have a texture array, and then in the vertex input you index into that array, either per instance or per vertex. (The vertex input is where you also take in things like positions, texture coordinates, colours, etc., just so you know what I am talking about.) That way you only need one descriptor set containing all the textures.

EDIT: just to show you what the shaders would look like

First method:

#version 460 

layout(location = 0) out vec4 OutColor;

layout(location = 0) in vec3   FragColor;
layout(location = 1) in vec2   TexCoord;

layout(set = 0, binding = 0) uniform sampler2D Sampler;

void main() 
{
   OutColor = texture(Sampler, TexCoord) * vec4(FragColor, 1.0); // or whatever logic
}

Second Method:

#version 460 

#extension GL_EXT_nonuniform_qualifier : enable

layout(location = 0) out vec4 OutColor;

layout(location = 0) in vec3   FragColor;
layout(location = 1) in vec2   TexCoord;
layout(location = 2) in float    TextureIndex;

layout(set = 0, binding = 0) uniform sampler2D Samplers[];

void main() 
{
   int ID = int(round(TextureIndex));
   OutColor = texture(Samplers[ID], TexCoord) * vec4(FragColor, 1.0); // or whatever logic
}
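For the second method, the fragment shader’s `TextureIndex` input has to come from somewhere, so the matching vertex shader passes it through from a vertex (or per-instance) attribute. A sketch with assumed attribute locations; the camera transform is elided:

```glsl
#version 460

layout(location = 0) in vec3  InPosition;
layout(location = 1) in vec3  InColor;
layout(location = 2) in vec2  InTexCoord;
layout(location = 3) in float InTextureIndex; // per-vertex or per-instance attribute

layout(location = 0) out vec3  FragColor;
layout(location = 1) out vec2  TexCoord;
layout(location = 2) out float TextureIndex;

void main()
{
   // Apply your own projection/view/model transform here.
   gl_Position  = vec4(InPosition, 1.0);
   FragColor    = InColor;
   TexCoord     = InTexCoord;
   TextureIndex = InTextureIndex; // interpolated, hence the round() in the fragment shader
}
```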

Note that for the second method you do need descriptor indexing enabled in the device features. You do this by providing

VkPhysicalDeviceVulkan12Features Vulkan12Features{};
Vulkan12Features.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_VULKAN_1_2_FEATURES;
Vulkan12Features.descriptorIndexing = VK_TRUE;
Vulkan12Features.runtimeDescriptorArray = VK_TRUE;

and including it in the pNext chain of your normal device features.
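Hooking that into device creation is a pNext chain like the following (a configuration sketch, not runnable without a Vulkan device; in the lve-style codebase this goes wherever the logical device is created):

```cpp
// Sketch: chain the Vulkan 1.2 features into device creation.
// Assumes a Vulkan 1.2+ instance; the struct and enum names are standard Vulkan.
VkPhysicalDeviceVulkan12Features vulkan12Features{};
vulkan12Features.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_VULKAN_1_2_FEATURES;
vulkan12Features.descriptorIndexing = VK_TRUE;
vulkan12Features.runtimeDescriptorArray = VK_TRUE;

VkPhysicalDeviceFeatures2 features2{};
features2.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_FEATURES_2;
features2.pNext = &vulkan12Features;        // chain the 1.2 features in
// features2.features.<whatever you already enable> = VK_TRUE;

VkDeviceCreateInfo createInfo{};
createInfo.sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO;
createInfo.pNext = &features2;              // features go through pNext...
createInfo.pEnabledFeatures = nullptr;      // ...so this must stay null
// fill in queues and extensions as before, then call vkCreateDevice(...)
```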


Okay, the first method made no sense whatsoever. It just looks like a very basic fragment shader. Unless there’s something I’m missing. What exactly does the uniform sampler2D do?

I do this in the pipeline, right? Or the render system?

Okay, so I create an array of textures, and for each game object I want it to have a texture index, and then I get the texture index when I’m binding/drawing?

What exactly is the difference between binding and drawing?
Is binding just assigning a pair, and drawing the process of actually drawing?

Okay, the first method made no sense whatsoever.

Since you are making a draw call for each object individually, a shader invocation (what I showed you) runs for each of them. I showed you the fragment shader, which essentially runs for every pixel/fragment of your object. In between these draw calls you bind a different descriptor set containing a different texture.

Here is what your loop could look like:

for(int i = 0; i < NumberOfObjects; i++)
{
   vkCmdBindVertexBuffers(...);
   vkCmdBindIndexBuffer(...);

   vkCmdBindDescriptorSets(..., &ImageDescriptors[i], ...); 
   /*
    If the descriptor set you are binding is a combined image sampler, that is what
    the sampler in the shader I showed you will be. You would switch it for each object.
   */

   vkCmdDrawIndexed(...); //or vkCmdDraw if you are not using index buffer
}

What exactly does the uniform sampler2D do?

A sampler essentially takes a normalized coordinate and fetches the colour of the texture at that coordinate; think of it as a handle to a texture/image. I would recommend reading this if you want more information: Sampler (GLSL) - OpenGL Wiki

I do this in the pipeline? right? or the render system?

You enable these features when creating your logical device then use them in your program whenever.

Okay, so I create an array of textures, and for each game object, I want it to have a a texture index, and then I get the texture index when I’m binding/drawing?

You get the texture index from the actual vertex buffer you are using. Alternatively, you can use something called push constants, which can carry the current texture index of an object: call vkCmdPushConstants(…) before drawing and pass in the texture index. You will need to set up the push constant range in your graphics pipeline layout; it is in the struct VkPipelineLayoutCreateInfo, and filling in the push constant information is fairly straightforward.

You use push constants in your shader like this:

layout(push_constant) uniform PushConstant
{
   int TextureIndex; //or whatever information
} u_PushConstant;

Note that push constants have a small, limited size (the spec guarantees at least 128 bytes; the actual limit is the device’s maxPushConstantsSize).

What exactly is the difference between binding and drawing?

When you bind a resource, like a vertex buffer, you are essentially telling Vulkan that this is the resource it should use in rendering. For example, when you bind a vertex buffer you tell Vulkan that’s where it gets the positions, texture coordinates, etc.

I am trying not to be harsh, but I think you should have started off with an easier API such as OpenGL before going into Vulkan. There is some basic graphics programming knowledge you need before diving into Vulkan; otherwise it seems you have skimmed over all the important topics. Make sure to truly understand what you are learning and go through it slowly. If you want graphics quickly, start with OpenGL, and after you have made a few simple projects with it, move on to Vulkan.


Okay, I will go through all of these and give it a try.
I already implemented push constants.

That’s true tbh. But from what I gathered, OpenGL is extremely dated and is on its way out. And the techniques it uses are extremely niche and not transferable. I’m trying to create a unique project for my portfolio to get noticed by recruiters.

Thanks for the help.

OpenGL is extremely dated and is on its way out

I think that’s a bit inaccurate; there are still many applications built on top of OpenGL.

And the techniques they use are extremely niche and not transferable.

Not really sure what you mean by “techniques” here. The technique OpenGL uses (assuming you are talking about the rasterization pipeline) is very similar to Vulkan’s, and it supports compute pipelines as well. Vertex, index, and storage buffers and images are all a thing in OpenGL too, minus the descriptor set requirement. What Vulkan gives you is more control over the driver, so that errors cannot be attributed to it. It has also introduced hardware-accelerated ray tracing.

I’m trying to create a unique project for my portfolio to get noticed by recruiters.

If you need the extra functionality of Vulkan, then I don’t see why you shouldn’t use it. I would just take your time learning the complex topics, so that when it comes to debugging you know what you are actually debugging and how to implement new features. Good luck!
