Decal (as Texture mapping)

Hi Guys,

I was thinking about using decals to give the user feedback about issued commands, as strategy games usually do. But I ran into some problems: first, simple decals drawn as flat quads in OpenGL do not fit a terrain with an irregular surface.

So I looked at some game screenshots and saw that they probably solve this with texture mapping. But there was another problem: as the number of textures to pass grows, I would need to pack many images into one atlas texture to save uniforms. Fine, but there is yet another problem: when I need to draw many instances of the same decal on the terrain, how can I pass this information in uniforms?

Another possibility was to create one big texture covering the whole map, render the decals into it on the OpenGL side, and pass that to the shader. Yet another would be to pass a texture whose RGB values encode the index of the texture to render at that position, but I guess easier solutions exist.
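For the index-texture/atlas idea, the lookup itself is just arithmetic; here is a minimal CPU-side sketch of mapping a tile index into an atlas (names like `atlasUV` and the 4x4 layout `TILES_PER_ROW` are assumptions for illustration; the same arithmetic would run in the shader):

```cpp
// Hypothetical atlas layout: TILES_PER_ROW x TILES_PER_ROW equal tiles.
const int TILES_PER_ROW = 4;

struct UV { float u, v; };

// Map a tile index (row-major, 0..15 here) and a local 0..1 coordinate
// inside the decal to a UV inside the atlas.
UV atlasUV(int tileIndex, float localU, float localV)
{
    float tileSize = 1.0f / TILES_PER_ROW;
    int col = tileIndex % TILES_PER_ROW;
    int row = tileIndex / TILES_PER_ROW;
    return { (col + localU) * tileSize, (row + localV) * tileSize };
}
```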

Here is a screenshot showing the effect I mean, in case I was not clear:

So, turning the broad question into a specific one:

Is there a way to pass an array of any size of vec3 variables to the shader?

Yes: UBOs and texture buffers, but that's not what you need.
Simple case: 100 decals in your scene, the scene has 12,000 triangles, and those triangles cover 500k pixels on screen. With this approach you'll be doing 500,000 * 100 texture fetches plus their projection-coordinate calculations.

Decals should usually be bound to triangles extracted from the scene; that's work for the CPU.
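That CPU extraction step can be sketched roughly like this: walk the scene's triangles and keep those whose bounds overlap the decal's bounds. A minimal version, assuming an axis-aligned box around the decal and an XZ projection plane (a real projector would use an oriented box, and `gatherTriangles` is a made-up name):

```cpp
#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };
struct Tri  { Vec3 a, b, c; };

// Return indices of triangles whose bounding box overlaps the decal's
// axis-aligned box [lo, hi] -- the CPU-side "extraction" step.
std::vector<int> gatherTriangles(const std::vector<Tri>& tris,
                                 const Vec3& lo, const Vec3& hi)
{
    std::vector<int> out;
    for (int i = 0; i < (int)tris.size(); ++i) {
        const Tri& t = tris[i];
        float minX = std::min({t.a.x, t.b.x, t.c.x});
        float maxX = std::max({t.a.x, t.b.x, t.c.x});
        float minZ = std::min({t.a.z, t.b.z, t.c.z});
        float maxZ = std::max({t.a.z, t.b.z, t.c.z});
        // Overlap test in the decal's projection plane (XZ here).
        if (maxX >= lo.x && minX <= hi.x && maxZ >= lo.z && minZ <= hi.z)
            out.push_back(i);
    }
    return out;
}
```

A brute-force loop like this is exactly why a spatial structure (grid, BVH) matters once scenes get big.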

So I should use the CPU? But to get the effect on non-planar surfaces, considering that I am using a quad with the decal image on it, I would have to subdivide the quad?

Are you really sure games do it this way? Lots of projected images sounded easier.

I found this tutorial, but it is a bit old. It can probably be done with an FBO or shaders, without most of the penalties of the described method.

Strangely enough, games make extensive use of decals like the picture I posted before, yet the new books (for example Akenine-Möller's Real-Time Rendering) do not have any section about it.

It's even stranger when the book's cover shows LittleBigPlanet:

A game where decals are half of everything.


I thought that by bringing up this question here we could brainstorm some alternative techniques around the subject. But that was not the case.

The thing is, calculating regular decals is as simple as projective texturing. But dynamically creating them at runtime (bullet holes and blood in FPS games) requires the scene's geometry to be CPU-accessible and, ideally, optimally traversable. In the past, BSP trees were sufficient and could support all of this in an optimal way. Nowadays, storing the whole scene, or even just the prefab meshes, in BSP trees is extremely suboptimal.

So I guess everybody invents their own tricks for calculating decals, in whatever fashion is optimal for the given project, its memory/CPU budget, its geometry data format and its rendering pipeline.

  • In some games (Gantz), lots of decals stick perfectly to everything, including the skinned meshes of playable characters and NPCs, with multiple decals over each other (the decals get linked to the transformed vertex positions of the skinned-mesh instance).
  • In Killzone 2, blood decals even drip down and form puddles.
  • In some FPSes, decals are limited to tiny bullet holes that ignore the underlying geometry and take only a collision point and normal as the centre and orientation of a fixed-size quad. You can see these quads protrude around wall edges.
  • In other games (Ninja Gaiden Sigma 2), the decal-receiving geometry is the extremely simplified collision-hull mesh used for gameplay. In places you can see the decals hover 30 cm off the ground, but users rarely notice.
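The fixed-size oriented quad from the bullet-hole bullet above is the cheapest variant to sketch. A minimal version in C++ (pure math, no GL calls; `decalQuad` is a made-up name, and the world up vector is assumed to be +Y, so this breaks on floors and ceilings where the normal is parallel to up):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}
static Vec3 normalize(const Vec3& v) {
    float l = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return { v.x/l, v.y/l, v.z/l };
}

// Build the 4 corners of a decal quad centered at 'hit', facing along
// the surface normal 'n', with half-size 'r'. 'n' must not be parallel
// to the up vector (add a fallback axis for that case).
void decalQuad(const Vec3& hit, const Vec3& n, float r, Vec3 out[4])
{
    const Vec3 up = {0, 1, 0};
    Vec3 t = normalize(cross(up, n));   // tangent in the surface plane
    Vec3 b = cross(n, t);               // bitangent
    out[0] = { hit.x + (-t.x - b.x)*r, hit.y + (-t.y - b.y)*r, hit.z + (-t.z - b.z)*r };
    out[1] = { hit.x + ( t.x - b.x)*r, hit.y + ( t.y - b.y)*r, hit.z + ( t.z - b.z)*r };
    out[2] = { hit.x + ( t.x + b.x)*r, hit.y + ( t.y + b.y)*r, hit.z + ( t.z + b.z)*r };
    out[3] = { hit.x + (-t.x + b.x)*r, hit.y + (-t.y + b.y)*r, hit.z + (-t.z + b.z)*r };
}
```

Render the quad with a small depth bias (or polygon offset) and it will sit on the wall; the protrusion around wall edges mentioned above is exactly what you get because the quad ignores the surrounding geometry.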

Decisions, decisions :slight_smile: . Still, computing the UVs once you have the list of affected triangles is the same old story. Getting that list, and coping with its potentially large size, is the nasty part.
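That "same old" UV computation is just projecting each affected vertex into the decal's local space. A sketch, assuming the decal projects straight down onto the XZ plane (enough for a terrain; `decalUV` is an illustrative name, and the clamp mirrors what CLAMP wrap modes would do at the texture border):

```cpp
#include <algorithm>

struct Vec3 { float x, y, z; };
struct UV   { float u, v; };

// Project a world-space vertex into the decal's [0,1]x[0,1] space.
// 'center' is the decal center, 'halfSize' its half extent on X and Z.
UV decalUV(const Vec3& p, const Vec3& center, float halfSize)
{
    float u = (p.x - center.x) / (2.0f * halfSize) + 0.5f;
    float v = (p.z - center.z) / (2.0f * halfSize) + 0.5f;
    // Clamp so vertices outside the decal footprint land on the border.
    u = std::min(1.0f, std::max(0.0f, u));
    v = std::min(1.0f, std::max(0.0f, v));
    return { u, v };
}
```

For a projector with arbitrary orientation you would dot the vertex against the decal's tangent and bitangent instead of reading X and Z directly.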

GL3.x can aid this in two ways: transform-feedback (in ways inspired by the recent transform-feedback instancing example) and deferred-rendering approaches.

Transform feedback: do some collision testing on the GPU, using the existing scene VBOs for depth-only rendering. There are lots of issues to solve, like back-projection.

Deferred-renderer approach:
I spent a minute modifying a light shader of my light-prepass light-volumes renderer to instead produce a world-position-based checkerboard. (It just needs some extra tuning, i.e. adding a depth comparison, and moving it out of the light-rendering pass into the shaded-diffuse pass.)

P.S. Here’s the modified part of the shader code:

float diffuse = 0.0;
for (int i = 0; i < NUM_MSAA_SAMPLES; i++) {
	float zOverW = texelFetch(texWSDepth, icoord, i).x;
	vec4 WorldPos = DepthToWorldspacePosition(varPos, zOverW);

	#if 1 // checkerboard instead of lighting
		vec2 checksiz = WorldPos.xy * 1.0; // multiply by scale
		int checkx = int(checksiz.x) ^ int(checksiz.y);
		diffuse += float(checkx & 1);
	#endif
	// here light attenuation was calculated and added to "diffuse"
}
vec3 color = diffuse * u_LightColor;
gl_FragColor = vec4(color, 0.0);

Thanks a lot for the complete explanation, Ilian Dinev!

Sorry for the delayed answer, but I think I got it.

In my case I use a heightmap, so finding the affected triangles is not that difficult. And to fit the decal correctly in UV space, I would use CLAMP on S and T and just compute the right coordinates from some scale.

I will try it and post the result.

PS: I was thinking about using the same concept as shadow mapping, that is, create an FBO, render the decals into it from an orthographic view, and then use a shader to project them onto the terrain. But that sounds much more complicated now…