The thing is, calculating regular decals is as simple as projective texturing. Dynamically creating them at runtime (bullet holes and blood in FPS games), however, requires the scene's geometry to be CPU-accessible and, ideally, optimally traversable. In the past, BSP trees were sufficient and could assist with all of this; nowadays, storing the whole scene, or even just the prefab meshes, in BSP trees is extremely suboptimal.
So, I guess everybody rolls their own tricks for computing decals, in whatever fashion is optimal for the given project: its memory/CPU budget, its geometry data format and its rendering pipeline.
- In some games (Gantz), lots of decals stick perfectly to everything, including the skinned meshes of playable characters and NPCs, with multiple decals layered over each other (the decals are linked to the transformed positions of the skin-mesh instance's vertices).
- In Killzone 2, blood decals would even drip down and form puddles.
- In some FPSes, decals are limited to tiny bullet holes that ignore the underlying geometry, taking only a collision point and normal as the centre of a fixed-size oriented quad. You can see these quads protrude around wall edges.
- In other games (Ninja Gaiden Sigma 2), the decal-receiving geometry is the heavily simplified collision-hull mesh used for gameplay. In places you can see the decals hover 30 cm off the ground, but players rarely notice it.
Decisions, decisions. Still, computing the UVs once you have a list of affected triangles is the same old routine; getting that list, and dealing with its potentially large size, is the nasty part.
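For reference, the "same old" UV computation is just projecting each vertex of an affected triangle into the decal's local frame. A hedged C sketch (the name decal_uv and the parameter layout are mine):

```c
typedef struct { float x, y, z; } vec3f;
typedef struct { float u, v; } vec2f;

static float dot3(vec3f a, vec3f b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Project a vertex of an affected triangle into the decal's frame:
   'center' is the decal anchor point, 't'/'b' are unit tangent/bitangent
   vectors spanning the decal plane, 'size' the decal's world-space extent.
   Resulting UVs land in [0,1] for points inside the decal rectangle. */
vec2f decal_uv(vec3f p, vec3f center, vec3f t, vec3f b, float size)
{
    vec3f d = { p.x - center.x, p.y - center.y, p.z - center.z };
    vec2f uv;
    uv.u = dot3(d, t) / size + 0.5f;
    uv.v = dot3(d, b) / size + 0.5f;
    return uv;
}
```

Vertices whose UVs fall outside [0,1] can be clamped or have the triangle clipped, depending on how the decal geometry is generated.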
GL3.x can aid this in two ways: transform feedback (along the lines of the recent transform-feedback instancing example) and deferred-rendering approaches.
Transform feedback: do some of the collision testing on the GPU, reusing the existing scene VBOs for depth-only rendering. There are lots of issues to solve, like back-projection.
Deferred-renderer approach:
http://dl.dropbox.com/u/1969613/openglForum/post_splash_decal.jpg
I spent a minute modifying a light shader from my light-prepass light-volumes renderer to instead produce a world-position-based checkerboard. (It just needs some extra tuning, i.e. adding a depth comparison and moving it out of the light-rendering pass into the shaded-diffuse pass.)
P.S. Here’s the modified part of the shader code:
float diffuse = 0.0;
for (int i = 0; i < NUM_MSAA_SAMPLES; i++) {
    float zOverW = texelFetch(texWSDepth, icoord, i).x;
    vec4 WorldPos = DepthToWorldspacePosition(varPos, zOverW);
#if 1
    vec2 checksiz = WorldPos.xy * 1.0; // multiply by scale
    int checkx = int(checksiz.x) ^ int(checksiz.y);
    diffuse += float(checkx & 1);
#else
    // here light attenuation was calculated, added to "diffuse"
#endif
}
diffuse /= float(NUM_MSAA_SAMPLES);
vec3 color = diffuse * u_LightColor;
gl_FragColor = vec4(color, 0.0);
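The DepthToWorldspacePosition helper above typically unprojects the depth sample through the inverse view-projection matrix. A minimal CPU-side C sketch of that math (the column-major mat4 layout and the names depth_to_world/invViewProj are my assumptions, not the original shader's):

```c
typedef struct { float v[4]; } vec4f;
typedef struct { float m[16]; } mat4f; /* column-major, like GLSL */

static vec4f mat4_mul_vec4(const mat4f *a, vec4f x)
{
    vec4f r = {{0, 0, 0, 0}};
    for (int col = 0; col < 4; col++)
        for (int row = 0; row < 4; row++)
            r.v[row] += a->m[col * 4 + row] * x.v[col];
    return r;
}

/* Reconstruct a world-space position from a depth-buffer value:
   ndcX/ndcY are the fragment's NDC coords in [-1,1], zOverW the depth
   sample in [0,1], invViewProj the inverse view-projection matrix. */
vec4f depth_to_world(const mat4f *invViewProj,
                     float ndcX, float ndcY, float zOverW)
{
    /* map depth from [0,1] back to NDC [-1,1], then unproject */
    vec4f clip = {{ ndcX, ndcY, zOverW * 2.0f - 1.0f, 1.0f }};
    vec4f w = mat4_mul_vec4(invViewProj, clip);
    float invW = 1.0f / w.v[3];
    w.v[0] *= invW; w.v[1] *= invW; w.v[2] *= invW; w.v[3] = 1.0f;
    return w;
}
```

In the shader, varPos presumably carries the interpolated NDC/ray data so only the multiply and perspective divide remain per fragment.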