Efficient rendering of terrain layers

This is an old problem that has bugged me for a long time.

It isn’t difficult to figure out how to render layered terrain, but it is difficult to render it efficiently.

Right now I am thinking about doing this:

First render or calculate a top-down low-res texture of the entire terrain, maybe 1024x1024. Beyond a certain distance, render the terrain with this low-res texture instead of doing any blending.

Terrain that is nearer can be rendered in one pass by using a shader that blends as many as four textures together. The texture alpha values can be stored in a single RGBA texture with a linear filter and no mipmaps. The red channel indicates the first texture alpha level, the green channel indicates the second, and so on.
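A minimal fragment-shader sketch of that single-pass blend might look like this; the uniform names (splatmap, layer0–layer3, layerscale) are made up for illustration, not taken from any real engine:

	uniform sampler2D splatmap;   // RGBA blend weights: linear filter, no mipmaps
	uniform sampler2D layer0, layer1, layer2, layer3;
	uniform float layerscale;     // tiling factor for the detail layers

	void main()
	{
		vec2 uv      = gl_TexCoord[0].xy;
		vec4 weights = texture2D( splatmap, uv );
		vec4 albedo  = texture2D( layer0, uv / layerscale ) * weights.r
		             + texture2D( layer1, uv / layerscale ) * weights.g
		             + texture2D( layer2, uv / layerscale ) * weights.b
		             + texture2D( layer3, uv / layerscale ) * weights.a;
		gl_FragColor = albedo;
	}

That is five lookups in total for four layers: one for the weights plus one per layer.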

Terrain sectors can be calculated so that if only two textures appear on a sector, it will use a version of the shader that only does lookups for those two. This can be handled with GLSL defines. If more than four textures appear on any one sector, the engine will only use four.
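The per-sector variants could all come from one source file, with the engine prepending something like `#define NUM_LAYERS 2` before compiling; the macro name and uniforms here are assumptions:

	vec4 weights = texture2D( splatmap, uv );
	vec4 albedo  = texture2D( layer0, uv / layerscale ) * weights.r;
	#if NUM_LAYERS > 1
		albedo += texture2D( layer1, uv / layerscale ) * weights.g;
	#endif
	#if NUM_LAYERS > 2
		albedo += texture2D( layer2, uv / layerscale ) * weights.b;
	#endif
	#if NUM_LAYERS > 3
		albedo += texture2D( layer3, uv / layerscale ) * weights.a;
	#endif

Since the preprocessor strips the dead branches, a two-layer sector pays for exactly three lookups and no runtime branching.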

This seems like a pretty good setup, but I was wondering if anyone had any other ideas on this. I have read lots of articles on how to do the basics, but it is harder to make it fast. I have been thinking about either texture arrays or atlases, but the areas where two or more textures blend together make those ideas very complicated.

I tried mapping the terrain by doing 3 texture lookups (one for each axis) and weighting the results by the normal. The results were absolutely beautiful, but it requires 3 texture lookups per texture, just for the diffuse map.

	// Triplanar mapping: one sample per axis, weighted by the surface normal
	albedo  = texture2D( basetexture, gl_TexCoord[0].xz / layerscale ) * abs(normal.y);
	albedo += texture2D( basetexture, gl_TexCoord[0].xy / layerscale ) * abs(normal.x);
	albedo += texture2D( basetexture, gl_TexCoord[0].zy / layerscale ) * abs(normal.z);
	//albedo /= (abs(normal.x) + abs(normal.y) + abs(normal.z)); // uncomment to normalize the blend weights

If your terrain does not overhang too much, and if you have enough temporal coherence (and disk space!), I would suggest something like megatextures, a.k.a. texture clipmaps, where the diffuse texture is precalculated from the different layers and paged in from disk as needed. You will need two bilinear texture lookups per texture to handle trilinear filtering yourself.
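The manual trilinear blend could be done roughly like this; level0/level1 and lodBlend are assumed names for two resident clipmap levels and the fractional LOD between them, and computing uv0/uv1 from the clipmap layout is left out:

	uniform sampler2D level0, level1; // two adjacent clipmap levels, bilinear filtering
	uniform float lodBlend;           // fractional distance between the levels, 0..1

	vec4 sampleTrilinear( vec2 uv0, vec2 uv1 )
	{
		vec4 c0 = texture2D( level0, uv0 ); // bilinear fetch from the finer level
		vec4 c1 = texture2D( level1, uv1 ); // bilinear fetch from the coarser level
		return mix( c0, c1, lodBlend );     // blend the two levels by hand
	}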

Some background theory; it relied on an SGI extension at the time, but it's doable with shaders now:
http://techpubs.sgi.com/library/tpl/cgi-…#LE70321-PARENT

I know a lot of hardware people like them because what they get is just a plain texture, but I don’t really believe in the whole “megatextures” thing. The art pipeline is terrible, and I can’t get people to download a gig of data for one demo.

Now if you could dynamically render a 2048x2048 texture of the area around the player, and use that for close-up pixels, and use a large low-res texture for far away stuff, that might be really neat.

My guess is that GPU capabilities will continue to improve and we'll get to use more convenient shaders that don't have to be optimized so aggressively, like my 3-axis mapping above.

I’ll probably end up using something very similar to what I described in my first post. Four textures can be handled with a single alpha texture lookup, so that is good. Then another for the normal map. Not sure how I will handle repeating normal maps.