Huge terrain... Is this method reasonable?

Hello all,

I have just started a project to recreate Middle-earth (from LOTR). I have scaled it down quite a bit, seeing as the real Middle-earth is about 3,000 km across. Here is a map. These are the specs for the size I'm trying to create:

Real world:
- about 1,860 miles, or 3,000 km
- 3,000,000 meters

Scaled world (the one I'm building):
- 200,000 x 200,000 meters
- 20,000 x 20,000 units

Zones:
- zone size: 1,000 x 1,000 units
- 20 x 20 zones (400 zones total)

Sight range:
- 5 miles ≈ 8 km = 8,000 meters
- 800 units

The 3D world is based on units:
- 1 unit = 10 meters
- 1 meter = 0.1 unit

I am planning to create a texture and a height map for each zone, all in separate files. Then I will multitexture a detail texture onto the terrain. Each zone will be loaded from the hard drive as needed. When drawing I will use a form of LOD (geomipmapping) and will only draw within a 5-mile (800-unit) radius. The zone you're in, plus the 8 zones around it, will be loaded.
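The "zone you're in plus its eight neighbours" paging boils down to a little index bookkeeping. A minimal sketch (the function names and the clamping at the map edge are my own assumptions, not the poster's code):

```cpp
#include <cassert>
#include <set>
#include <utility>

const int ZONE_SIZE = 1000;   // units per zone side
const int WORLD_ZONES = 20;   // world is a 20x20 grid of zones

// Zone index (zx, zy) containing a world position given in units.
std::pair<int, int> zoneOf(double x, double y) {
    return { int(x) / ZONE_SIZE, int(y) / ZONE_SIZE };
}

// The zone the camera is in plus its 8 neighbours, clamped to the map,
// so a corner position yields fewer than 9 zones.
std::set<std::pair<int, int>> zonesToLoad(double x, double y) {
    auto [zx, zy] = zoneOf(x, y);
    std::set<std::pair<int, int>> zones;
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx) {
            int nx = zx + dx, ny = zy + dy;
            if (nx >= 0 && nx < WORLD_ZONES && ny >= 0 && ny < WORLD_ZONES)
                zones.insert({nx, ny});
        }
    return zones;
}
```

Each frame you diff this set against what's already resident and queue loads/unloads for the difference.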

I also plan to add buildings and trees, probably using glCallList(), with a form of LOD for these as well. When far away an object will be a simple blended texture, and as you draw near it will fade into the actual object. Should I have more detail levels for this? Maybe fade into a less complex model first, then the full one?

Does this seem like a reasonable method to draw a world of this extent? I know the file sizes will reach a few GB.

Thank you for any replies

I don't think so, unless you play it real clever. Take the math down to the triangles you'll actually be pushing and you'll see why.

You say you have 8 zones loaded at a time, each being a 1000x1000 grid. Let's assume you manage to do this with strips, even with geomipmapping.

One zone is 1,000x1,000 vertices, or 1,000,000 vertices. How much space do those vertices consume? Let's assume you have positions, normals and texture coordinates, all 32-bit floats, 4 bytes apiece. So each vertex is 3 floats * 4 bytes + 3 floats * 4 bytes + 2 floats * 4 bytes = 32 bytes.

1,000,000 of those vertices is 32,000,000 bytes, roughly 30.5 MB. So… you want 8 of those zones? You won't cram all of that onto the card at once, as that's about 244 MB. Unless you have a 256 MB card; then maybe, but you won't do much else with the card with all that data sitting in there.
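The arithmetic above can be sanity-checked in a few lines (a sketch only; the constants simply match the assumptions in this post):

```cpp
#include <cassert>
#include <cstddef>

// Bytes per vertex: position (3 floats) + normal (3 floats) + texcoord (2 floats).
const size_t BYTES_PER_VERTEX = (3 + 3 + 2) * sizeof(float);   // 32 bytes

// Memory for one zone of side x side vertices, in bytes.
size_t zoneBytes(size_t side) { return side * side * BYTES_PER_VERTEX; }
```

For a 1000x1000 zone that's 32,000,000 bytes (~30.5 MB), and eight zones come to ~244 MB.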

Furthermore, you’ll have to check the recommendations on how much data to shove into a buffer at once. 30.5MB is pretty big.

You’ll have to balance it somehow, and it looks like you’ll have to switch buffers or use vertex arrays with vertex buffers. Then you’re depending on the speed you can shove stuff down the pipe.

My take on this when I looked at it for my own stuff was to keep the zone the view is in loaded in a VBO, and stream everything else down after I had culled and geomipmapped it.

I haven’t tried it, so beware.

Just to correct…

You say you have 8 zones loaded at a time
I’m going to have 8 zones loaded around the center… so technically 9 zones.

Another thing is that I will load the height data, not full 3D vertices, so I will only store the Z value in RAM, not the whole 3D point. I will also store the texture coordinates in RAM.

That is… about 11 MB a zone (roughly 100 MB for all 9). I probably won't use normals because they produce a strange-looking artifact with geomipmapping, but I might.
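For what it's worth, the height-only storage idea sketches out like this (the names and grid spacing are assumptions, not the poster's code). Storing 12 bytes per vertex (one height float plus two texcoord floats) for 1,000,000 vertices is where the ~11 MB per zone figure comes from:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

const int SIDE = 1000;          // vertices per zone side
const float SPACING = 1.0f;     // units between neighbouring grid vertices

struct Vec3 { float x, y, z; };

// Only the height is stored in RAM; x and y are reconstructed from the
// grid index when the vertex is needed.
Vec3 vertexAt(const std::vector<float>& heights, int ix, int iy) {
    return { ix * SPACING, iy * SPACING, heights[iy * SIDE + ix] };
}

// Stored footprint: 1 height float + 2 texcoord floats = 12 bytes/vertex.
const size_t STORED_BYTES_PER_VERTEX = (1 + 2) * sizeof(float);
```

12,000,000 bytes per zone is about 11.4 MB, matching the figure above.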

Yeah… that is a lot of RAM. As for sending data to the graphics card, the geomipmapping (LOD) will cut that down.

Another thing that seems misunderstood is that I'm only seeing a field of 1600x1600 units at a time (sight range being 800 units). That means (if I were drawing everything at once) I'm sending about 12 MB a frame. The geomipmapping will cut that down to 4 MB at most, probably less.

For those who don't know what geomipmapping is: it's a LOD method where you reduce the triangle count as the terrain gets further away.

I can also do some culling so I don't send the data behind the camera (outside the FOV). This will roughly cut the data in half.
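That behind-the-camera rejection can be as cheap as one dot product per patch. A hedged sketch in 2D (a real frustum cull would test against all the view planes, not just the view direction):

```cpp
#include <cassert>

// Returns true when the patch centre (px, py) lies behind the camera,
// i.e. the camera-to-patch vector points away from the view direction
// (dirX, dirY). Patches for which this holds can be skipped entirely.
bool behindCamera(float px, float py, float camX, float camY,
                  float dirX, float dirY) {
    return (px - camX) * dirX + (py - camY) * dirY < 0.0f;
}
```

In practice you would test a patch's bounding box rather than its centre, so patches straddling the camera plane aren't dropped.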

As for VBOs, I've never used them… I prefer sending raw data… but I understand you can update the data in a VBO at runtime (correct?), so maybe I should consider learning how to use them?

Thanks again…

As for VBOs, I've never used them… I prefer sending raw data…
Without VBOs, you're never going to get the performance to actually render this database.

OK… so it looks like I'll have to use VBOs. With a VBO, should I render the whole terrain (not including buildings/trees) in one go, or should I still use geomipmapping?

If anyone knows of any good tutorials for VBOs I would greatly appreciate it.

Thanks again

I would suggest that you make your zones smaller (step outside and look a km down the road; everything seems REALLY small at that range). Then look at a LOD and culling system that will greatly reduce the amount of data sent to the card each frame.

IIRC ROAM is worth looking at. And octrees.

And then you need to decide whether each zone really needs to be a grid of X x Y quads. I'm not sure if ROAM does this, but there are clever ways to simplify a terrain mesh without losing (much of) the detail you require.

For example, if you are walking across a plain that is 90% flat, it'd be much more efficient to represent it with a couple of tris.

Just thought I'd point out:

The source is clean OpenGL (uses VBOs) and renders HUGE terrain areas. (NVIDIA-only demo.)

ROAM is not the way to go these days. You'll end up CPU-limited, with the gfx card sitting and waiting for triangles when it should be constantly busy. You should use something along the lines of chunked LOD (or geomipmapping, which is almost identical). This way you don't touch the vertex/index data each frame; you just need to update vertex/index buffers when you move from one zone to another. Your CPU work is then reduced to merely recursively selecting an LOD level for each patch before rendering. Chunked LOD also enables you to implement a fast occlusion-culling algorithm, and geomorphing is a doddle.
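The "recursively selecting an LOD level for each patch" step might look something like this sketch of a quadtree walk. The split threshold (recurse while the camera is closer than twice the patch's half-size) and all the names are my own arbitrary assumptions:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// A quadtree patch of terrain.
struct Patch {
    float cx, cy, halfSize;   // patch centre and half-extent in world units
    int level;                // 0 = whole terrain, deeper = finer detail
};

// Recurse into children while the camera is close; otherwise draw this
// node at its stored LOD. No per-vertex work is done here, which is the
// point: the CPU only picks which prebuilt patch buffers to render.
void selectLOD(const Patch& p, float camX, float camY, int maxLevel,
               std::vector<Patch>& drawList) {
    float dx = camX - p.cx, dy = camY - p.cy;
    float dist = std::sqrt(dx * dx + dy * dy);
    if (p.level < maxLevel && dist < 2.0f * p.halfSize) {
        float h = p.halfSize * 0.5f;
        for (int i = 0; i < 4; ++i) {
            float ox = (i & 1) ? h : -h;
            float oy = (i & 2) ? h : -h;
            selectLOD({p.cx + ox, p.cy + oy, h, p.level + 1},
                      camX, camY, maxLevel, drawList);
        }
    } else {
        drawList.push_back(p);   // render this patch as-is
    }
}
```

A camera sitting on the terrain ends up with many small, fine patches nearby; a distant camera gets the whole terrain as one coarse patch.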

OK… thank you for your replies; you have been very helpful. Right now my plan is to make smaller zones and use a LOD technique for the terrain.

I have another question… I was testing the scaling of 1 unit = 10 meters, and it started to look a little messy. The way I do terrain is by stretching a large texture over a segment of terrain and then multitexturing a detail texture into it. Everything is great… the only problem is that to make it look good at this world size you'd need a 200 GB hard drive.

To put it simply, I need a good-looking technique that doesn't take a huge amount of hard drive space (under 5 GB)… any ideas?

Thanks for all your help

Hmm, if you are going for huge terrains then geometry clipmaps might be of some use to you.

Aside from the demo implementation I don't know of anyone using it. I've read the paper a couple of times and I'm hoping to get around to an implementation of my own soon, certainly with VBOs, as it seems perfect for them.

Very nice article… I have done something similar before, but it has some nice techniques. The one potential problem I see with it is that I am using zones, which I'm now thinking of resizing to 500x500 instead of 1000x1000, but I haven't made any choices yet.

I'm still at a loss when it comes to texturing my terrain. If I use detail textures and stretch a large texture (2048x2048) over each zone, that's about 30 GB of hard drive space (for a simple texture map). Multitexturing each quad won't work because of the LOD; the quads change size as the view distance increases.
The way I'm seeing it right now, I might have to stretch one texture over the whole zone (1000x1000) and accept a low-quality result… but then how would I go about making dirt roads without being able to paint the texture?

How many units of terrain could a VBO handle without any optimization (no LOD or geomipmapping) on an average graphics card? If it's a decent amount I might be able to get away with multitexturing…

So many questions (sorry to bother you)
thanks once again for helping

I'm surprised no one has mentioned it, but have you looked into chunked LOD with skirting?

It's one of the most GPU-friendly ways of rendering large landscapes.

You also might want to look at generating pseudo-random local fractal detail at runtime and using a lower resolution heightfield, instead of storing a huge mesh at really high resolution.

VIPM, chunked LOD and geomipmapping all use a finite number of detail levels, more or less naive depending on how you triangulate the different levels. But they all require extra memory for the index data… more data to store on disk and page in…

If you are going to play with huge terrain, looking at SOAR might be an option… the Ranger demo mentioned earlier is an extension of SOAR, with better triangle stripping and better culling… and the culling can be enhanced further using AABBs.

Unless I'm missing some trick… storing only the height data is not going to help you if you want to use VBOs; you'll have to store all three vertex coordinates.

Unless I'm missing some trick… storing only the height data is not going to help you if you want to use VBOs; you'll have to store all three vertex coordinates.
Not necessarily. In some cases (uniform blocks) it is possible to store height in one VBO and x/z in another, using a vertex program to combine them. The x/z VBO may be shared between blocks.
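To make the saving concrete, here's a rough size comparison under that two-stream scheme (uniform 1000x1000 blocks assumed; the actual vertex-program plumbing is omitted, and all the names are mine):

```cpp
#include <cassert>
#include <cstddef>

// Every uniform block shares one x/z grid VBO; each block stores only its
// heights. A vertex program combines attribute streams: (x, z) from the
// shared buffer, height from the per-block buffer.
const size_t SIDE = 1000;    // vertices per block side
const size_t BLOCKS = 400;   // 20x20 world

size_t sharedXZBytes() { return SIDE * SIDE * 2 * sizeof(float); } // one copy
size_t heightBytes()   { return SIDE * SIDE * sizeof(float); }     // per block

// Split scheme: one shared x/z buffer plus a height buffer per block.
size_t splitTotal() { return sharedXZBytes() + BLOCKS * heightBytes(); }
// Naive scheme: full (x, y, z) stored for every vertex of every block.
size_t naiveTotal() { return BLOCKS * SIDE * SIDE * 3 * sizeof(float); }
```

For the whole 400-block world this is roughly 1.6 GB of heights versus 4.8 GB of full vertices; the same ratio applies to whatever subset is resident.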

Stephen_H, your surprise is a surprise, seeing as, surprisingly, I recommended chunked LOD only a couple of posts before your surprised comment. :eek:

I'd recommend using a paged system of some kind. I've recently added support for generating whole-earth paged terrain databases to the OpenSceneGraph, so hopefully I can provide some insight into this type of challenge.

Paging works just fine for going from large scale whole earth views down to sub meter data. Precision problems are managed by having double transform matrices above each tile, and each tile having its own local origin at the centre of the tile. The camera position is also stored as a double matrix which is accumulated together with the tile matrix during the cull traversal, such that the large values cancel each other out before being passed down to OpenGL.
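The cancellation trick described above can be illustrated without any OpenGL: subtract the double-precision camera origin from the double-precision tile origin first, and only then hand the small result to the GPU as floats. A sketch with made-up coordinates:

```cpp
#include <cassert>

struct Vec3d { double x, y, z; };

// Do the large-value subtraction in doubles so the big magnitudes cancel;
// the remainder is small enough to survive conversion to float.
Vec3d relativeToCamera(const Vec3d& worldPos, const Vec3d& camera) {
    return { worldPos.x - camera.x,
             worldPos.y - camera.y,
             worldPos.z - camera.z };
}
```

At 6,000,000 units from the origin a float's spacing is 0.5 units, so casting the absolute coordinate to float first would destroy sub-meter detail; subtracting in doubles first keeps it exactly.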

Paging also neatly allows you to have locally high-res data where you need it, while still using lower-res data elsewhere. The test datasets I've been using are based on the NASA Blue Marble 42k x 21k imagery for the whole earth, with local inserts down to 0.6 m in places. Total file sizes are well into the GB range for this data, but it sure looks gorgeous :slight_smile:

Also, with paging you can add cultural data into the LOD levels as well, so your houses, rivers, roads, trees etc. can also be paged in when they come into view. I haven't yet added this support to the OpenSceneGraph's osgTerrain module, but it's on the TODO list.

Using CLOD techniques simply doesn't give you this level of flexibility or the ability to scale datasets to really large areas. Once you get into paging, 2k x 2k imagery will seem like something from kindergarten, and you'll never go back.

Have fun,

Indeed, I've been playing with the Blue Marble dataset myself recently, and textures like 4k x 4k look old-school to me now. I've read that the "next generation" Blue Marble dataset is in the works; it has twice the resolution (four times the pixels) of Blue Marble, with imagery of the whole Earth down to 500 m. I look forward to it, but the download will be huge (in the 4-5 GB range, I'm guessing).


I’d recommend using a paged system of some kind
Any documents on this?

Check out ; they’ve got links, papers, and implementations.

The clipmap technique that Bobvodka mentioned looks very promising indeed :slight_smile:

As for paging, you can implement this with any method you see fit, though some layouts are more amenable to out-of-core visualizations (OCV) than others. SOAR is very OCV friendly, and clipmaps should work well too, I should think, especially with a good compression scheme.