large terrain texturing

I currently have a ROAM terrain engine working with support for geometry paging, which allows for large terrains. Problem is, I’m not sure how to texture map this. I was thinking of a unique texture for each terrain tile (say 256x256) and one or more detail textures. There should be no problem with scenes containing few tiles, but with panoramic views from a high altitude the textures probably won’t fit in memory. I could theoretically do mipmap level paging, since for the latter scenes I won’t be needing the high-res mipmap levels, but OpenGL doesn’t seem to let you do that. The texture-LOD extension won’t help here, as it only lets you specify which levels to use, not which ones to keep in memory. So do you know of any other solutions? What do the developers of all those games with large terrains do?
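For a sense of scale (my own back-of-the-envelope arithmetic, with an assumed 4 bytes per texel and an assumed 20x20 tiles visible in a panorama): a full mipmap chain costs about 4/3 of the base level, and a panorama multiplies that by the tile count.

```c
#include <stddef.h>

/* Bytes used by a square texture plus its full mipmap chain
   (each level is 1/4 the size of the previous one, down to 1x1). */
size_t mip_chain_bytes(size_t edge, size_t bytes_per_texel) {
    size_t total = 0;
    for (;;) {
        total += edge * edge * bytes_per_texel;
        if (edge == 1) break;
        edge /= 2;
    }
    return total;
}
```

That gives 349,524 bytes per 256x256 RGBA tile, so a 20x20 tile panorama with every chain fully resident is well over 100 MB, which is the problem in a nutshell.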

There are two separate pairs of parameters:

How to calculate which MIP map level to choose (TEXTURE_MIN_LOD and TEXTURE_MAX_LOD).

Which MIP map levels are valid in memory (TEXTURE_BASE_LEVEL and TEXTURE_MAX_LEVEL).
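To make the distinction concrete, here’s a rough model (mine, not the spec’s exact math) of how the two pairs interact when a level is picked: MIN_LOD/MAX_LOD clamp the computed lambda, and the result is then confined to the defined range BASE_LEVEL..MAX_LEVEL.

```c
float clampf(float x, float lo, float hi) {
    return x < lo ? lo : (x > hi ? hi : x);
}

/* lambda is the scale factor computed from screen-space derivatives;
   returns the mipmap level that ends up being sampled (nearest-level
   rounding assumed, for simplicity). */
int select_level(float lambda, float min_lod, float max_lod,
                 int base_level, int max_level) {
    float l = clampf(lambda, min_lod, max_lod);  /* pair 1: LOD clamp   */
    int level = base_level + (int)(l + 0.5f);    /* nearest level       */
    if (level > max_level) level = max_level;    /* pair 2: valid range */
    if (level < base_level) level = base_level;
    return level;
}
```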

You pretty much have to implement LOD. Terrain that is further away from the viewer has a coarser grid (each tile covers a larger area), and you texture it with one image that covers the entire area (a coarser texture).
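A sketch of that distance-to-LOD mapping, under the made-up rule that each doubling of distance drops one texture level (the threshold distance is an arbitrary parameter, tune it for your scene):

```c
/* Hypothetical: level 0 out to base_distance, one level coarser per
   doubling of distance after that, clamped to the coarsest level. */
int tile_lod(float distance, float base_distance, int coarsest) {
    int lod = 0;
    float d = base_distance;
    while (distance > d && lod < coarsest) {
        d *= 2.0f;
        ++lod;
    }
    return lod;
}
```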

jwatte: BASE_LEVEL and MAX_LEVEL specify which levels are valid, but not which ones are stored in memory. According to a driver developer working for NVIDIA, they store all levels regardless of these parameters.
nickels: Each tile covers the same area, but something like that might be possible. Any docs on how to implement texture LOD? I’m not sure where to begin.

zen: Yes, I followed that thread with Matt.

The trick is that when a level is not valid, you can specify a very small texture image for that level (using a NULL data pointer) so it won’t actually take up any space.

So you are saying that if you specify a NULL image to glTexImage, the level will be made non-resident?

This seems a bit arcane; I’m skeptical about this working. You have no control over image size at each level; it is determined by the base size and the level. You aren’t talking about specifying a small image, you’re specifying no image.

You’d still need the LOD controls to make the level legal AFAIK, otherwise you’ll get a white texture. I know this is what happens when you never make the image load call for a level. I’m not clear on what the status is if you explicitly specify NULL; I have a feeling it should be an invalid texture without the LOD clamp, but I haven’t tested this.

Can you clarify where you heard this? Can anyone working on implementations confirm or deny that this would save memory?


My understanding of Matt’s (nVidia’s) comments in the mentioned thread was that a texture with any mipmaps will always take up the same amount of memory as one with all mipmap levels specified and valid. “NULL-ing out” a level doesn’t work and neither does using the BASE_LEVEL/MAX_LEVEL stuff. You are stuck with the full memory usage no matter what…

I think that they won’t work on that kind of feature ever, instead looking to a future with demand-paged “virtual” texture memory (which solves even more problems)…

That’s also my working assumption on NVIDIA cards, although other implementors may have better texture memory managers.


Yes, that’s what I understood as well, although Matt never said what happens when you NULL out the largest level, resize it to 1x1 and use the LOD controls to specify that it is not to be used. Still, it looks like a crap shoot to me and should be implementation-dependent. So how do I go about implementing texture LOD with OpenGL? There has to be a way. Do you know of any useful docs?

> You have no control over image size at
> each level, it is determined by the base
> size and the level.

I believe the spec says I do. The only twist is that the texture is not MIP map complete (and thus considered disabled) if the sizing of MIP levels that are active doesn’t conform to the expected pattern.

After all, glTexImage2D() takes both a MIP level and a size for each MIP level. Only when you use GLU to generate the MIP maps (which you shouldn’t) or SGIS_generate_mipmap (which is OK if it works for you) will something other than you decide the size of each level.

As far as the nVIDIA implementation always reserving all the memory, that sounds unfortunate. But surely not everyone takes that short-cut?


I think that the size (and number of levels) of the mipmap is determined by the size of level 0, so you can’t set the size of level 0 to 1x1, can you? And besides, if it isn’t mentioned in the specs that the space to be allocated is that specified with glTex* (e.g. the NVIDIA drivers don’t seem to do so), it doesn’t seem like a good alternative. If the driver doesn’t behave as expected then the whole texture system wouldn’t work, and you’ll end up with an app which rocks on an NVIDIA card but sucks on an ATI or some other implementation like Mesa. How do people implement texture LOD with OpenGL anyway?


I can see nothing in the spec that says that the size of each MIP map level is determined by the image size specified at level 0. Can you find a page in the spec PDF, version 1.3, that’s available on this web site to point me at?

As far as I can tell, OpenGL supports loading MIP maps progressively from smaller to larger, by using BASE_LEVEL and MAX_LEVEL. That way, if you can load MIP maps from disk in that order, you only need to upload as much data as is actually necessary (if you can calculate “necessary” based on, for example, distance). That seems to be specifically what the BASE_LEVEL extension was initially intended for.
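To put numbers on the potential savings (my arithmetic, not anything from the spec): if only levels BASE_LEVEL..MAX_LEVEL really had to be resident, a distant tile clamped to level 2 and below would need a fraction of its full chain.

```c
#include <stddef.h>

/* Bytes for levels base_level..max_level of a w0 x h0 texture,
   assuming the driver really only keeps those levels resident. */
size_t resident_bytes(int base_level, int max_level,
                      size_t w0, size_t h0, size_t bpp) {
    size_t total = 0;
    for (int l = base_level; l <= max_level; ++l) {
        size_t w = w0 >> l; if (w == 0) w = 1;
        size_t h = h0 >> l; if (h == 0) h = 1;
        total += w * h * bpp;
    }
    return total;
}
```

For a 256x256 RGBA tile that’s about 21 KB (levels 2..8) instead of 341 KB for the whole chain, a 16x saving per distant tile, which is exactly what the NVIDIA implementation reportedly does not deliver.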

MIP mapping will work as long as the levels from BASE_LEVEL to MAX_LEVEL are well-behaved (defined and following the usual 1/4-size pattern per level). No implementation should fail with these parameters, assuming it claims to be OpenGL 1.3 or claims to support the extension.

All that remains is the problem of what the nVIDIA implementation does in this case – it sounds as if it puts undefined bits in the levels below BASE_LEVEL, and wastes your VRAM with data that’s not going to actually be referenced. Not much you can do about that, though.

jwatte: I’m not sure about that level 0 part. I just think I’ve heard it somewhere, but I’m probably wrong. Yes, what you suggest will work. The problem is whether it will save on texture memory usage in most cases or not. I was wondering, in NVIDIA’s case: if for some time I use so many textures that they won’t fit in VRAM, but I only use the lower levels, will it just start moving whole textures in and out of memory and trash performance? According to Matt (the driver developer) we should leave most of the texture management to the driver, so it should be able to handle situations like these…
Has anybody here successfully attempted a texture LOD system with OpenGL?

Yes, but this was just him saying don’t play with the matches, NVIDIA knows best. A driver cannot know what an application knows, as even a newbie on that thread pointed out. If you intend to significantly oversubscribe texture memory, I still suggest you try to manage this in the application. In particular, pay close attention to the total texture memory used in each frame of rendered data. There was an interesting compromise I posted at the end of the thread in question, concerning an application being smart about the load it places on the graphics system while letting the driver handle the immediate paging requirements.

It would be a real bonus if NVIDIA used the tools provided by OpenGL to control level residency, but apparently they don’t.


Hello everyone,

See my old post for a discussion on NULLing a mipmap level.

I was very discouraged from following such an approach; although I did see a significant increase in performance, I decided to drop the feature due to artifacts.

I believe the problem was that I was setting the BASE_LEVEL and MAX_LEVEL but not the MIN_LOD and MAX_LOD at the same time. Calculating MIN_LOD and MAX_LOD was difficult for me, and I dropped the feature.

Do you think I should revive my old code?

I was wondering… say you have a large terrain. The textures are tiled, so when you’re close to the terrain typically only a few tiles are visible, therefore texture management is no big problem. Say you look straight down at the terrain and start ascending. More and more tiles become visible, but the mipmap levels of the textures they use become smaller. How do NVIDIA-style drivers handle this?
dorbie: Basically the above-mentioned case is what concerns me most. So I assume I might oversubscribe my memory. What post are you referring to? The one with the small and large textures approach?
Theodore: You might want to give it a shot, although it will be driver-dependent and it will probably not save any texture space. Try resizing the NULLed levels to 1x1, as jwatte said.

I’m doing texture LOD in my flight sim by killing the whole texture and recreating it with correct level of detail. It works and it’s reasonably fast. You can improve performance by reusing unneeded textures and using TexSubImage.
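A toy model of the reuse part (the names and the fixed-size pool are mine, not alex’s actual code): keep a free-list of texture objects by size, and only create a new one when no match exists; on a hit you’d refill the existing object with glTexSubImage2D instead of reallocating with glTexImage2D.

```c
#define POOL_SLOTS 8

static int pool[POOL_SLOTS]; /* 0 = empty slot, otherwise the edge
                                length of a free texture object */

/* Returns 1 if a free texture of the requested size was found and
   claimed (refill it with glTexSubImage2D), 0 if the caller must
   create a fresh texture (glTexImage2D). */
int pool_acquire(int edge) {
    for (int i = 0; i < POOL_SLOTS; ++i)
        if (pool[i] == edge) { pool[i] = 0; return 1; }
    return 0;
}

/* Hand a no-longer-needed texture of the given size back to the pool. */
void pool_release(int edge) {
    for (int i = 0; i < POOL_SLOTS; ++i)
        if (pool[i] == 0) { pool[i] = edge; return; }
    /* pool full: a real system would glDeleteTextures here */
}
```

The point of the TexSubImage path is that respecifying the contents of an existing texture avoids the allocation and validation cost of creating a new object every time the LOD changes.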

alex: Is there documentation with implementation details available?
I think, though, that our problems are not exactly the same. Your terrain should be much larger than mine, so I assume you have to deal with fitting the texture into system RAM as well. My problem is closer to mipmapping than texture LOD. That is, I’ll usually have few textures visible (the camera will be close to the ground), but occasionally I might have a lot of textures at low detail levels. Theoretically, for reasonably large terrains (say a 5kx5k heightmap) and view distances (i.e. the distance of the far clipping plane) the needed mipmap levels should always fit in VRAM. Since the driver won’t let me do that myself, I hope it will do it for me.
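A rough check of that claim (my model, with made-up assumptions: 256x256 RGBA tiles, a 20x20 tile panorama, and each tile needing only the one mip level suggested by its distance in tiles from the camera):

```c
#include <stddef.h>

static int iabs(int v) { return v < 0 ? -v : v; }
static int imax(int a, int b) { return a > b ? a : b; }

/* Bytes of a single mip level l of a 256x256 RGBA texture. */
static size_t level_bytes(int l) {
    size_t edge = 256u >> (l > 8 ? 8 : l);
    return edge * edge * 4;
}

/* Hypothetical: a 20x20 tile panorama around the camera, where a tile
   n tiles away only needs mip level n. Returns the total resident
   bytes under that assumption. */
size_t panorama_bytes(void) {
    size_t total = 0;
    for (int y = -10; y < 10; ++y)
        for (int x = -10; x < 10; ++x)
            total += level_bytes(imax(iabs(x), iabs(y)));
    return total;
}
```

Under these assumptions the whole panorama needs only a couple of MB, versus well over a hundred MB with every tile’s chain fully resident, so the hope is reasonable if the driver cooperates.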

Zen, yes, I’m texturing very large terrain and I have to fit a very large texture into system RAM too.
I have no documentation, only source code. I can mail you parts of it if you need it.