Future of OpenGL/GPUs

Megatexture still has a long way to go. It may work for Rage, but it suffers from:

  1. Being very static in nature (changes to geometry are very costly).
  2. Materials are also pretty static.
  3. Lack of support in tools.
  4. Needing a huge number of artists.

Also, I don’t see why it wouldn’t benefit from occlusion culling… I’m sure Rage uses occlusion techniques.

Getting off topic here but whatever…

  1. It is not more static than texture coordinates. Ideally, even for classic texturing, whenever something deforms, both the wrapping and the texture map should change to provide more texel definition in stretched areas. In practice, within reasonable constraints, this is not much of a problem.
  2. On the contrary, I think it is even less constraining with megatexture than with deferred shading. Megatexture only defines how textures are sampled, not how shading is performed, and as said above, the two methods are orthogonal (see the sketch after this list).
  3. Well, yes. But AFAIK, baking to the megatexture is only needed at the end of the asset pipeline, plus the texture access in the shader, so it is done “under the carpet”.
  4. ? The point of megatexture is to allow (almost) unconstrained texture resolution within constrained hardware memory. Creating big textures can be done even with programmer art :slight_smile: Sure, it is better with good artists, but the same goes for classic textures, models, animations, sounds… Less skill is needed to balance texture size and resolution across levels: throw high-res digital photography here, a rasterized vector signpost there, a low-res hand-drawn pic here… it will all run at the same speed.
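
To make point 2 concrete, here is a minimal, hypothetical sketch of the indirection virtual texturing adds in the fragment shader. The uniform names and the page-table encoding (physical page origin in .rg, resident-mip scale in .b) are assumptions for illustration, not Rage’s actual scheme; everything after the final texture() call is ordinary shading, which is why it stays orthogonal to forward vs. deferred.

```cpp
// Hypothetical sketch: the page-table indirection at the heart of
// virtual texturing. Names and the table encoding are illustrative only.
#include <GL/glew.h>

static const char* kVirtualTextureFS = R"(
#version 330 core
uniform sampler2D uPageTable;   // one texel per virtual page, GL_NEAREST filtered
uniform sampler2D uPageCache;   // physical cache holding the resident 128x128 pages
uniform vec2      uCachePages;  // cache size in pages, e.g. vec2(32.0)

in  vec2 vVirtualUV;            // texture coordinates authored the usual way
out vec4 fragColor;

void main()
{
    // .rg = origin of the physical page in cache UV space,
    // .b  = number of virtual pages across at the resident mip level.
    vec4 entry = texture(uPageTable, vVirtualUV);

    // Position inside the virtual page, then offset into the cache.
    vec2 inPage     = fract(vVirtualUV * entry.b);
    vec2 physicalUV = entry.rg + inPage / uCachePages;

    // From here on it is a regular sample; shading is unaffected.
    fragColor = texture(uPageCache, physicalUV);
}
)";
```

A real implementation also needs a fallback entry (a coarser resident page) for pages that have not been streamed in yet, but that again only affects sampling, not shading.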

Maybe I am wrong?

No, you’re absolutely correct.
In fact virtual texturing is basically nothing more than an advanced form of texture management.

You could put in all your textures and re-use them over geometry just like you would’ve without virtual texturing.
(Well, you’d store them like that on disk; the addressing would probably have to point to the same pages several times when a texture is repeated, but the disk cache would only store each page once.)
If you keep the pages unique in the on-GPU page cache, you can even render into it, turning it into a shaded cache (I still need to try that).
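
Rendering into the cache is mostly just FBO plumbing: attach the physical cache texture as a colour attachment and restrict drawing to one page’s rectangle. A hedged sketch, with a made-up page size:

```cpp
// Hypothetical sketch: turning the physical page cache into a render
// target so pages can be shaded/composited on the GPU. Page size and
// cache layout are made-up parameters.
#include <GL/glew.h>

const int kPageSize = 128;   // texels per page side (assumption)

// Bind an FBO so that subsequent draws land in one page of the cache texture.
void beginRenderIntoCachePage(GLuint fbo, GLuint cacheTex,
                              int pageX, int pageY)
{
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, cacheTex, 0);

    // Restrict rasterization to this page's rectangle in the cache.
    glViewport(pageX * kPageSize, pageY * kPageSize, kPageSize, kPageSize);
    glEnable(GL_SCISSOR_TEST);
    glScissor(pageX * kPageSize, pageY * kPageSize, kPageSize, kPageSize);

    // ... draw a full-page quad here with whatever shader bakes or
    // composites the material for this page ...
}
```

The uniqueness requirement matters here: if several virtual pages alias the same physical page, baking a result into it would be wrong for the other references.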

From what I’ve seen, it’s a big dynamic 2D texture atlas (a 128x128 grid of textures, 128k^2 texels in total, so each texture is 1024x1024). Anyone got a pointer to a detailed write-up of it?

Seems that one disadvantage of it is the lack of support for mipmap-based hardware texture filtering (aniso, etc.). I’d like to see more on how they did filtering than was revealed at SIGGRAPH last year. Also, it appears that it might impose a close tie between vertex density and texel density. Since all the virtual textures were the same res, it seems you needed to subdivide 2x2 every time you needed to CPU-fade into the next texture LOD, unless there’s some texture/texcoord magic going on here.
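
For what it’s worth, the usual workaround (it is what the SVT demo linked below does, if I remember right) is to store each cached page with a small border duplicated from its neighbours, so bilinear and a capped amount of anisotropic filtering never read across a page boundary. A hedged sketch of the upload side, with assumed page/border sizes:

```cpp
// Hypothetical sketch: uploading a page with a duplicated border so that
// hardware bilinear/aniso filtering inside a page never samples texels
// belonging to an unrelated neighbouring page. Sizes are assumptions.
#include <GL/glew.h>

const int kPageSize   = 128;                      // payload texels per side
const int kBorder     = 4;                        // duplicated border texels
const int kPaddedSize = kPageSize + 2 * kBorder;  // what actually gets uploaded

// 'texels' must be a kPaddedSize x kPaddedSize RGBA8 block whose outer border
// already contains texels copied from the neighbouring pages (done on the CPU,
// or at bake time on disk).
void uploadPaddedPage(GLuint cacheTex, int pageX, int pageY,
                      const unsigned char* texels)
{
    glBindTexture(GL_TEXTURE_2D, cacheTex);
    glTexSubImage2D(GL_TEXTURE_2D, 0,
                    pageX * kPaddedSize, pageY * kPaddedSize,
                    kPaddedSize, kPaddedSize,
                    GL_RGBA, GL_UNSIGNED_BYTE, texels);

    // Anisotropy has to be capped so the filter footprint stays inside the
    // border; a full cross-page mip chain is not available.
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, 4.0f);
}
```

The shader’s in-page coordinates then get rescaled by kPageSize/kPaddedSize and offset by the border, and transitions between page mip levels still have to be handled manually (blending two cache lookups, or the CPU-fade described above).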

However, if you can live with the disadvantages, it definitely simplifies preallocation and updates of GPU memory for texture paging. One other cool thing was how they stored DCT-compressed data on disk and then transcoded it to DXT dynamically at run-time when paging.
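
The transcode step is roughly: decode the page’s DCT/JPEG-style data on the CPU, re-encode it with a fast, crude DXT1 fit, and upload the compressed blocks straight into the page cache. This is a hedged sketch, not id’s transcoder; decodeDctPage() is a hypothetical placeholder, and the range-fit encoder trades quality for speed (little-endian host assumed).

```cpp
// Hypothetical sketch of the DCT-on-disk -> DXT1-in-VRAM transcode for one
// 128x128 page. decodeDctPage() is a placeholder for whatever JPEG-style
// decoder is used; the DXT1 encoder is a real but crude range fit.
#include <GL/glew.h>
#include <stdint.h>
#include <string.h>

const int kPageSize = 128;  // texels per page side (assumption)

// Hypothetical placeholder: decompress one page of DCT data to RGBA8.
void decodeDctPage(const uint8_t* dctData, size_t dctSize, uint8_t* rgbaOut);

static uint16_t packRgb565(const uint8_t* p)
{
    return uint16_t(((p[0] >> 3) << 11) | ((p[1] >> 2) << 5) | (p[2] >> 3));
}

// Encode one 4x4 RGBA8 block (pitch in bytes) into 8 bytes of DXT1.
static void encodeDxt1Block(const uint8_t* src, int pitch, uint8_t* dst)
{
    // Range fit: endpoints are the per-channel min and max of the block.
    uint8_t lo[3] = {255, 255, 255}, hi[3] = {0, 0, 0};
    for (int y = 0; y < 4; ++y)
        for (int x = 0; x < 4; ++x)
            for (int c = 0; c < 3; ++c) {
                uint8_t v = src[y * pitch + x * 4 + c];
                if (v < lo[c]) lo[c] = v;
                if (v > hi[c]) hi[c] = v;
            }
    uint16_t c0 = packRgb565(hi), c1 = packRgb565(lo);  // c0 >= c1 by construction

    // 4-colour palette: the endpoints plus two interpolants.
    int pal[4][3];
    for (int c = 0; c < 3; ++c) {
        pal[0][c] = hi[c];
        pal[1][c] = lo[c];
        pal[2][c] = (2 * hi[c] + lo[c]) / 3;
        pal[3][c] = (hi[c] + 2 * lo[c]) / 3;
    }

    // Pick the nearest palette entry per pixel; pixel i uses bits 2i..2i+1.
    uint32_t indices = 0;
    for (int i = 0; i < 16; ++i) {
        const uint8_t* px = src + (i / 4) * pitch + (i % 4) * 4;
        int best = 0, bestErr = 1 << 30;
        for (int j = 0; j < 4; ++j) {
            int err = 0;
            for (int c = 0; c < 3; ++c) {
                int d = px[c] - pal[j][c];
                err += d * d;
            }
            if (err < bestErr) { bestErr = err; best = j; }
        }
        indices |= uint32_t(best) << (2 * i);
    }
    if (c0 == c1) indices = 0;  // degenerate block: avoid DXT1's 3-colour mode quirks

    memcpy(dst + 0, &c0, 2);       // little-endian host assumed
    memcpy(dst + 2, &c1, 2);
    memcpy(dst + 4, &indices, 4);
}

// Transcode one page and upload it into the DXT1 page-cache texture.
void transcodePage(GLuint cacheTex, int pageX, int pageY,
                   const uint8_t* dctData, size_t dctSize)
{
    uint8_t rgba[kPageSize * kPageSize * 4];
    decodeDctPage(dctData, dctSize, rgba);

    uint8_t dxt[(kPageSize / 4) * (kPageSize / 4) * 8];
    uint8_t* out = dxt;
    for (int by = 0; by < kPageSize; by += 4)
        for (int bx = 0; bx < kPageSize; bx += 4, out += 8)
            encodeDxt1Block(rgba + (by * kPageSize + bx) * 4, kPageSize * 4, out);

    glBindTexture(GL_TEXTURE_2D, cacheTex);
    glCompressedTexSubImage2D(GL_TEXTURE_2D, 0,
                              pageX * kPageSize, pageY * kPageSize,
                              kPageSize, kPageSize,
                              GL_COMPRESSED_RGB_S3TC_DXT1_EXT,
                              sizeof(dxt), dxt);
}
```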

You’ll find a lot here:

http://silverspaceship.com/src/svt/

Also, see LogicalError’s blog (link above) and here:

http://www.linedef.com/personal/demos/?p=virtual-texturing

Thanks.