Someone recently mentioned using megatextures for terrain, which seems not too hard to do on the engine side, especially after reading the SGI paper on clipmaps.
But what about arbitrary meshes? How can texture virtualization be done efficiently?
I found some speculation on the topic (sorry, the first link is machine-translated from French):
A three-year-old paper on the topic:
Do you find the speculation plausible?
Does anybody have better ideas about this? Comments?
Very interesting article, thanks for the link. I will also read the paper.
Yes it is; I read both of them. I don't know whether the paper's approach (finding which tile to upload/use) still holds on today's GPUs, because it requires multiple passes. Maybe on current hardware reading back a 1000x1000 texture in a single pass, as in the French article, is simpler and faster. Has anyone tried something similar?
If it's only 1000x1000, sure. The paper's applicability is to textures more along the lines of 100000x100000.
Lindley, the 1000x1000 rendering only represents one texture page per fragment; each page could be, say, 128x128, for a grand total of a 128k x 128k megatexture.
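To make the arithmetic concrete, here is a quick sketch of that page layout (the 128x128 page size and 1000x1000 page grid are just the example figures from this thread, not fixed constants):

```python
# Map a virtual texel coordinate to its page and the offset within it.
# Assumes a 1000x1000 grid of 128x128 pages, as in the example above.
PAGE_SIZE = 128          # texels per page side
PAGES_PER_SIDE = 1000    # pages per virtual-texture side

VIRTUAL_SIZE = PAGE_SIZE * PAGES_PER_SIDE  # 128000 texels, i.e. ~128k per side

def page_of(x, y):
    """Return ((page_x, page_y), (offset_x, offset_y)) for a virtual texel."""
    return (x // PAGE_SIZE, y // PAGE_SIZE), (x % PAGE_SIZE, y % PAGE_SIZE)

print(VIRTUAL_SIZE)        # 128000
print(page_of(130, 257))   # ((1, 2), (2, 1))
```

So a 1000x1000 feedback rendering can name every page at full resolution, even though the virtual texture itself is 128000 texels on a side.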
The video that goes along with the paper explains it nicely:
The described technique of rendering the geometry according to its 2D texture coordinates, to determine which pages are needed, seems to have a real drawback to me: it ignores occlusion.
What about doing a normal rendering of the scene to an RGBA framebuffer, plus storage of additional data (as in deferred shading)?
The extra data would be the 2D texcoords (needing at least 16 bits per coordinate) plus the desired mipmap level (derived, for example, from the dx/dy texcoord derivatives, which would also allow anisotropic filtering). The CPU would then read back these extra buffers to update the virtual texture for the next frame. This way we are always one frame behind the perfect determination of visible pages, but I think it would be worth the extra performance (only one rendering pass).
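A minimal CPU-side sketch of that readback idea, assuming the feedback buffer has already been copied back as a list of (u, v, mip) tuples per covered fragment (the function name and buffer format here are hypothetical, just to illustrate the page-gathering step):

```python
def pages_needed(feedback, pages_per_side=1000):
    """Collect the set of (mip, page_x, page_y) pages referenced this frame.

    feedback: iterable of (u, v, mip) with u, v in [0, 1) and mip an int.
    At mip level m the page grid shrinks by a factor of 2**m per side.
    """
    needed = set()
    for u, v, mip in feedback:
        side = max(1, pages_per_side >> mip)   # pages per side at this mip
        px = min(int(u * side), side - 1)
        py = min(int(v * side), side - 1)
        needed.add((mip, px, py))
    return needed

# Many fragments landing on the same page collapse into one request,
# which is what makes the per-frame upload list small.
reqs = pages_needed([(0.0005, 0.0005, 0), (0.00051, 0.00051, 0), (0.5, 0.5, 3)])
```

The resulting set would then drive the uploads for the next frame, which is exactly where the one-frame latency comes from.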
What I don't quite grasp is how to handle the texture page borders properly to avoid bleeding…
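One common answer (not from the links above, just the usual tiled-texture trick) is to store each page with a few border texels duplicated from its neighbours, so bilinear filtering near a page edge never reads another page's data. A sketch of the UV remapping that implies, with made-up constants:

```python
PAGE_SIZE = 128   # payload texels per page side
BORDER = 4        # duplicated border texels on each side of the page
STORED = PAGE_SIZE + 2 * BORDER   # physical page side in the cache (136)

def physical_uv(page_local_uv):
    """Remap a [0, 1] in-page coordinate into the padded physical page,
    so samples stay at least BORDER texels away from the physical edge."""
    return (BORDER + page_local_uv * PAGE_SIZE) / STORED

print(physical_uv(0.0))   # 4/136, ~0.0294
print(physical_uv(1.0))   # 132/136, ~0.9706
```

The border width has to cover the widest filter footprint you use (anisotropic filtering needs more padding than plain bilinear).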
By the way, you can check out the Enemy Territory: Quake Wars demo for the simple (terrain-only) version of megatexture. The terrain texture updates with a very subtle crossfade (easier to see at low detail settings).
In the paper he talks about an occlusion approximation for the texture fetch. But as I read it, the paper does not solve the problem of managing large amounts of geometry.
If there is a lot of geometry, I thought about determining which geometry tiles to fetch on the CPU (e.g. a kd-tree built over bounding boxes). If we fetch geometry at a selected LOD (e.g. testing camera distance and the frustum against the kd-tree), then the texture tile information can be encoded in the geometry tile, no?
If you already have that on the CPU side, you don't need to determine it on the GPU. It seems like it would work, no? Maybe the technique described in the paper is more accurate on the GPU; with the method I thought of, the texture/geometry tile determination might not be accurate enough, with the consequence that the data either doesn't fit in memory, or fits but at too low a resolution.
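A rough sketch of that CPU-side idea, selecting a mip level per geometry tile from camera distance (the distance-to-mip heuristic and all constants are made up for illustration, and the frustum test is omitted):

```python
import math

def select_tiles(tiles, camera_pos, max_mip=10):
    """tiles: list of (texture_tile_id, bbox_center) pairs, where the
    texture tile id is pre-encoded in the geometry tile as suggested.
    Returns {texture_tile_id: mip}, keeping the finest mip requested:
    each doubling of distance drops one level of detail."""
    wanted = {}
    for tile_id, center in tiles:
        dist = math.dist(camera_pos, center)
        mip = min(max_mip, max(0, int(math.log2(max(dist, 1.0)))))
        wanted[tile_id] = min(mip, wanted.get(tile_id, max_mip))
    return wanted

# A near tile requests a finer mip than a far one.
w = select_tiles([("A", (2.0, 0.0, 0.0)), ("B", (100.0, 0.0, 0.0))],
                 (0.0, 0.0, 0.0))
```

As the post says, this is conservative rather than exact: a purely distance-based estimate can over- or under-request compared with the per-fragment GPU determination.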
I don't understand why deferred shading should be a problem. I mean, once you have built your texture cache on the GPU (it can hold a normal map plus a detail texture), you can render color to the first target and normals to the second. And you are not forced to give the normal map (or any other extra data map) the same resolution; it could be 64k x 64k or less. Maybe I am naive and missed something.
What do you think about that?
Yes, very interesting indeed.
I tend to view this concept more generally as an optimized streaming operation, much like traditional OCV, but where the content need not be in any particular format. The trick, then, seems to be streaming the view-dependent data into textures/buffers without stalling the GPU, which seems eminently doable with current hardware and some clever multi-threaded techniques.
I don't have details to offer, but if I did I'd probably charge a modest fee for them.
Sorry, OCV = out-of-core visualization.