Megatextures?

He wasn’t referring to 3DLabs’ implementation of 32x32 pixel blocks, but to John Carmack’s claim to use a 32K x 32K texture on the terrain.

Jan.

Originally posted by Jan:
He wasn’t referring to 3DLabs’ implementation of 32x32 pixel blocks, but to John Carmack’s claim to use a 32K x 32K texture on the terrain.

Sorry about that confusion – I assumed he was referring to the more recent posting relating to the typical system page size since he used RGBA in his calculation (the post about Carmack’s presentation did not mention image format - RGBA or otherwise). In any event, I highly doubt the entire 32Kx32K texture would ever be loaded into memory all at once. Tiled image formats are also old technology.

– tranders

Originally posted by Korval:
So, if I want to use a shader many times, each time with different textures, I have to either pass the texture object

You would not need to. The texture coordinates would tell you. Unless, of course, you want to read directly from the texture yourself, in which case you would just bind it yourself, like you do today.

To support that, there would have to be two types of samplers: the current samplers that use 0-1 ranges to look up into a texture, and a new type that can read straight from virtual addresses. Note that the standard sampler would actually transform the 0-1 address into a virtual address to read from the bound texture.

The only way you could have virtual addresses would be from a varying attribute. The compiler would have to enforce that, or if someone used the wrong sampler the object would come out strangely colored.

Pre-transforming a buffer of texture coordinates would not be costly if using VBO.

Of course, if dynamic texture coordinates are required, you can just revert back to normal binding.

Static coordinates are quite common in my experience, so pre-caching the coordinates in a VBO would remove lots of useless binding in the application and improve batching.
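To make the idea concrete, here is a minimal sketch of the pre-transform step, assuming the individual textures live somewhere in one big shared texture space (the Surface struct and its offset/scale fields are invented for the example):

```cpp
#include <GL/glew.h>
#include <cstddef>
#include <vector>

struct Surface {
    std::vector<float> uv;   // interleaved u,v pairs in 0..1, local to this surface
    float offsetU, offsetV;  // hypothetical placement of this surface's texture
    float scaleU, scaleV;    // inside one big shared texture space
};

// Bake the per-surface placement into the coordinates once, so many surfaces
// can be drawn from one VBO without rebinding textures between them.
GLuint buildStaticTexcoordVBO(const std::vector<Surface>& surfaces)
{
    std::vector<float> packed;
    for (std::size_t s = 0; s < surfaces.size(); ++s) {
        const Surface& surf = surfaces[s];
        for (std::size_t i = 0; i + 1 < surf.uv.size(); i += 2) {
            packed.push_back(surf.uv[i]     * surf.scaleU + surf.offsetU);
            packed.push_back(surf.uv[i + 1] * surf.scaleV + surf.offsetV);
        }
    }

    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, packed.size() * sizeof(float),
                 packed.empty() ? 0 : &packed[0], GL_STATIC_DRAW);
    return vbo;
}
```

The offsets are applied once at load time, which is why this only works for static coordinates and costs nothing per frame.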

For example, if you have a shader that does the interaction of one light and a surface, you would only need to bind the light-specific textures. You could then batch all the surfaces that have static texture coordinates (but could have wildly different textures) and are touched by that light.

On the other hand, if GPUs ever become good at rendering small batches, then what I am talking about is not useful and virtual texture memory would only be used for large textures.

I would prefer this, but who knows where technology will go.

which will probably have the same problem in terms of creating stalls as binding the texture to begin with.

You get stalling when the texture needs to be loaded onto the card. If you use virtual texture memory, you read pages, so stalling occurs at the page level. The stalling problem is not removed or exacerbated; it is moved.

The point is to make it nicer for application developers.

I’m not impressed at all by everything i’ve read about it so far, and i think some of you are seeing complexity where it actually is pretty simple.

“Megatextures” sounds awfully like some marketing garbage to me. If you keep it technical, i think it’s basically a terrain engine (i don’t even think it’s as advanced as clipmaps - CLOD or geomipmapping are more likely) with a huge texture mapped on it.

Now, a 32k x 32k texture certainly doesn’t fit in memory. So what? You can easily cut the terrain into 32 x 32 seamless sections and apply a single 1k x 1k texture to each of the close sections. For farther sections, you can use lower resolution textures with a cache, just like mipmaps (except that in OpenGL, the full mipmap hierarchy fits in memory). Add some streaming and blending to avoid popping, and you’re done!
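Roughly, the per-section resolution selection could look like this (just a sketch; the distance thresholds and the requestSectionTexture() cache call are made up for illustration):

```cpp
#include <cmath>

// Hypothetical cache interface: returns immediately, streams in the background.
void requestSectionTexture(int sx, int sy, int resolution);

// Pick a texture resolution per terrain section from its distance to the camera.
void updateSectionTextures(float camX, float camY, float sectionSize /* world units */)
{
    for (int sy = 0; sy < 32; ++sy)
        for (int sx = 0; sx < 32; ++sx) {
            float cx = (sx + 0.5f) * sectionSize;
            float cy = (sy + 0.5f) * sectionSize;
            float d  = std::sqrt((cx - camX) * (cx - camX) + (cy - camY) * (cy - camY));

            int res = 1024;                       // full 1k x 1k up close
            if      (d > 8.0f * sectionSize) res = 128;
            else if (d > 4.0f * sectionSize) res = 256;
            else if (d > 2.0f * sectionSize) res = 512;

            requestSectionTexture(sx, sy, res);   // cache reloads/blends only on change
        }
}
```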

All in all, that’s probably no more than a few weeks of work for a good graphics programmer.

I’m not trying to minimize Carmack’s work - what i’m saying is that it’s not as innovative and new as some people would like you to believe.

Y.

You would not need to. The texture coordinates would tell you.
That makes no sense.

Without the texture unit having a base pointer to a texture, it has no way of converting normalized texture coordinates (0 to 1) into an address in memory. Giving the system this pointer is the principal function behind binding a texture.

Unless texture coordinates become virtual memory addresses (never going to happen), you still need to know which texture is being referred to.

I’m not trying to minimize Carmack’s work - what i’m saying is that it’s not as innovative and new as some people would like you to believe.
I’m sure people said the same thing about shaders: that if you need to do shader-type stuff, you can do it in software (possibly by writing to a texture), and it would only take a few weeks’ worth of programmer time too.

The point of it being in hardware is to make the process both automatic and more efficient. The algorithm you propose, for example, is not terribly efficient compared to a hardware-based one. The hardware algorithm (assuming the texture fits into main memory. Having it go to disc murders performance) can dynamically load only the necessary chunks of the data into video memory. The unneeded bits don’t even get touched. Your algorithm can’t tell the difference, so it uploads mip levels and texture regions that may never be needed.

There are a few things you should know:

Firstly, there are many ways to do this; it has been done for years on PCs.

Paging on demand sucks because the data can be huge, bigger than system memory and by the time texture requests demand the memory it’s way too late.

Paging in anticipation of demand, load management and reasonable fallbacks when you don’t have the best data are key.

One example of texture virtualization is to toroidally scroll a texture region and use vertex coordinate manipulation to adjust the virtual coordinates to the current toroidal window. The key is not to try to texture stuff outside your toroidal window at that resolution, and to leave an unused buffer region for active paging. Once you do this, other issues come to the fore, like MIP LOD selection and management. That too has been solved creatively:

http://patft.uspto.gov/netacgi/nph-Parse…ie&RS=IN/dorbie
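A bare-bones illustration of the toroidal update (not the patented scheme; the fetchVirtualTexels() source is hypothetical): when the window scrolls, only the newly exposed strip is uploaded, and it simply wraps around inside the resident texture.

```cpp
#include <GL/glew.h>

const int WIN = 1024;   // resident toroidal window size, in texels

// Hypothetical source: copies a rectangle of the huge virtual texture into dst (RGBA8).
void fetchVirtualTexels(int vx, int vy, int w, int h, unsigned char* dst);

// Scroll the window horizontally; the vertical case is analogous.
void scrollWindowX(GLuint tex, int oldOriginX, int newOriginX, int originY,
                   unsigned char* scratch)
{
    int dx = newOriginX - oldOriginX;
    if (dx == 0) return;

    // The strip that just became visible, in virtual-texture coordinates.
    int stripX = (dx > 0) ? oldOriginX + WIN : newOriginX;
    int stripW = (dx > 0) ? dx : -dx;

    fetchVirtualTexels(stripX, originY, stripW, WIN, scratch);

    // Where that strip lands inside the toroidal window (it wraps around).
    // The case where the strip itself straddles the window edge and needs
    // two uploads is omitted for brevity.
    int destX = ((stripX % WIN) + WIN) % WIN;

    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, destX, 0, stripW, WIN,
                    GL_RGBA, GL_UNSIGNED_BYTE, scratch);
    // Vertex/texgen coordinates are then offset so newOriginX maps onto destX.
}
```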

Ever seen the Google Earth client? That’s doing a primitive version over the web for a single texture that’s the size of the entire Earth, at a resolution of up to 3 inches, and it works pretty well considering.

There are other ways to virtualize textures:

http://patft.uspto.gov/netacgi/nph-Parse…ie&RS=IN/dorbie

Originally posted by Korval:
Unless texture coordinates become virtual memory addresses

That is what I was talking about.

It can happen. You just need a transformation step when you upload the texture coordinates to a VBO.

Dorbie responded before I could, but as he says, there is nothing new about this. There are quite a few GIS applications and algorithms which do the exact same thing, and they have existed for over 10 years now (my company’s product is that old and the technology existed back then…on 8MB graphics cards). Dorbie isn’t the only one with patents out there on this kind of tech; there are lots of them.

32K x 32K pixels in the GIS world is nothing to gloat over. I am looking at file dimensions of 259,823,625 x 129,926,368 pixels right this instant. Essentially every pixel is unique as well, and I can add decals, lighting, lightmaps, etc… to it just like you would in a regular gaming environment. Keep in mind that this is only the imagery; the terrain is of this high resolution as well. The toughest part isn’t actually rendering the data, I would say, but overcoming the bandwidth issues, especially when it comes to streaming it over the internet (which you will find plenty of patents on as well).

You can go through some of the commercial packages found here to see some of this tech : http://www.vterrain.org/Packages/Com/index.html
Or as Dorbie pointed out, check out Keyhole/Google Earth, or even TerraExplorer (voxel based…lots of patents on their tech).

In short, all I am saying is that I don’t want to see JC credited for this if it doesn’t provide a new method for visualizing large terrain datasets. If it comes out and proves to be something brand new, I will give him his due respect.

I am looking at file dimensions of 259,823,625 x 129,926,368 pixels right this instant.

I’ve got to say this. Daaaaaaaaaaamn. :eek:

-SirKnight

It can happen. You just need a transformation step when you upload the texture coordinates to a VBO.
No, it can’t. Memory addresses cannot be linearly interpolated across the surface of a polygon; real texture coordinates can. Plus, during this interpolation, you might switch from one mipmap level to another, which would require a huge jump from one memory address to another. And heaven help you should you be doing anisotropic filtering on that texture, as that will literally do multiple texture accesses all across memory.
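To make the mipmap jump concrete, here is a sketch of what the address computation looks like for a hypothetical plain linear layout (real hardware additionally swizzles and tiles the data, which only makes the mapping worse):

```cpp
#include <cstddef>

// Texel address for a simple, non-swizzled linear layout with the mip chain
// stored back to back. Note the per-mip base offset: crossing a mip boundary
// jumps the address by megabytes, which is why addresses cannot simply be
// interpolated the way 0-1 texture coordinates can.
size_t texelAddress(size_t base, int texWidth, int texHeight, int bytesPerTexel,
                    float u, float v, int mip)
{
    // Skip over all larger mip levels to find this level's base.
    size_t offset = 0;
    int w = texWidth, h = texHeight;
    for (int level = 0; level < mip; ++level) {
        offset += size_t(w) * h * bytesPerTexel;
        w = (w > 1) ? w / 2 : 1;
        h = (h > 1) ? h / 2 : 1;
    }

    int x = int(u * (w - 1));   // nearest-texel lookup, no filtering
    int y = int(v * (h - 1));
    return base + offset + (size_t(y) * w + x) * bytesPerTexel;
}
```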

Plus, IHVs aren’t going to give us addresses to virtual memory to begin with. It would expose us to a level of hardware that we have no business accessing. Things like texture swizzling and so forth, which are IHV-dependent, would have to be defined for us in order to convert texture coordinates to virtual addresses.

So yeah. This is not a technique for being able to batch more; that’s not what this is about.

In short, all I am saying is that I don’t want to see JC credited for this if it doesn’t provide a new method for visualizing large terrain datasets. If it comes out and proves to be something brand new, I will give him his due respect.
I think there’s kind of a misunderstanding here. Practical virtual texturing (hardware-based) isn’t really for the purpose of visualizing large terrain. It exists to allow the user to more freely use more texture memory than the card normally has, and to amortize much of the performance loss due to thrashing. Basically, it lets you use more/bigger textures without having nearly as much penalty as normal.

That’s its purpose, no matter what JC or anyone else wants to do with it.

I apologize if there was a misunderstanding. From the initial post, I thought this is what it was being targeted towards.

“where you’ve got a terrain model that has these enormous, like 32000 by 32000, textures going over them.”

yeah, it seems like what you want here is a really big texture atlas, only without the problems associated with them. the out-of-core stuff should be handled in software. the hardware could handle largish atlas objects that could be allocated and mapped dynamically by the application. if asked, i plan to be vague and elusive on the implementation details.

Originally posted by lxnyce:
You can go through some of the commercial packages found here to see some of this tech : http://www.vterrain.org/Packages/Com/index.html
Or as Dorbie pointed out, check out Keyhole/Google Earth, or even TerraeExplorer (voxel based…lots of patents on their tech).
There are open source solutions to large texture support/database paging too. Even a number of the commercial packages linked to above are based on open source database paging support underneath :slight_smile:

See http://www.openscenegraph.org - there’s a little how-to guide on how to generate the database; just do a search for osgdem.

The support currently implemented is based on paging geometry and textures together. In the future I would like to decouple the texture and geometry paging, so that we have something close to virtual textures.

I have to say that I really don’t think there is much value in using a virtual texture as a texture atlas. Fetching the required data from main memory, let alone from disk or over the web, incurs a big latency hit.

You have to hide this latency as much as possible by paging on the CPU in the background, incrementally downloading data to the GPU, and where possible using predictive knowledge about what data will be needed prior to it being used. All this requires high-level support for paging, not low-level OpenGL support.
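As a sketch of what that looks like on the application side (the Tile struct and the queue here are invented for the example; a real database pager is more involved), a worker thread reads tiles in the background and the render thread trickles them to the GPU a few per frame:

```cpp
#include <GL/glew.h>
#include <deque>
#include <mutex>
#include <vector>

struct Tile {                        // invented for the example
    GLuint texture;                  // destination texture object
    int x, y, width, height;         // placement inside that texture
    std::vector<unsigned char> rgba; // decoded texel data
};

std::deque<Tile> g_ready;            // filled by the background paging thread
std::mutex       g_readyLock;

// Called once per frame on the rendering thread. Uploading only a couple of
// tiles per frame spreads the transfer cost out instead of stalling one frame.
void uploadPendingTiles(int maxPerFrame = 2)
{
    for (int i = 0; i < maxPerFrame; ++i) {
        Tile tile;
        {
            std::lock_guard<std::mutex> lock(g_readyLock);
            if (g_ready.empty()) return;
            tile = g_ready.front();
            g_ready.pop_front();
        }
        glBindTexture(GL_TEXTURE_2D, tile.texture);
        glTexSubImage2D(GL_TEXTURE_2D, 0, tile.x, tile.y,
                        tile.width, tile.height,
                        GL_RGBA, GL_UNSIGNED_BYTE, tile.rgba.data());
    }
}
```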

Better hardware support for paging textures and geometry wouldn’t go amiss, but it’s never going to replace the high-level side of things; it’ll have to work in unison with it. There also isn’t any one specific bit of hardware you exercise, as paging stresses the whole CPU and even the network. It really isn’t just a case of a next-gen GPU solving all that’s required.

Things that would sure help out on the GPU side would be decompression (JPEG 2000 style) of imagery and geometry done on the GPU. I’ll take this over any virtual texture support.

Robert.

No, it can’t. Memory addresses cannot be linearly interpolated across the surface of a polygon; real texture coordinates can.
They can be interpolated easily if the virtual coordinates are 2D and sequential.

The more difficult thing is handling the texture matrix and vector addition. That alone might make it not worth it, because you would have to transform the coordinates back to the 0-1 range and back again.

Plus, during this interpolation, you might switch from one mipmap level to another,

Again, that is just more work for the hardware to find the proper memory location.

And heaven help you should you be doing anisotropic filtering on that texture, as that will literally do multiple texture accesses all across memory.
I don’t see your point here. That will happen even if you use a 0-1 texture range with virtual texture memory.

Plus, IHVs aren’t going to give us addresses to virtual memory to begin with. It would expose us to a level of hardware that we have no business accessing.
I don’t actually care about the address. The hardware can keep it in any format it wants. It can simply return an error if we attempt to read back the VBO of transformed texture coordinates.

Korval, I think we went over all the pros and cons of this technique. It has more cons (especially for the hardware), so unless this gives a substantial performance boost, I’d much prefer fast small batches.

So I will let it rest.

32K x 32K pixels in the GIS world is nothing to gloat over. I am looking at file dimensions of 259,823,625 x 129,926,368 pixels right this instant.
You mind if I ask what this dataset represents? Because even if you had the full Earth at 1 ft resolution (!!), it would still be a lot smaller than what you claim…

One example of texture virtualization is to toroidally scroll a texture region and use vertex coordinate manipulation to adjust the virtual coordinates to the current toroidal window. The key is not to try to texture stuff outside your toroidal window at that resolution, and to leave an unused buffer region for active paging. Once you do this, other issues come to the fore, like MIP LOD selection and management. That too has been solved creatively:
Any papers about these techniques, or do we have to learn to read this patent cr@p written in lawyer English? :wink:

Please don’t forget about other uses for virtual gfx card memory, too (huge geometry!).
I want to throw in that Carmack also stated that he wanted to abandon texture tiling (in the long term) by basically texturing every polygon with a unique artist-painted texture. That would of course result in, say, one big texture atlas per “room”. But since we only have 1-2 million pixels visible on screen, the actual referenced texture memory stays rather constant (some multiple of the framebuffer size, I guess). Wouldn’t a 256MB card be a rather sufficient “cache” for this type of usage pattern?
The same goes for the geometry data. You can create a huge, detailed world and put it partly (a few big chunks) or as a whole into the gfx card memory.
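A rough back-of-envelope check of that “multiple of the framebuffer” guess (the x4 slack factor for mip levels and overdraw is just an assumption):

```cpp
// Rough estimate only: how much unique texture data one frame actually samples.
const long long screenPixels   = 1600LL * 1200;   // ~1.9M visible pixels
const long long bytesPerTexel  = 4;               // RGBA8
const long long filteringSlack = 4;               // assumed: mip levels, overdraw, borders
const long long workingSetMB   =
    screenPixels * bytesPerTexel * filteringSlack / (1024 * 1024);
// ~29 MB touched per frame, comfortably inside a 256MB card, as long as the
// paging keeps up with how fast that working set changes between frames.
```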

259,823,625 x 129,926,368 pixels?

Even at 8-bits per pixel, that’s 31,439,531 Gigabytes of data. (Yes, 31 Million Gigabytes.)

Pull the other one.

he’s likely using geometry clipmaps, with on-the-fly decompression - it’s one of the big advantages of geometry clipmaps.
So the data is heavily compressed on disk.

he’s likely using geometry clipmaps, with on-the-fly decompression - it’s one of the big advantages of geometry clipmaps.
So the data is heavily compressed on disk.

Huh? At what compression ratio? 10000:1? :slight_smile: I’m sure it’s just a typo. Divide both dimensions by 1k, throw in 40:1 wavelet compression, and then it sounds reasonable.