Megatextures?

Andras, I believe they may be as low as 5cm. I haven’t done checks to see what the desired resolution at the 1ft range is, but it’s probably close to that number. The scene is composed of various high-resolution rasters. The resulting raster takes on the highest resolution from all the other rasters when it’s mosaicked.

This should also explain VikingCoder’s question. It’s damn near impossible to store all that data as one huge raster. Instead it’s composed of various high-resolution raster sources which get composed into one enormous raster at runtime.

As far as the renderer is concerned, though, it only knows about that one huge raster. The individual sizes range from 30k to 100k+, and there can be hundreds of them.
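To sketch the composition idea (a minimal Python illustration with made-up `Raster`/`Mosaic` names, not the actual system): each source raster covers some footprint at some resolution, and a point in the logical mosaic takes on the finest resolution of whatever sources cover it.

```python
# Sketch of a raster mosaic, assuming each source knows its bounds and
# resolution (meters per pixel). All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Raster:
    name: str
    xmin: float; ymin: float; xmax: float; ymax: float
    res: float  # meters per pixel; smaller = finer

class Mosaic:
    def __init__(self, sources):
        self.sources = sources

    def resolution_at(self, x, y):
        """The mosaic takes on the finest resolution of all sources covering (x, y)."""
        covering = [r for r in self.sources
                    if r.xmin <= x <= r.xmax and r.ymin <= y <= r.ymax]
        if not covering:
            return None
        return min(r.res for r in covering)

m = Mosaic([Raster("base", 0, 0, 1000, 1000, 1.0),
            Raster("city", 100, 100, 200, 200, 0.05)])
print(m.resolution_at(150, 150))  # 0.05 -- 5 cm inside the high-res inset
print(m.resolution_at(500, 500))  # 1.0  -- base layer only
```

The renderer only ever talks to `Mosaic`; the hundreds of individual sources stay behind that one interface.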

I was just writing about this today. Bigger textures aren’t really what’s needed; how is that going to solve the problem that most (all) games suffer from, that the closer you get to the wall the worse it looks? I’m looking outside at the moment, and the grass field I see doesn’t need a 32k x 32k unique texture. Visually you can achieve a 99% similar result with 4 detail textures and a blend map, but with either method it’s still going to look bad up close.
It would be nice if we had some sort of lossy compression built into the cards. DXT3/5 give a 4:1 ratio, but I want 40:1.
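For scale, here is the storage arithmetic for the 32k x 32k unique texture mentioned above (assuming 4 bytes/texel RGBA; just back-of-the-envelope numbers):

```python
# Back-of-the-envelope storage for a 32k x 32k texture at various
# compression ratios (4 bytes per texel RGBA assumed).
def size_mib(w, h, ratio=1, bytes_per_texel=4):
    return w * h * bytes_per_texel / ratio / 2**20

print(size_mib(32768, 32768))            # 4096.0 MiB uncompressed
print(size_mib(32768, 32768, ratio=4))   # 1024.0 MiB at a DXT-style 4:1
print(size_mib(32768, 32768, ratio=40))  # 102.4 MiB at the wished-for 40:1
```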

[b]The scene is composed of various high-resolution rasters. The resulting raster takes on the highest resolution from all the other rasters when it’s mosaicked.

This should also explain VikingCoder’s question. It’s damn near impossible to store all that data as one huge raster. Instead it’s composed of various high-resolution raster sources which get composed into one enormous raster at runtime.[/b]
Yeah, so this is “logical” resolution. I also have a renderer that can render images at millimeter/pixel and kilometer/pixel resolution composed in the same scene. This doesn’t make the runtime image resolution bazillion squared :slight_smile:

Bigger textures aren’t really what’s needed; how is that going to solve the problem that most (all) games suffer from, that the closer you get to the wall the worse it looks?
Bigger textures are, in fact, how you solve that problem. Or you drop your screen resolution (or just stop increasing it to ridiculous levels like 16x12 and higher).

Compression has nothing to do with the visual artifacts you are referring to. It is due solely to bilinear filtering of a texture that is, relative to the screen resolution, too small. The screen-space triangle is trying to pick out pixels in the texture that just aren’t there, so it uses bilinear filtering to make them up. And, while it’s better than point sampling, it’s not as good as having bigger textures.
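To make the “makes them up” part concrete, here is a minimal bilinear sample in Python (a sketch of the filtering math, not any driver’s implementation):

```python
# Bilinear filtering on a tiny 1-channel texture: sampling between texel
# centers invents values by weighted averaging of the 4 nearest texels.
def bilinear(tex, u, v):
    """Sample tex (list of rows) at continuous texel coordinates (u, v)."""
    x0, y0 = int(u), int(v)
    x1 = min(x0 + 1, len(tex[0]) - 1)
    y1 = min(y0 + 1, len(tex) - 1)
    fx, fy = u - x0, v - y0
    top = tex[y0][x0] * (1 - fx) + tex[y0][x1] * fx
    bot = tex[y1][x0] * (1 - fx) + tex[y1][x1] * fx
    return top * (1 - fy) + bot * fy

tex = [[0.0, 1.0],
       [0.0, 1.0]]
print(bilinear(tex, 0.5, 0.5))  # 0.5 -- an invented value; no such texel exists
```

When the screen-space footprint of a triangle covers many such in-between samples, the whole surface is built from these invented values, which is exactly the blur you see up close.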

If you increase the size, and the detail (assuming competent artists who know how to use a bigger texture), then you can add those details to the texture.

Originally posted by andras:
[quote]he’s likely using geometry clipmaps, with on-the-fly decompression - it’s one of the big advantages of geometry clipmaps.
So the data is heavily compressed on disk.

Huh? At what compression ratio? 10000:1? :slight_smile: I’m sure it’s just a typo. Divide both dimensions by 1k, throw in 40:1 wavelet compression, and then it sounds reasonable.
[/QUOTE]Thanks for doing the maths.
Well, with geoclipmaps you can procedurally add detail, so you could achieve these kinds of resolutions - although technically it’s not real data, only an interpolation with added noise. :wink:
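A toy illustration of “not real data, only an interpolation with added noise”: midpoint displacement in Python (a hypothetical sketch, not how any particular geoclipmap implementation refines its levels):

```python
# Refining terrain between real samples: midpoint interpolation plus a
# deterministic noise term whose amplitude halves at each level. A crude
# stand-in for fractal detail synthesis.
import hashlib

def noise(key, amplitude):
    """Deterministic pseudo-random offset in [-amplitude, amplitude]."""
    h = int(hashlib.md5(str(key).encode()).hexdigest(), 16)
    return amplitude * ((h % 2001) / 1000.0 - 1.0)

def refine(samples, levels, amplitude=1.0):
    """Each level doubles the sample count: midpoint + noise per segment."""
    for _ in range(levels):
        out = []
        for a, b in zip(samples, samples[1:]):
            out += [a, (a + b) / 2 + noise((a, b), amplitude)]
        out.append(samples[-1])
        samples, amplitude = out, amplitude / 2
    return samples

coarse = [0.0, 10.0, 5.0]      # the only "real" heightfield data
print(len(refine(coarse, 3)))  # 17 samples; everything new is invented detail
```

The endpoints survive unchanged; everything in between is plausible-looking noise, which is the point being made about “bazillion squared” resolutions.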

Bigger textures are, in fact, how you solve that problem. Or you drop your screen resolution (or just stop increasing it to ridiculous levels like 16x12 and higher).
That doesn’t solve it, as you know; it only reduces the distance at which it becomes apparent.
BTW, the lossy texture suggestion wasn’t my solution, but another thing I’d like added to graphics cards. Imagine having the ability to use 4096x4096 textures everywhere.
What I’m talking about is very CPU-intensive fractals etc. OK, it’s easy to do rocks/plants etc., but I believe even stuff like humans is doable. What is an arm? A piece of skin with thousands of similar hairs and wrinkles etc. on it. You don’t need to model each hair, just the one, plus give a range of distribution and a range of variability.
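That “one prototype plus a distribution plus variability” idea can be sketched in a few lines (Python; all names invented for illustration):

```python
# Scatter copies of one modeled prototype (e.g. a single hair) across a
# patch, each with its own random position, scale, and rotation.
import random

def scatter_instances(count, patch_size, scale_range, seed=42):
    """Generate (x, y, scale, rotation_deg) transforms for prototype copies."""
    rng = random.Random(seed)  # seeded, so the surface is reproducible
    lo, hi = scale_range
    return [(rng.uniform(0, patch_size), rng.uniform(0, patch_size),
             rng.uniform(lo, hi), rng.uniform(0, 360.0))
            for _ in range(count)]

# 5,000 hairs on a 10 cm patch, each 80-120% of the prototype's size:
hairs = scatter_instances(5000, patch_size=0.1, scale_range=(0.8, 1.2))
print(len(hairs))  # 5000 transforms, all derived from a single modeled hair
```

Storage is one mesh plus a seed and a few distribution parameters, instead of thousands of unique hairs.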

That doesn’t solve it, as you know; it only reduces the distance at which it becomes apparent.
And if you prevent the player from getting closer than this distance, the problem is solved.

What I’m talking about is very CPU-intensive fractals etc. OK, it’s easy to do rocks/plants etc., but I believe even stuff like humans is doable. What is an arm? A piece of skin with thousands of similar hairs and wrinkles etc. on it. You don’t need to model each hair, just the one, plus give a range of distribution and a range of variability.
But that doesn’t really get you to photorealism, as by the time you could even consider noticing things like hairs on skin, you’ve got more important things to worry about. Like the fact that the rock’s surface is perfectly flat, or (employing bump mapping or relief mapping) that its edges are flat. Or that they’re polygonal.

By the time a large texture is no longer capable of providing the appropriate details on a surface without visible bilinear filtering, other issues with that surface become more apparent. To the point of wanting to employ displacement mapping.