3D textures

I’ve never played with 3D textures. For an animated texture with 16 animation steps, is it possible to build an n×n×16 3D texture, and pass a value between 0 and 1 as the z texture coordinate to select the step of the animation?
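Something like this is what I have in mind, in OpenGL terms (just a rough sketch; N, t and the frames buffer are made-up names for illustration):

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_3D, tex);
    /* 16 keyframes of size N x N packed contiguously into one 3D texture */
    glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA8, N, N, 16,
                 0, GL_RGBA, GL_UNSIGNED_BYTE, frames);
    /* for an animation step t in [0,1], aim at texel centers so that t = 0
       hits the middle of slice 0 and t = 1 the middle of slice 15 */
    float z = (0.5f + t * 15.0f) / 16.0f;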

Would it give any speed or memory improvement compared to loading 16 separate 2D textures? I suppose it will blend smoothly between animation steps (mipmaps), but it will also lose animation steps when the texture is mipmapped too heavily (do anisotropic 3D textures exist?)…

The last use I’d like to make of these is to make lightmaps appear at nightfall (lamps, campfires, etc. being lit at sunset). I could build an n×n×2 3D texture, and pass a z coordinate depending on the time of day (0.0 during the day, 1.0 at night).
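Sketched the same way (timeOfNight is a made-up value in [0,1]):

    /* n x n x 2 texture: slice 0 = day, slice 1 = night.
       The slice centers sit at z = 0.25 and z = 0.75: */
    float z = (0.5f + timeOfNight) / 2.0f;
    glTexCoord3f(u, v, z);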

Does this make any sense to you?

Thanks,
SeskaPeel.

Yes, makes sense…

For the animation: what you’re actually after is the linear filtering between the “keyframes”. Mipmapping isn’t needed for that (though it’s usable… but then the mip levels will blend animation frames together, too).
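In GL terms, plain linear filtering on the 3D texture is enough; a z value between two slice centers already blends the two keyframes (a minimal sketch, no mipmaps involved):

    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    /* clamp z so the first and last keyframes don't blend into each other */
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);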

Anisotropic filtering is, like any filtering, independent of the texture format by default… (nobody mention float textures now, PLEASE!). Yes, you can sample anisotropically from a 3D texture, as far as I know…

The lightmap thing works too, of course.

OK, I simply hadn’t realized that I’d get keyframe interpolation from the classic linear filtering. That point is clear now.

But I still want mipmapping for the 2D part of the 3D texture (each keyframe of the animation has to be mipmapped). If I enable trilinear filtering, I’ll get a nicely mipmapped 2D part, but I’ll lose keyframes in my animation because the 3D (time) axis of the texture gets mipmapped too. How can I ask for filtering that leaves the third axis alone?
In plain English: I want the 2D slices sampled from the 3D texture to be mipmapped, without the third axis itself being mipmapped.
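To illustrate what worries me: each mip level of a 3D texture halves all three dimensions, slice count included. A quick sketch printing the chain for a hypothetical 256×256×16 animation:

    #include <stdio.h>
    int main(void) {
        int w = 256, h = 256, d = 16;
        for (int level = 0; ; ++level) {
            printf("level %d: %d x %d x %d\n", level, w, h, d);
            if (w == 1 && h == 1 && d == 1) break;
            if (w > 1) w /= 2;
            if (h > 1) h /= 2;
            if (d > 1) d /= 2;
        }
        /* by level 4 (16 x 16 x 1) the 16 keyframes have collapsed into one */
        return 0;
    }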

The reason I insist on this is the lightmap case. If a bit of the night lightmap is fetched during the day, because of unwanted mipmapping along the third axis, it will be a very nasty visual artifact.

And what about the speed or memory improvement?

That would be ripmapping then, I think… or anisotropic mipmapping, or whatever…

That’s not possible, AFAIK. It would be nice, yes…

But lightmaps normally don’t need mipmapping anyway, since they’re so low-res…

To minimize texture storage, I was actually thinking of a precomputed (diffuse × lightmap) texture, and that one definitely needs mipmapping.

The first slice of the 3D texture would contain the “day diffuse” (if we had two separate textures instead of a single 3D one, the lightmap texture would be completely white), and the second slice (only two slices, since this is an n×n×2 3D texture) would contain the “night diffuse” (with two separate textures, that would be the same diffuse texture as in the “day” slice, modulated by a black-and-white lightmap texture).
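Building the two slices would look roughly like this (a sketch; diffuse and lightmap are assumed to be N x N RGBA byte buffers, with the lightmap stored as grayscale):

    unsigned char slices[2][N * N * 4];
    for (int i = 0; i < N * N * 4; ++i) {
        slices[0][i] = diffuse[i];                        /* day: lightmap is all white */
        slices[1][i] = (diffuse[i] * lightmap[i]) / 255;  /* night: diffuse x lightmap  */
    }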

I could use a 2D texture for the diffuse and a 3D one for the lightmap, but it makes more sense to me to store all of this in a single 3D texture.

But this won’t work, because of mipmapping…

SeskaPeel.

As for the animated textures, those are meant to be normal maps, or maybe diffuse ones. Or anything, really.

So I might need to mipmap them too (that is, mipmap each slice of the 3D texture separately). So what happens with this 3D mipmapping issue? I’ll lose keyframes in my animation when the texture is filtered too heavily.

How is the anisotropy level computed for the third coordinate?

Originally posted by SeskaPeel:
To minimize texture storage, I was actually thinking of a precomputed (diffuse × lightmap) texture, and that one definitely needs mipmapping.

You don’t want to do that… Normally, diffuse textures are high-res but reused all over the meshes (the same texture on all the walls, for example), while the lightmap is low-res but unique on each triangle… Mixing the two creates a unique yet detailed texture on each triangle without much memory usage…

Think about it… why else would all the games do it that way?
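For reference, the usual setup looks roughly like this (a fixed-function multitexture sketch; diffuseTex and lightmapTex are placeholder ids):

    /* unit 0: high-res diffuse, tiled across the mesh */
    glActiveTexture(GL_TEXTURE0);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, diffuseTex);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

    /* unit 1: low-res lightmap, unique per surface, multiplied on top */
    glActiveTexture(GL_TEXTURE1);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, lightmapTex);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);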

Originally posted by SeskaPeel:
How is the anisotropy level computed for the third coordinate?

Aniso just depends on the surface normal, in eye space, at the pixel of the triangle… (more or less; I don’t know how they actually do it, but aniso matters depending on how a triangle faces you). It’s independent of the actual texture, and just generates several full sets of (4D) coordinates to sample from.

That’s the theory; I don’t know how it gets approximated in practice… but it’s actually independent of the texture…

Hmm, yes, I understood the anisotropic filtering from the NVIDIA white paper. I suppose it will work with 3D (animated) textures if the three vertices of a face all have the same z texcoord.

As for the diffuse texture stuff: this is actually for a non-repeating map, and the lightmap should be considered a “detail” map too, so it makes sense in my particular case.

SeskaPeel.