Rendering artifacts at big image texture boundaries

Hi there,

I’m fairly new to OpenGL rendering stuff, so this might be quite a basic thing to solve, or alternatively quite difficult. I’m not entirely sure, so it would certainly be useful to have the perspective of an expert!

For rendering large digital microscopy images, we need to split them up due to their huge size. Some of the 2D images are over 100000×100000 pixels, so even if the GPU has sufficient memory, they exceed the maximum texture size by a long way. For 3D volumetric images we can again exceed the maximum texture size, and also the GPU memory for multi-gigabyte images. The obvious thing to do is to split the images into a series of small tiles and render them as a set of aligned quads, which is our current approach. For 3D images, small cubes are used instead of tiles, though we haven’t finished this part yet; there we would use proxy geometry and/or ray casting to render the volume.

Within a single texture tile we get correct interpolation between pixels using GL_LINEAR for magnification and GL_LINEAR_MIPMAP_LINEAR for minification, and it looks just fine in both the 2D tile and 3D cubic texture cases. However, this all changes when we split into separate textures. With GL_CLAMP_TO_EDGE you get seams between quads: there is no interpolation across the boundary, unlike within each quad (bad UTF-8-art):

                 ┌────┐ ┌────┐
┌────────┐       │····│▒│····│▒      ┌────┬────┐
│········│▒      │····│▒│····│▒      │····│····│▒
│········│▒      │····│▒│····│▒      │ab··│····│▒
│········│▒      │····│▒│····│▒      │····│····│▒
│········│▒      └────┘▒└────┘▒      │····│····│▒
│········│▒  ─→   ▒▒▒▒▒▒ ▒▒▒▒▒▒  ─→  ├────┼────┤▒
│········│▒      ┌────┐ ┌────┐       │····│····│▒
│········│▒      │····│▒│····│▒      │···c│d···│▒
│········│▒      │····│▒│····│▒      │····│····│▒
└────────┘▒      │····│▒│····│▒      │····│····│▒
 ▒▒▒▒▒▒▒▒▒▒      │····│▒│····│▒      └────┴────┘▒
                 └────┘▒└────┘▒       ▒▒▒▒▒▒▒▒▒▒▒
                  ▒▒▒▒▒▒ ▒▒▒▒▒▒

Here, we have split an 8×8 texture into four 4×4 textures. When these are rendered as adjacent quads, samples a and b will be interpolated correctly. But there will be no interpolation between c and d, because the 2D sampler has no access to, or knowledge of, the bordering texture. The same applies to all shared faces of the cubes for 3D textures. What we really want is output that is pixel-for-pixel identical to the original case, where the texture was not split and the samples were adjacent in the same texture. In the 3D case we would also need to be able to ray cast through all the sub-cubes along the light path to do correct volume rendering; the cast ray can be restarted at the boundaries using proxy geometry, but we still have the same requirement that the 3D sampler samples correctly at texture boundaries and at different mipmap levels.
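To make the seam concrete, here is a 1D Python model of what I understand the sampler to do (my own sketch of the GL filtering rules, not GL code); the two tile-edge samples clamp instead of blending:

```python
import math

def sample_linear_clamp(tex, u):
    """Model of GL_LINEAR + GL_CLAMP_TO_EDGE sampling of a 1D texture.
    Texel centres sit at (i + 0.5) / w, following the GL sampling rules."""
    w = len(tex)
    x = u * w - 0.5                 # continuous texel-space position
    i0 = math.floor(x)
    t = x - i0                      # blend weight between texels i0 and i0+1
    lo = min(max(i0, 0), w - 1)     # CLAMP_TO_EDGE on both fetches
    hi = min(max(i0 + 1, 0), w - 1)
    return tex[lo] * (1 - t) + tex[hi] * t

full = [0, 10, 20, 30, 40, 50, 60, 70]   # one 8-texel row
left, right = full[:4], full[4:]         # split into two 4-texel tiles

# Sampling exactly on the seam (halfway between texels 3 and 4):
print(sample_linear_clamp(full, 0.5))    # 35.0 -- correct blend of 30 and 40
print(sample_linear_clamp(left, 1.0))    # 30.0 -- clamped, no blend (sample c)
print(sample_linear_clamp(right, 0.0))   # 40.0 -- clamped, no blend (sample d)
```

The unsplit texture blends 30 and 40 into 35, while the two tiles each return their own edge texel unblended, which is exactly the seam.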

It also gets more complex when mipmaps are considered, since they will also be split (and, if computed independently for each tile, will differ from the originals too).
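To be concrete about what worries me, in a 1D Python sketch (my own model, not GL code): with power-of-two-aligned tiles the per-tile mip texels happen to agree with the full image’s, but the seam repeats at every mip level, and the coarsest levels cannot exist per tile at all:

```python
import math

def downsample(tex):
    # 2:1 box filter -- glGenerateMipmap-style reduction, 1D for simplicity
    return [(tex[i] + tex[i + 1]) / 2 for i in range(0, len(tex), 2)]

def sample_linear_clamp(tex, u):
    # model of GL_LINEAR + GL_CLAMP_TO_EDGE; texel centres at (i + 0.5) / w
    w = len(tex)
    x = u * w - 0.5
    i0 = math.floor(x)
    t = x - i0
    return tex[min(max(i0, 0), w - 1)] * (1 - t) + tex[min(max(i0 + 1, 0), w - 1)] * t

full = [0, 10, 20, 30, 40, 50, 60, 70]
mip1 = downsample(full)          # [5.0, 25.0, 45.0, 65.0]
left1 = downsample(full[:4])     # [5.0, 25.0]  -- texels agree with mip1 here,
right1 = downsample(full[4:])    # [45.0, 65.0]    because the tiles are aligned

# ...but the seam is still wrong at this level:
print(sample_linear_clamp(mip1, 0.5))    # 35.0 -- correct
print(sample_linear_clamp(left1, 1.0))   # 25.0 -- clamped
print(sample_linear_clamp(right1, 0.0))  # 45.0 -- clamped

# ...and the coarsest level (the global average) has no per-tile equivalent:
print(downsample(downsample(mip1)))      # [35.0]
```

And once tiles carry border texels copied from their neighbours, their independently generated mips would differ texel-for-texel from the original’s as well.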

I’m not sure if this is a common problem for other OpenGL users; the artifacts are probably not visually noticeable, but since this is for scientific visualisation, accuracy is a primary concern. I’ve read a few papers that use OpenGL to split large volumes into smaller subsampled cubes, but they don’t appear to acknowledge or address the issue above.

If it’s possible to do more accurate sampling in the fragment shader using multiple textures, that’s something we could look at, but from my limited knowledge of samplers, the lookups (including mipmap interpolation) are done in hardware by the texture unit. If we could make that sample and interpolate between adjacent textures, that might be a solution. But maybe I’m not on the right track and there’s an even simpler solution.

If you have any insight into what we could do here, it would be very much appreciated!


I am not an expert here but I have read a few things that might help. Some of the issues are also common to texture atlases (ie the opposite: packing several images into one texture). One article I read from NVIDIA gave a formula for the correct u range for a whole image: not 0–1 but 0.5/texture_width_in_pixels to 1 − 0.5/texture_width_in_pixels, and likewise for v. You can also sample across the boundary by expanding each texture to include some texels from the surrounding textures and altering your uv coordinates to reflect this.
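Roughly, in a 1D Python sketch (`tile_uv` is a hypothetical helper of mine, and the half-texel rule is just texel centres sitting at (i + 0.5)/width):

```python
import math

def sample_linear_clamp(tex, u):
    # model of GL_LINEAR + GL_CLAMP_TO_EDGE; texel centres at (i + 0.5) / w,
    # which is where the 0.5/width half-texel inset comes from
    w = len(tex)
    x = u * w - 0.5
    i0 = math.floor(x)
    t = x - i0
    return tex[min(max(i0, 0), w - 1)] * (1 - t) + tex[min(max(i0 + 1, 0), w - 1)] * t

def tile_uv(u_full, full_w, tile_start, padded_w, apron_lo):
    # hypothetical helper: map a full-image u onto a tile that was padded with
    # apron_lo extra texels on its low side (copied from the neighbour)
    x_full = u_full * full_w                     # full-image texel units
    return (x_full - tile_start + apron_lo) / padded_w

full = [0, 10, 20, 30, 40, 50, 60, 70]
padded_left = full[0:5]   # left 4-texel tile plus 1 apron texel from the right tile

u = 0.5                   # the seam between the two tiles
print(sample_linear_clamp(full, u))                              # 35.0
print(sample_linear_clamp(padded_left, tile_uv(u, 8, 0, 5, 0)))  # 35.0 -- matches
```

With the apron texel present, the padded tile reproduces the unsplit result exactly at the seam, at the finest level of detail anyway.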

This is the white paper

Thanks for the suggestion and the reference, which I’ve read through. While I can see that expanding the texture size and adjusting the uv coordinates will work at the highest level of detail, I’m unsure it will work when mipmaps are used for lower levels of detail, because mipmaps generated independently per tile won’t be identical to those of the original large texture, and this could well result in sampling artifacts.


Mipmaps are always a problem: to guarantee correct sampling at every level, the border would need to grow to the size of the surrounding textures. Since this is not usually practical, some level of artifacts must be expected. Of course, you can have all nine textures (the tile and its eight neighbours) bound and do your own sampling in the fragment shader. It will be slower, but it will solve your problem.
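A 1D CPU sketch of that manual sampling (`fetch` and `sample_across_tiles` are illustrative names of mine; in GLSL you would bind the neighbouring tiles, fetch individual texels with texelFetch, and handle mip levels yourself with explicit LOD):

```python
import math

def fetch(tiles, tile_w, i):
    # texelFetch analogue across the whole tile set, clamped at the image edge
    i = min(max(i, 0), len(tiles) * tile_w - 1)
    return tiles[i // tile_w][i % tile_w]

def sample_across_tiles(tiles, tile_w, u):
    """Manual GL_LINEAR: fetch the two contributing texels yourself, even when
    they live in different tiles, then blend them in the shader."""
    full_w = len(tiles) * tile_w
    x = u * full_w - 0.5
    i0 = math.floor(x)
    t = x - i0
    return fetch(tiles, tile_w, i0) * (1 - t) + fetch(tiles, tile_w, i0 + 1) * t

tiles = [[0, 10, 20, 30], [40, 50, 60, 70]]
print(sample_across_tiles(tiles, 4, 0.5))   # 35.0 -- identical to the unsplit texture
```

Because each texel is fetched individually, the blend at the seam crosses tile boundaries and reproduces the unsplit result exactly; the 2D/3D versions are the same idea with 4 or 8 fetches per sample.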