That explains a lot.

So you have a field of sites, defined in double-precision math. And you want to construct a Voronoi diagram from them. At an arbitrary resolution.

I will assume that your distance mesh computations are already algorithmic. That is, for a specific resolution, you can generate a distance mesh whose meshing error is at most one pixel in size.

Numerical precision problems can theoretically appear in one of two ways. They can appear in the clip-space X,Y positions or they can appear in the depth buffer Z positions. Remember: up until clip-space, you can work in whatever precision you want.

There is no possible way that you can have precision problems due to the clip-space X and Y positions. This is simply because of the maximum rendering resolution. I think most GL 3.x class hardware has a maximum viewport size of 8192x8192. 4.x class hardware may go up to 16384x16384, but I’m not sure of that.

What I am sure of is that even at 16k x 16k, you have plenty of numerical precision in the clip-space values on the range [-1, 1]. A 32-bit float gives you 24 bits of precision, 14 of which are used to find the pixel. That leaves 10 bits of sub-pixel precision, which ought to be plenty. Since you can’t actually see sub-pixels (unless you turn on multisampling, which I’m pretty sure is a bad idea for your needs), any imprecision there will make little difference overall.
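The arithmetic above can be sketched in a few lines (Python here just for illustration; the 24-bit mantissa and 16384-pixel viewport are the assumptions stated above):

```python
import math

# Assumed: a 32-bit float mantissa gives 24 significant bits, and the
# worst-case viewport is 16384 pixels across, per the figures above.
MANTISSA_BITS = 24

def subpixel_bits(viewport_width):
    """Bits of sub-pixel precision left after addressing the pixel."""
    pixel_bits = int(math.log2(viewport_width))  # 14 bits for 16384
    return MANTISSA_BITS - pixel_bits

print(subpixel_bits(16384))  # 10 bits of sub-pixel precision
```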

Remember: your algorithm will only ever be as precise (for a single rendering) as the resolution you use to render with, plus or minus one pixel.

Then there’s depth precision. Now that I understand your problem set, I can say that there are many, many games you can play to get an accurate image at an arbitrary resolution. The most powerful of these I will describe below.

Now, we will assume that the computations for the 32-bit clip-space positions are all done using double-precision math. So what matters now is getting the most out of the available depth precision.

The whole thing comes down to depth range. For any particular view of the image, there is a minimum and maximum distance that matters. To get the most out of your 24-bit depth buffer, you need the depth range in your glOrtho call (or whatever other call you use to construct your orthographic projection matrix) to be as close as possible to these minimum and maximum distances, because it is only the values within this range that can potentially conflict.
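To see why tightening the range matters, note that an orthographic projection maps [near, far] linearly onto the depth buffer’s 2^24 steps, so the smallest resolvable distance difference is (far - near) / 2^24. A small sketch (the 0..1000 and 400..410 ranges are hypothetical numbers, not from your data):

```python
DEPTH_BITS = 24
STEPS = 2 ** DEPTH_BITS

def depth_resolution(near, far):
    """Smallest distance difference a 24-bit depth buffer can resolve
    under an orthographic projection spanning [near, far]."""
    return (far - near) / STEPS

# Hypothetical numbers: the whole field spans distances 0..1000,
# but one chunk's distances all fall within 400..410.
wide = depth_resolution(0.0, 1000.0)
tight = depth_resolution(400.0, 410.0)
print(wide / tight)  # the tightened range resolves 100x finer
```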

What you want to do is adaptive rendering.

First, render the entire field at a relatively small resolution. Read that back to the CPU.

Then, break the field up into chunks and render each of those chunks. When rendering each chunk, use your low-resolution depth map to find the minimum and maximum depth values within that chunk. Bias them a bit, just to make sure you don’t clip anything by accident.

You can repeat this process ad nauseam until you achieve the desired level of precision. You can even analyze the depth buffer values to determine where there may be depth precision problems and investigate only those areas.
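The per-chunk depth-range pass above can be sketched on the CPU side like so. Plain Python lists stand in for the low-resolution depth readback, and the function name, chunk layout, and default bias are all illustrative, not part of any API:

```python
def chunk_depth_ranges(depth, chunks_x, chunks_y, bias=0.01):
    """For each chunk of a low-res depth grid, return a (near, far) pair,
    padded by `bias` so the tightened projection clips nothing by accident."""
    h, w = len(depth), len(depth[0])
    ch, cw = h // chunks_y, w // chunks_x
    ranges = []
    for cy in range(chunks_y):
        for cx in range(chunks_x):
            vals = [depth[y][x]
                    for y in range(cy * ch, (cy + 1) * ch)
                    for x in range(cx * cw, (cx + 1) * cw)]
            ranges.append((min(vals) - bias, max(vals) + bias))
    return ranges

# Each (near, far) pair would then feed the orthographic projection
# (e.g. glOrtho) when re-rendering that chunk at full resolution.
```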