Need help understanding "dithering" better

So I want to implement dithering as a way to blend between my cascade shadow map splits.
I had no idea what dithering was, so I watched this video to try to understand it. As far as I understand, it's a way to map an image's colors to a limited palette while trying to maintain a convincing gradient between differently colored pixels.

Now, from this video I understand how to calculate what color my eye will see based on the weights of the dithering pattern. What I do not understand is how we take an image with 4 bytes of data per pixel and map it to, for example, 1 byte per pixel. How can we map each pixel color in the original image to a dither pattern whose weighted average will look like the original color, if we're basically limited? Say we were limited to only 5 colors: I'm guessing not every original pixel color can be produced as a weighted average of a dither pattern over these 5 palette colors, so how can this be achieved? Also, is a dither pattern calculated for each pixel to achieve a dithered image?

Besides these general questions about image dithering, I'm still having difficulty understanding how this technique helps us blend between cascade splits. As for actually implementing it in code, I've seen an example that uses the screen-space coordinates of a fragment to calculate a dither value (I'm not sure what it's actually calculating, because it doesn't return a matrix, it returns a float :confused:):

float GetDither2(ivec2 p)
{
	float d = 0.0;
	
	if((p.x & 1) != (p.y & 1))
		d += 2.0;
	if((p.y & 1) == 1)
		d += 1.0;
	
	d *= 0.25;
	
	return d;
}

float GetDither4(ivec2 p)
{
	float d = GetDither2(p);
	d = d * 0.25 + GetDither2(p >> 1);
	return d;
}

float threshold = GetDither4(ivec2(gl_FragCoord.xy));

if(factor <= threshold)
{
	// sample current cascade
}
else
{
	// sample next cascade
}

And then it samples one cascade map or the other based on this returned float.
My brain can't translate what I learned (that a dither pattern can simulate a larger color palette) into this example, which uses the returned float as a threshold and compares it to some blend factor just to decide which shadow map to sample from. So it made me more confused.

Would appreciate a good explanation of this :pray:

Ok, I see the correlation between the algorithm I was given and the Wikipedia article about ordered dithering, which as far as I understand is the preferred dithering algorithm, because according to the article:

Additionally, because the location of the dithering patterns always stays the same relative to the display frame, it is less prone to jitter than error-diffusion methods, making it suitable for animations.

Now I see the code tries to get this threshold value for a given screen coordinate, although it seems to me it got it a bit wrong, because the article calculates the threshold as follows:
Mpre(i,j) = (Mint(i,j)+1) / n^2

And it needs to set float d = 1.0 instead of float d = 0.0, if I'm not mistaken.
Secondly, I'm not sure about right-shifting the ivec2 screen coordinate (I'm not even sure what the behavior of a bitwise shift on a vector is in GLSL...), but I assume it's a component-wise operation. I tried plugging in (calculating by hand) the screen coordinate (2,1), and (according to my assumption about the bitwise operation) I got a different threshold result from what should be the threshold value at this position in a 4x4 Bayer matrix.

So I'm skeptical about how well this code implements the ordered dithering algorithm.

Thirdly, I'm still not sure what this threshold value has to do with choosing between shadow map 1 or 2, as opposed to just reducing the color palette of a given pixel. This logic hasn't settled in my mind yet, as I don't understand how the dither threshold for a given screen coordinate is used to choose the right map to sample from.

Lastly, won't using screen coordinates cause jitter? Take a shadowed fragment at world position (x,y,z) whose screen coordinates for a given frame are (i,j). If the camera moves, won't this fragment's screen coordinates change, making the dither threshold calculated for it change with every movement and causing the dither pattern to jitter?

Probably the simplest technique is ordered dithering.

The key point is to ensure that the error (the difference between what colour you would like the pixel to be and the colour you actually use) is evenly distributed around zero.

5 colours, or 5 shades? For RGB, you need at least 8 colours, the 8 corners of the colour cube: black, red, green, blue, yellow, magenta, cyan, white. Dithering can only interpolate; it can’t produce any colour outside the convex hull of the available colours.

It’s the same concept as stipple alpha (where rather than interpolating between the background and foreground colours, you either plot the foreground pixel or not with a probability proportional to alpha). So rather than calculating shadow/light for both maps then blending the results according to some weighting, it chooses one or other of the maps with probability proportional to the weighting.

The idea is to avoid a hard edge between maps, transitioning from one map to the next gradually.

Hey, thanks for clearing that up. I've posted a new comment here with more questions I had about this, if you could take a look and help :pray: