# Creating a shader effect that emulates ink spreading

Hi everyone. I’m fairly new to GLSL. I have used it in the past to alter shaders that come stock with Processing and Max/MSP, but I haven’t really started a shader from scratch to accomplish a specific goal. I’ve been working through the Orange Book, picking apart some stuff at Shadertoy, etc. What I want to do now is create a 2D effect that emulates ink spreading across paper.

When wet ink is laid on paper, the color begins to spread outwards at uneven rates, creating an asymmetrical shape with some sort of capillary effect. I have been trying to break down the process behind this to start using glsl for it, and here is what I have roughly thought up:

Since the ink spreads at uneven rates, a random value could be applied to each fragment, representing the “mass” of the ink. From what I understand, this might be appropriate to do in the vertex shader, and then pass on to the fragment shader. Once that is calculated, the ink will begin to spread, which will mean each drop will spread outwards, with the “mass” decreasing with each degree of spread.

Does anyone have any advice for taking this on? Are there similar shaders that I could pick apart to understand this better? Are there any issues I will come across with the method I’ve described? As far as I can tell, I have a gap in my understanding of how the map of “mass” for each pixel can be used efficiently in the fragment shader.

Thank you all!

This seems like a cellular automaton, which can be implemented using either a fragment shader and render-to-texture or a compute shader with image stores.

Essentially, for each pair of adjacent cells, ink flows from the cell with more ink to the cell with less ink at a rate dependent upon the amounts of ink in both cells and the distance between them (one cell for orthogonally-adjacent cells, sqrt(2) cells for diagonally-adjacent cells).

The total change in ink mass for each cell is the amount which flows in from adjacent cells minus the amount which flows out to adjacent cells. The amount transferred at each step is the rate multiplied by the time step.

In order to conserve the total mass of ink, the increases must match the decreases, i.e. the rate of flow from cell A to cell B must always be the negative of the rate of flow from B to A.

A simple model would have the rate of flow proportional to the difference between two cells; in which case, the only parameter would be the time constant T (i.e. a difference in mass of D results in a flow rate of mass D per time T). More complex models might have random variations in flow rate between specific cells to model the “grain” of the paper, or a model where the relationship between difference and flow rate is non-linear.

A fragment shader might look like:

``````
#version 330

uniform sampler2D source;
uniform float T;    // time constant: larger T means slower flow

layout(location=0) out float result;

const float diag = 1.414213562373095;   // distance to a diagonal neighbour

// Flow into the centre cell (mass x0) from a neighbour (mass x1).
// F(x0,x1,d) == -F(x1,x0,d), so total mass is conserved.
float F(float x0, float x1, float dist)
{
    return (x1 - x0) / (T * dist);
}

void main()
{
    ivec2 ix = ivec2(gl_FragCoord.xy);

    // the centre cell and its eight neighbours
    float x11 = texelFetch(source, ix, 0).r;
    float x01 = texelFetchOffset(source, ix, 0, ivec2(-1, 0)).r;
    float x21 = texelFetchOffset(source, ix, 0, ivec2( 1, 0)).r;
    float x10 = texelFetchOffset(source, ix, 0, ivec2( 0,-1)).r;
    float x12 = texelFetchOffset(source, ix, 0, ivec2( 0, 1)).r;
    float x00 = texelFetchOffset(source, ix, 0, ivec2(-1,-1)).r;
    float x02 = texelFetchOffset(source, ix, 0, ivec2(-1, 1)).r;
    float x20 = texelFetchOffset(source, ix, 0, ivec2( 1,-1)).r;
    float x22 = texelFetchOffset(source, ix, 0, ivec2( 1, 1)).r;

    // net flow from each neighbour (a time step of 1 is folded into T)
    float d01 = F(x11, x01, 1.0);
    float d21 = F(x11, x21, 1.0);
    float d10 = F(x11, x10, 1.0);
    float d12 = F(x11, x12, 1.0);
    float d00 = F(x11, x00, diag);
    float d02 = F(x11, x02, diag);
    float d20 = F(x11, x20, diag);
    float d22 = F(x11, x22, diag);

    result = x11 + d01+d21+d10+d12 + d00+d02+d20+d22;
}

``````

One caveat: the results will be undefined if you sample beyond the edge of the texture, so you should either avoid rendering the boundary pixels, or replace texelFetchOffset() with e.g. textureOffset() (having first normalised the texture coordinates) and use the appropriate wrapping mode on the texture.

Hi, GClements -

This wasn’t my question/thread, but I thought I’d pipe up and say that was a good, thorough answer. You are definitely one of the more helpful individuals here.

I really appreciate the thorough answer, it is incredibly helpful (and now I get to read more about cellular automata!). There are a few things I don’t quite understand, which is really just down to the limited scope of what I’ve seen in GLSL. What is the function of:

``````
layout(location=0) out float result;
``````

Everything I’ve done so far in glsl has involved gl_FragColor at the end of main(), and I’m not quite sure how to apply the result. Also, I am currently testing this out using Processing, but can certainly switch to something else if that adds unnecessary constraints.

It declares an output variable of type float, and associates it with colour attachment 0.

If you’re using GLSL 1.2 or earlier, you should write to gl_FragColor instead of using a user-defined output variable.

Since GLSL 1.3, the preferred approach (and the required approach when using the core profile) is to declare fragment shader outputs explicitly rather than using gl_FragColor or gl_FragData. This allows the use of types other than vec4 for outputs (this is particularly important for outputs which are 32-bit integers, as conversion to and from floating-point will introduce rounding errors).

If you’re using GLSL 1.3 or later with the compatibility profile, you can use either approach.

[QUOTE=GClements;1283241]It declares an output variable of type float, and associates it with colour attachment 0.
If you’re using GLSL 1.2 or earlier, you should write to gl_FragColor instead of using a user-defined output variable.
[/QUOTE]

I’ve been digging around for the last few days and I still can’t seem to get it to display. One thing I don’t quite understand is how that float applies to the output color. Is it to be treated as a value that multiplies the incoming fragment color? If so, I am only getting white across the board. Part of this might even be related to some of what Processing is doing in its default vertex shader, but I can’t seem to get past it.

No, it’s the actual value written to the framebuffer.

So to adapt this for a color image, would I want to make these vec3?

Yes. But then the inputs (x00 etc) would also be vec3 rather than float.

If you wanted to accurately simulate the flow of ink, you might even use more than 4 components, i.e. multiple input textures and multiple colour attachments. In practice, the different constituent dyes flow at different rates, which can be used to separate them (a process known as chromatography).

Also, the textures should ideally use either floating-point or at least 16-bit normalised values. The concentration of ink where applied directly will typically be far higher than the concentrations arising from dispersion, even when the latter are clearly visible. Transmission of light varies exponentially with concentration. If you run the simulation with only 8 bits (256 levels), quantisation error will be significant, e.g. resulting in concentrations quickly being rounded to zero.

[QUOTE=GClements;1283287]Yes. But then the inputs (x00 etc) would also be vec3 rather than float.

If you wanted to accurately simulate the flow of ink, you might even use more than 4 components, i.e. multiple input textures and multiple colour attachments. In practice, the different constituent dyes flow at different rates, which can be used to separate them (a process known as chromatography).

Also, the textures should ideally use either floating-point or at least 16-bit normalised values. The concentration of ink where applied directly will typically be far higher than the concentrations arising from dispersion, even when the latter are clearly visible. Transmission of light varies exponentially with concentration. If you run the simulation with only 8 bits (256 levels), quantisation error will be significant, e.g. resulting in concentrations quickly being rounded to zero.[/QUOTE]

Thank you again GClements, I hope I haven’t been too much trouble. Is there any chance you have some insight into why this isn’t working? I made some tweaks, but I don’t think I changed how it works; I’ve just been trying everything I can to get this running, and it only shows a solid color (gray).

``````
#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif

uniform sampler2D source;
varying vec4 vertTexCoord;

uniform float T;
const float diag = 1.414213562373095;

vec3 F(vec3 x0, vec3 x1, float dist) {
    return (x1 - x0) / (T * dist);
}

void main() {
    vec2 ix  = vertTexCoord.st;

    vec2 c11 = vertTexCoord.st + vec2( 0.0, 0.0);
    vec2 c01 = vertTexCoord.st + vec2(-1.0, 0.0);
    vec2 c21 = vertTexCoord.st + vec2( 1.0, 0.0);
    vec2 c10 = vertTexCoord.st + vec2( 0.0,-1.0);
    vec2 c12 = vertTexCoord.st + vec2( 0.0, 1.0);
    vec2 c00 = vertTexCoord.st + vec2(-1.0,-1.0);
    vec2 c02 = vertTexCoord.st + vec2(-1.0, 1.0);
    vec2 c20 = vertTexCoord.st + vec2( 1.0,-1.0);
    vec2 c22 = vertTexCoord.st + vec2( 1.0, 1.0);

    vec3 x11 = texture2D(source, c11).rgb;
    vec3 x01 = texture2D(source, c01).rgb;
    vec3 x21 = texture2D(source, c21).rgb;
    vec3 x10 = texture2D(source, c10).rgb;
    vec3 x12 = texture2D(source, c12).rgb;
    vec3 x00 = texture2D(source, c00).rgb;
    vec3 x02 = texture2D(source, c02).rgb;
    vec3 x20 = texture2D(source, c20).rgb;
    vec3 x22 = texture2D(source, c22).rgb;

    vec3 d01 = F(x11,x01,1.0);
    vec3 d21 = F(x11,x21,1.0);
    vec3 d10 = F(x11,x10,1.0);
    vec3 d12 = F(x11,x12,1.0);
    vec3 d00 = F(x11,x00,diag);
    vec3 d02 = F(x11,x02,diag);
    vec3 d20 = F(x11,x20,diag);
    vec3 d22 = F(x11,x22,diag);

    vec3 result = (x11 + d01+d21+d10+d12 + d00+d02+d20+d22);

    vec3 col = texture2D(source, ix).rgb;
    gl_FragColor = vec4(col*result,1.0);
}

``````

You need to use an offset of one texel. If you’re modifying normalised texture coordinates, you need to divide the offset by the size of the texture.

This should be

``````
gl_FragColor = vec4(result,1.0);

``````

Limited texture precision is likely to be an issue on ES. You’re not even guaranteed to get 8 bits per component.

I’d suggest getting it working with desktop OpenGL and floating-point textures first.

[QUOTE=GClements;1283303]You need to use an offset of one texel. If you’re modifying normalised texture coordinates, you need to divide the offset by the size of the texture.
[/QUOTE]

I am pretty sure I’m not using normalized texture coordinates, so this should be fine, right? It is working better now, though: I switched over to a C++ framework I am more comfortable with, which helped me get floating-point textures working. It displays now, and there are very visible effects over time, but the effect seems to be contained within the boundaries of the original shapes (it just doesn’t flow outwards). It does fade in a very interesting way, though it seems to do so as time decreases rather than increases (the effect is noticeable as T decreases from about 12.0 to 0.0). Might the outward-movement issue be related to what you said about the texel offset?

texture2D(), texture() etc take normalised texture coordinates (unless you’re using a sampler2DRect). texelFetch() (and variants) take non-normalised coordinates (i.e. a value of 1 corresponds to one texel). Anything with an “offset” suffix uses non-normalised values for the offset regardless of whether the base coordinates are normalised.