# Difficulty Implementing Lloyd's Algorithm in WebGL

I’m trying to implement Lloyd’s Algorithm, as described here: https://www.mattkeeter.com/projects/swingline/, in WebGL.

I have the Voronoi diagram working, but I’m having trouble calculating the centroids of the Voronoi regions. Essentially, I use two fragment shaders to sum over all the pixels in the image and compute the weighted centroid of the pixels belonging to each cell, but the calculated centroid comes out slightly off and I can’t figure out why.
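For reference, this is the computation the two-pass GPU reduction is meant to reproduce, as a minimal CPU sketch. The `pixels` entries and their field names (`x`, `y`, `cellIndex`, `weight`) are illustrative, not from my actual code:

```javascript
// CPU reference for the per-cell weighted centroid, useful for checking
// the GPU result. Each pixel carries its Voronoi cell index and a weight.
function computeCentroid(pixels, cellIndex) {
  let sumX = 0, sumY = 0, sumW = 0;
  for (const p of pixels) {
    if (p.cellIndex !== cellIndex) continue;
    sumX += p.x * p.weight;
    sumY += p.y * p.weight;
    sumW += p.weight;
  }
  // Guard against empty cells (zero total weight)
  if (sumW === 0) return null;
  return { x: sumX / sumW, y: sumY / sumW };
}
```

Running this over the same Voronoi/image textures on the CPU is how I compare against the shader output.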

[ATTACH=CONFIG]1436[/ATTACH]
Here is the difference in centroids.

[ATTACH=CONFIG]1437[/ATTACH]
Convergence on GPU

[ATTACH=CONFIG]1438[/ATTACH]
Correct convergence on CPU algorithm.

```glsl
precision highp float;

uniform sampler2D imageSampler;
uniform sampler2D voronoiSampler;
uniform vec2 windowDimensions;

const float max_samples = 50000.0;

void main(void) {
    // Index of the Voronoi cell being searched for
    int thisIndex = int(gl_FragCoord.x);
    gl_FragColor = vec4(0.0, 0.0, 0.0, 0.0);

    for (float x = 0.0; x < max_samples; x++) {
        if (x >= windowDimensions.x) {
            break;
        }
        vec2 textureCoord = vec2((x + 0.5) / windowDimensions.x, gl_FragCoord.y / windowDimensions.y);
        vec4 voronoiTexel = texture2D(voronoiSampler, textureCoord);

        // Decode the 24-bit cell index packed into the R, G and B channels
        int currentVoronoiIndex = int(255.0 * (voronoiTexel.x + (voronoiTexel.y * 256.0) + (voronoiTexel.z * 65536.0)));

        if (currentVoronoiIndex == thisIndex) {
            vec4 imageTexel = texture2D(imageSampler, textureCoord);
            float weight = 1.0 - 0.299 * imageTexel.x - 0.587 * imageTexel.y - 0.114 * imageTexel.z;
            weight = 0.01 + weight * 0.99; // give a minimum weight to avoid division by zero
            weight = 1.0; // for debugging: with weight fixed at 1.0 the points should spread out evenly

            gl_FragColor.x += x * weight;
            gl_FragColor.y += (gl_FragCoord.y - 0.5) * weight;
            gl_FragColor.z += weight;
            gl_FragColor.w += 1.0;
        }
    }
}
```

Fragment shader for the intermediate texture (this is rendered into a floating-point texture).
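The index decode at the top of that loop assumes the Voronoi pass wrote each cell’s index as a 24-bit little-endian value split across the R, G and B bytes of an RGBA8 texture. A small sketch of that encode/decode pair (the encoder is my assumption about the Voronoi pass; the decoder mirrors the shader’s `255.0 * (r + g*256 + b*65536)` with normalized channels):

```javascript
// Split a 24-bit cell index across three byte channels (R, G, B).
function encodeIndexToRGB(index) {
  return [index & 0xff, (index >> 8) & 0xff, (index >> 16) & 0xff];
}

// Mirror of the shader's decode: channels arrive normalized as byte/255,
// then get rescaled by 255 and recombined. Round to absorb float error.
function decodeIndexFromRGB(r, g, b) {
  return Math.round(255 * (r / 255 + (g / 255) * 256 + (b / 255) * 65536));
}
```

If these two ever disagree (e.g. the Voronoi pass normalizes by 256 instead of 255), the summation shader silently attributes pixels to the wrong cell, so a round-trip check like this is worth keeping around.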

```glsl
precision highp float;

uniform sampler2D intermediateSampler;
uniform vec2 windowDimensions;
uniform float voronoiUpscaleConstant;

const float max_height = 100000.0;

void main(void) {
    float weight = 0.0;
    float count = 0.0;

    /* Sum the per-row partial sums for this cell */
    float ix = 0.0;
    float iy = 0.0;

    for (float y = 0.0; y < max_height; y++) {
        if (y >= windowDimensions.y) {
            break;
        }

        vec2 texCoord = vec2(gl_FragCoord.x / windowDimensions.x, (y + 0.5) / windowDimensions.y);
        vec4 imageTexel = texture2D(intermediateSampler, texCoord);

        ix += imageTexel.x;
        iy += imageTexel.y;
        weight += imageTexel.z;
        count += imageTexel.w;
    }
    ix /= weight;
    ix /= voronoiUpscaleConstant;
    iy /= weight;
    iy /= voronoiUpscaleConstant;
    weight /= count;

    /* Pack the centroid as 8.8 fixed point: integer parts of x/y in R/G,
       top 8 fractional bits in B/A */
    gl_FragColor = vec4(
        mod(ix, 256.0) / 255.0,
        mod(iy, 256.0) / 255.0,
        mod(ix * 256.0, 256.0) / 255.0,
        mod(iy * 256.0, 256.0) / 255.0
    );
}
```

Fragment shader whose output is read back by the CPU.

```javascript
const pixels = new Uint8Array(this.samples * 4);
this.gl.readPixels(0, 0, this.samples, 1, this.gl.RGBA, this.gl.UNSIGNED_BYTE, pixels);
for (let i = 0; i < this.samples; i++) {
    // Unpack the 8.8 fixed-point centroid written by the shader
    const extraBitsX = pixels[i * 4 + 2];
    const extraBitsY = pixels[i * 4 + 3];
    const centroidX = pixels[i * 4] + (extraBitsX / 256);
    const centroidY = pixels[i * 4 + 1] + (extraBitsY / 256);
    this.points[i] = { x: centroidX, y: centroidY, weight: 100 };
}
```

Code for updating the centroid locations.
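The packing scheme spanning those last two snippets is an 8.8 fixed-point format: one byte for the integer part, one byte for the top 8 fractional bits. A standalone sketch of the round trip (function names are mine, for illustration):

```javascript
// Pack a coordinate into two bytes: low 8 bits of the integer part,
// and the top 8 fractional bits. Matches mod(v, 256) and mod(v * 256, 256)
// in the shader, after the byte quantization of an RGBA8 render target.
function packFixed88(value) {
  const intPart = Math.floor(value) % 256;
  const fracPart = Math.floor((value % 1) * 256);
  return [intPart, fracPart];
}

// Mirror of the CPU-side decode: integer byte plus fractional byte / 256.
function unpackFixed88(intByte, fracByte) {
  return intByte + fracByte / 256;
}
```

The worst-case round-trip error is just under 1/256 of a pixel, so this packing alone shouldn’t account for a visibly wrong centroid, which is what pushed me to keep looking elsewhere.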

I thought the issue was pixel precision combined with the small step size, so I bit-packed the fragment shader’s (x, y) output, using 8 extra bits each for the fractional parts of x and y, but it still didn’t converge properly. Any help would be highly appreciated, since I’ve been stuck debugging this for a while.

I’m doing this as an open source project so the full code is available here: https://github.com/minimumcut/Flecks/blob/c00f23330f17d4068f2beae9647d2d98b2bc1d77/src/index.js

I figured this out myself: I wasn’t clearing the depth buffer for the Voronoi diagram after each iteration.