Perspective shader? (Idk what to call what I'm looking for)

I’ve done GLSL fragment-shader exploring in 2D with python-sfml, by aligning a rectangle’s position and texture coordinates to the view’s corners and drawing it with the shader applied: looking at the world through per-pixel glasses.
I’d like to try something similar in 3D, with OpenGL’s Python wrapper. I’d like to visualize the boundary between two centroids in the taxicab metric, by scanning outward along each pixel’s ray to find the point where the centroids are nearly equidistant. I aim to calculate the mesh of this boundary, and overlay it on the shader’s output to see if it’s correct.
I’m guessing a frag shader would be much faster and clearer than plotting random points with python-visual.
Any keywords, code snippets, urls, examples, etc. appreciated.

You want to use a fragment shader to draw a Voronoi diagram?

Typically, each invocation of the fragment shader would take the fragment’s position (either from an interpolated variable or derived from gl_FragCoord), then iterate over “nearby” centroids (i.e. all centroids close enough that the fragment might lie within their cell), calculating the taxicab distance for each one and recording which one is closest. The iteration can terminate when all remaining centroids are known to be farther away than the closest one found so far.
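As a CPU-side sketch of that inner loop (Python here for readability; `nearest_centroid` and friends are illustrative names, not from any library):

```python
# Taxicab (L1) distance between two points of any dimension.
def taxicab(p, q):
    return sum(abs(a - b) for a, b in zip(p, q))

# Mirror of the per-fragment loop: scan the candidate centroids and
# record the index and distance of the closest one.
def nearest_centroid(frag_pos, centroids):
    best_i, best_d = -1, float("inf")
    for i, c in enumerate(centroids):
        d = taxicab(frag_pos, c)
        if d < best_d:
            best_i, best_d = i, d
    return best_i, best_d
```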

In order to minimize the number of iterations (in particular, avoiding the need to iterate over all centroids), you’d need to use a spatial index for the centroids, e.g. a 2D array of vectors containing the centroids within a given grid square. One implementation is to store the centroid coordinates in a buffer texture or uniform array, and use a 2D RG16UI or RG32UI texture where the two components for each texel are the index of the first centroid and the number of centroids.
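Building such an index might look like this on the CPU: a flat reordered centroid list stands in for the buffer texture, and per-cell `(start, count)` pairs stand in for the RG16UI texels (a sketch, all names illustrative):

```python
# Bucket centroids by grid square, then flatten: per cell we emit
# (index of first centroid, number of centroids) into a parallel list,
# mirroring an RG16UI grid texture plus a coordinate buffer.
def build_grid_index(centroids, cell_size, grid_w, grid_h):
    cells = [[] for _ in range(grid_w * grid_h)]
    for i, (x, y) in enumerate(centroids):
        cx = min(int(x // cell_size), grid_w - 1)
        cy = min(int(y // cell_size), grid_h - 1)
        cells[cy * grid_w + cx].append(i)
    reordered, start_count = [], []
    for bucket in cells:
        start_count.append((len(reordered), len(bucket)))
        reordered.extend(centroids[i] for i in bucket)
    return reordered, start_count
```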

For a given fragment, you’d first look up the start/count pair for the grid square containing the fragment’s position and iterate over the corresponding centroids, then those for the surrounding grid squares, spiralling outward until the remaining grid squares are too far away to be viable.
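The outward search can be sketched on the CPU like so, assuming the `(start, count)` index layout described above and a conservative lower bound on the distance to each ring (illustrative names; a real shader would unroll or bound this differently):

```python
# Examine grid cells in growing square "rings" around the query point's
# cell; stop once even the closest possible point in the next ring is
# farther (in taxicab terms) than the best centroid found so far.
def query_nearest(p, cell_size, grid_w, grid_h, reordered, start_count):
    def taxicab(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])
    cx = min(int(p[0] // cell_size), grid_w - 1)
    cy = min(int(p[1] // cell_size), grid_h - 1)
    best_i, best_d = -1, float("inf")
    for r in range(max(grid_w, grid_h) + 1):
        # conservative lower bound on the distance to any cell in ring r
        if best_i >= 0 and (r - 1) * cell_size > best_d:
            break
        for gy in range(cy - r, cy + r + 1):
            for gx in range(cx - r, cx + r + 1):
                if max(abs(gx - cx), abs(gy - cy)) != r:
                    continue  # only the ring, not the filled square
                if not (0 <= gx < grid_w and 0 <= gy < grid_h):
                    continue
                start, count = start_count[gy * grid_w + gx]
                for k in range(start, start + count):
                    d = taxicab(p, reordered[k])
                    if d < best_d:
                        best_i, best_d = k, d
    return best_i, best_d
```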

Having said that, if you can determine a reasonable upper bound for the maximum “radius” of a centroid’s cell, it would probably be more efficient to just render a bounding polygon for each centroid, have the fragment shader write out the taxicab distance to that centroid into the depth buffer via gl_FragDepth, and rely upon depth testing to select the nearest value.
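To make that depth-test trick concrete, here is a CPU simulation of it (the per-pixel min plays the role of GL_LESS depth testing on gl_FragDepth; sizes and names are illustrative):

```python
# Each centroid "renders" a bounding square, writing its taxicab
# distance per pixel; keeping the minimum per pixel selects the
# nearest centroid, exactly as the depth test would.
def render_taxicab_depth(centroids, width, height, radius):
    INF = float("inf")
    depth = [[INF] * width for _ in range(height)]
    owner = [[-1] * width for _ in range(height)]
    for i, (cx, cy) in enumerate(centroids):
        x0, x1 = max(0, int(cx - radius)), min(width - 1, int(cx + radius))
        y0, y1 = max(0, int(cy - radius)), min(height - 1, int(cy + radius))
        for y in range(y0, y1 + 1):
            for x in range(x0, x1 + 1):
                d = abs(x - cx) + abs(y - cy)  # the shader's gl_FragDepth
                if d < depth[y][x]:            # the GL_LESS depth test
                    depth[y][x] = d
                    owner[y][x] = i
    return depth, owner
```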

Not trying to draw a voronoi diagram. Just the perpendicular plane between two centroids.
I want to hit-test along the length of each pixel’s ray into the world, from origin to farClip, zeroing in on where the border between centroids is, then using that distance for a simple black & white depth map to visualize.
The framework I need is much the same as if you were to render a sphere by providing the shader with position and radius, in world coordinates.
The goal is then to be able to do other draw calls, after the render, which align to what’s been rendered. I want to use this to proof the calculated taxi-metric perpendicular-plane mesh against the rendered version, with the final objective of creating a mesh for each centroid’s region.
The shader isn’t intended to render any mesh other than the minimal quad necessary for viewing the shader.
Roughly, the fragment code would be like so, in addition to whatever yields rayOrigin and rayDirection:

float mDist(vec3 p1, vec3 p2) {return abs(p1.x-p2.x)+abs(p1.y-p2.y)+abs(p1.z-p2.z);}

const int Pn = 2; // two centroids for now
uniform vec3 P[Pn]; // or equivalent
int Region(vec3 p) {
	float d, dMin=32767.0;
	int iMin=-1;
	for (int i=0;i<Pn;i++) {
		d= mDist(P[i],p);
		if (d<dMin) {
			dMin= d;
			iMin= i;
		}
	}
	return iMin;
}

uniform float farClip;
float DepthTest(vec3 rayOrigin, vec3 rayDirection) {
	float l, lMin=0.0, lMax=farClip;
	int c, c1= Region(rayOrigin);
	int c2= Region(rayOrigin+rayDirection*lMax);
	if (c1==c2) {
		return farClip; // ray does not cross the centroid-centroid perpendicular plane
	}
	// else, bisect to find where it does cross:
	while (lMax-lMin>1e-04) {
		l=(lMin+lMax)/2.0;
		c= Region(rayOrigin+rayDirection*l);
		if (c==c1)	lMin= l;
		else		lMax= l;
	}
	return l;
}

void main() {
	vec3 rayOrigin= __________ , rayDirection= __________ ;
	float depth= DepthTest(rayOrigin, rayDirection);
	gl_FragColor= vec4(vec3((farClip-depth)/farClip),1.0); // keeping it simple for now
}
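For proofing the shader’s output, a CPU-side Python mirror of the same bisection can serve as a reference (assumes two centroids; names are illustrative):

```python
# Taxicab (L1) distance, as in the shader's mDist.
def taxicab(p, q):
    return sum(abs(a - b) for a, b in zip(p, q))

# Bisect along the ray for the point where the two centroids are
# (nearly) equidistant; returns None if the ray never crosses the
# boundary, mirroring the GLSL DepthTest.
def boundary_depth(origin, direction, c1, c2, far_clip, eps=1e-4):
    def region(t):
        p = [o + d * t for o, d in zip(origin, direction)]
        return 0 if taxicab(p, c1) < taxicab(p, c2) else 1
    lo, hi = 0.0, far_clip
    if region(lo) == region(hi):
        return None  # ray stays within one centroid's region
    while hi - lo > eps:
        mid = (lo + hi) / 2.0
        if region(mid) == region(lo):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```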

As for moving a camera in 3D space, and rendering, I can probably piece stuff together over a couple of hours or days, but I’ll definitely not complain if someone would like to whip up some code for this task, which is likely simple for most of you :slight_smile:

Also, porting C OpenGL to Python OpenGL is generally verbatim, isn’t it? Guess I’ll find out

What I was looking to make was just the perpendicular plane between two centroids, and I’ve done that now. I’ve found a ray tracing fragment shader I could re-purpose, and now have a working renderer of a perpendicular plane, here (there’s a jpeg in there).
Still, I’ve got a problem with how to draw lines in the same coordinate system, over the render. The shader makes its own camera rotation from uniform-supplied mouse yaw/pitch, while the actual world view is unused. I’d like to reverse that, so, back to the original raycasting problem.
So, there’s no OpenGL example framework for this kind of world coordsys raycasting, by chance? (#easybuttonPlz)
I’d like to more quickly move beyond making this, to using this to make the 3d taxi mesh triangulation routines.

Went with frustum corner interpolation. Here’s the draw+trace overlay scheme: PyOpenGLRayOverlay
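For anyone following along, frustum-corner interpolation can be sketched like so: compute the four far-plane corner directions once from the field of view, then bilinearly interpolate them by the pixel’s normalized screen position (a sketch with illustrative names; camera looking down -z):

```python
import math

# Directions to the four corners of the far plane at z = -1,
# derived from the vertical field of view and aspect ratio.
def corner_rays(fov_y_deg, aspect):
    ty = math.tan(math.radians(fov_y_deg) / 2.0)
    tx = ty * aspect
    return {
        "bl": (-tx, -ty, -1.0), "br": (tx, -ty, -1.0),
        "tl": (-tx,  ty, -1.0), "tr": (tx,  ty, -1.0),
    }

# u, v in [0, 1]: bilinear interpolation of the corner directions,
# which is what passing the corners as vertex attributes on a
# full-screen quad gives you for free in the fragment shader.
def pixel_ray(corners, u, v):
    def lerp(a, b, t):
        return tuple(x + (y - x) * t for x, y in zip(a, b))
    bottom = lerp(corners["bl"], corners["br"], u)
    top = lerp(corners["tl"], corners["tr"], u)
    return lerp(bottom, top, v)
```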
looks like this image
Yay! Moving on now