Shaft Lighting System Issues

Howdy all, this is an extension of an old thread, but my query has evolved enough that I think it's best to just start a new one that can cover all the necessary ground.

My goal is to create a system of dynamic 2D shaft lighting, where, based on a given texture and angle, a shader draws a texture buffer to a second texture containing details of where to start/end light shafts in the r/g/b/a channels. This second texture is then drawn to a light surface using a second shader that specifies where light should be drawn.

Currently I have the two shaders made. The first, I believe, behaves as desired (first attachment: shd_lightShaftScan.txt), but the second shader does not (second attachment: shd_lightShaftDraw.txt).

Figure 1: The texture initially passed to the first shader. It is a blank square with the width and height of the diagonal cross-section of the original occlusion map; the occlusion map is then drawn onto it at a certain angle, in this case 300° (approx. 5.24 radians). Black signifies where a shaft can start, white indicates where a shaft can end. Grey is neutral.
[ATTACH=CONFIG]1683[/ATTACH]

Figure 2: The 1D shaft-scan texture. Alpha is set to one to increase readability (so yes, data for the second endpoint is missing) and it has been dilated vertically for easier reading. r = startpoint1, g = endpoint1, b = startpoint2 and (usually) a = endpoint2.
[ATTACH=CONFIG]1684[/ATTACH]

Figure 3: The drawn light texture.
[ATTACH=CONFIG]1685[/ATTACH]

As you can see, the final light surface (3) is not what it should be. However, to me at least, the 1D shaft scan (2) looks about as it should.

Background information for the above case: the resolution of the shaft scan is 1024, the step resolution is also 1024, and the angle is 300° (approx. 5.24 radians). This was done in OpenGL ES 2.0 using GMS2. I believe all data has been passed from the environment to OpenGL as intended, though I will continue to test this.

Any help would be great; I can also give more information if needed.

frag shader1: [ATTACH=CONFIG]1687[/ATTACH]
Takes the initial texture and scans horizontally in a while loop, storing values in the rgba channels of a 1D texture. It recognizes black as a location where a shaft starts and white as a location where it ends.
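For readers without the attachment, here is a minimal sketch of what a scan pass like that could look like. The uniform name `reso`, the thresholds, and the loop structure are my assumptions, not the actual shd_lightShaftScan code (note that strict GLSL ES 2.0 implementations may reject loops with non-constant bounds, though GMS2 targets are usually more lenient):

```glsl
varying vec2 v_vTexcoord;
uniform float reso; // assumed: horizontal resolution of the rotated occlusion map

void main()
{
    vec4 result = vec4(0.0); // r/g/b/a = start1/end1/start2/end2
    int pairIndex = 0;       // 0 = filling (r,g), 1 = filling (b,a)
    bool inShaft = false;
    float i = 0.0;
    while (i < reso)
    {
        float x = (i + 0.5) / reso; // sample the centre of texel i
        float c = texture2D(gm_BaseTexture, vec2(x, v_vTexcoord.y)).r;
        if (!inShaft && c < 0.25)       // black: a shaft can start here
        {
            if (pairIndex == 0) { result.r = x; } else { result.b = x; }
            inShaft = true;
        }
        else if (inShaft && c > 0.75)   // white: the shaft ends here
        {
            if (pairIndex == 0) { result.g = x; } else { result.a = x; }
            inShaft = false;
            pairIndex++;
            if (pairIndex > 1) { break; } // only two pairs fit in rgba
        }
        i += 1.0;
    }
    gl_FragColor = result;
}
```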

frag shader2: [ATTACH=CONFIG]1686[/ATTACH]
After some coordinate math, it pairs each pixel in the destination texture with a specific pixel in the 1D texture. Based on that pixel's data and its distance along the shaft, it decides whether or not the fragment lies in a shaft-approved region and draws accordingly.
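Again as a sketch only: a draw pass of this shape could invert the rotation and diagonal scaling, then test the fragment's distance against the recorded pairs. The uniform names (`angle` as (cos, sin), `reso`) mirror the unscan shader quoted later in this thread; the interval test itself is my interpretation, not the actual shd_lightShaftDraw code:

```glsl
varying vec2 v_vTexcoord;
uniform vec2 angle; // assumed: (cos(a), sin(a)) of the shaft direction
uniform vec2 reso;  // assumed: width/height of the light surface

void main()
{
    // rotate about the centre so shafts run along x'
    vec2 p = v_vTexcoord - 0.5;
    p = vec2(p.x * angle.x - p.y * angle.y,
             p.x * angle.y + p.y * angle.x);
    // shrink onto the diagonal-sized scan space, then translate back
    float diag = sqrt(reso.x * reso.x + reso.y * reso.y);
    p = vec2(p.x * reso.x, p.y * reso.y) / diag + 0.5;

    // p.y picks the scan row, p.x is the distance along the shaft
    vec4 s = texture2D(gm_BaseTexture, vec2(p.y, 0.0));

    // lit if the distance falls inside either recorded [start, end] pair
    bool lit = (p.x >= s.r && p.x <= s.g) || (p.x >= s.b && p.x <= s.a);
    gl_FragColor = vec4(vec3(lit ? 1.0 : 0.0), 1.0);
}
```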

Have you tried reversing the process, i.e. converting the 1D texture back to a 2D texture to see if it agrees with what you’d expect?

Well, I threw together this frag shader real quick:

varying vec2 v_vTexcoord;
varying vec4 v_vColour;

uniform float blurFactor;
uniform vec2 angle;
uniform vec2 reso;

void main()
{
		//translate
	vec2 coordPrime = v_vTexcoord-0.5;
		//rotate
	coordPrime = vec2(coordPrime.x*angle.x-coordPrime.y*angle.y,coordPrime.x*angle.y+coordPrime.y*angle.x);
		//scale
	float square = sqrt(reso.x*reso.x+reso.y*reso.y);
	coordPrime = vec2(coordPrime.x/(square/reso.x),coordPrime.y/(square/reso.y));
		//translate back
	coordPrime += 0.5;
	
		//grab individual data (can be removed)
	float stepVal = coordPrime.y;
	float dist = coordPrime.x;
	
	vec4 shaftSample = texture2D(gm_BaseTexture, vec2(stepVal,0.0));
	
	if (abs(shaftSample.r - dist) <= 0.1)
	{
		gl_FragColor = vec4(vec3(0.0), 1.0);
	}
	else if (abs(shaftSample.g - dist) <= 0.1)
	{
		gl_FragColor = vec4(vec3(1.0), 1.0);
	}
	else if (abs(shaftSample.b - dist) <= 0.1)
	{
		gl_FragColor = vec4(vec3(0.0), 1.0);
	}
	else if (abs(shaftSample.a - dist) <= 0.1)
	{
		gl_FragColor = vec4(vec3(1.0), 1.0);
	}
	else
	{
		gl_FragColor = vec4(vec3(0.5), 1.0);
	}
}

and it gives this result:
[ATTACH=CONFIG]1689[/ATTACH]

Clearly not what it should be returning. It should return something roughly similar to the original input, but un-rotated/un-zoomed.

I have to go to bed so I can't do it just yet, but I'll try editing the shader more / making a new one with no coordinate manipulation tomorrow.

edit: I forgot to revert the other shaders to the non-alpha-locked version; the new image is the same as the one attached but without the white bar on the right.

So a lot was wrong with my method in the last post, but I've fixed it up. Now I'm using a new source and focusing on an angle = 0 scenario, just to narrow the debugging a bit for now.

[ATTACH=CONFIG]1690[/ATTACH][ATTACH=CONFIG]1693[/ATTACH][ATTACH=CONFIG]1691[/ATTACH][ATTACH=CONFIG]1692[/ATTACH]
1: input occlusion texture.
2: resultant scan, still dilated vertically, but with no alpha adjustment.
3: unscan shader result; the drawn data is a little “wider” than it actually is in the 1D map.
4: blend of 1 and 3 for comparison

From the looks of things, rows where exactly two points of light/occlusion occur return data appropriately, but in cases of one point no data is recorded at all, and in cases of three the data is pushed to the far left (at the top and bottom sides of the large platform).
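That pattern is consistent with strict start/end testers. Purely as an illustration (the names sTester/eTester come from shd_shaftScan, but these bodies are guesses, not the real code):

```glsl
// Illustrative only: the real sTester/eTester logic in shd_shaftScan differs.
bool sTester(vec3 c) { return c.r < 0.25; } // black: a shaft can start here
bool eTester(vec3 c) { return c.r > 0.75; } // white: a shaft can end here

// If an end marker is only accepted after a start has been latched in the
// same rgba pair, then a row with a single marker never completes a pair
// (so nothing is written), and a third marker re-latches a start and drags
// the recorded pair toward the left edge of the row.
```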

edit: by tweaking the “eTester”/“sTester” variables and methods in shd_shaftScan, I can eliminate the one-shaft-case problem:
[ATTACH=CONFIG]1694[/ATTACH]