Shadow bugs that depend on camera orientation

For a project I am working on, I am trying to produce raytraced shadows: from each surface point I cast a ray towards the light to see if the point is in shadow. However, I am getting strange bugs where the orientation of the camera seems to change the shadows.


In this image you can see where the shadows are being produced; however, in this next image, where I look in a different direction,

the shadow on the model has changed. At first I thought this was an issue with how I reconstruct the world-space position from the depth buffer, but I couldn't find any problems there.

RayPayload DispatchShadowRay(in ivec2 Coordinate, in vec3 LightDirection)
{
    RayPayload Payload;
    Payload.Valid = true;
    Payload.InLight = true;

    // Sample the depth buffer at the centre of this pixel
    vec2 NormalizedCoord = (vec2(Coordinate) + vec2(0.5)) / vec2(imageSize(ColorBuffer));
    float DepthValue = texture(DepthBuffer, NormalizedCoord).r;

    // Far plane / sky: nothing to shadow
    if (DepthValue >= 1.0)
        return Payload;

    // Reconstruct the view-space position from NDC ([-1, 1] on all axes)
    vec4 ViewSpace = InverseProjection * vec4(NormalizedCoord * 2.0 - 1.0,
                                              DepthValue * 2.0 - 1.0, 1.0);
    ViewSpace /= ViewSpace.w;   // perspective divide
    vec4 WorldSpace = InverseView * ViewSpace;

    Payload.Origin = WorldSpace.xyz;
    Payload.Direction = normalize(-LightDirection);

    // Offset the origin along the ray to avoid self-intersection
    Payload.Origin += Payload.Direction * 0.001;

    TraceRay(Payload);

    return Payload;
}
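As a sanity check on the reconstruction logic above (not the actual shader), here is a NumPy round trip: project a known world-space point to window-space depth, then run it back through the same inverse-projection / inverse-view steps. The matrix conventions here (right-handed, OpenGL-style [-1, 1] NDC depth) are an assumption about the setup:

```python
import numpy as np

def perspective(fovy, aspect, zn, zf):
    """Standard OpenGL-style perspective projection matrix."""
    f = 1.0 / np.tan(fovy / 2.0)
    m = np.zeros((4, 4))
    m[0, 0] = f / aspect
    m[1, 1] = f
    m[2, 2] = (zf + zn) / (zn - zf)
    m[2, 3] = 2.0 * zf * zn / (zn - zf)
    m[3, 2] = -1.0
    return m

proj = perspective(np.radians(60.0), 16 / 9, 0.1, 100.0)
view = np.eye(4); view[2, 3] = -5.0          # camera 5 units back along +z

world = np.array([1.0, 2.0, -3.0, 1.0])      # arbitrary test point
clip = proj @ view @ world
ndc = clip / clip[3]
depth = ndc[2] * 0.5 + 0.5                   # what the depth buffer stores

# Shader-side reconstruction: NDC -> view space -> world space
view_h = np.linalg.inv(proj) @ np.array([ndc[0], ndc[1], depth * 2.0 - 1.0, 1.0])
view_h /= view_h[3]                          # perspective divide
recon = np.linalg.inv(view) @ view_h
print(recon[:3])                             # ~ [1, 2, -3]
```

In double precision this round trip recovers the point essentially exactly, which suggests the math itself is fine and the problem lies in the limited precision of the real depth buffer and 32-bit floats.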

I have been trying to use Nsight to debug this, but I haven't found it very useful.
My main questions are:

  • Are there any obvious bugs/mistakes that I may have missed or not checked?

  • How would I go about continuing to debug this problem? Are there any other tools that
    might provide more insight? I have tried RenderDoc, but it just crashes, so I haven't been
    able to use it.

I am rendering the fully scaled image here, and the results are interesting: if I orient the camera just right, you can see the colour artifacts (I am just setting the colour to the reconstructed position to test whether it is in shadow).


I am not sure what causes this either. It seems that in the fully scaled version, changes in orientation change the shadow only very slightly, and it usually just ends up looking like a visual artifact.

And case 2:

It seems that when I downscale, perform the calculations, and then upscale, the inaccurate fragments cover a larger portion of the screen after upscaling, so the same error looks like a massive difference.
Could these results be caused by poor floating-point accuracy, or by something else?
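One concrete way floating-point accuracy can bite here: in 32-bit floats the spacing between representable values (the "ulp") grows with magnitude, so adding a tiny fixed offset to a large world-space coordinate can round straight back to the original value and the offset silently does nothing. A quick check of that, assuming the engine stores positions as 32-bit floats:

```python
import numpy as np

# np.spacing(x) is the gap to the next representable float32 above x.
# Once that gap exceeds the offset's contribution, x + offset rounds back to x.
for coord in [1.0, 100.0, 10000.0]:
    x = np.float32(coord)
    moved = x + np.float32(0.0001)
    print(coord, np.spacing(x), moved != x)
```

At a coordinate around 10000 the ulp is roughly 0.001, so an offset of 0.0001 is lost entirely, while the same offset still works near the origin.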

The problem was Payload.Origin += Payload.Direction * 0.0001;. From what I understand, the offset was simply too small. When the camera moves, the data in the framebuffer shifts, which exposes different floating-point inaccuracies in the reconstructed positions, so the ray sometimes starts just inside the surface and self-shadows. The solution was to make this offset larger, 0.2. I am not 100% sure my explanation is correct, but it seems to have fixed the problem.
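This explanation can be modelled numerically. A rough sketch (assumptions: a 24-bit depth buffer, OpenGL-style projection, and NumPy standing in for the shader; not the engine's exact pipeline): quantizing the stored depth makes the reconstructed world position land slightly off the true surface, and the size of that error changes as the view matrix changes, which is why a fixed bias must be larger than the worst-case error:

```python
import numpy as np

def perspective(fovy, aspect, zn, zf):
    """Standard OpenGL-style perspective projection matrix."""
    f = 1.0 / np.tan(fovy / 2.0)
    m = np.zeros((4, 4))
    m[0, 0] = f / aspect
    m[1, 1] = f
    m[2, 2] = (zf + zn) / (zn - zf)
    m[2, 3] = 2.0 * zf * zn / (zn - zf)
    m[3, 2] = -1.0
    return m

def reconstruct(proj, view, world, bits=24):
    """Project, quantize depth as a depth buffer would, then reconstruct."""
    clip = proj @ view @ world
    ndc = clip / clip[3]
    d = ndc[2] * 0.5 + 0.5
    levels = 2**bits - 1
    d = round(d * levels) / levels           # depth-buffer quantization
    v = np.linalg.inv(proj) @ np.array([ndc[0], ndc[1], d * 2.0 - 1.0, 1.0])
    v /= v[3]
    return (np.linalg.inv(view) @ v)[:3]

proj = perspective(np.radians(60.0), 16 / 9, 0.1, 100.0)
world = np.array([0.0, 0.0, -30.0, 1.0])     # fixed point on a surface

errs = []
for cam_z in [5.0, 20.0, 60.0]:              # "moving the camera"
    view = np.eye(4); view[2, 3] = -cam_z
    err = np.linalg.norm(reconstruct(proj, view, world) - world[:3])
    errs.append(err)
    print(cam_z, err)
```

The reconstruction error varies with the camera position but stays well under 0.2 in this setup, which is consistent with the larger bias hiding the problem. (A common alternative to one big fixed bias is to scale the offset with distance, or to offset along the surface normal instead of the ray direction.)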