In order to fully appreciate the depth buffer, we first need to kernelize our sample kernels. Once these kernels have been occluded, we can buffer the transformed occlusion-space coordinates and achieve fully dynamic occlusion through our kernel-rotation-basis (KRB) matrix.
We can then multiply our little friend, the depth kernel, by the KRB matrix to transform from occlusion space back to normalized kernel space.
From here, it is easy to reconstruct the world space position by reversing the steps taken: we simply buffer our rekernelized sample depth multiplied by our kernel-space occlusion factor (this is the inverse of the original process used to derive the occlusion value).
Here is some GLSL code which explains the process in more detail:
///////////////////////////////////////////////////////////////////////////////////////////////
uniform sampler2D depthBuffer;
uniform mat3 invKernelMatrix;
uniform vec3 linearNormalBias;
uniform mat3 kernelizedNormalMatrix;
uniform mat3 kernelMatrix;
uniform vec3 depthSampleKernel[16];

varying vec2 v_vTexcoord;

// buffer3, buffer(), linearize(), occlude() and rekernelize() are assumed
// to be defined elsewhere in the shader.

float reconstructOccludedPosition( vec3 screenPos ){
    // Move from screen space into normalized kernel space
    vec3 nks = invKernelMatrix * screenPos + linearNormalBias;
    // Move from NKS to occlusion space
    vec3 occlusionSpace = kernelizedNormalMatrix * nks;
    // Buffer the kernel matrix, then linearize
    buffer3 bmk = linearize( buffer( kernelMatrix ) );
    // Finally, occlude the depth sample kernel against the buffered kernel matrix
    // and scale by the occlusion-space coordinate (z component, to keep the result scalar)
    float occlusion = occlude( depthSampleKernel, bmk ) * occlusionSpace.z;
    return occlusion;
}
void main(){
    // Sample the depth
    float depth = texture2D( depthBuffer, v_vTexcoord ).r;
    // Reconstruct the occluded position
    float occlusion = reconstructOccludedPosition( vec3( v_vTexcoord * 2.0 - 1.0, 1.0 ) );
    // Calculate the world space position by performing a normalizing sample on our
    // newly created buffer containing the reconstructed occlusion positions in
    // occlusion space.
    // If we multiply our occlusion factor by the depth, we can transform the
    // coordinates back into normalized kernel space.
    // We then rekernelize our kernel to get back to our original world space
    // position per sample.
    // buffer3 then takes every sample position within the kernel (we currently
    // have 16 samples) and normalizes the result.
    vec3 world_pos = normalize( buffer3( rekernelize( depth * occlusion ), 1.0 ) );
    // Finally, we can use this information to perform the range check that avoids
    // the haloing effect seen in SSAO implementations.
    if( world_pos.z < depth ){
        // The sample position is occluded by something, so return the occlusion factor
        gl_FragColor = vec4( vec3( occlusion ), 1.0 );
    } else {
        discard; // discard the fragment if the depth test fails
    }
}
//////////////////////////////////////////////////////////////
So once again, the process of reconstructing world space is as follows:

We take our screen space coordinate (the texture coordinate of a fullscreen quad, multiplied by 2 with 1 subtracted, to fit into the range -1 to 1).
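To make that remap concrete, here is a minimal sketch in Python (the function name is mine, not part of the shader): a fullscreen-quad texture coordinate in [0, 1] is mapped to [-1, 1].

```python
# Sketch of the uv -> screen-space remap described above:
# uv * 2 - 1 maps the [0, 1] texture range onto [-1, 1].
def texcoord_to_screen_space(u, v):
    """Map a texture coordinate in [0, 1] to screen space in [-1, 1]."""
    return (u * 2.0 - 1.0, v * 2.0 - 1.0)

print(texcoord_to_screen_space(0.5, 0.5))  # centre of the screen -> (0.0, 0.0)
print(texcoord_to_screen_space(0.0, 1.0))  # a corner -> (-1.0, 1.0)
```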

Using the screen space coordinate, we calculate our coordinate in normalized kernel space (NKS). This allows for a much more efficient method of sampling by performing a Parkin's swizzle transformation, with a 4.5x performance gain over conventional sampling methods.

We then need to move from NKS to occlusion space; this can easily be done by multiplying our NKS coordinate by the kernelized normal matrix.
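The NKS-to-OCS step is just a 3x3 matrix-vector multiply, as GLSL's `kernelizedNormalMatrix * nks` performs. A sketch in Python (row-major layout for clarity; the identity matrix stands in for the real kernelized normal matrix, which comes from the application):

```python
# Plain 3x3 matrix * vec3 multiply, illustrating the NKS -> occlusion-space
# transform. Matrix values here are placeholders, not real shader data.
def mat3_mul_vec3(m, v):
    return tuple(sum(m[r][c] * v[c] for c in range(3)) for r in range(3))

identity = ((1.0, 0.0, 0.0),
            (0.0, 1.0, 0.0),
            (0.0, 0.0, 1.0))
print(mat3_mul_vec3(identity, (0.25, -0.5, 1.0)))  # -> (0.25, -0.5, 1.0)
```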

Once in occlusion space (OCS), we need a sample buffer which we can apply to our NKS position. Generating the sample buffer is a simple operation: we first buffer the kernel matrix, which provides a basis for alternating a sample's position between its previous coordinate (NKS) and its new coordinate (OCS). By doing this, we can build occlusion values.
The buffer function simply generates a form of data structure in which each mat3 stored in the buffer3 is bilaterally linked with all the other mat3s. This allows us to make full use of hardware optimisation, something rarely done in SSAO implementations.
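As a rough illustration of that structure as described (each stored mat3 bilaterally linked with every other mat3), here is a Python sketch; the class name and representation are mine, and a mat3 is just a 3x3 tuple here:

```python
# Sketch of the buffer3 data structure the text describes: every stored
# mat3 holds two-way links to all the other stored mat3s.
class Buffer3:
    def __init__(self, mats):
        self.mats = list(mats)
        # each entry is linked (both ways) to every other entry
        self.links = {i: [j for j in range(len(self.mats)) if j != i]
                      for i in range(len(self.mats))}

identity = ((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0))
buf = Buffer3([identity, identity, identity])
print(buf.links[0])  # entry 0 is linked to entries 1 and 2 -> [1, 2]
```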

We then linearize the buffered kernel matrix; this removes the “travelling noise” problem often seen in SSAO implementations when the camera moves with low-resolution SSAO buffers.

Now that we have our buffered kernel matrix and our occlusion space position, we can calculate the occlusion value by using the occlude(…) function and passing in our depth sample kernel and our buffered kernel matrix. This newly generated result is then multiplied by our occlusion space coordinate. This gives us an occlusion value at that point.
At this stage, we should have an approximate SSAO effect; however, you will notice that certain areas cast occlusion onto occluders which are far away. We also suffer from a new problem called inverse occlusion, in which convex surfaces begin to darken along crease lines. This is to be expected, as our occlusion process only takes into account interactions between points on geometry; the erroneous values are caused by our continuous inverse transformations.
The good news is that there is an easy way to fix this. As you may have clocked, we need the world space position for this. We can construct the world position using the following process (i.e. working back through our original steps, except this time, as the value was buffered on the input, this will return the specific occlusion point we want in world space):
 normalize( buffer3( rekernelize( depth * occlusion ), 1.0 ) );
As you can see from this code, we simply rekernelize the product of depth and occlusion factor, then buffer the result, and normalize.
Using this world space position, we can perform a simple depth test to ensure that only points occluded by geometry closer to the camera contribute occlusion.
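That final test is the branch at the end of the shader's main(): keep the occlusion factor only when the reconstructed position lies closer than the stored depth, otherwise discard. A sketch in Python (function name is mine; 0.0 stands in for the discarded fragment):

```python
# Sketch of the final depth/range test: a sample contributes its occlusion
# factor only when its reconstructed z is in front of the stored depth.
def apply_depth_test(world_z, depth, occlusion):
    return occlusion if world_z < depth else 0.0

print(apply_depth_test(0.4, 0.5, 0.8))  # sample is occluded -> 0.8
print(apply_depth_test(0.6, 0.5, 0.8))  # sample fails the test -> 0.0
```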
Hope this helps!