If you want to blur it, render depth only in the first pass (much faster, because no fragment shader is required). Then use two render targets (each with two 16-bit float channels). In the first blur pass, read the depth texture, compute d and d² from the Z value, and blur only in the x direction. In the second blur pass, render to the second render target and blur in the y direction.
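The two-pass idea above is just a separable blur applied to the moment pair (d, d²). Here is a minimal CPU sketch in NumPy (a stand-in for the GPU passes, with circular edge handling via np.roll as an assumption for simplicity); `separable_blur_moments` and the [1, 2, 1] kernel are illustrative names, not from the original posts:

```python
import numpy as np

def separable_blur_moments(depth, kernel):
    """Blur the VSM moments (d, d^2) with a separable kernel:
    pass 1 blurs along x into a temporary target, pass 2 along y."""
    # Derive the two moments from the depth values.
    m1 = depth.astype(np.float64)          # d
    m2 = m1 * m1                           # d^2
    moments = np.stack([m1, m2], axis=-1)  # the "two 16-bit float channels"

    k = np.asarray(kernel, dtype=np.float64)
    k = k / k.sum()
    r = len(k) // 2

    # Pass 1: blur in x (axis 1) into the first render target.
    tmp = np.zeros_like(moments)
    for i, w in enumerate(k):
        tmp += w * np.roll(moments, i - r, axis=1)

    # Pass 2: blur in y (axis 0) into the second render target.
    out = np.zeros_like(tmp)
    for i, w in enumerate(k):
        out += w * np.roll(tmp, i - r, axis=0)
    return out

depth = np.random.rand(8, 8)
sep = separable_blur_moments(depth, [1, 2, 1])
```

Because the blur is linear, the two 1D passes give exactly the same result as one full 2D pass with the outer-product kernel, while reading far fewer texels per pixel — which is the point of splitting it into x and y passes.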
Remember that blurring doesn’t help against undersampling artifacts; maybe you should generate mipmaps of the VSM too.
Rendering to the same texture doesn’t make sense in that case, because not all pixels can be processed at the same time. Even if it did work, it wouldn’t be artifact-free, because many input texels would be the results of other, already-blurred texels.
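A tiny 1D sketch of that second point (illustrative helper names, sequential processing as an assumption; on a GPU the pixel order is undefined, which only makes it worse): blurring into a separate buffer reads only original texels, while blurring in place re-reads texels that earlier iterations have already overwritten.

```python
import numpy as np

def blur_to_separate_buffer(src, r=1):
    """Each output texel averages a neighborhood of the ORIGINAL data."""
    out = np.empty_like(src)
    n = len(src)
    for i in range(n):
        lo, hi = max(0, i - r), min(n, i + r + 1)
        out[i] = src[lo:hi].mean()
    return out

def blur_in_place(src, r=1):
    """Reading and writing the same buffer: later texels average values
    that earlier iterations already replaced with blurred results."""
    buf = src.copy()
    n = len(buf)
    for i in range(n):
        lo, hi = max(0, i - r), min(n, i + r + 1)
        buf[i] = buf[lo:hi].mean()
    return buf

data = np.array([0.0, 0.0, 1.0, 0.0, 0.0])
clean = blur_to_separate_buffer(data)   # symmetric result
dirty = blur_in_place(data)             # smeared toward higher indices
```

Even with a fixed left-to-right order the in-place result is skewed; with thousands of fragments running in parallel there is no fixed order at all, which is why the result is undefined.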
According to the FBO spec, the implementation may check whether you have the texture attached to the FBO and bound to a texture unit at the same time, and give you an error. You could try building the FBO first, then binding the texture to a texture unit, and see what happens. It would be nice if the driver just allowed this undefined behavior to go ahead.
You would think it should work, API limitation aside, especially if you masked out writes to the RG channels while writing to BA.
Reading from a texture that is bound as the current render target results in undefined behavior. You can do it with different mipmap levels of the same texture, but not with the same mip level.
I’ve heard this does work on some hardware, but it should be avoided.
It can work, sometimes. But it is far from guaranteed; not even the IHVs will say whether it will work. If it does work on one piece of hardware, it may not work on another. And the factors that allow it to work (possibly even something like shader length) are nebulous and unknown.