I get some weird artefacts when I try to do bump mapping in screen space. What I do: I compute the screen-space derivatives of my texture coordinates and then compute the difference between two texels, like:
vec2 dx = dFdx(texCoord);
float du = texture2D(tex, texCoord) - texture2D(tex, texCoord + dx);

However, du suffers from quite heavy artefacts that I cannot explain. These artefacts consist of single-pixel-wide lines, sometimes forming a moiré-like pattern.
Sadly I can't switch to tangent-space bump mapping, since the application doesn't provide any tangent information.

vec2 dx = dFdx(texCoord);
This returns the derivative in x using local differencing for the input argument texCoord. What about dFdy?

float du = texture2D(tex, texCoord) - texture2D(tex, texCoord + dx);
texture2D returns a vec4. You need to pick a single component to assign this to a float.
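For example, assuming the height value lives in the red channel (a common convention, though your post doesn't say how the texture is laid out), the assignment would look like:

```glsl
vec2 dx = dFdx(texCoord);
// Pick one component of the vec4 returned by texture2D, e.g. the red channel
float du = texture2D(tex, texCoord).r - texture2D(tex, texCoord + dx).r;
```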

If you’re not using mipmaps, the derivative can be much bigger than one texel for minification and then you’re deep in aliasing trouble.
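One way to keep that from blowing up under minification is to clamp the screen-space step to at most one texel. Just a sketch, assuming you can pass the texel size in as a uniform:

```glsl
uniform vec2 texelSize; // hypothetical uniform: 1.0 / texture resolution

vec2 dx = dFdx(texCoord);
// Limit the sampling offset to one texel in each direction,
// so minified fragments don't step across many texels at once
dx = clamp(dx, -texelSize, texelSize);
```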

Yeah, did all that, I was just too lazy to write it all down.
However, I finally figured it out. I need to compute the derivatives on a per-texel basis, so I need to know the size of my textures to pick the correct pixel for the derivative computation. However, I don't have the slightest idea how I could do this for procedural textures.
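A sketch of that per-texel difference, assuming the texture resolution is passed in as a uniform (for a procedural texture you could instead evaluate the function at an offset you choose yourself, or take dFdx of the evaluated height value directly, since dFdx works on any expression, not just texture coordinates):

```glsl
uniform vec2 texelSize; // hypothetical uniform: 1.0 / texture resolution

float h  = texture2D(tex, texCoord).r;
// Sample exactly one texel to the right and one texel up,
// so the finite difference always spans a single texel
float du = texture2D(tex, texCoord + vec2(texelSize.x, 0.0)).r - h;
float dv = texture2D(tex, texCoord + vec2(0.0, texelSize.y)).r - h;
```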