Is this legal? dFdx/dFdy/fwidth question.

Imagine this fragment shader:

#version 330

in vec2 uvCoord;

uniform sampler2D textureMap;

layout(location = 0) out vec4 fragment;

void main() {
    // Get a sample from a texture map
    fragment.rgb = texture(textureMap, uvCoord).rgb;
    // Get the gradient of the fragment value. Is this legal?
    fragment.rgb = vec3(1.0 - length(fwidth(fragment.rgb)));
    fragment.a = 1.0;
}


Unless I’m misunderstanding things, dFdx/dFdy return the derivative of a value with respect to x and y respectively, in screen space. So using dFdx on a vec3, for example, would return another vec3 with the derivative of each vector component in the x direction in screen space (so, uh, right-left :P). If my understanding is correct, this reads the specified variable’s value in the neighboring fragments to calculate the derivative, so it shows the rate of change between the “current” fragment and its neighbors.

The gradient magnitude can be computed as G = sqrt(dFdx*dFdx + dFdy*dFdy), which if done on a vec3 would result in the individual gradient magnitudes for each component. I’m visualizing this as a vector in 3D space pointing in the direction of change. If the vec3 was a color, it would be in color space. I assume this is an acceptable visualization, is it?
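In GLSL terms, what I mean is something like this (a sketch; `color` stands for whatever vec3 is being differentiated):

```glsl
vec3 dx = dFdx(color);             // per-component derivative in screen-space x
vec3 dy = dFdy(color);             // per-component derivative in screen-space y
vec3 g  = sqrt(dx * dx + dy * dy); // per-component gradient magnitude
// fwidth(color) is the cheaper approximation abs(dx) + abs(dy)
float magnitude = length(g);       // overall rate of change, direction discarded
```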

By taking the length of this vector I get the magnitude of the change, regardless of the direction.

So by performing the above shader instructions I’m basically asking for the magnitude of change in the colors being written to the image, which would conceptually result (assuming I’m interpreting this correctly) in a filter-like effect where high frequencies are shown in black and low frequencies in white. In other words, it creates a sketch-like effect. Now I realize this is not the ideal way of doing this effect, but that’s not what I wanted to discuss.

The question is: is this even legal? The value of the fragment is still being defined at the time the dFd* functions are called, so it must mean the hardware has to fetch the value currently being worked on in other fragment units. So I’m wondering whether this would cause a lot of problems, as some fragments may not actually have neighboring fragments being processed at the time.

Indeed, it seems to work like this: changing the point at which a dFd* function is called makes it operate on the value’s state at that point.

On my test machine I’ve been able to apply this to any value within a shader, not only inputs, but I cannot find any information on whether this is legal or just undefined behaviour.

If anyone can spare an explanation of how the derivatives are obtained that would help too. Not the math behind it but the source of the data; is it coming from fragments isolated to the primitive the fragment pertains to or does it “leak” onto other fragments as well?

Yes, that’s legal. The only illegal uses of the derivative/fwidth functions are if you use them in non-fragment shaders or if you use them in non-uniform control flow.
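To illustrate the non-uniform control flow restriction (a sketch; `value`, `threshold`, and `result` are made-up names):

```glsl
// Illegal: the branch depends on a per-fragment value, so only some
// fragments in the quad reach fwidth(). The result is undefined.
if (value.r > threshold) {
    result = fwidth(value);
}

// Legal: take the derivative in uniform control flow first,
// then use the result inside the branch.
vec3 w = fwidth(value);
if (value.r > threshold) {
    result = w;
}
```

Branching on a uniform is fine, since every fragment takes the same path.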

As for the performance, it’s fine. GPUs generally execute fragment shaders in 2x2 pixel quads. By which I mean the GPU’s compute unit is basically running the same opcodes on 4 different sets of registers. So the neighbor’s value is quite literally just another register, easily accessible. It’s pretty cheap.

The 2x2 pixel quad thing happens even if some of the fragments are outside of the triangle. It’ll still do interpolation, so it’s possible to interpolate values off of the triangle. That’s fine, as the interpolation is linear, so you’ll get reasonable values. Those fragments that are off of the triangle (so-called helper invocations) will be generated for exactly these purposes (doing texture accesses requires doing the equivalent of the derivative functions), but all of their results are discarded.
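That last point is also why the explicit-gradient texture functions exist: plain texture() needs implicit derivatives of the coordinate for mip selection, but you can supply them yourself when you have to sample inside a branch. A sketch (`someCondition` is a made-up per-fragment test):

```glsl
// Compute the coordinate gradients in uniform control flow,
// then sample with explicit gradients inside the branch.
vec2 dx = dFdx(uvCoord);
vec2 dy = dFdy(uvCoord);
if (someCondition) {
    fragment.rgb = textureGrad(textureMap, uvCoord, dx, dy).rgb;
}
```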

Awesome, thanks for the explanation as usual!

Also, higher-order (i.e. nested) derivatives are undefined.