Antialiasing Approaches

Hi, another newbie question, I’m afraid.

I’ve created a GLSL video effect by converting some code I originally wrote in Core Image Kernel Slang (an Apple-specific, 2D-only subset of GLSL). The effect is essentially a kind of pixellation effect, but one that operates in polar, rather than Cartesian, coordinates. There are some screenshots of the effect in action in this post on my blog, Machines Don’t Care.

Here’s one of the screenshots, to save you looking at the site:

This is actually the CIKernel version; the GLSL conversion looks slightly different, as texture samples that overshoot the image dimensions are wrapped, rather than returning black (if anyone can tell me whether this can be changed without adding hugely to the complexity of the code, please let me know).

As you can see, aliasing is an issue. The effect is designed to operate in realtime on video material, potentially up to HD dimensions, so it has to be as fast and efficient as possible. I wonder if anyone could recommend the best approach to smoothing things out, or tell me if such a thing would be practicable, or even possible. Is it a matter of trying to smooth the jaggies in a second pass, or can the code be modified to anti-alias as the pixellation effect is created? Any advice gratefully accepted.

Here is the Fragment Shader code I used:


// Control inputs
uniform float Angle; // range 2pi / 100000.0 to 1.0 (rounded down), exponential
uniform float AngleMin; // range -3.2 to 3.2
uniform float AngleWidth; // range 0.0 to 6.4
uniform float Radius; // range -10000.0 to 1.0
uniform float RadiusMin; // range 0.0 to 2.0
uniform float RadiusWidth; // range 0.0 to 2.0
uniform vec2 Center; // range: -1.0 to 3.0 for both axes
uniform bool HideBg;

// Texture input
uniform sampler2D Texture;

void main()
{
    bool bg = false;
    // Normalised texture coords
    vec2 texCoord = gl_TexCoord[0].xy;
    // Shift origin to texture centre
    vec2 normCoord;
    normCoord.x = 2.0 * texCoord.x - Center.x;
    normCoord.y = 2.0 * texCoord.y - Center.y;
    // Convert Cartesian to Polar coords
    float r = length(normCoord);
    float theta = atan(normCoord.y, normCoord.x);

    // THE ACTUAL EFFECT
    if (r > RadiusMin && r < (RadiusMin + RadiusWidth)) {
        r = ceil(r / Radius) * Radius;
    } else {
        bg = true;
    }
    if (theta > AngleMin && theta < (AngleMin + AngleWidth)) {
        theta = floor(theta / Angle) * Angle;
    } else {
        bg = true;
    }

    // Convert Polar back to Cartesian coords
    normCoord.x = r * cos(theta);
    normCoord.y = r * sin(theta);
    // Shift origin back to bottom-left
    texCoord.x = normCoord.x / 2.0 + (Center.x / 2.0);
    texCoord.y = normCoord.y / 2.0 + (Center.y / 2.0);

    // Output
    if(bg == true && HideBg == true) {
        gl_FragColor = vec4(0.0,0.0,0.0,0.0);
    } else {
        gl_FragColor = texture2D(Texture, texCoord);
    }
}

Obviously, there’s quite a lot of extra code in there associated with controls to tweak the effect, but the effect itself is quite simple. Any advice on how to further optimise things would be very gratefully taken on board too.

Cheers!

alx

Sure, you can control this using the GL_TEXTURE_WRAP_S and GL_TEXTURE_WRAP_T texture parameters. Either set these to GL_REPEAT if you want to wrap, or set them to GL_CLAMP_TO_BORDER and give the texture a transparent border colour.

As you can see, aliasing is an issue. The effect is designed to operate in realtime on video material, potentially up to HD dimensions, so it has to be as fast and efficient as possible. I wonder if anyone could recommend the best approach to smoothing things out, or tell me if such a thing would be practicable, or even possible. Is it a matter of trying to smooth the jaggies in a second pass, or can the code be modified to anti-alias as the pixellation effect is created?

IMHO there are 3 options:

  1. Use multisample anti-aliasing when creating your window/FBO.
  2. Do a post-processing smoothing pass.
  3. Do it at shader level, eg.
    you know the distance from the border of a pixellation block from
    frac(r / Radius)
    and
    frac(theta / Angle)
    (the GLSL spelling of frac is fract, by the way).
    You know the neighbouring blocks will have either r+Radius, r-Radius, theta+Angle, …
    You can then use ‘smoothstep’ and/or ‘mix’ with neighbouring colors to eliminate the jaggies; see the sketch after this list.
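
For the radial direction only, a minimal sketch of option 3 might look like this. It reuses the Texture, Center and Radius uniforms from your shader; EdgeWidth is an assumed extra control for how far into each band the blend reaches:

// Sketch: polar pixellation with the radius quantisation anti-aliased by
// blending each band with its inner neighbour near the band edge.
uniform sampler2D Texture;
uniform vec2 Center;
uniform float Radius;
uniform float EdgeWidth; // assumed control, e.g. 0.05 to 1.0 (fraction of a band)

// Inverse of the polar conversion: back to normalised texture coords
vec2 toTexCoord(float r, float theta)
{
    vec2 n = vec2(r * cos(theta), r * sin(theta));
    return n / 2.0 + Center / 2.0;
}

void main()
{
    vec2 texCoord = gl_TexCoord[0].xy;
    vec2 normCoord = 2.0 * texCoord - Center;
    float r = length(normCoord);
    float theta = atan(normCoord.y, normCoord.x);

    // Position within the current radial band, 0.0 at its inner edge
    float rFrac = fract(r / Radius);
    // Quantised radius of this band, as in the original effect
    float rSnap = ceil(r / Radius) * Radius;

    // Colour of this band and of the band just inside it
    vec4 c0 = texture2D(Texture, toTexCoord(rSnap, theta));
    vec4 c1 = texture2D(Texture, toTexCoord(rSnap - Radius, theta));

    // Blend the neighbouring band in near the edge to soften the jaggies
    float w = smoothstep(0.0, EdgeWidth, rFrac);
    gl_FragColor = mix(c1, c0, w);
}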

Hope this helps.

N.

Hi -NiCo-

thanks very much for getting back to me.

Excuse my ignorance, but is this something you can do purely with GLSL? I ask because Quartz Composer (Wikipedia entry, if you’re interested), the development environment I’m using, doesn’t allow direct access to OpenGL commands, but does (obviously) allow GLSL to be used.

I don’t think I can do this without rewriting the whole thing in OpenGL.

I’d rather avoid doing it this way if possible, as my understanding is that this would essentially mean applying some sort of blur effect, with the performance hit that this would imply. I may be wrong though…

Ah, I see. I’ve not come across the frac function before. You couldn’t possibly do me a big favour and knock up a quick example of its use in this situation, could you, just for one of the dimensions?

Thanks very much once again,

Cheers,

alx

I don’t think the wrapping modes can be set in GLSL because AFAIK it does not support setting texture states within the shaders. It’s possible in Cg, but since you have to use GLSL this is not an option.

I never used Quartz composer so I’m not sure what the possibilities are but I came across this page where they mention:

msa_ChannelShift: An RGBA channel shift effect with a few different modes: 0: simple shift, 1: repeat edge pixels, 2: wrap around edge pixels, 3: mirror edge pixels

Like I said, I’m not able to open the file (I guess this is a project file of some kind) but maybe you can have a look at it.

In any case, it’s fairly simple to implement wrapping in the shaders. The frac command returns the fractional part of a floating-point value. So if the texture coordinates are in the range 0.0 to 1.0, you can implement wrapping by sampling the texture with the fractional value of the texture coordinates instead of the actual value.
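
In GLSL (where the function is spelled fract) that amounts to a one-liner; a minimal sketch:

// fract() maps any coordinate back into 0.0 to 1.0, mimicking GL_REPEAT,
// so out-of-range sampling positions wrap around the image
vec2 wrappedCoord = fract(texCoord);
gl_FragColor = texture2D(Texture, wrappedCoord);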

You mean Quartz doesn’t support the creation of multisample windows?

I’d rather avoid doing it this way if possible, as my understanding is that this would essentially mean applying some sort of blur effect, with the performance hit that this would imply. I may be wrong though…

Well, there are a number of ways to do blurring, depending on the performance hit you’re willing to accept. E.g. you could generate a mipmap and then do a linear interpolation between the first two levels to create some blurring.
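
As a shader-level sketch of that idea (note this substitutes an LOD bias for an explicit lerp between levels, and assumes mipmaps already exist for the texture, with GL_LINEAR_MIPMAP_LINEAR filtering enabled):

uniform sampler2D Texture;
uniform float Bias; // assumed control: 0.0 = sharp, around 1.0 = roughly mip level 1

void main()
{
    // the optional third argument to texture2D biases the mip level the
    // hardware samples, so trilinear filtering blends between levels
    gl_FragColor = texture2D(Texture, gl_TexCoord[0].xy, Bias);
}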

As I mentioned before, the frac command gives you the fractional value, so when it’s used the way it’s used in my previous post, it reflects the position within a pixellation block. From that you can determine whether you’re close enough to a border to sample neighbouring colors for blurring…

By the way, can’t you do the pixellation in regular Cartesian coordinate space in a first pass, and then use linear interpolation in polar coordinates in a second pass?

Cheers,
N.

Hi again -NiCo-

It’s funny you should mention that page. It’s a page created by the company who make the application I’m developing video effects for, VDMX5, currently in beta. I know the guy who makes the ‘msa’ plugins, and the ‘tb’ ones are mine.

They’re .qtz files, the native Mac-only file format of Quartz Composer. Vidvox have added support for qtz files to be used as realtime effects, hence my interest in the whole area. It’s really cool, as anyone with Quartz Composer (which comes with the Developer Tools installer that ships with all new Macs) can create their own effects using QC’s graphical development approach, without necessarily having to get their hands dirty with any actual coding. Since I enjoy messing around with code, however, I’ve spent the last few months investigating the possibilities offered by the application’s scriptable modules.

Aha! I see. I wasn’t sure what frac did, but now I understand…
Thinking about it, though, surely this would have the effect of wrapping the texture when the sampling coords exceed the bounds of the texture (which is what already happens, in fact). I could maybe add a 1-pixel transparent border to the whole image in another pass, then just clamp the sampling coords to 0.0 to 1.0, or simply test for the coords exceeding the bounds and return a transparent pixel if they do.
I’m used to dealing in non-normalised pixel dimensions, from working with Core Image Kernel code.

Not directly, though I think it does allow the creation of Billboard and Sprite patches that can use multisampled textures. Being new to this, I’m afraid I’m a bit vague on the terminology.

Would this work with a moving-image as the texture though? I could potentially create several versions of the video stream at different sizes, but I’m not sure the application provides a way of combining them into a single multitexture.
I’ve actually been considering looking into this, however, as I’d like to be able to, for example, create lots of copies of the input image, and arrange them in 3D space, and it would be good if further-away instances of the image used a lower-res texture, to improve performance.

I will give this a go.

I may have got the wrong end of the stick here, but wouldn’t that distort the image? The nice thing about this effect is you can go from the original image to a ‘polar-pixellated’ version without the geometry of the underlying image being distorted. Since these are VJ effects, I’m always looking at adding controls so the effect can be tweaked in interesting ways. It would be very cool to be able to morph from a ‘cartesian pixellate’ to a ‘polar pixellate’, actually. I imagine that could be done relatively easily if polar and Cartesian dimensions were all normalised; then you’d have to work out the sampling position in both spaces, and mix between them… thinking out loud there, apologies.

Maybe I’ve misinterpreted what you meant, however.

Cheers!

alx

PS
I’m going to upload a Flash-format movie clip of the effect in action, just so you know how it looks at the moment, and can see the controls in action.

Ah, I guess I misunderstood your problem then. I thought it was the other way around. Maybe you can have a look at the bottom of page 236 in this document.

Would this work with a moving-image as the texture though? I could potentially create several versions of the video stream at different sizes, but I’m not sure the application provides a way of combining them into a single multitexture.
I’ve actually been considering looking into this, however, as I’d like to be able to, for example, create lots of copies of the input image, and arrange them in 3D space, and it would be good if further-away instances of the image used a lower-res texture, to improve performance.

I’m not familiar with Quartz or any other Apple APIs, actually, so I don’t know how much of the OpenGL API is exposed. In pure OpenGL it’s possible to create a mipmapped version of a texture with a single function call. You’ll have to call this each time the texture changes, or set the texture’s GL_GENERATE_MIPMAP_SGIS parameter to GL_TRUE.

I may have got the wrong end of the stick here, but wouldn’t that distort the image? The nice thing about this effect is you can go from the original image to a ‘polar-pixellated’ version without the geometry of the underlying image being distorted. Since these are VJ effects, I’m always looking at adding controls so the effect can be tweaked in interesting ways. It would be very cool to be able to morph from a ‘cartesian pixellate’ to a ‘polar pixellate’, actually. I imagine that could be done relatively easily if polar and Cartesian dimensions were all normalised; then you’d have to work out the sampling position in both spaces, and mix between them… thinking out loud there, apologies.

I probably misinterpreted what the effect was supposed to do. It’s hard to tell without the original texture for reference. It also depends on how you pass the texture coordinates (fullscreen quad or something more exotic…). But you’re probably right, it would be more difficult to do because of the non-linear spacing between the samples.

Maybe you could write the result into a texture and then draw that texture to screen with linear filtering enabled, with the texture coordinates offset by half a pixel in both directions. This is similar to a 2x2 box filter, and should reduce the jaggies.
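
A fragment-shader sketch of that second pass (Result and TexelSize are assumed inputs: the effect rendered to a texture, and 1.0 divided by its pixel dimensions):

uniform sampler2D Result;  // assumed: the pixellated result, linear filtering on
uniform vec2 TexelSize;    // assumed: vec2(1.0/width, 1.0/height)

void main()
{
    // with GL_LINEAR filtering, a half-texel offset makes the hardware
    // average a 2x2 neighbourhood, i.e. a cheap box filter
    gl_FragColor = texture2D(Result, gl_TexCoord[0].xy + 0.5 * TexelSize);
}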

Cheers,
N.

I’ve uploaded a clip of the effect in action here, if you’re interested.

alx

Happy New Year!

Ah, OK. Again, it’s not something that can be done in Quartz Composer, I think. I’m definitely thinking that, ultimately, I’m going to be creating effects as QC plugins using straight OpenGL commands; then I’ll have direct access to all this. For the moment, however, this seems to work:

// Output
if (bg == true && HideBg == true) {
	gl_FragColor = vec4(0.0, 0.0, 0.0, 0.0);
} else if (normCoord.x >= 1.0 || normCoord.x <= -1.0 || normCoord.y >= 1.0 || normCoord.y <= -1.0) {
	gl_FragColor = vec4(0.0, 0.0, 0.0, 0.0);
} else {
	gl_FragColor = texture2D(Texture, texCoord);
}

I’m not sure if this is theoretically more efficient than creating a transparent border on the original image before it’s input to the GLSL module, then simply clamping the sampling coords. I read somewhere that it’s sometimes better to carry out separate operations in discrete shaders, rather than trying to combine lots of different operations in the same shader.

It’s hard to tell, actually, as you have to try to figure out where the function might be hidden in a module, or in a setting for a module. I don’t think the creation of mipmapped textures is supported, though. I’m going to look into this.

I’ve tried rendering the GLSL patch to an image, then adding a preset Box Blur object to it. It doesn’t really seem to do the do, so I think that some way of blending pixels based on the fract of r and theta, as you suggested, has to be the way to go. I’ll have to sit down and work out the logic of it, though I understand the basic principle. I’ll let you know how I get on.

Thanks again for all your assistance and advice!

alx

Hi again!

I should really post this in the beginners’ OpenGL forum, since it’s baby stuff, but it’s GLSL-specific, so I’m going to post it here again.

I’ve been experimenting with smoothstep and mix on a simple, Cartesian-coordinate-based pixellation effect. I just wanted to see if I could successfully blur the edges of this before tackling the radial version.
I’ve had mixed success. The problem I’m having is that I can blend tiles horizontally OR vertically, but I just can’t work out how to blend in both directions at the same time without the edges of the tiles appearing as hard lines again.

Here’s the Fragment shader:

// CONTROL INPUTS
// Width and Height of tile. Range: 1.0 divided by pixel width/height of texture (rounded) > 1.0
uniform vec2 Tile;
// Horizontal/Vertical blur distance. Range: 0.0 > 0.5
uniform vec2 Blur;
// X-Y direction of blend. Range 0.0 (horizontal only) to 1.0 (vertical only)
uniform float DirectionMix;

// TEXTURE INPUT
uniform sampler2D Texture;

void main()
{
	// Current texture position (normalised)
	vec2 texCoord = gl_TexCoord[0].xy;
	// Normalised position of current pixel in its tile
	vec2 tilePos = vec2(fract(texCoord.x / Tile.x), fract(texCoord.y / Tile.y));
	
	// Bottom-left of current cell
	vec2 p0 = vec2(floor(texCoord.x / Tile.x) * Tile.x, floor(texCoord.y / Tile.y) * Tile.y);
	// Bottom-left of tile to Left of current tile
	vec2 p1 = vec2(clamp(p0.x - Tile.x, 0.0, 1.0), p0.y);
	// Bottom-left of tile Below current tile
	vec2 p2 = vec2(p0.x, clamp(p0.y - Tile.y, 0.0, 1.0));
	
	// Mix-levels, horizontal and vertical
	vec2 mixXY;
	// Mix-level across the Left edge of the tile
	mixXY.x = smoothstep(0.0, Blur.x, tilePos.x);
	// Horizontal blend: tile to the Left mixed with current tile
	vec4 mixPixX = mix(texture2D(Texture, p1), texture2D(Texture, p0), mixXY.x);
	
	// Mix-level across the Bottom edge of the tile
	mixXY.y = smoothstep(0.0, Blur.y, tilePos.y);
	// Vertical blend: tile Below mixed with current tile
	vec4 mixPixY = mix(texture2D(Texture, p2), texture2D(Texture, p0), mixXY.y);

	// Output
	gl_FragColor = mix(mixPixX, mixPixY, DirectionMix);
} 

I’ve tried various methods of blending the horizontal and vertical mixes, but all have failed to smooth things out. I’m sure I’m missing something hideously obvious. Any ideas?

I posted some screenshots on my blog here.

Cheers once again,

alx

Give this one a go…

// CONTROL INPUTS
// Width and Height of tile. Range: 1.0 divided by pixel width/height of texture (rounded) > 1.0
uniform vec2 Tile;
// Horizontal/Vertical blur distance. Range: 0.0 > 0.5
uniform vec2 Blur;

// TEXTURE INPUT
uniform sampler2D Texture;

void main()
{
	// Current texture position (normalised)
	vec2 texCoord = gl_TexCoord[0].xy;
	// Normalised position of current pixel in its tile
	vec2 tilePos = vec2(fract(texCoord.x / Tile.x), fract(texCoord.y / Tile.y));
	
	// Bottom-left of current cell
	vec2 p0 = vec2(floor(texCoord.x / Tile.x) * Tile.x, floor(texCoord.y / Tile.y) * Tile.y);
	// Bottom-left of tile to Left of current tile
	vec2 p1 = vec2(clamp(p0.x - Tile.x, 0.0, 1.0), p0.y);
	// Bottom-left of tile Below current tile
	vec2 p2 = vec2(p0.x, clamp(p0.y - Tile.y, 0.0, 1.0));
	// Bottom-left of tile Below and Left of current tile
	vec2 p3 = vec2(p1.x, p2.y);

	vec2 mixfactors;
	mixfactors.x = min(tilePos.x / Blur.x, 1.0);
	mixfactors.y = min(tilePos.y / Blur.y, 1.0);

	vec4 tmp1 = mix(texture2D(Texture, p1), texture2D(Texture, p0), mixfactors.x);
	vec4 tmp2 = mix(texture2D(Texture, p3), texture2D(Texture, p2), mixfactors.x);
	gl_FragColor = mix(tmp2, tmp1, mixfactors.y);
} 

Cheers,
N.

That’s brilliant!!

The blurring works perfectly in both dimensions now. I spent ages wrestling with the logic for this one last night, and wasn’t really getting anywhere. I’m still not entirely sure how or why your solution works, but it certainly seems to do the do, which is brilliant! I’d convinced myself it was never going to work, after hitting the same error for the nth time.

The only slight weirdness is that there seems to be some sort of wrapping going on: the Top and Left rows seem to be a mix of what should be there and pixels from the other side of the image. This seems to be the case even without blurring…
I’m guessing that pixels at the edges of the image are blended with pixels from the other side automatically. I don’t think that’s something I can change without diving into the guts of OpenGL, and Quartz Composer doesn’t provide the means of doing that, so I solved the problem by offsetting all the texture sampling positions by half the Width/Height of each tile, so the colour values are actually sampled from the centre of each tile. It’s probably not all that elegant, but it seems to work.

Thanks once again for all your help and encouragement on this one.

alx

/*
With thanks to -NiCo- from the OpenGL.org forum.
*/

// CONTROL INPUTS
// Width and Height of tile. Range: 1.0 divided by pixel width/height of texture (rounded) > 1.0
uniform vec2 Tile;
// Horizontal/Vertical blur distance. Range: 0.0 > 0.5
uniform vec2 Blur;

// TEXTURE INPUT
uniform sampler2D Texture;

void main()
{
	// Current texture position (normalised)
	vec2 texCoord = gl_TexCoord[0].xy;
	// Normalised position of current pixel in its tile
	vec2 tilePos = vec2(fract(texCoord.x / Tile.x), fract(texCoord.y / Tile.y));
	
	// Bottom-left of current tile
	vec2 p0 = vec2(floor(texCoord.x / Tile.x) * Tile.x, floor(texCoord.y / Tile.y) * Tile.y);
	// Bottom-left of tile to Left of current tile
	vec2 p1 = vec2(clamp(p0.x - Tile.x, 0.0, 1.0), p0.y);
	// Bottom-left of tile Below current tile
	vec2 p2 = vec2(p0.x, clamp(p0.y - Tile.y, 0.0, 1.0));
	// Bottom-left of tile Below and Left of current tile
	vec2 p3 = vec2(p1.x, p2.y);

	vec2 mixFactors;
	mixFactors.x =  min(tilePos.x / Blur.x, 1.0);
	mixFactors.y =  min(tilePos.y / Blur.y, 1.0);

	vec4 tmp1 = mix(texture2D(Texture, p1 + (Tile / 2.0)), texture2D(Texture, p0 + (Tile / 2.0)), mixFactors.x);
	vec4 tmp2 = mix(texture2D(Texture, p3 + (Tile / 2.0)), texture2D(Texture, p2 + (Tile / 2.0)), mixFactors.x);
	gl_FragColor = mix(tmp2, tmp1, mixFactors.y);
}

Thought you might be interested to see the Core Image Kernel Slang version of the code:

kernel vec4 tb_blurryPixellate(sampler Image, vec2 Tile, vec2 Blur)
{
	// Current pixel pos. in pixels
	vec2 xy = samplerCoord(Image);
	// Dimensions of input image
	vec2 dims = samplerSize(Image);
	
	// Normalised pix coords
	vec2 texCoord = xy / dims;

	// Normalised position of current pixel in its tile
	vec2 tilePos = vec2(fract(texCoord.x / Tile.x), fract(texCoord.y / Tile.y));
	
	// Bottom-left of current tile
	vec2 p0 = vec2(floor(texCoord.x / Tile.x) * Tile.x, floor(texCoord.y / Tile.y) * Tile.y);
	// Bottom-left of tile to Left of current tile
	vec2 p1 = vec2(clamp(p0.x - Tile.x, 0.0, 1.0), p0.y);
	// Bottom-left of tile Below current tile
	vec2 p2 = vec2(p0.x, clamp(p0.y - Tile.y, 0.0, 1.0));
	// Bottom-left of tile Below and Left of current tile
	vec2 p3 = vec2(p1.x, p2.y);

	vec2 mixFactors;
	mixFactors.x =  min(tilePos.x / Blur.x, 1.0);
	mixFactors.y =  min(tilePos.y / Blur.y, 1.0);
	
	// Each point (p0-p3) must be multiplied by dims to denormalise
	vec4 tmp1 = mix(sample(Image, (p1 + (Tile / 2.0)) * dims), sample(Image, (p0 + (Tile / 2.0)) * dims), mixFactors.x);
	vec4 tmp2 = mix(sample(Image, (p3 + (Tile / 2.0)) * dims), sample(Image, (p2 + (Tile / 2.0)) * dims), mixFactors.x);
	
	return mix(tmp2, tmp1, mixFactors.y);
}

This is the GLSL subset language used for coding custom Core Image filters in Quartz Composer on the Mac OS X platform. It has some annoying limitations, like no noise functions, no mat variables, and, most annoyingly of all, no

if (blah) {
   do something;
}

conditionals, which can make for slightly tortuous logic and very long lines, as you have to use ternary-style conditionals instead.
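
For instance (an illustrative rewrite of the radius quantisation from the polar shader above), an if/else branch has to be folded into a conditional expression:

// GLSL proper:
//   if (r > RadiusMin && r < (RadiusMin + RadiusWidth)) { r = ceil(r / Radius) * Radius; }
// CIKernel equivalent, as a single ternary expression:
r = (r > RadiusMin && r < (RadiusMin + RadiusWidth)) ? ceil(r / Radius) * Radius : r;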

alx

Hi again,

I’m glad you like it. The reason it works now is that it basically performs bilinear interpolation. Here’s some pseudo-code:

if ((tilePos.x < Blur.x) && (tilePos.y < Blur.y))
{
    // perform bilinear interpolation between p0, p1, p2 and p3, just as
    // bilinear filtering would do for 4 neighbouring pixels of a texture
}
else if (tilePos.x < Blur.x)
{
    // perform linear interpolation between p0 and p1
}
else if (tilePos.y < Blur.y)
{
    // perform linear interpolation between p0 and p2
}
else
{
    // return p0
}

If you’re worried about performance, here’s another tip: move the Tile offset to the part where you assign p0. p1, p2 and p3 depend on p0, so they’re adjusted automatically.

vec2 p0 = vec2(floor(texCoord.x / Tile.x) * Tile.x, floor(texCoord.y / Tile.y) * Tile.y) + 0.5 * Tile;
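
With that change, the sampling section shrinks to something like this (a sketch, untested):

// p0 carries the half-tile offset, so p1, p2 and p3 inherit it
vec2 p0 = floor(texCoord / Tile) * Tile + 0.5 * Tile;
vec2 p1 = vec2(clamp(p0.x - Tile.x, 0.0, 1.0), p0.y);
vec2 p2 = vec2(p0.x, clamp(p0.y - Tile.y, 0.0, 1.0));
vec2 p3 = vec2(p1.x, p2.y);

// the four samples no longer need per-call offsets
vec4 tmp1 = mix(texture2D(Texture, p1), texture2D(Texture, p0), mixFactors.x);
vec4 tmp2 = mix(texture2D(Texture, p3), texture2D(Texture, p2), mixFactors.x);
gl_FragColor = mix(tmp2, tmp1, mixFactors.y);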

Hiya,

thanks for explaining what’s going on there.

I did actually try offsetting the original point coords before sampling. I found that I got wrapping at the Left and Top if I did it that way, whereas if I did the offset later on, this didn’t happen. I’m not sure if clamping would help, or if I need to clamp the values of p1 to p3 later on.

alx
