Pixel-based displacement method

Hey there,

I am wondering what would be the best way to achieve pixel displacement in OpenGL in terms of performance. What I mean by pixel displacement is a way to “move” pixels around based on their parameters (color, luminosity, etc.).
For example, I have a texture in memory and I want to process it in such a way that the X axis remains the same, but I “move” the pixels along the Y axis based on the RED channel value.

What I do now is render texture_width * texture_height points with GL_POINTS, where each vertex position is the 2D position from the original image. Then, in the vertex shader, I read the texture pixel at the vertex position and change the vertex position based on the criterion (the mentioned RED channel).
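
Roughly, a sketch of the kind of vertex shader I mean (placeholder names; here the texel coordinate is derived from gl_VertexID instead of a vertex attribute, just to keep the sketch self-contained):

#version 330 core
// One point is drawn per source texel; gl_VertexID encodes its (x, y).
uniform sampler2D uSrcTex;   // source image
uniform ivec2     uSize;     // texture_width, texture_height
out vec3 vColor;

void main() {
    ivec2 coord = ivec2(gl_VertexID % uSize.x, gl_VertexID / uSize.x);
    vec3 color  = texelFetch(uSrcTex, coord, 0).rgb;

    // Keep X, move the point along Y based on the RED channel value.
    float x = (float(coord.x) + 0.5) / float(uSize.x);
    float y = color.r;
    gl_Position = vec4(x * 2.0 - 1.0, y * 2.0 - 1.0, 0.0, 1.0);
    vColor = color;
}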

Is there a way to do it more efficiently? Doing it in the fragment shader would probably be a better idea, but is it possible at all?
Perhaps Compute Shader?

Thanks in advance for all the suggestions.

If I understand correctly, this type of operation is usually done by rendering to an offscreen texture, then reading that texture from a pixel shader on a fullscreen quad, changing the UV coordinate being sampled as appropriate to offset where the pixels end up. This is essentially a post-processing operation that is commonly used in chromatic aberration or refraction shaders.
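
A minimal sketch of that kind of gather-style pass, assuming a fullscreen quad with UVs and an arbitrary example offset (names are placeholders):

#version 330 core
uniform sampler2D uSceneTex;   // offscreen texture rendered earlier
in vec2 vUV;                   // UVs of the fullscreen quad
out vec4 fragColor;

void main() {
    // Each destination pixel decides where it samples from, e.g. offsetting
    // the sampled Y coordinate by the red channel of the undisplaced image.
    float offset = texture(uSceneTex, vUV).r;
    fragColor = texture(uSceneTex, vec2(vUV.x, vUV.y - offset));
}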

Unfortunately this will not work.
I can write to a texture the offset of “where” each pixel should be rendered, but not which source pixel should be displayed at a given (x, y). So the final fragment shader, for any given (x, y), might need to sample a variable number of (x, y) positions from the source (possibly zero).
The main problem is that it’s a many-to-one mapping: multiple pixels might end up at the same destination (x, y) position.

At this point I think there’s no simple fragment shader solution for this.

Ah, so you don’t have a way of knowing which pixels should map to the pixel being rendered. Don’t you have a way to build this inverse mapping? What’s the practical application in this case? Also, what do you mean by RED channel value? Is it a vertex color input unrelated to the pixel color you’re talking about?

Anyway, if that’s indeed a limitation you can’t work around, nothing immediately comes to mind.

Well, OK, actually you can sample your undisplaced texture (with additional parameters like your red channel in another buffer of the same size, if needed) and write to another output image using the imageStore function (see imageStore in the OpenGL 4 Reference Pages), which lets you write to an arbitrary texel at a specified coordinate.
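
A rough compute-shader sketch of what I mean (bindings, names, and the rgba8 format are placeholders):

#version 430
layout(local_size_x = 16, local_size_y = 16) in;
layout(binding = 0) uniform sampler2D uSrcTex;                  // undisplaced source
layout(binding = 1, rgba8) writeonly uniform image2D uDstImg;   // scatter target

void main() {
    ivec2 coord = ivec2(gl_GlobalInvocationID.xy);
    ivec2 size  = textureSize(uSrcTex, 0);
    if (any(greaterThanEqual(coord, size))) return;

    vec4 color = texelFetch(uSrcTex, coord, 0);
    // Scatter: keep X, pick the destination row from the red channel.
    int dstY = int(color.r * float(size.y - 1));
    imageStore(uDstImg, ivec2(coord.x, dstY), color);
}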

Yes, that could actually work, if that OpenGL version is supported. What I would ideally do is read the pixel value from the destination first and accumulate new pixels. I see there’s imageAtomicAdd; I wonder what the performance of this approach is. I will try to test it tomorrow.
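
Roughly what I have in mind, as an extension of the sketch above (imageAtomicAdd needs an integer image format such as r32ui, so the accumulation would be in fixed point and normalized to a color in a later pass; the luma weights are just an example):

// Destination bound as an integer accumulation image, e.g.:
// layout(binding = 2, r32ui) uniform uimage2D uAccumImg;

// Instead of overwriting, add a fixed-point weight so that source pixels
// landing on the same destination texel accumulate.
uint weight = uint(dot(color.rgb, vec3(0.299, 0.587, 0.114)) * 255.0);
imageAtomicAdd(uAccumImg, ivec2(coord.x, dstY), weight);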

One use case for this is rendering waveforms or vectorscopes. You calculate a pixel parameter and place it in the plot at a given (x, y), where the parameter might be luminosity, saturation, hue, etc.

Alright, you can certainly try the imageStore approach, but I will say that your described use case is a perfect example of my point about needing to reverse your point of view: reframe the problem from the typical way it would be done on a CPU into something more suited to the massively parallel GPU.

If you want to render a waveform, the simplest thing is not to try rendering pixels at the right height, but rather to have a full-screen pixel shader (or one sized to your desired display) and, for each pixel, calculate its distance from the waveform height for that X position. If the distance is less than a pixel or two, you return the waveform color; otherwise you return the background color. This should be more efficient in most situations, allows you to easily have a line width or antialiasing, and is very easy to implement without anything out of the ordinary. I would definitely recommend trying this approach first.
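
As a sketch, assuming the per-column waveform height has already been reduced into a small lookup texture (names are placeholders):

#version 330 core
// Fullscreen pass: each pixel measures its distance to the waveform height
// for its X column and picks the waveform or background color accordingly.
uniform sampler2D uHeightTex;   // 1-texel-high texture, height in [0, 1] per column
uniform vec2      uResolution;
uniform vec4      uWaveColor;
uniform vec4      uBackColor;
out vec4 fragColor;

void main() {
    vec2 uv = gl_FragCoord.xy / uResolution;
    float h = texture(uHeightTex, vec2(uv.x, 0.5)).r;        // waveform height for this column
    float distPx = abs(gl_FragCoord.y - h * uResolution.y);  // distance in pixels

    // Within ~1.5 pixels of the curve -> waveform color; smoothstep gives antialiasing.
    float t = 1.0 - smoothstep(0.5, 1.5, distPx);
    fragColor = mix(uBackColor, uWaveColor, t);
}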

Probably faster than vast numbers of points, although you can’t replicate most blending modes using atomic image operations, only those which are order-independent (which basically means those which don’t use destination alpha).

Use GL_LINES.

By “vector scope”, I assume he means X-Y plot. In which case, the trace may self-intersect resulting in a given pixel being close to multiple parts of the trace. Aside from that, calculating the distance to an arbitrary path (i.e. <x(t),y(t)>, not y=f(x)) is expensive. Even for a conventional scope (where the X axis is the timebase), if the horizontal frequency is higher than the frame rate, you need to deal with multiple traces on each frame.

Well, it can still work for the vector scope, depending on the number of segments and the performance needs; I’ve done this. It’s basically evaluating a signed distance function and mapping it to colors, and it could allow for nice graphical-effect visualizations if that’s the goal. The same goes for the waveform use case.
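
The core of that evaluation is just a point-to-segment distance, e.g. this standard helper:

// Distance from point p to the segment a-b (closest-point formula).
float sdSegment(vec2 p, vec2 a, vec2 b) {
    vec2 pa = p - a;
    vec2 ba = b - a;
    float h = clamp(dot(pa, ba) / dot(ba, ba), 0.0, 1.0);
    return length(pa - ba * h);
}

// The fragment shader loops over the trace segments, keeps the minimum
// distance, and maps it to a color or intensity, which is exactly what gets
// expensive as the segment count grows.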

That said, using GL_LINES is definitely also a good option to consider for these cases, and it is way cheaper.

OK, so just to clear things up, here are the CPU versions of what I need to do:

Waveform:

for(auto y=0; y<img_height; ++y) {
    for(auto x=0; x<img_width; ++x) {
        const auto color = src_img.get_rgb(x, y);
        const auto luminosity = get_lum(color);   // 0..255
        // Scatter: keep X, pick the destination row from the luminosity.
        const auto dst_y = (img_height - 1) * luminosity / 255;
        const auto dst_color = dst_img.get_rgb(x, dst_y);
        dst_img.set_rgb(x, dst_y, dst_color + factor * color);   // additive accumulation
    }
}

Vectorscope:

for(auto y=0; y<img_height; ++y) {
    for(auto x=0; x<img_width; ++x) {
        const auto color = src_img.get_rgb(x, y);
        const auto yuv = get_yuv(color);   // (y, u, v), each in [0, 1]
        // Scatter: the destination position comes from the U/V (chroma) components.
        const auto dst_xy = vec2((img_width - 1) * yuv.y, (img_height - 1) * yuv.z);
        const auto dst_color = dst_img.get_rgb(dst_xy.x, dst_xy.y);
        dst_img.set_rgb(dst_xy.x, dst_xy.y, dst_color + factor * color);   // additive accumulation
    }
}
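
For reference, my rough idea of how the vectorscope loop above could translate to a compute shader (placeholder names; the additive write uses the atomic-image approach mentioned earlier, accumulating a fixed-point value in an r32ui image that a second pass maps to colors):

#version 430
layout(local_size_x = 16, local_size_y = 16) in;
layout(binding = 0) uniform sampler2D uSrcTex;
layout(binding = 1, r32ui) uniform uimage2D uAccumImg;   // img_width x img_height
uniform float uFactor;

vec3 rgb_to_yuv(vec3 c) {
    // BT.601-style conversion with U/V remapped to [0, 1]; the exact matrix
    // is a placeholder for whatever get_yuv does.
    float y = dot(c, vec3(0.299, 0.587, 0.114));
    float u = dot(c, vec3(-0.169, -0.331, 0.5)) + 0.5;
    float v = dot(c, vec3(0.5, -0.419, -0.081)) + 0.5;
    return vec3(y, u, v);
}

void main() {
    ivec2 coord = ivec2(gl_GlobalInvocationID.xy);
    ivec2 size  = textureSize(uSrcTex, 0);
    if (any(greaterThanEqual(coord, size))) return;

    vec3 yuv = rgb_to_yuv(texelFetch(uSrcTex, coord, 0).rgb);
    ivec2 dst = ivec2(yuv.y * float(size.x - 1), yuv.z * float(size.y - 1));

    // Accumulate a fixed-point luminance contribution at the destination texel.
    imageAtomicAdd(uAccumImg, dst, uint(uFactor * yuv.x * 255.0));
}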

Here’s a “naive” waveform implementation (a different waveform type, but the same idea I am after) on Shadertoy: “Shader - Shadertoy BETA”.

Here’s an example of a vector scope result: http://www.d2vision.com/blog/img/vectorscope.jpg

@julienbarnoin I am still not sure if it would be possible using a signed distance function (or maybe I am not seeing something).

I see. For the vector scope it’s definitely possible, but for that number of points it wouldn’t be very practical unless there are some optimizations you can apply, for example if your YUV movement is always radial.
At any rate, if this is the look and the number of points you’re going for, your implementation using GL_POINTS and a vertex shader sounds fine.

An SDF pixel shader could lead to interesting visuals too, but it would be practical only with far fewer points, so I wouldn’t go for that given your use case.

Are you hitting a performance issue with the vertex shader version and looking to optimize it, or was it good enough already?

It’s working OK if the source image resolution is not too high (Full HD).
But I was just wondering if there’s any direction I could take it in for better performance.

Thanks for your feedback anyway!
