Motion Blur

“Antialiasing” is the removal of aliasing artifacts by a variety of means.

Aliasing and antialiasing have always been something of a pet peeve of mine. I consider an antialiasing algorithm to be one that, when taken to the limit of infinite “something” (whatever number or numbers limit the effectiveness of the technique), will correctly remove (rather than cover up or hide) all of the aliasing artifacts the algorithm is attempting to remove.

Supersampling fits this. The limit of a supersampling algorithm, as the number of samples goes to infinity, is a still frame that has had all aliasing properly removed. Obviously it doesn’t deal with temporal aliasing, but it isn’t supposed to.
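To state that limit precisely (a sketch, with f the scene signal over a pixel of area A and the sample points x_i spread uniformly across the pixel):

```latex
\lim_{N \to \infty} \frac{1}{N} \sum_{i=1}^{N} f(x_i)
  \;=\; \frac{1}{A} \int_{\text{pixel}} f(x) \, dx
```

Infinite equal-weight samples converge to the true area average of the signal over the pixel, which is exactly the alias-free value the algorithm is after.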

No mere blur filter can do this. Signal processing tells us that, in order to represent a signal up to a given frequency without aliasing, you must take a certain minimum number of samples (the Nyquist limit, I believe it is called). Blur filters don’t add samples, so they cannot decrease the level of aliasing in the image.
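For reference, the criterion being invoked is usually stated as (f_s the sampling rate, f_max the highest frequency present in the signal):

```latex
f_s > 2 f_{\max}
```

Sampling below that rate folds high frequencies into low ones, and a blur applied after sampling cannot undo the fold, because it adds no samples.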

You can employ a blur filter to make the aliasing look less bad. However, the noise that is added is not the accurate noise that an antialiasing method would add. This doesn’t actually solve the aliasing problem; it simply covers it up. It is the equivalent of putting garbage in a closet; sure, nobody can see it, but it’s still there and probably smells a bit.

On the final point, the box filter over multiple frames IS wrong. Box filters are appropriate when combining intra-frame samples of presumably equal weight.

My mistake; I thought by “frame”, you meant sub-frames of a full frame time.

Accumulating over 1/24th of a second is still not quite an accurate simulation of cinematic blur, because the shutter is not open the entire time. A better simulation would build the motion blur over a smaller slice of the frame time, based on actual shutter speeds. Even then, you’d want to composite enough sub-frames that the blur comes out continuous rather than looking like a few discrete images. Depending on your standards of quality, it’s unlikely that this could be done in real time for an appreciable number of polygons.
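For concreteness, here’s a rough sketch of that idea using the classic OpenGL accumulation buffer, assuming a 180-degree shutter (open for half the frame time); drawScene() is a placeholder for rendering the scene at a given time:

```cpp
// Shutter-limited motion blur via the accumulation buffer (a sketch).
const int   kSubFrames   = 16;            // temporal samples per displayed frame
const float kFrameTime   = 1.0f / 24.0f;  // cinematic frame time
const float kShutterOpen = 0.5f;          // 180-degree shutter: open half the frame

void renderBlurredFrame(float frameStart)
{
    glClear(GL_ACCUM_BUFFER_BIT);
    for (int i = 0; i < kSubFrames; ++i) {
        // Spread the samples only across the interval the shutter is open.
        float t = frameStart + kShutterOpen * kFrameTime * (i + 0.5f) / kSubFrames;
        drawScene(t);                          // hypothetical scene render at time t
        glAccum(GL_ACCUM, 1.0f / kSubFrames);  // equal-weight accumulation
    }
    glAccum(GL_RETURN, 1.0f);                  // write the average to the framebuffer
}
```

With 16 sub-frames per displayed frame you are effectively rendering the scene 16 times over, which is exactly why this is hard to do in real time for heavy scenes.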

This is a problem with every real-time “motion blur” effect I’ve seen… it looks trippy, but it doesn’t look “right”.

Originally posted by phlake:
Depending on your standards of quality, it’s unlikely that this could be done in real time for an appreciable number of polygons.

It can be done on today’s hardware. You’d have to sacrifice virtually every other effect to do it, though, so it probably isn’t worth it yet. And, as I pointed out earlier, it doesn’t look very good even when done right unless you’re willing to do HDR.

And as a means of temporal antialiasing it might never be worth it in real-time graphics, since it’s typically faster to render n times more frames than to supersample each frame n times.

Of course, this doesn’t cover very fast-moving or vibrating objects (the worst case of temporal aliasing, as far as I know). We’d need a way to determine the number of samples per object, based on its speed, to make it worthwhile.
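A crude sketch of what such a heuristic might look like; the one-sample-per-pixel-of-travel rule and the clamp values here are assumptions, not established thresholds:

```cpp
#include <algorithm>
#include <cmath>

// Pick a per-object temporal sample count from its screen-space speed, so fast
// or vibrating objects get more sub-frame samples than slow ones.
int temporalSamples(float screenSpeedPixelsPerSec, float frameTime,
                    int minSamples = 1, int maxSamples = 64)
{
    float pixelsTravelled = screenSpeedPixelsPerSec * frameTime;
    int n = static_cast<int>(std::ceil(pixelsTravelled)); // ~1 sample per pixel of motion
    return std::max(minSamples, std::min(n, maxSamples));
}
```

A vibrating object would report its oscillation speed here, which is what makes it the expensive case.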

-Ilkka

phlake, saying 1/24th of a second was a simplification. My post was already long enough :)

I think the ‘trippy’ effect of many motion blur demos is due to an exaggeration of the effect. It is a lot like the way people went nuts with colored lights when those first became doable, and then created overly shiny, bumpy stuff when bump maps became possible.

It is as if a new effect is not considered worth it unless it is overtly obvious, when the reality is that each new step toward realism in graphics will be increasingly subtle. That is, you know the scene looks dramatically better, but you may not be able to put a finger on why.

The term ‘trippy’ comes from the drug culture, specifically from LSD use, which is known to cause a blurring effect called ‘trails’. Trails are likely a result of the LSD slowing down the response time of the cells in the retina, or of the brain processing visual information in such a way that it holds onto old sensory information longer.

Anyway, the trippy demos are really only wrong in that they either do not render enough FPS to create a smooth trail and/or they exaggerate the effect too much.

Cinematic motion blur (simulation of exposed film), as well as perceptual motion blur (simulation of what we see) are both valid.

Originally posted by Nakoruru:
Cinematic motion blur (simulation of exposed film), as well as perceptual motion blur (simulation of what we see) are both valid.

But you only really need the latter one when you want to exaggerate it (as in a game protagonist taking drugs). Otherwise your eyes will already do the job.

No, your eyes do not do the job! Otherwise there would never have been any need to simulate motion blur, and blur on film would be a huge problem, not a huge help (24 FPS is rather low; without the motion blur captured on film, movies would look like slide shows). In real life, an object moves continuously. On a screen it moves discretely.

Have there been any studies of how high the FPS has to be before your eyes can do the job? I cannot see any difference between frame rates above 75, but maybe a much higher FPS is still needed before natural motion blur kicks in.

I should write a program which displays a white dot on a black background at 640x480, so the monitor can be set to 120Hz or more, and then display it moving at various speeds at various FPS.
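A minimal GLUT sketch of that test might look like the following; the spin rate and dot size are arbitrary, and vsync (if enabled) will cap the frame rate at the monitor refresh:

```cpp
// Spin a white dot on a black background; watch whether it reads as a smooth
// streak or as a chain of discrete flashes at your refresh rate.
#include <GL/glut.h>
#include <cmath>

const float kRevsPerSec = 2.0f;  // assumed spin rate of the "ball on a string"

void display()
{
    glClear(GL_COLOR_BUFFER_BIT);
    float t     = glutGet(GLUT_ELAPSED_TIME) / 1000.0f; // seconds since startup
    float angle = 2.0f * 3.14159265f * kRevsPerSec * t;
    glPointSize(4.0f);
    glBegin(GL_POINTS);
    glVertex2f(0.5f * std::cos(angle), 0.5f * std::sin(angle)); // orbit in clip space
    glEnd();
    glutSwapBuffers();
}

void idle() { glutPostRedisplay(); } // redraw as fast as the system allows

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutInitWindowSize(640, 480);
    glutCreateWindow("dot test");
    glutDisplayFunc(display);
    glutIdleFunc(idle);
    glutMainLoop();
    return 0;
}
```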

The monitor and graphical FPS would probably have to be incredibly fast (maybe greater than 1000 FPS) before it would blur as much as a ball on a string being spun really fast. Until you get FPS and monitor refresh that fast, your brain will perceive it as a series of still pictures of balls flashing, like a ball on a string spun under a strobe light. Even at 200Hz/200FPS.

Originally posted by Nakoruru:
No, your eyes do not do the job! Otherwise there would never have been any need to simulate motion blur, and blur on film would be a huge problem, not a huge help (24 FPS is rather low; without the motion blur captured on film, movies would look like slide shows). In real life, an object moves continuously. On a screen it moves discretely.

Yes, that’s why you need to simulate motion blur, but not motion trail/ghosting.

Korval –

There are essentially two ways of accomplishing anti-aliasing. Your intuition gives you one approach: the over-sampling method. In this case, you sample a signal at a high frequency and use a reconstruction filter to downsample it to the target frequency. The OpenGL example would be multisampling, where the “higher frequency” is the super-rez you get from the multisample buffer and the reconstruction filter is a combination of the filter kernel, screen and eye response. Same with supersampling.
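In code, the oversampling path is little more than a buffer request plus an enable (a fragment; sample counts and quality are driver-dependent):

```cpp
// Request a multisampled framebuffer via GLUT, then turn on the ARB_multisample
// path; the hardware's resolve step acts as the reconstruction filter.
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_MULTISAMPLE);
// ... create the window and context ...
glEnable(GL_MULTISAMPLE_ARB);
```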

The other way is to guarantee that the signal you generate is band-limited to the target frequency. The OpenGL example is mip-mapping.
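For example, the band-limited path for textures is just the standard mip-mapping setup (width, height, and pixels assumed in scope):

```cpp
// Pre-filter the texture into a mip chain, then minify with trilinear filtering
// so the sampled frequency never exceeds what the chosen mip level can carry.
gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, width, height,
                  GL_RGB, GL_UNSIGNED_BYTE, pixels);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
```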

Sampling multiple frames (the supersampling approach) is the one you seem to advocate, but you need to recognize that, from a signal-processing perspective, “blurring” is also valid, and doesn’t necessarily entail a loss of information. The goal in blurring is to band-limit your temporal activity to eliminate the strobe effect when the frame rate undersamples the animation.

Granted, the simpler and more flexible approach is oversampling, but it is also quite brute-force and likely beyond the capabilities of all but the most current hardware. ATI’s Animusic demo, for example, does a good job of performing non-oversampled motion blur.

My guess is that a good method would use a compromise between the two: you have a small number of oversampled frames (2x/3x) and use geometry blurring in between those frames. You then get a piecewise-linear approximation of motion. Bonus points if each of your frames is jittered in space to give you spatial anti-aliasing at the same time.
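A hedged sketch of that hybrid, again on the accumulation buffer; jitterProjection() and drawSceneWithGeometryBlur() are hypothetical placeholders for a sub-pixel projection offset and a geometry-blurred scene render:

```cpp
// Hybrid temporal AA: a few jittered temporal sub-frames, with geometry blur
// assumed to cover the motion between consecutive sub-frames.
const int kTemporalSamples = 3;  // the 2x/3x oversampling suggested above

void renderHybridFrame(float frameStart, float frameTime)
{
    glClear(GL_ACCUM_BUFFER_BIT);
    for (int i = 0; i < kTemporalSamples; ++i) {
        float t = frameStart + frameTime * i / kTemporalSamples;
        jitterProjection(i);                          // sub-pixel offset -> spatial AA
        drawSceneWithGeometryBlur(t, frameTime / kTemporalSamples);
        glAccum(GL_ACCUM, 1.0f / kTemporalSamples);
    }
    glAccum(GL_RETURN, 1.0f);
}
```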

-Won

Actually, here’s an interesting application of render-to-vertex-array or programmable tessellators.

Automatically generate blur geometry that samples along the path of motion, given each vertex’s position, its various derivatives, and a time scale.
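A CPU-side sketch of the idea, assuming a second-order (position, velocity, acceleration) expansion; a render-to-vertex-array version would move this loop onto the GPU:

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// Emit sample positions along a vertex's motion path over the shutter interval,
// using a second-order Taylor expansion: p(t) = p + v*t + 0.5*a*t^2.
std::vector<Vec3> blurSamples(Vec3 p, Vec3 v, Vec3 a,
                              float shutterTime, int samples)
{
    std::vector<Vec3> out;
    int steps = (samples > 1) ? samples - 1 : 1;  // avoid divide-by-zero
    for (int i = 0; i < samples; ++i) {
        float t = shutterTime * i / steps;        // sample times across the shutter
        out.push_back({ p.x + v.x * t + 0.5f * a.x * t * t,
                        p.y + v.y * t + 0.5f * a.y * t * t,
                        p.z + v.z * t + 0.5f * a.z * t * t });
    }
    return out;
}
```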

Hah…this would probably spell trouble for shadowing and whatnot. At least we’ll be needing more vertices!

-Won