This is with reference to the famous article “A Pixel Is Not a Little Square” by Alvy Ray Smith. I tried reading it a couple of times, but I got lost somewhere after the second page.
Can anyone explain what exactly he means in the article? Links with more explanation would also be helpful.
PS: here is the link to the article:
So then the elements of a resolved AA surface are pixels.
And nVidia’s and AMD’s special AA modes are a non-box filtering/integration of them: they also use samples from nearby pixels.
He basically means:
what you see in that square pixel isn’t the true colour that should be there, but just a single sample, which could be totally wrong. To get the best approximation of the colour over that square/rectangular area, you need many samples, ideally not laid out on a regular square grid.
I think this predates the introduction of AA in GPUs, not to mention jittered sample positions and the later move to integrate using samples from nearby pixels as well.
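To make that concrete, here’s a toy Python sketch (the disc “scene” and all names are mine, not from the article). A single centre sample can report a pixel as fully outside an edge even when a big chunk of the pixel’s area is covered; averaging many jittered samples over the footprint gets much closer to the true area integral:

```python
import random

def coverage(x, y):
    # Toy analytic "scene": 1.0 inside a disc of radius 0.6 centred at
    # the origin, 0.0 outside. Stands in for the true image signal.
    return 1.0 if x * x + y * y <= 0.36 else 0.0

def centre_sample(px, py):
    # One sample at the pixel centre: the naive "a pixel is a little
    # square with one colour" view.
    return coverage(px + 0.5, py + 0.5)

def jittered_supersample(px, py, n=4, seed=0):
    # n*n jittered samples inside the pixel footprint, box-filtered
    # (averaged): approximates the integral of the signal over the
    # pixel's area.
    rng = random.Random(seed)
    total = 0.0
    for i in range(n):
        for j in range(n):
            sx = px + (i + rng.random()) / n
            sy = py + (j + rng.random()) / n
            total += coverage(sx, sy)
    return total / (n * n)
```

For the pixel at (0, 0), the centre sample lands just outside the disc and reports 0.0, while roughly 28% of the pixel’s area is actually covered; the jittered average reflects that partial coverage.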
So, first it’s a plea to understand the cells of images for what they are, and then a plea to start doing smarter things during rendering (which we nowadays do).
“which we nowadays do”
Still glaringly missing from current hardware texture interpolation are more advanced kernels. A simple bicubic interpolation still has to be done by hand …
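For anyone curious what “done by hand” looks like, here’s a minimal sketch of a separable bicubic lookup using a Catmull-Rom kernel (one common choice; the function names and the list-of-lists texture layout are just my assumptions for the example):

```python
def catmull_rom(p0, p1, p2, p3, t):
    # 1-D Catmull-Rom cubic through four equally spaced samples;
    # interpolates between p1 (t = 0) and p2 (t = 1).
    return 0.5 * ((2.0 * p1)
                  + (p2 - p0) * t
                  + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t * t
                  + (3.0 * p1 - p0 - 3.0 * p2 + p3) * t * t * t)

def bicubic_sample(texture, x, y):
    # Separable bicubic lookup on a 2-D list-of-lists "texture",
    # clamping at the borders. x, y are in texel coordinates.
    h, w = len(texture), len(texture[0])
    ix, iy = int(x), int(y)
    fx, fy = x - ix, y - iy
    def texel(tx, ty):
        tx = min(max(tx, 0), w - 1)
        ty = min(max(ty, 0), h - 1)
        return texture[ty][tx]
    # Interpolate four rows horizontally, then once vertically.
    rows = [catmull_rom(texel(ix - 1, iy + j), texel(ix, iy + j),
                        texel(ix + 1, iy + j), texel(ix + 2, iy + j), fx)
            for j in (-1, 0, 1, 2)]
    return catmull_rom(rows[0], rows[1], rows[2], rows[3], fy)
```

On hardware you’d instead emulate this with four bilinear fetches plus weights in the shader, but the idea is the same: sixteen texels feed one output value.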
Thanks for the replies. So he just means that the colour at a pixel could be the result of samples from many neighbouring pixels, and not just one pixel?
From many underlying samples, not neighbouring pixels. The neighbouring pixels stuff I mentioned means “specific nearby samples from neighbouring pixels, in addition to all samples from the current pixel”.
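A small Python sketch of that kind of resolve, under my own assumptions about the data layout (samples stored per pixel with subpixel offsets; a tent kernel whose radius can extend past the pixel’s own footprint, so it also picks up nearby samples from neighbouring pixels):

```python
def tent_weight(dx, dy, radius=1.0):
    # Separable tent (triangle) kernel centred on the pixel centre.
    wx = max(0.0, 1.0 - abs(dx) / radius)
    wy = max(0.0, 1.0 - abs(dy) / radius)
    return wx * wy

def resolve_pixel(samples, px, py, radius=1.0):
    # samples: dict mapping (pixel_x, pixel_y) -> list of
    # (offset_x, offset_y, colour), offsets in [0, 1) within that pixel.
    # With radius > 1.0 the kernel reaches into the neighbouring
    # pixels' samples; with a narrow kernel it degenerates to a plain
    # per-pixel box/tent resolve.
    cx, cy = px + 0.5, py + 0.5   # centre of the pixel being resolved
    total_w = total_c = 0.0
    for nx in (px - 1, px, px + 1):
        for ny in (py - 1, py, py + 1):
            for ox, oy, colour in samples.get((nx, ny), []):
                w = tent_weight(nx + ox - cx, ny + oy - cy, radius)
                total_w += w
                total_c += w * colour
    return total_c / total_w if total_w > 0.0 else 0.0
```

With radius = 1.0, a sample sitting at a neighbouring pixel’s centre gets weight zero, so only the current pixel’s samples contribute; widen the radius and those nearby samples start blending in with smaller weights.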