Quality difference between Perspective and Ortho?

Dear all,

I haven’t tried it myself, but I have heard that there is a difference in output texture quality when displaying 2D graphics via an ortho view versus a perspective view.

The articles I have read in the past say that a 2D ortho view yields the most accurate, identical image of the source texture. A perspective view, on the other hand, modifies (resizes and scales) the image to fit the distance of the object from the camera, so the final texture ends up blurry.

Despite the rumored quality disadvantages of perspective view, it looks to me like using it would be the most practical option, especially for easy implementation of parallax scrolling and easy camera movements.

My question is whether it is really true that perspective view decreases the texture quality. Alternatively, is it possible to use perspective view instead of ortho view, to take advantage of its benefits, without quality loss?

What comes to my mind is setting the right distance between the camera and the textured object (sprite) so that the perspective view would not rescale the texture on the target polygon, as in the rough sketch below.
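For example, something along these lines (just a rough, untested sketch with a made-up function name, assuming a symmetric gluPerspective-style frustum):

#include <math.h>

/* Rough sketch (untested): for a perspective projection set up with
 * gluPerspective(fovyDegrees, ...), this is the camera distance at which
 * the visible frustum is exactly viewHeight world units tall. If one
 * world unit equals one pixel, a quad placed at this distance should
 * appear at the same scale as it would in an ortho view. */
double distanceForUnscaledView(double fovyDegrees, double viewHeight)
{
    double halfFovRadians = fovyDegrees * 0.5 * (3.14159265358979323846 / 180.0);
    /* frustum height at distance d is 2 * d * tan(halfFov),
     * so solve 2 * d * tan(halfFov) == viewHeight for d */
    return (viewHeight * 0.5) / tan(halfFovRadians);
}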

This is what things like anisotropic filtering and mipmapped textures are for. The Superbible 5th edition talks about this, and I think the 4th edition does as well. Multisampling also helps a little, but I believe that mipmapped textures are the best way to battle this problem.
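Something like this is the usual setup (a minimal sketch for an already-bound 2D texture; it assumes GL 3.0+ for glGenerateMipmap, and the anisotropy enums come from the EXT_texture_filter_anisotropic extension, so check for that first):

/* Minimal sketch: trilinear mipmapping plus anisotropic filtering
 * for the currently bound 2D texture. */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glGenerateMipmap(GL_TEXTURE_2D);   /* build the mip chain for the texture */

GLfloat maxAniso = 1.0f;
glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &maxAniso);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, maxAniso);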

I think ortho projection effectively uses affine texturing, which means it doesn’t need the perspective divide per pixel when calculating which value in the texture map to look up. Affine texturing is a lot faster than perspective-correct texturing, at least in software.

Otherwise I don’t think there’s anything in it.
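If it helps, here is a toy sketch of what that per-pixel divide means (purely illustrative, not how hardware actually rasterizes): perspective-correct interpolation works on u/w and 1/w and divides per pixel, and with an ortho projection w is 1 everywhere, so it collapses to the affine case anyway:

/* Toy sketch: interpolating a texcoord u across a span between two
 * vertices with clip-space w values w0 and w1; t is the screen-space
 * interpolation factor in [0, 1]. */

/* Affine: interpolate u directly in screen space. */
float affine_u(float u0, float u1, float t)
{
    return u0 + t * (u1 - u0);
}

/* Perspective-correct: interpolate u/w and 1/w, then divide per pixel.
 * With an ortho projection w0 == w1 == 1, so this reduces to affine_u. */
float perspective_u(float u0, float w0, float u1, float w1, float t)
{
    float uOverW   = (u0 / w0)   + t * ((u1 / w1)   - (u0 / w0));
    float oneOverW = (1.0f / w0) + t * ((1.0f / w1) - (1.0f / w0));
    return uOverW / oneOverW;
}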

There’s nothing inherently “special” about perspective or ortho; all that they do is calculate a matrix to multiply vertex positions by. You can calculate the same matrix yourself in software and get the same result, and in all cases the very same multiplication of position by MVP is performed.

So let’s look at what’s happening. In an ortho projection objects onscreen don’t change size for different values of Z, in a perspective projection they do. To map (say) a 256x256 texture onto an object of any arbitrary size some kind of sampling and filtering must be done. This is equally true of an ortho projection as it is of a perspective projection.

So it’s not the projection type (which doesn’t really exist anyway) that matters; it’s the final onscreen size of the object compared to the size of the texture. If the two match, you preserve the original image best.

It just so happens that an orthographic matrix is going to give you the same size - provided your vertex positions and texcoords are chosen appropriately, and provided your glOrtho (or whatever) parameters are chosen appropriately. But it’s perfectly possible to set up an orthographic matrix, or to choose position and texcoord values, that don’t do this too. There’s no special voodoo at work here, it’s all just about parameters to functions and how they come out of a calculation.
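As a concrete illustration of “chosen appropriately” (just a sketch in classic fixed-function style, with hypothetical windowWidth/windowHeight/texWidth/texHeight variables): if the ortho projection maps one unit to one pixel and the quad is sized exactly to the texture, each texel lands on exactly one pixel:

/* Sketch: 1:1 texel-to-pixel mapping with an ortho projection.
 * Window is windowWidth x windowHeight pixels, bound texture is
 * texWidth x texHeight texels. */
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, windowWidth, 0.0, windowHeight, -1.0, 1.0);  /* 1 unit == 1 pixel */

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

glBegin(GL_QUADS);   /* quad sized exactly to the texture */
    glTexCoord2f(0.0f, 0.0f); glVertex2f(0.0f,            0.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex2f((float)texWidth, 0.0f);
    glTexCoord2f(1.0f, 1.0f); glVertex2f((float)texWidth, (float)texHeight);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(0.0f,            (float)texHeight);
glEnd();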

Hi Bcthund,

Thanks for your response. I am just reading the OpenGL Superbible 5th edition, and that book is simply superb.

Hi Mhagain,

Thank you for your very nice explanation. As I was reading your text, two more questions came to mind, and I would kindly ask you to consider them:

  1. Is there a practical and easy way to ensure that the texture size and the target quad polygon are compatible, so that the original texture dimensions and resolution are preserved after mapping?

  2. Based on what you said, can camera movement be implemented as easily in ortho view as in perspective view? Say, if I wish to get a 2D scrolling effect in ortho view?