This has been a mystery to me for quite some time:
http://basic4gl.wikispaces.com/2D+Drawing+in+OpenGL
I understand why one should specify point and line coordinates at pixel centers, and the confusion that can arise otherwise. But why would I want to render filled primitives at pixel edges? Why doesn't the GL risk painting the wrong pixel in that case as well? If a vertex sits on a fragment edge, there are at least two candidate fragments that could be painted.
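To make the ambiguity I mean concrete, here is a small sketch (plain Python, no GL; the pixel model is my own simplification, not the spec's) that treats pixel (i, j) as the square [i, i+1) x [j, j+1) with its sample point at the center (i + 0.5, j + 0.5). A coordinate that lands exactly on a pixel edge is equidistant from two candidate centers:

```python
import math

def candidate_pixels(x, y):
    """Return the pixel(s) whose center is nearest to (x, y).

    Pixel (i, j) has its center at (i + 0.5, j + 0.5). A coordinate
    that falls exactly on a pixel edge ties between two centers.
    """
    def nearest(c):
        lo = math.floor(c - 0.5)   # candidate index below
        hi = lo + 1                # candidate index above
        d_lo = abs(c - (lo + 0.5))
        d_hi = abs(c - (hi + 0.5))
        if d_lo < d_hi:
            return [lo]
        if d_hi < d_lo:
            return [hi]
        return [lo, hi]            # exact tie: coordinate lies on a pixel edge
    return [(i, j) for i in nearest(x) for j in nearest(y)]

print(candidate_pixels(2.5, 3.5))  # at a pixel center: [(2, 3)], unambiguous
print(candidate_pixels(2.0, 3.5))  # on a vertical edge: [(1, 3), (2, 3)]
```

A point at a half-integer position maps to exactly one pixel, while an integer position ties between neighbors — which is the situation I'd expect for filled primitives specified at pixel edges, too.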
Also, does the magical 0.375 constant have anything to do with rounding errors?
AFAIK, the best approach would be to simply use float coordinates, unless rounding errors become an issue (because of the modelview matrix). Certainly, if the matrix had a translation component, I could imagine such errors arising.
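My current reading of the 0.375 (this is an interpretation, not something the linked page states outright) is that it buys a safety margin against exactly these rounding errors: shifting integer coordinates by 0.375 puts them strictly inside a pixel, 0.375 from the lower edge and 0.125 from the half-integer center where line sampling ties occur, so small FP noise can't flip the rasterized pixel. A sketch of that arithmetic:

```python
import math

def rasterized_column(x, offset=0.375):
    """Pixel column hit by a vertex at coordinate x after the translate.

    With pixel i covering [i, i + 1), the coordinate x + offset lands in
    column floor(x + offset). The 0.375 shift keeps an integer x a full
    0.375 away from the lower pixel edge and 0.125 away from the pixel
    center at .5, so jitter smaller than 0.125 cannot cross either
    critical boundary.
    """
    return math.floor(x + offset)

# A vertex at x = 7 with small FP noise still lands in column 7:
for noise in (-0.1, 0.0, 0.1):
    assert rasterized_column(7 + noise) == 7

# Without the offset, the same noise flips between columns 6 and 7:
print(rasterized_column(7 - 0.1, offset=0.0))  # 6
print(rasterized_column(7 + 0.1, offset=0.0))  # 7
```

So the constant itself looks arbitrary only in that any value comfortably between 0 and 0.5 (with margin to both) would serve the same purpose.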
Also:
http://msdn.microsoft.com/en-us/library/ms537007%28VS.85%29.aspx
The parameters width and height are the dimensions of the viewport. Given this projection matrix, place polygon vertices and pixel image positions at integer coordinates to rasterize predictably. For example, glRecti(0, 0, 1, 1) reliably fills the lower-left pixel of the viewport, and glRasterPos2i(0, 0) reliably positions an unzoomed image at the lower-left pixel of the viewport. However, point vertices, line vertices, and bitmap positions should be placed at half-integer locations. For example, a line drawn from (x1, 0.5) to (x2, 0.5) will be reliably rendered along the bottom row of pixels in the viewport, and a point drawn at (0.5, 0.5) will reliably fill the same pixel as glRecti(0, 0, 1, 1).
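As far as I can tell, the fill rule resolves this by sampling each pixel at its center and treating shared edges as half-open, so a center exactly on an edge belongs to exactly one primitive (GL's actual tie-breaking rule differs in detail, but the effect is the same). Under that simplified convention, glRecti(0, 0, 1, 1) covers only the one pixel whose center, (0.5, 0.5), lies inside:

```python
def covered_pixels(x0, y0, x1, y1, width, height):
    """Pixels whose centers fall inside the rect [x0, x1) x [y0, y1).

    Each pixel is sampled at its center (i + 0.5, j + 0.5); the
    half-open edge rule excludes centers on the right/top edges, so
    no pixel is ever claimed by two abutting rectangles.
    """
    return [(i, j)
            for j in range(height) for i in range(width)
            if x0 <= i + 0.5 < x1 and y0 <= j + 0.5 < y1]

# glRecti(0, 0, 1, 1): only the lower-left center (0.5, 0.5) is inside.
print(covered_pixels(0, 0, 1, 1, 4, 4))  # [(0, 0)]

# Two rects abutting along x = 2 never both claim a pixel:
left  = covered_pixels(0, 0, 2, 1, 4, 4)
right = covered_pixels(2, 0, 4, 1, 4, 4)
assert set(left).isdisjoint(right)
```

With integer rect edges, no pixel center ever lands exactly on an edge, which is presumably why the quoted advice calls this case predictable.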
But why is there no ambiguity when rasterizing the rect? And how do I decide where to put the vertices of 2D triangles?