I am rendering 2D graphics with vertex arrays on Android (OpenGL ES 1.x).

Some of the data consists of lines with logically zero width, but I want them to show up as 1 pixel wide. I give them a width of 1/65536, which is well below pixel size (and is the smallest positive value you can represent in FIXED mode on Android).
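For reference, GL ES 1.x FIXED values are 16.16 fixed point, so 1/65536 is a raw value of 1, the smallest positive width expressible. A minimal conversion sketch (the helper names `toFixed`/`fromFixed` are my own, not part of the GL API):

```java
public class GlFixed {
    // GLfixed is 16.16 fixed point: 16 integer bits, 16 fractional bits.
    public static final int ONE = 1 << 16; // 1.0 in fixed point

    // Convert a float to GLfixed (truncating toward zero).
    public static int toFixed(float f) {
        return (int) (f * ONE);
    }

    // Convert a GLfixed raw value back to float.
    public static float fromFixed(int x) {
        return x / (float) ONE;
    }

    public static void main(String[] args) {
        System.out.println(toFixed(1.0f) == ONE);        // 1.0 -> 65536
        System.out.println(fromFixed(1) == 1.0f / 65536); // raw 1 -> 1/65536
    }
}
```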

Experimentation shows that if the lines are diagonal, they render correctly (as far as I can tell). However, if the lines are horizontal, they simply do not show up. This implies a rasterization mechanism that is non-obvious (to me).

So here’s the question: how does rendering of sub-pixel-sized features work? Is it library- or even system-dependent?

Is there a way I can use these ultra-thin lines and always get them to draw, or will I have to use a thickness that depends on the scale transformation so that the lines always show but stay 1 pixel wide?
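The fallback I have in mind, for concreteness: since my geometry is scaled by the model-view transform, I would derive the line thickness in world units from the current scale so it always maps to about one pixel on screen. A sketch of that idea (the helper name and parameter are mine):

```java
public class LineWidthHelper {
    // Given the uniform scale of the current transform, expressed as
    // pixels per world unit, return a thickness in world units that
    // rasterizes to roughly one pixel on screen.
    public static float onePixelWidth(float pixelsPerWorldUnit) {
        // 1 pixel on screen corresponds to 1/scale world units.
        return 1.0f / pixelsPerWorldUnit;
    }

    public static void main(String[] args) {
        // Zoomed in 4x: a one-pixel line is 0.25 world units thick.
        System.out.println(onePixelWidth(4.0f)); // 0.25
        // Zoomed out to 0.5x: it must be 2 world units thick.
        System.out.println(onePixelWidth(0.5f)); // 2.0
    }
}
```

The downside, and the reason I am asking, is that this couples the geometry to the zoom level, so the widths must be recomputed every time the scale changes.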

In a related matter: I have triangle strips where, at some scales, the triangles are well below pixel size. In some cases most of the triangles are empty (transparent, alpha 0, if that makes sense). It would be a computational burden to regenerate the data with larger triangles (essentially doing most of the work of rasterizing it myself).

What is the algorithm that determines the color of the pixel in these cases?
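My current mental model, which may be wrong and is part of what I am asking about, is that rasterization without antialiasing is point-sampled: a pixel is filled only if its sample point (the pixel center) falls inside the primitive. Under that model, a sub-pixel-thin horizontal bar that happens to lie between two rows of pixel centers covers no samples and disappears, while a diagonal one still crosses some centers. A toy illustration of the model (not the actual GL rasterizer):

```java
public class CoverageSketch {
    // Count pixel centers covered by an axis-aligned bar spanning
    // y in [yMin, yMax) across the full width, on a grid whose pixel
    // centers sit at (i + 0.5, j + 0.5).
    public static int coveredPixels(int width, int height,
                                    float yMin, float yMax) {
        int covered = 0;
        for (int j = 0; j < height; j++) {
            float centerY = j + 0.5f;
            if (centerY >= yMin && centerY < yMax) {
                covered += width; // every center in this row is inside
            }
        }
        return covered;
    }

    public static void main(String[] args) {
        // Thin bar straddling a row of pixel centers: visible.
        System.out.println(coveredPixels(8, 8, 3.4f, 3.6f)); // 8
        // Equally thin bar between two rows of centers: invisible.
        System.out.println(coveredPixels(8, 8, 3.9f, 4.1f)); // 0
    }
}
```

If that model is roughly right, it would also explain the triangle strips: a triangle that covers no pixel center contributes nothing, and whichever triangle does contain the center wins the pixel outright.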

Thanks in advance.