I have some idea of what a fragment actually could be, but I can't seem to find a source that would confirm whether my thoughts are right (or wrong, for that matter).
What I think is this:
Fragments would be the pixel-sized divisions of each polygon produced by rasterization. In that case, you could have many fragments at the same pixel location.
Following that, if the depth test were enabled, after rasterization it would compare each fragment's depth value and write only the one nearest to the viewer (lowest Z?) to the actual color buffer.
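Here is a minimal C sketch of how I picture that per-fragment depth test working, assuming a simple software-rasterizer model. The names (Fragment, depth_buffer, color_buffer) are mine, not any real API:

```c
typedef struct {
    float r, g, b, a;  /* fragment color */
    float z;           /* fragment depth; smaller = nearer */
    int   x, y;        /* target pixel coordinates */
} Fragment;

void process_fragment(const Fragment *f,
                      float *depth_buffer, float *color_buffer,
                      int width)
{
    int idx = f->y * width + f->x;

    /* Depth test (GL_LESS style): keep the fragment only if it
       is nearer than what is already stored for this pixel. */
    if (f->z < depth_buffer[idx]) {
        depth_buffer[idx] = f->z;
        color_buffer[idx * 4 + 0] = f->r;
        color_buffer[idx * 4 + 1] = f->g;
        color_buffer[idx * 4 + 2] = f->b;
        color_buffer[idx * 4 + 3] = f->a;
    }
    /* Fragments failing the test would just be discarded. */
}
```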
As for (standard) blending, it would weight the colors according to the alpha value.
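For example, with the usual glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) setting, I imagine something like this (again just a sketch with made-up names):

```c
/* "Over" blending: the incoming fragment color is weighted by
   its alpha, the existing pixel color by (1 - alpha). */
void blend_fragment(const float src[4], float dst[4])
{
    float a = src[3];
    dst[0] = src[0] * a + dst[0] * (1.0f - a);  /* red   */
    dst[1] = src[1] * a + dst[1] * (1.0f - a);  /* green */
    dst[2] = src[2] * a + dst[2] * (1.0f - a);  /* blue  */
    dst[3] = a + dst[3] * (1.0f - a);           /* alpha */
}
```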
However, it occurred to me that, if that were the case, a buffer holding all the fragments would have to be dynamically sized, wouldn't it?
Plus, definitions such as Wikipedia's say a fragment is all the data needed to generate a pixel. But the way I see it, many fragments could contribute to a single pixel. The final pixel color would actually be the combination of many fragments (not just one!), managed according to user-defined settings.
Could someone clear up this confusing story a bit?