OpenVG 1.1 problem

I’m still looking over the spec, but…

In Section 6 “Rendering Quality and Antialiasing”:
What the heck are you thinking!?!?

On first glance, the requirements laid out in 3b require the use of multi-sampling. This, of course, produces different behaviour than what OpenVG 1.0 defined as proper (which I suspect is why you amended it), and goes against the substantial freedom the 1.0 spec gave implementations in how to perform anti-aliasing.

The big problem, though, is that as the spec is written, multi-sampling will not be sufficient to comply with the requirements, and correct behaviour has still not been defined!

Consider for a moment the following scenario:

Grey background (color: 0x808080FF). Two blue squares right next to each other (one on the left, one on the right), with anti-aliasing on. One big red rectangle covering the bottom half of both squares. The boundary pixel between the two squares is covered half by each square. Draw order:

  • Left square (src_ovr blending)
  • Rectangle (additive blending)
  • Right square (src blending)

If drawn under VG 1.0/1.0.1, the boundary pixel is:

  • 50% is covered blue (0x0000FFFF), 50% remains grey - color: 0x4040C0FF
  • the red channel is saturated by the rectangle: 0xFF40C0FF
  • 0x0000FFFF is blended onto 0xFF40C0FF at 50% coverage, producing roughly 0x8020E0FF
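To make the arithmetic above concrete, here is a small Python sketch (the helper names are mine, not OpenVG API; for an opaque source, both src and src_over at 50% coverage reduce to the same per-channel lerp):

```python
# Coverage-weighted blending for the boundary pixel, step by step.
# Helper names are illustrative, not part of the OpenVG API.

def lerp(src, dst, cov):
    """Blend src over dst at the given coverage, per channel (0-255)."""
    return tuple(round(cov * s + (1 - cov) * d) for s, d in zip(src, dst))

def additive(src, dst):
    """Additive blend with per-channel saturation to 255."""
    return tuple(min(s + d, 255) for s, d in zip(src, dst))

GREY = (0x80, 0x80, 0x80)
BLUE = (0x00, 0x00, 0xFF)
RED  = (0xFF, 0x00, 0x00)

p = lerp(BLUE, GREY, 0.5)   # left square, src_over, 50% coverage: (0x40, 0x40, 0xC0)
p = additive(RED, p)        # rectangle, additive: red saturates, (0xFF, 0x40, 0xC0)
p = lerp(BLUE, p, 0.5)      # right square, src, 50% coverage: (0x80, 0x20, 0xE0)
```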

i.e. we get the little seam that this spec means to eliminate…

In multisampling, we’d end up with half the pixel being 0x4040C0FF and the other half being 0x0000FFFF. The displayed (resolved) color would thus be: 0x2020E0FF
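Taking those two half-pixel colors at face value, the resolve is a straight per-channel average (a sketch of a 2-sample resolve, not any particular implementation):

```python
# Average the two halves of the pixel (2x multisample resolve).
def resolve(a, b):
    return tuple(round((x + y) / 2) for x, y in zip(a, b))

left_half  = (0x40, 0x40, 0xC0)   # blue blended over grey
right_half = (0x00, 0x00, 0xFF)   # pure blue
mixed = resolve(left_half, right_half)   # (0x20, 0x20, 0xE0)
```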

In VG 1.1, one of two things can happen, neither of which makes any sense:

  • left square drawn (100% contribution). Color is 0x0000FFFF
  • rect blended additively: color is 0xFF00FFFF
  • right square: no contribution

or:
  • left square drawn (0% contribution). Color is thus still the grey background (0x808080FF)
  • rect is blended additively: color is now 0xFF8080FF
  • right square drawn (100% contribution, src mode). Color: 0x0000FFFF

So, depending on which interpretation the spec intends, the pixel is either blue or magenta - in every case a different color than under VG 1.0 (0x8020E0FF) or under multi-sampling (0x2020E0FF).
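Taking the two readings at face value, both end states can be checked with a few lines of Python (a sketch of my interpretation of the new wording, not the spec’s own pseudocode; additive saturates per channel):

```python
# Sketch of the two possible VG 1.1 readings described above.
# Names are illustrative; GREY is the background from the scenario.

def additive(src, dst):
    """Additive blend with per-channel saturation to 255."""
    return tuple(min(s + d, 255) for s, d in zip(src, dst))

GREY = (0x80, 0x80, 0x80)
BLUE = (0x00, 0x00, 0xFF)
RED  = (0xFF, 0x00, 0x00)

# Reading 1: the boundary pixel is credited 100% to the left square.
p1 = BLUE                # left square drawn, full contribution
p1 = additive(RED, p1)   # rectangle: (0xFF, 0x00, 0xFF) -- magenta
                         # right square contributes nothing

# Reading 2: the boundary pixel is credited 100% to the right square.
p2 = GREY                # left square contributes nothing
p2 = additive(RED, p2)   # rectangle over grey: (0xFF, 0x80, 0x80)
p2 = BLUE                # right square, src mode, full contribution -- blue
```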

In short:

My recommendation is to leave well enough alone: do not amend that particular section of the spec. As written, the amendment not only attempts to force multi-sample buffers onto embedded applications (unsuccessfully, due to the small differences shown above), but also opens a hole in the spec.

Amendment #1:
Also consider what happens if the two edges are not exactly the same (say the low-order bit of a coordinate differs). The behaviour according to the new spec falls back to what OpenVG 1.0.1 did, while a multi-sample approach would still produce almost the same result as if the edges were identical… so many problems with this…

Simple example (2 blue abutting blocks which each cover 1/2 of the boundary pixel):
Under OpenVG 1.1 - the boundary pixel would STILL have the partially transparent seam, since the geometry is not exact.
Under multi-sampling - there would be no seam, as the pixel is close enough to fully covered.
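A quick numeric sketch of why the seam survives under coverage-based AA when the edges differ by even a tiny amount (the 0.499 coverage figure is an assumption for illustration):

```python
# Two abutting edges whose coverages don't quite sum to the full pixel.
def lerp(src, dst, cov):
    """Blend src over dst at the given coverage, per channel (0-255)."""
    return tuple(round(cov * s + (1 - cov) * d) for s, d in zip(src, dst))

GREY = (0x80, 0x80, 0x80)
BLUE = (0x00, 0x00, 0xFF)

p = lerp(BLUE, GREY, 0.5)    # left block: (0x40, 0x40, 0xC0)
p = lerp(BLUE, p, 0.499)     # right block, src mode, almost-matching edge
# p is not pure blue: grey still bleeds through, so the seam remains.
```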

Amendment #2 - Section 16.1.2:
“It is anticipated that a wide variety of antialiasing approaches will be used in the …”
Didn’t this spec just enforce multi-sampling? How is another method allowed? Or was section 6 only meant to apply to surfaces with multi-sampling?

Amendment #3:
I see this was done so as to support Adobe… sigh
So why not just require the Adobe way when multi-sampling is present, and leave anti-aliasing alone when it isn’t? We’re talking about embedded devices here, and what you’re asking for is pretty much impossible without multi-sampling, as the various geometries that need to be re-combined could be megabytes of data away from each other. Handling this correctly would require every single command to be buffered until swapbuffers is called, and then a search for this scenario (in fact, even the multi-sample approach needs to do this in the case covered by Amendment #1).

I think you’re misinterpreting this section. The constraints listed only mention estimating coverage as either 0 or 1; therefore they can only apply to non-antialiased rendering.

Multisampling is mentioned separately in section 2.9

Nothing would make me happier than being mistaken - I hope you’re right.
If this constraint applies only when the rendering mode is set to VG_RENDERING_QUALITY_NONANTIALIASED, then great. No problems then, and I’m sorry for panicking.

But if it applies when the rendering mode is anything else, this introduces a huge problem which can be summed up as follows:
You don’t know the correct way to render the first path to be drawn, until you know of all paths that need to be drawn.
This requirement is ludicrous - especially for an embedded system. It is something multi-sampling can cure (almost, anyway - see my first post), but without it there is no reasonable way this can be implemented (and implementing multi-sampling in something like a software rasterizer is an immediate four-fold minimum performance hit).

I’ll keep looking at the spec and see if I can find anything more on this subject.