More comments to hakanardo:
> If I understand the concept of graph factories right, it does not allow for different graphs to be combined and optimized together? This is where I believe the real benefit of graph-level optimizations lies.
In OpenVX 1.0, there is no way to combine graphs together.
> How about introducing a vxSubGraphNode that allows the user to create custom nodes by encapsulating graphs into such nodes. They can then be used as any other nodes when creating higher-level functionalities. From an implementation point of view, those subgraph nodes would, at the beginning of the verification step, be “inlined” into a single non-hierarchical graph which now only contains standard nodes.
I think that this sub-graph concept is very interesting indeed. But while it looks simple conceptually, I think this is actually a highly complex subject. In effect, the graph can be modified after verification (some changes require a re-verification, others don’t). For instance, a node can be removed by the application, or have some of its parameters changed. It’s not necessarily easy to maintain consistency in your ‘expanded’ graph. But in any case, I agree this is an interesting subject to investigate further.
> The vxCornersGraphFactory example in “Framework: Graph Parameters” uses a dimof function which should be defined somewhere. Also, the first use of this function (inside a declaration) should probably be removed.
dimof is not standardized, so the specification needs to be updated, perhaps simply by providing the code of dimof in the examples you mention.
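For reference, dimof is usually defined as the classic element-count macro; a minimal definition the spec could include would look like this:

```c
#include <stddef.h>

/* Number of elements in a statically sized array.
 * Only valid for true arrays, not for pointers. */
#define dimof(arr) (sizeof(arr) / sizeof((arr)[0]))
```

Note that this only works on real arrays; applied to a pointer it silently yields a wrong result, which is one reason implementations sometimes use more elaborate, type-checked variants.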
> Is it the responsibility of the user to age the delay objects by calling vxAgeDelay? Is there any way to declare that a delay object should be aged after each call to vxProcessGraph? This does not fit well with the encapsulation ideas of the graph factories. I.e. you would probably want to hide Delay nodes within those subgraphs, but if that means the user of those subgraphs would need to call vxAgeDelay on each of them, the encapsulation is broken in an unpleasant way.
In OpenVX 1.0, a delay object is a ‘real’ object. It can be used in a graph, but its scope is not limited to the graph (just like a regular image). The object can be used outside of the graph (for instance, in another graph). For this reason, it is the responsibility of the user to age the delay.
Nevertheless, I agree that in many cases the user will want the delay to be aged automatically after graph execution, so this is also a subject to investigate further.
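Conceptually, aging a delay just rotates a ring of object slots: every slot becomes one step “older”, and the oldest slot is recycled as the newest. A toy plain-C model of that behavior (this is an illustration, not the OpenVX API):

```c
#define SLOTS 3

/* Toy model of a delay: a ring of object handles.
 * toy_age_delay() shifts every slot one step older;
 * the oldest handle wraps around to slot 0. */
typedef struct { int handle[SLOTS]; } toy_delay;

static void toy_age_delay(toy_delay *d) {
    int oldest = d->handle[SLOTS - 1];
    for (int i = SLOTS - 1; i > 0; --i)
        d->handle[i] = d->handle[i - 1];
    d->handle[0] = oldest;
}
```

In a real application the analogous call is vxAgeDelay on the vx_delay object, typically issued once after each vxProcessGraph.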
> Would it be a good idea to use Node Callbacks that call vxAgeDelay in those cases (there are some warnings about this being inefficient)?
I don’t think that NodeCallback is the right mechanism for what you want since, as a user, you have little control over when the node callback is actually called. Nor do I think there is a good way to do what you want in OpenVX 1.0. I would therefore recommend aging delays manually for the time being.
> To use the distinction between vx_scalar and vx_int32 to specify where changing a parameter will enforce a new verification feels quite hackish. Wouldn’t it be better to specify this more explicitly? I.e. add an attribute that tells which case it is and add a column to the argument table in the specs with its value. This way this information would be available in the same way as the parameter direction is. This would also allow the specs to let the implementation decide whether a reverification is needed in cases where that is appropriate.
Using vx_int32 instead of vx_scalar is not only about preventing two nodes from being connected through this parameter. It is also simpler for the user to create a node with a vx_int32 argument rather than a vx_scalar (no scalar object to create, initialize and release). I think your comment is relevant: things would certainly be more explicit with an additional parameter property.
> In many cases where pointers are passed to the vx functions, there is also a size parameter specifying how much data could be written. However, this is not the case for e.g. vxCreateScalar, vxAccessScalarValue and vxCommitScalarValue. Is there some logic to when the size is needed and when it is not?
Usually, a vx_size parameter is given in addition to the pointer in vxQuery and vxSetAttribute functions. These functions are generic and need to work with attributes of very different sizes. Since the attribute enum name does not explicitly tell which size is expected in most cases, the vx_size parameter acts as a sanity check to avoid memory corruption.
For other functions, like the access/commit functions, there is indeed no such vx_size ‘sanity check’ parameter. Since a scalar object can have different sizes, it may be safer to have a vx_size parameter there as well. I think it’s a question of finding the right tradeoff between simplicity, safety and performance: more parameters usually mean more work for the user and lower performance.
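The sanity-check pattern behind the vxQuery/vxSetAttribute size parameter can be sketched in plain C (hypothetical names, not the OpenVX API): the generic function refuses to write anything when the caller’s declared buffer size doesn’t match the attribute’s actual size, instead of silently corrupting memory.

```c
#include <string.h>
#include <stddef.h>

typedef enum { QUERY_OK, QUERY_BAD_SIZE, QUERY_BAD_ATTR } query_status;
enum { ATTR_WIDTH, ATTR_NAME };

typedef struct { unsigned width; char name[16]; } toy_object;

/* Generic query: the caller-supplied size must match the
 * attribute's real size exactly, otherwise nothing is written. */
static query_status toy_query(const toy_object *obj, int attr,
                              void *ptr, size_t size) {
    switch (attr) {
    case ATTR_WIDTH:
        if (size != sizeof(obj->width)) return QUERY_BAD_SIZE;
        memcpy(ptr, &obj->width, size);
        return QUERY_OK;
    case ATTR_NAME:
        if (size != sizeof(obj->name)) return QUERY_BAD_SIZE;
        memcpy(ptr, obj->name, size);
        return QUERY_OK;
    default:
        return QUERY_BAD_ATTR;
    }
}
```

This mirrors the usual calling convention, e.g. vxQueryImage(image, VX_IMAGE_ATTRIBUTE_WIDTH, &width, sizeof(width)): passing sizeof the destination variable lets the implementation catch mismatched attribute/buffer combinations at the call site.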