Can anyone confirm that the diagram below is an accurate representation of the sequence when disabling a port? I created this diagram based on a section of the spec, assuming an out-of-context implementation.



I think the bEnabled field of the port definition structure should be changed in the context of the original call to disable the port. That way there is no race condition if the client queries the port immediately after sending the original disable command.

Otherwise that looks plausible.

I did wonder about that… Setting PortDef.bEnabled in the caller's context sort of goes against the grain of queueing the commands to be actioned by a separate task. Certainly, this isn't clear from the spec.

To avoid the race condition in a system without tunneling, the Client can wait for all buffers to be returned to the supplier (the Client in this case) before disabling the port. However, I don't know whether that works with tunneling; it's not something I have looked into yet.