[Unity] how to get full tracking data from a controller

using UnityEngine;
using UnityEngine.InputSystem;

public class ControllerPositionReader : MonoBehaviour {
    public InputActionProperty positionProperty; // must reference an enabled Vector3 action
    public Vector3 Position { get; private set; } = Vector3.zero;
    private void Update() {
        // Polled once per rendered frame, so the sample rate is capped at the frame rate.
        Position = positionProperty.action.ReadValue<Vector3>();
    }
}

I am able to access position data from the controller like this, but Update only gives me at most 144 samples per second (capped by the headset's refresh rate).

I'd love a function that simply appends every position sample to a list whenever the value changes.

My goal is to capture all the data points and draw a line with them to analyze the tracking system's speed and accuracy.
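For reference, a minimal sketch of one way to do this with the Input System: subscribe to the action's performed callback, which fires on each reported value change for a Value-type action. The class and field names here are illustrative, and note that state events may still arrive at the runtime's update cadence, so this captures every reported change rather than the tracker's internal sample rate.

using System.Collections.Generic;
using UnityEngine;
using UnityEngine.InputSystem;

// Sketch: record a sample on each value change the Input System reports.
// Assumes positionProperty is a Value-type action bound to a Vector3 control.
public class PositionRecorder : MonoBehaviour {
    public InputActionProperty positionProperty;
    public LineRenderer line; // optional: assign to visualize the recorded path

    private readonly List<Vector3> points = new List<Vector3>();

    private void OnEnable() {
        positionProperty.action.performed += OnPositionChanged;
        positionProperty.action.Enable();
    }

    private void OnDisable() {
        positionProperty.action.performed -= OnPositionChanged;
    }

    private void OnPositionChanged(InputAction.CallbackContext ctx) {
        points.Add(ctx.ReadValue<Vector3>());
        if (line != null) {
            // Append the newest point to the line.
            line.positionCount = points.Count;
            line.SetPosition(points.Count - 1, points[points.Count - 1]);
        }
    }
}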

I would be interested in this too, along with any pointers to research that has been done on VR controller accuracy. I'm interested in high-speed motion tracking and have noticed a tendency for the tracking to overcompensate on the position of a controller when it is swung quickly. I see this on both Vive and Quest trackers, so I am wondering whether this happens within OpenXR or is built into the controller firmware. Ultimately, since I know the constraints of the motion being tracked, I'm wondering if I can automatically correct for the overcompensation I see.

I noticed the same thing. If the controller is moved along an arcing path (like the swing of an arm during a throw), the tracking starts to predict a wider arc once the controller's speed exceeds 8-9 m/s.

It feels like an automatic correction, and the fix would be to get rawer, less filtered data.
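For what it's worth, a hedged sketch of how one might check when a swing enters that speed regime, given timestamped position samples such as those recorded above (the type and names are illustrative, not from any particular API):

using System.Collections.Generic;
using UnityEngine;

// Sketch: estimate controller speed from timestamped position samples by
// finite differences, so fast swings can be flagged when analyzing a log.
public static class SpeedEstimator {
    // samples: (time in seconds, position) pairs, ordered by time.
    public static IEnumerable<(double time, float speed)> Speeds(
            IReadOnlyList<(double time, Vector3 pos)> samples) {
        for (int i = 1; i < samples.Count; i++) {
            float dt = (float)(samples[i].time - samples[i - 1].time);
            if (dt <= 0f) continue; // skip duplicate or out-of-order timestamps
            yield return (samples[i].time,
                Vector3.Distance(samples[i].pos, samples[i - 1].pos) / dt);
        }
    }
}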

I suspect it's unlikely you'll be able to access that kind of information from proprietary runtimes without working closely with the runtime vendor. There might be ways you could use the API better (such as choosing timestamps carefully), but fundamentally, if you put a tracker designed for a hand on a bat, you're likely to see some mismatch in motion tracking, because the vendor most likely builds in assumptions about the kinds of motion to expect. My favorite sample link for this type of thing is here: Valve Updated SteamVR Tracking Because 'Beat Saber' Players Were Too Fast

The only way OpenXR plays into this is in the API for getting a pose, which requires a time. The choice of timestamp is important: if you want the pose for rendering purposes, use the estimated display time. If you want it for interaction purposes, you can use the timestamp of another event (a button state change, for instance) if applicable; otherwise the expected frame time is likewise probably your best bet. (If you have a fixed-rate physics simulation you might use a timestamp from that instead, but in any case you should have timestamps flowing through your whole system; there is intentionally no simple “now” in OpenXR.)
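At the Unity level (this thread's code is Unity), a rough analogue of tying a pose sample to another event's timestamp might look like the sketch below. This is not the raw OpenXR pose query (xrLocateSpace), and the class and field names are illustrative:

using UnityEngine;
using UnityEngine.InputSystem;

// Sketch: sample the pose inside a button event's callback so the reading is
// associated with that event's timestamp rather than an ambiguous "now".
public class EventTimedSampler : MonoBehaviour {
    public InputActionProperty positionProperty; // Vector3 pose position action
    public InputActionProperty triggerProperty;  // button action

    private void OnEnable() {
        triggerProperty.action.performed += OnTriggerPressed;
        triggerProperty.action.Enable();
        positionProperty.action.Enable();
    }

    private void OnDisable() {
        triggerProperty.action.performed -= OnTriggerPressed;
    }

    private void OnTriggerPressed(InputAction.CallbackContext ctx) {
        // ctx.time is in seconds on the same timeline as Time.realtimeSinceStartup.
        Vector3 posAtEvent = positionProperty.action.ReadValue<Vector3>();
        Debug.Log($"trigger at t={ctx.time:F4}s, position={posAtEvent}");
    }
}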

In terms of motion-model mismatch, it might be useful for the tracker vendor to provide some way (whether through OpenXR or an outside application akin to binding-setup tools) to indicate, or at least choose among, motion-model options for a generic tracker. They might also offer advice on tracker placement relative to the tracked object: a good rule of thumb is probably that the closer to the object's center of gravity, the better.

Ryan, thanks for your thoughtful replies; they are helpful. I will think on this some more.

Also, now that I have read your original post: there is almost certainly not a specific “measurement”, as you describe it, in your tracking system. As far as I'm aware (having built a few), tracking systems tend to be built from multiple streams of measurement data, with different rates, different latencies, and so on. There might be a “measurement we won't go back and revise due to incoming data”, but there isn't necessarily a single point in time when a “measurement” is made: tracking data you receive from OpenXR is a historical or model-based pose for a given point in time (where is that in your code? It's a mandatory argument. Is this Unity, and is it just assuming you mean “the next frame time”?). If you ask for the pose for a given (future or near-past) time twice in succession, you might (and probably will) get two different answers, as new data updates the estimate the tracking system can make.

If you're just looking to log data for behavior measurements, once per frame at the estimated frame time is likely fine, if that rate is good enough for your needs. You could always spawn a thread and ask for pose data in that thread at twice the rate, or something similar. Just be sure to use a consistent time offset in that case: use the frame time and the halfway point to the next frame time, for instance.
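A minimal once-per-frame logging sketch along these lines (file path and column layout are illustrative; assumes Unity 2020.2+ for Time.realtimeSinceStartupAsDouble). Note that Unity's input APIs are meant to be called from the main thread, so the separate-thread variant applies more to querying a native OpenXR session directly:

using System.Globalization;
using System.IO;
using UnityEngine;
using UnityEngine.InputSystem;

// Sketch: log one timestamped position sample per frame to a CSV for
// offline analysis of tracking speed and accuracy.
public class PoseCsvLogger : MonoBehaviour {
    public InputActionProperty positionProperty;
    private StreamWriter writer;

    private void Start() {
        writer = new StreamWriter(Path.Combine(Application.persistentDataPath, "pose_log.csv"));
        writer.WriteLine("t,x,y,z");
    }

    private void Update() {
        Vector3 p = positionProperty.action.ReadValue<Vector3>();
        writer.WriteLine(string.Format(CultureInfo.InvariantCulture,
            "{0:F6},{1:F5},{2:F5},{3:F5}",
            Time.realtimeSinceStartupAsDouble, p.x, p.y, p.z));
    }

    private void OnDestroy() {
        writer?.Dispose();
    }
}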