Generate wrist/elbow/shoulder etc. locations? Extension?

OpenXR has joints for hands, but it seems there should be a software system to complement this that would simply recommend a skeleton for the rest of the arm given the hand position/orientation and the head position, without involving tracking (unless that’s a thing).
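
To make the idea concrete: here is a minimal sketch (in no way an existing API) of the kind of heuristic such a system might use, guessing the shoulder from the head position and placing the elbow with two-bone IK and a downward “pole” hint. Every function name, offset, and bone length below is an invented placeholder.

```cpp
// Hypothetical sketch: estimate shoulder and elbow positions from the
// tracked head and hand, with no arm tracking involved. All offsets and
// bone lengths are invented placeholders.
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3  operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static float length(Vec3 a) { return std::sqrt(dot(a, a)); }
static Vec3  normalize(Vec3 a) { float l = length(a); return l > 0 ? a * (1.0f / l) : Vec3{0, 0, 0}; }

struct ArmEstimate { Vec3 shoulder, elbow; };

// headPos: tracked head position; handPos: tracked wrist position.
// side: +1 for the right arm, -1 for the left.
ArmEstimate EstimateArm(Vec3 headPos, Vec3 handPos, float side)
{
    const float upperArm = 0.30f;  // assumed bone lengths, in meters
    const float foreArm  = 0.28f;

    // Guess the shoulder as a fixed offset below and beside the head.
    Vec3 shoulder = headPos + Vec3{0.18f * side, -0.22f, 0.0f};

    Vec3 toHand = handPos - shoulder;
    float d = std::clamp(length(toHand), 0.01f, upperArm + foreArm - 0.001f);
    Vec3 dir = normalize(toHand);

    // Law of cosines: distance along the shoulder->hand line to the
    // elbow's projection, then the perpendicular offset to the elbow.
    float a = (d * d + upperArm * upperArm - foreArm * foreArm) / (2.0f * d);
    float h = std::sqrt(std::max(0.0f, upperArm * upperArm - a * a));

    // "Pole" hint: bias the elbow downward and slightly outward, a common
    // heuristic so the arm doesn't bend upward unnaturally.
    Vec3 pole = normalize(Vec3{0.3f * side, -1.0f, 0.1f});
    Vec3 bend = normalize(pole - dir * dot(pole, dir));  // perpendicular to dir

    return {shoulder, shoulder + dir * a + bend * h};
}
```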

I mean, this seems like a best-practices thing OpenXR ought to cover given the rest of its scope, since it’s not obvious how to (best) divine these things, e.g. to animate a body-double character model.

The desire for a skeleton extension to complement the hand tracking extension is something I share. I’m not sure a full inverse-kinematics model of the arm based on just the hand pose is necessarily in scope for OpenXR; it seems more like a creative decision or an engine decision rather than something the OpenXR runtime would have any specific knowledge about. But I’ll pass this request along. Feature requests are best submitted to Issues · KhronosGroup/OpenXR-Docs · GitHub, since those issues are regularly synchronized to Khronos systems and brought up for the working group to address.
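
For contrast, this is roughly what the existing XR_EXT_hand_tracking extension already gives an application: hand joints from the palm and wrist out to the fingertips, located each frame with xrLocateHandJointsEXT, and nothing above the wrist. A minimal sketch; error handling and extension enablement are omitted:

```cpp
// Sketch of existing XR_EXT_hand_tracking usage: the runtime reports hand
// joints only (palm, wrist, fingers); anything up the arm is left to the
// application. Error handling, extension enablement, and tracker cleanup
// (xrDestroyHandTrackerEXT) are omitted for brevity.
#include <openxr/openxr.h>

void LocateWrist(XrInstance instance, XrSession session,
                 XrSpace baseSpace, XrTime time)
{
    // Extension functions must be loaded via xrGetInstanceProcAddr.
    PFN_xrCreateHandTrackerEXT pfnCreateHandTracker = nullptr;
    PFN_xrLocateHandJointsEXT  pfnLocateHandJoints  = nullptr;
    xrGetInstanceProcAddr(instance, "xrCreateHandTrackerEXT",
                          reinterpret_cast<PFN_xrVoidFunction*>(&pfnCreateHandTracker));
    xrGetInstanceProcAddr(instance, "xrLocateHandJointsEXT",
                          reinterpret_cast<PFN_xrVoidFunction*>(&pfnLocateHandJoints));

    // Normally created once at startup, not per frame.
    XrHandTrackerCreateInfoEXT createInfo{XR_TYPE_HAND_TRACKER_CREATE_INFO_EXT};
    createInfo.hand = XR_HAND_LEFT_EXT;
    createInfo.handJointSet = XR_HAND_JOINT_SET_DEFAULT_EXT;
    XrHandTrackerEXT tracker = XR_NULL_HANDLE;
    pfnCreateHandTracker(session, &createInfo, &tracker);

    XrHandJointLocationEXT jointLocations[XR_HAND_JOINT_COUNT_EXT];
    XrHandJointLocationsEXT locations{XR_TYPE_HAND_JOINT_LOCATIONS_EXT};
    locations.jointCount = XR_HAND_JOINT_COUNT_EXT;
    locations.jointLocations = jointLocations;

    XrHandJointsLocateInfoEXT locateInfo{XR_TYPE_HAND_JOINTS_LOCATE_INFO_EXT};
    locateInfo.baseSpace = baseSpace;
    locateInfo.time = time;

    if (pfnLocateHandJoints(tracker, &locateInfo, &locations) == XR_SUCCESS &&
        locations.isActive) {
        // The wrist pose is where the hand data stops; it would be the
        // natural input to any arm-estimation layer.
        const XrPosef& wrist = jointLocations[XR_HAND_JOINT_WRIST_EXT].pose;
        (void)wrist;
    }
}
```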


I can understand that (I disagree, though Khronos tends to make good decisions), but I feel like this is where “standards” need to step in, because people’s bodies shouldn’t (ideally) be a matter of “creative engine design”, or at least there needs to be a non-creative option developed by authorities with access to lots of data, testing, and medical considerations. (Edited: or at least the sum total of the game industry, if nothing else.) That’s my thinking, anyway. It seems like an ethical domain, like architecture/construction or medicine, to a degree.

Edited: Of course it doesn’t have to be OpenXR, but there should be resources; as long as there is any skeleton API there, it seems like a logical fit.

For sure. I can’t mention anything about what the WG is working on (note that there is more or less no single Khronos decision-making body; each WG makes its own decisions, subject to some confirmation by the board of promoters), but I am free to talk about what I would like to see, and I would like to see a good skeleton API akin to the hand tracking one. I’m not sure there will be interest in the inverse-kinematics part of the equation, but maybe there will be; that is probably more likely if there is a good reference or permissive open-source implementation to build on. Accessibility concerns would also need to be addressed: accounting for differences between people, designing an API that requires an app to work flexibly and deal with e.g. fewer limbs, and so on, whether the skeleton is presented as an avatar or used for interaction. Fortunately the group has a variety of backgrounds and experiences, and while we’re not infallible, we do try our best to account for things.
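
To illustrate the flexibility point, here is a purely hypothetical sketch, not a real or proposed extension, of an API shape that reports per-joint validity so that a missing joint or limb is an ordinary code path rather than an error; all names are invented:

```cpp
// Purely hypothetical sketch (NOT a real or proposed OpenXR extension):
// an API shape that reports per-joint validity, so a joint that a given
// user's body doesn't have is an ordinary, expected case, not an error.
#include <openxr/openxr.h>  // only for XrPosef

enum class BodyJoint { Head, Chest, LeftShoulder, LeftElbow, LeftWrist,
                       RightShoulder, RightElbow, RightWrist, Count };

struct BodyJointLocation {
    bool    isValid;  // the runtime may report any joint as unavailable
    XrPosef pose;     // only meaningful when isValid is true
};

struct BodySkeleton {
    BodyJointLocation joints[static_cast<int>(BodyJoint::Count)];
};

// An app written against such an API branches on validity per joint, so
// presenting a body with fewer limbs needs no special-case code path.
void DrawArm(const BodySkeleton& body,
             BodyJoint shoulder, BodyJoint elbow, BodyJoint wrist)
{
    const BodyJointLocation& s = body.joints[static_cast<int>(shoulder)];
    const BodyJointLocation& e = body.joints[static_cast<int>(elbow)];
    const BodyJointLocation& w = body.joints[static_cast<int>(wrist)];
    if (s.isValid && e.isValid && w.isValid) {
        // ...draw the full arm...
    } else if (w.isValid) {
        // ...fall back to a hand-only presentation...
    }
    // Otherwise draw nothing for this arm.
}
```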

It might be a case where a pure software (quasi-official) support API would make sense, to take the load off of core OpenXR, which deals with device support. Who knows. There’s a long history of that kind of thing, i.e. GLU I guess, or D3DX for Direct3D. In OpenXR’s case it seems more critical because of the intimacy of VR devices.
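
For instance, such a layer might expose something like the following (hypothetical “xru” names throughout, in the spirit of GLU sitting on top of GL):

```cpp
// Hypothetical helper-library surface (invented "xru" names): a pure
// software layer over OpenXR types, in the spirit of GLU over GL or
// D3DX over Direct3D, leaving core OpenXR to deal with devices.
#include <openxr/openxr.h>

struct XruArmSkeleton {
    XrPosef shoulder;
    XrPosef elbow;
    XrPosef wrist;
};

// A pure function: no runtime handles, no device access, so it could be
// shipped, versioned, and tested independently of any OpenXR runtime.
XruArmSkeleton xruEstimateArm(const XrPosef& headPose,
                              const XrPosef& wristPose,
                              bool rightArm)
{
    // A real implementation would run a heuristic like the two-bone IK
    // sketch earlier in the thread; this stub just passes the wrist
    // through so the interface sketch stays compilable.
    XruArmSkeleton arm{};
    arm.wrist = wristPose;
    (void)headPose;
    (void)rightArm;
    return arm;
}
```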

EDITED: While I’m here I want to add something: I realize most VR apps implement hand visuals as cut off at the wrist. I find that a little bit disturbing, but it removes the need for elbows, etc. However, in my product I don’t feel that’s a good fit: the arm is very important to it, it has a history before VR and works on a monitor too, and being first-person, the arm is about the only evidence of the character in the game. Without hand controls it’s easy to use stock animation for the arm, but it’s unacceptable to sever the arm in VR mode.

Yeah, I agree there is for sure a need for a common helper library for arms.

It should be noted that this history is not exactly comforting.

D3DX was pretty much never used in a professional capacity. Hobbyist programmers or people learning D3D would use it, but anybody writing professional code would mostly avoid it.

D3DX was also rendered more or less irrelevant because it doesn’t support D3D12. That is, the API moved on, but the library was never updated to keep up.

Both of these maladies afflicted GLU. As fixed-function OpenGL died, so too did GLU. And outside of things like gluPerspective, it was rarely used by professionals even when it was a viable library.

I support the idea of such a library, but I’m not sure it’s something that should be so tightly associated with OpenXR.

D3DX I think is fine; it just got replaced. I’m sure a lot of people used it for text (or even line drawing), for example. GLU always seemed a little questionable to me; it may have had very useful features, but whenever I find an app using GLU, I remove it and find that nothing vital is lost. Still, it’s the original example, I suppose. It’s just a matter of principle to keep the software-only stuff in a single module and even use it as a test bed. With Direct3D, the driver was eventually split so that Microsoft implemented the client-facing side and vendors could focus on their part (OpenGL may have always been this way?); that’s an alternative, I suppose, as a way to package them together without an extra library. But with software modules it can be more à la carte too.
