Alignment of virtual environment to real environment

Hi all,
I am looking for a way to load a custom mesh that I saved using the Device Portal. I loaded the mesh into Unity and then created different objects and placed them around the mesh.
The Unity project is meant to be a short AR experience for HoloLens 2, with interactable objects appearing in specific locations tied to the environment in which the application is launched.
My goal is to have the custom mesh loaded at a specific position each time the application starts, aligning the virtual environment with the real environment.

I first tried using Azure Spatial Anchors, but that didn't seem suited for the task; I might be wrong (I am very new to this kind of stuff). Then I read about “WorldAnchorStore”, and this seems to be what I want. Can any of you guide me in the right direction?
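From what I have read so far, the WorldAnchorStore flow would look roughly like the sketch below (untested; note this API belongs to the legacy Windows XR plugin and is not available under the newer OpenXR plugin, and the anchor id here is just a placeholder):

```csharp
using UnityEngine;
using UnityEngine.XR.WSA;
using UnityEngine.XR.WSA.Persistence;

// Rough sketch of local anchor persistence with the legacy WorldAnchorStore.
public class EnvironmentAnchor : MonoBehaviour
{
    const string AnchorId = "environment-root";   // hypothetical id

    WorldAnchorStore store;

    void Start()
    {
        // The store is loaded asynchronously on device.
        WorldAnchorStore.GetAsync(OnStoreLoaded);
    }

    void OnStoreLoaded(WorldAnchorStore loadedStore)
    {
        store = loadedStore;
        // Try to restore a previously saved anchor onto this object.
        if (store.Load(AnchorId, gameObject) == null)
        {
            Debug.Log("No saved anchor yet; place the object, then call SaveAnchor().");
        }
    }

    // Call this after the user has positioned the object.
    public void SaveAnchor()
    {
        var anchor = gameObject.AddComponent<WorldAnchor>();
        store.Delete(AnchorId);          // Save() fails if the id already exists
        store.Save(AnchorId, anchor);
    }
}
```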

Here is a picture of the custom mesh for reference:

[image: scanned custom mesh of the room]


An update on my attempt to solve this:
I have decided to proceed using Azure. I will create a prefab that contains all my Unity objects, place it manually in an Azure session the first time I use the HoloLens, save the anchor to disk, and then load it from disk whenever the application is launched. I will limit rotation of the prefab to the y-axis and restrict movement to a single-handed manipulation event, to prevent moving the prefab while rotating it. I might also add buttons for fine adjustment of the prefab's position. This is a hacky solution, so if any of you see an alternative, it would be much appreciated if you would share it. Thank you.
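For reference, here is roughly what I have in mind, based on my reading of the Azure Spatial Anchors Unity SDK (untested). What actually gets persisted locally is the anchor's identifier, since the anchor itself lives in Azure. A SpatialAnchorManager is assumed to be configured in the scene, and "prefabRoot" and the PlayerPrefs key are placeholder names of my own:

```csharp
using System;
using Microsoft.Azure.SpatialAnchors;
using Microsoft.Azure.SpatialAnchors.Unity;
using UnityEngine;

// Save the cloud anchor's id locally, then re-locate the anchor on launch.
public class PrefabAnchorPersistence : MonoBehaviour
{
    const string AnchorIdKey = "prefab-anchor-id";   // hypothetical storage key

    public SpatialAnchorManager anchorManager;       // assign in the Inspector
    public GameObject prefabRoot;                    // prefab holding all the objects

    async void Start()
    {
        await anchorManager.StartSessionAsync();

        string savedId = PlayerPrefs.GetString(AnchorIdKey, string.Empty);
        if (!string.IsNullOrEmpty(savedId))
        {
            // A previous run saved an anchor id: watch for that anchor.
            anchorManager.AnchorLocated += OnAnchorLocated;
            var criteria = new AnchorLocateCriteria { Identifiers = new[] { savedId } };
            anchorManager.Session.CreateWatcher(criteria);
        }
    }

    // Call this once the prefab has been positioned by hand.
    public async void SaveAnchorAsync()
    {
        var cna = prefabRoot.AddComponent<CloudNativeAnchor>();
        await cna.NativeToCloud();                   // turn the local pose into a cloud anchor
        cna.CloudAnchor.Expiration = DateTimeOffset.Now.AddDays(90);
        await anchorManager.CreateAnchorAsync(cna.CloudAnchor);
        PlayerPrefs.SetString(AnchorIdKey, cna.CloudAnchor.Identifier);
    }

    void OnAnchorLocated(object sender, AnchorLocatedEventArgs args)
    {
        if (args.Status != LocateAnchorStatus.Located) return;

        // The event can fire off the Unity main thread; marshal back first.
        UnityDispatcher.InvokeOnAppThread(() =>
        {
            Pose pose = args.Anchor.GetPose();
            prefabRoot.transform.SetPositionAndRotation(pose.position, pose.rotation);
        });
    }
}
```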

I agree with you. Using Azure Spatial Anchors (ASA) in Unity will likely give you better compatibility across multiple HL2 devices and allow you to share the location across users.

The workflow you mentioned is typical: someone curates the environment by placing anchors and aligning your model to it, and later the user can recall the anchor's location and automatically enter the experience you saved before.

Looking at your Unity model, it seems to have some obvious semantic meaning. For example, you might know the “gravity” direction in your model, and the Unity scene origin's Y axis is aligned with gravity, so you can rely on that alignment and limit the curator's action to a single rotation around the Y axis. That's a great idea.
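As a rough, untested sketch of that idea against the MRTK 2.x API (the component is assumed to be added to your prefab's root object):

```csharp
using Microsoft.MixedReality.Toolkit.UI;
using Microsoft.MixedReality.Toolkit.Utilities;
using UnityEngine;

// Restrict the curator's manipulation to one-handed moves and yaw-only rotation.
public class CuratorManipulationSetup : MonoBehaviour
{
    void Awake()
    {
        // One-handed manipulation only, so a two-hand gesture can't
        // rotate or scale the prefab freely.
        var manipulator = gameObject.AddComponent<ObjectManipulator>();
        manipulator.ManipulationType = ManipulationHandFlags.OneHanded;

        // Block rotation around X and Z, leaving only Y (gravity-aligned yaw).
        var rotationConstraint = gameObject.AddComponent<RotationAxisConstraint>();
        rotationConstraint.ConstraintOnRotation = AxisFlags.XAxis | AxisFlags.ZAxis;

        // Add NearInteractionGrabbable as well if you want near-hand grabs.
    }
}
```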
As another example, you might want the “floor” in the model to align with the real room's floor. You can use the ARPlaneManager to find planes with the “floor” label and use one to decide the height of your model, providing a reasonable initial guess instead of forcing the user to adjust the height.
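A hedged sketch of that floor-height guess with AR Foundation 4.x, assuming your scene has an ARPlaneManager and the platform reports plane classifications; "modelRoot" is a placeholder for your mesh's root transform:

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

// Snap the model's height to the first detected floor plane.
public class FloorHeightGuess : MonoBehaviour
{
    public ARPlaneManager planeManager;   // assign in the Inspector
    public Transform modelRoot;           // root of the custom mesh/prefab

    void OnEnable()  => planeManager.planesChanged += OnPlanesChanged;
    void OnDisable() => planeManager.planesChanged -= OnPlanesChanged;

    void OnPlanesChanged(ARPlanesChangedEventArgs args)
    {
        foreach (ARPlane plane in args.added)
        {
            if (plane.classification == PlaneClassification.Floor)
            {
                // Use the floor plane's height as the initial guess; the
                // curator can still fine-tune afterwards.
                Vector3 p = modelRoot.position;
                modelRoot.position = new Vector3(p.x, plane.transform.position.y, p.z);
                break;
            }
        }
    }
}
```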


Thanks for the answer and confirmation.
I have now stumbled across another issue: I want the MRTK interaction to ignore some colliders or specific objects in my scene (like a filter/layer). I am trying to interact with an independent object placed inside a larger collider, but the pointer hits that collider first and blocks interaction with the object inside it.

I think adding a layer that ignores raycasts fixed it.
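For anyone who finds this later, here is a minimal sketch of what I mean, using Unity's built-in “Ignore Raycast” layer (pointers that raycast against Unity's default layers should skip it); "outerCollider" is a placeholder for the blocking collider's GameObject:

```csharp
using UnityEngine;

// Move the blocking collider onto the built-in "Ignore Raycast" layer so
// pointers pass through it and hit the object inside.
public class PassThroughCollider : MonoBehaviour
{
    public GameObject outerCollider;   // the collider that was blocking the pointer

    void Awake()
    {
        outerCollider.layer = LayerMask.NameToLayer("Ignore Raycast");
    }
}
```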


For the MRTK questions, I suggest you post an issue on the MRTK team's GitHub repo; they are fairly responsive to new issues there.

microsoft/MixedRealityToolkit-Unity: Mixed Reality Toolkit (MRTK) provides a set of components and features to accelerate cross-platform MR app development in Unity. (github.com)

