Projection Caves, OpenXR spec

Hi,

Who is leading the effort on the view configuration for projection CAVEs?

I am working with single-projector CAVEs using fisheye lenses.

I am particularly interested in one needed feature: 2D warping of the output image so that it matches the optical fiducials of the CAVE.

The output image stream must be calibrated to the projector and the CAVE: a unique warping specific to the dimensions and shape of the room.

Essentially, this means texture-mapping a square fisheye image onto a rectangular image that is fed directly to the projector.
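A minimal sketch of the idea, assuming an angular ("equidistant") fisheye in which radius in the image is proportional to the angle off the lens axis; the function name and parameters here are invented for illustration, not taken from any existing API:

```python
import math

def direction_to_fisheye_uv(dx, dy, dz, fov_deg=180.0):
    """Map a unit view direction (dx, dy, dz), with +z along the lens
    axis, to normalized (u, v) in a square angular-fisheye image.
    Returns None when the direction lies outside the lens FOV.
    Illustrative only: a real CAVE warp would also fold in the
    measured projector/lens/dome calibration per pixel."""
    theta = math.acos(max(-1.0, min(1.0, dz)))   # angle off the optical axis
    half_fov = math.radians(fov_deg) / 2.0
    if theta > half_fov:
        return None                              # outside the image circle
    r = theta / half_fov                         # angular fisheye: r is proportional to theta
    phi = math.atan2(dy, dx)                     # azimuth around the axis
    return (0.5 + 0.5 * r * math.cos(phi),
            0.5 + 0.5 * r * math.sin(phi))
```

The warp itself would then run in the other direction: for each projector pixel, the calibration gives the dome direction that pixel illuminates, and the lookup above tells you where in the square fisheye texture to sample.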

Matthew

There are no WG members publicly working on any CAVE-related extensions, though several of us are interested. You’re welcome to start drafting an extension (or opening an issue for discussion) on the OpenXR-Docs GitHub; I’d be happy to let the interested parties (besides myself :smiley: ) know.

In terms of the output warping: that is squarely in the realm of the runtime; the application does not need any awareness of it. You might consider starting with the open-source Monado runtime, which I help lead development of: https://monado.freedesktop.org/ I think we should be able to handle the warping without any API changes; the only change we might need is allowing the FOV to change each frame (due to tracking).
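The per-frame FOV change is the standard off-axis projection used for tracked flat screens. A sketch under simplifying assumptions (one axis-aligned wall in the z = 0 plane, eye in front of it; the function name is invented, but the sign convention matches OpenXR's XrFovf, where angleLeft and angleDown are negative):

```python
import math

def offaxis_fov(eye, wall_left, wall_right, wall_bottom, wall_top):
    """Per-frame asymmetric FOV half-angles for a tracked eye at
    (x, y, d), d > 0, looking at a flat wall in the z = 0 plane.
    Returns (angle_left, angle_right, angle_down, angle_up) in radians,
    following the XrFovf sign convention (left/down negative).
    A sketch only; a real runtime would derive this from its own
    screen-surface description and tracker input every frame."""
    ex, ey, d = eye
    assert d > 0.0, "eye must be in front of the wall"
    return (math.atan((wall_left - ex) / d),
            math.atan((wall_right - ex) / d),
            math.atan((wall_bottom - ey) / d),
            math.atan((wall_top - ey) / d))
```

As the tracked head moves, these four angles change every frame, which is exactly the "FOV changing per frame" behaviour mentioned above.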

In developing OpenXR CAVE projection, I would recommend taking a look at Extensible 3D (X3D), Part 1: Architecture and base components, clause 42, Texture projector component.

Other parts of the X3D spec might also be useful in constructing the OpenXR spec.

Texture mapping is particularly important for projection in CAVEs because it visually calibrates the final physical stage before the user (i.e., projector, lenses, and CAVE shape).

I will open an issue on GitHub.

Is this a reasonable link for what you referenced? Extensible 3D (X3D), ISO/IEC 19775-1:202x, clause 42, Texture projector component. That is the kind of thing that sounds useful for runtime implementation, particularly a generic implementation, but OpenXR is unlikely to expose it to the application/user, so it falls outside the scope of the spec itself.

In terms of application API (the OpenXR extension required):
My current best thought is that we create a view configuration that is basically a “bag of n views”, where the number of views is variable. Alternatively, we may have a “bag of n stereo pairs”, where each pair has two views that are closely enough aligned that things like jointly culling the two views and/or using a multiview rendering extension might be useful.
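To make the "bag of n views" idea concrete, here is a hypothetical data model sketched in Python rather than the C API; all names (View, StereoPair, ViewBag) are invented for illustration, and nothing here is part of the OpenXR specification:

```python
from dataclasses import dataclass, field

@dataclass
class View:
    pose: tuple    # position + orientation of the virtual camera for this surface
    fov: tuple     # (angle_left, angle_right, angle_down, angle_up), radians
    width: int     # recommended render-target size in pixels
    height: int

@dataclass
class StereoPair:
    # Two closely aligned views, so a runtime/application could jointly
    # cull them or render them with a multiview extension.
    left: View
    right: View

@dataclass
class ViewBag:
    # A flat "bag of n views"; unlike the fixed MONO/STEREO view
    # configurations, n is variable and only known at runtime,
    # e.g. one view per projector or per CAVE wall.
    views: list = field(default_factory=list)

    def view_count(self):
        return len(self.views)
```

The stereo-pair variant would simply hold a list of StereoPair instead of a flat list of View; the trade-off is whether the pairing structure is worth exposing to applications versus keeping the configuration maximally generic.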

In terms of implementation (because I’d love to have Monado work on CAVEs):
If you were to describe your CAVE in a way that would let me (a runtime developer on Monado) render to it, how would you describe it?

  • Would it be using an X3D file? (I didn’t know it could describe that; pretty nifty. Does any existing VR runtime use a file like that as a config?)
  • My previous experience with CAVEs is mostly VR Juggler based, which has an XML format for describing view planes (no distortion!) or tracked displays (e.g., head-worn, though I think there too without distortion). e.g. vrjuggler/C4.displays.closed.jconf at master · vrjuggler/vrjuggler · GitHub
  • Based on my more recent work with OSVR, OpenXR, and Monado, I’d generally/generically say that I’d want to know a virtual display plane location/size (of which the real display is a subset, I think) and a distortion map/mesh of (u, v) pairs. The distortion map would be just as we use for distorting HMD images, and might actually be a full-size array: the same dimensions as the scanned-out display content. Our existing distortion pipeline computes each pixel of the scanned-out image by sampling the distortion map at the same coordinates to find the (u, v) pair to use when sampling the flattened, composited layers (basically, the texture that the application rendered and submitted).
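The pipeline step described in the last bullet can be sketched as follows: a dependency-free toy using nearest-neighbour sampling on nested lists, where a real compositor would do the same lookup on the GPU with bilinear filtering (the function name is invented):

```python
def apply_distortion(distortion_map, composited, width, height):
    """Produce the scanned-out image: for each output pixel, read the
    (u, v) pair stored at the same coordinates in a full-size
    distortion map, then sample the composited (application-submitted)
    texture at that normalized location. Sketch only."""
    src_h = len(composited)
    src_w = len(composited[0])
    out = []
    for y in range(height):
        row = []
        for x in range(width):
            u, v = distortion_map[y][x]           # same coords as output pixel
            sx = min(src_w - 1, int(u * src_w))   # normalized uv -> texel
            sy = min(src_h - 1, int(v * src_h))
            row.append(composited[sy][sx])
        out.append(row)
    return out
```

An identity distortion map (each pixel looks up its own texel centre) reproduces the input unchanged; a CAVE calibration would instead bake the projector/lens/surface warp into the stored (u, v) pairs.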

References:

  1. The link is what the CAVE group at Virginia Tech gave me when I asked about standards. They are active in developing X3D and WebVR, and I would consider them the most advanced thinking on the subject. For OpenXR it provides some insight into practical existing implementations.

  2. I am working with single-projector dome CAVEs using fisheye lenses, monoscopic and stereoscopic. VT is working with 10 projectors in a cube CAVE, stereoscopic. The entertainment world of dance events in large domes uses 5-10 projectors, monoscopic. Museum planetariums can be 1-10 projectors, usually monoscopic.

  3. I generally see several markets: academic, museums, professional visualization, dance events, and home theaters. My interest is home theaters, or similar rentable event locations. My view is that the home-theater fisheye projector needs to be in the back, as in regular home theaters, with directional seating; I could see two fisheye projectors in the back giving better coverage. Most other CAVE designs disperse the projectors throughout the CAVE (interior or exterior). Single projectors only work in domes smaller than 10 m; 5-7 m is more typical. For dance and large-dome events there are about five VJ and three professional planetarium software packages that do the slicing and calibration.

  4. Most single-projector systems use Paul Bourke’s 20-year-old warped-fisheye strategy; see the section on domes at paulbourke.net. CAVEs trace back to 1990s UIUC/NCSA NSF research grants. Paul was the first to get CAVEs to work with a single projector on a curved surface. If CAVEs are going to reach a large market, his method or something similar (i.e., a single projector) is probably the means.

  5. Particularly for single projectors, the texture map is the means to integrate and calibrate the projector, lens design, and CAVE shape. Most multi-projector systems tend to use more conventional lenses, producing effectively planar images; this may change. Planetarium manufacturers are creating projector-less systems using curved seamless LED displays; the cost is astronomical, but they have great contrast and resolution.

  6. “How would you describe it?” For the home theater of the future, you need to show conventional content (e.g., Netflix) and dome movies; this may be the bulk of the content. For VR, the experience of whoever is wearing the headset is unchanged; the CAVE becomes the immersive tool for creating stadiums in which to watch VR, so it is just streaming 360 video, maybe with some interactivity to change the viewing camera (e.g., referee, bird’s-eye, player perspective). For other uses and needs, the academic CAVE community and professional flight-simulator designers should be involved. Planetarium manufacturers such as E&S are developing VR, and getting their input into OpenXR is critical. Professional trade associations such as IMERSA and Digital Cinema Initiatives should be involved as well.

  7. The relationships between projectors, lenses, and CAVE shapes are the wild cards, and most likely they can be managed by texture mapping. Of these, the lenses are at the essence of the needed texture-map transformation, and they have the greatest potential for future optical design changes. Optical modeling, simulation, and manufacturing are creating lenses that would have been inconceivable or impossible 10-20 years ago. Aspheric freeform lenses are creating new possibilities in VR headsets, and they will most likely resolve focus issues on curved CAVE surfaces, as well as other Zernike aberrations. Some fisheyes are approaching 270 degrees of FOV; others are getting much better MTF. The important thing to keep in mind is that the texture map is unique to the optics of a location: the aberrations of the lenses and the performance of the projector. The texture map provides the best calibration that can be expected for the uniqueness of a CAVE. CAVEs for the foreseeable future are one-off constructions, not mass-produced optics as in VR headsets.

  8. “The distortion map would be just as we use for distorting HMD images, and e.g. might actually be a full-size array: same dimensions as the scanned-out display content.” Yes, similar if not the same. The placement in the pipeline may differ because of the potential for multiple projectors, the use of HDMI, and stereoscopy.

Hi All,
As a CAVE hardware provider this is of great interest to us, but we have no clue how to start with OpenXR. Do we provide our CAVE specs (we have multiple, and the system is extensible so customers can run custom setups)? Do we write software to interact with OpenXR? How would someone run an OpenXR application in our environment?
Would love a call, or a link to a simple ‘how to’ guide for enabling multiple displays, so that we can jump on this ASAP!
Kind Regards
Max

Hi Max, what is your website?

Matthew

Hi @Lubaantune

For some reason I cannot post a link, the forum keeps blocking me.
If you search for FULmax you will find us :slight_smile:

Regards, Max

You’d want to build a runtime for OpenXR, which would probably start from Monado (our open-source one), and you’d also need an OpenXR extension to add a form factor and a view configuration. Several of us are interested, so if you come to the GitHub or (better) join Khronos, we’d like to get this going for you. I haven’t used a CAVE since 2014, but I am still nostalgic…