realtime perspective anamorphic distortion...

I am working on the foundations of a concept that would allow an observer to view a virtual environment not on a 2D plane but within a 3D object.

The concept in essence is very simple. To explain the idea easily, the following is a possible user scenario:

Within a space is a white cube. Near the cube is a projector set up to project imagery over the cube. Connected to the projector is a computer, which is also connected to two tracking cameras set up to track the observer’s eyes: one camera tracks the vertical position of the eyes and the other tracks the horizontal position.

The system is designed to project over the physical white cube another virtual cube of the exact same dimensions, so as to overlay it perfectly. There is an example of this on my website here:

In the videos the illusion looks good from only one position: that of the projector itself. If the viewer changes position, the imagery becomes oblique and the effect is destroyed.

The idea is to have the dimensions of the imagery transform in realtime according to the shifting position of the observer, so that the projected imagery always appears in the correct proportions. This way the observer can view the virtual environment projected to appear within the cube from any given position within the space. For the observer witnessing this effect, an illusory sense of depth within the cube is created, making the virtual environment appear to be physically inside the cube.

I have reached a stumbling block in the development of this concept, as I don’t have the necessary programming skills to distort the imagery correctly.

I am wondering if anyone can give me an idea of the best way to approach this? As many of you are already working with 3D environments, can you visualize how the imagery would distort? Any help would be really appreciated. Thanks.

The best keyword that describes your setup is CAVE.
background article on CAVE setup
CAVE Quake, etc.

The distortion you need seems to be called “off-center projection” or “off-axis projection”.

Search the following page for the “correct method” to get a better grasp of it:

There seem to be quite a lot of open source libraries to do this work.

There is also a library that does exactly what you want, but it’s quite expensive:

Open source libs:

virtual mass, as an aside, I expect you’ll have serious brightness issues from one projector on the more oblique surfaces as well. Your real cube’s material will need to be very rough to scatter light from all angles.

I bring this up because, as people move around the cube away from the projector, it will only get worse. One thing you might want to imagine is building an armature that allows a viewer to move the projector in a sphere, so that it stays near their real POV. [edited for clarity] That would also solve your off-axis projection issue with only minimal programming; that, or use more than one projector…

In that case, you are indeed talking about an inverted CAVE [edited for clarity: each projector aligned with a real cube face], which would need 3-4 projectors to allow full viewing.

If you are using a simple cube as the physical surface you could calibrate your projectors to it. This would allow you to use one projector projecting onto 3 sides and another projecting onto the other 2, which should be enough for your setup (assuming that the 6th side is on the bottom).

If you use a completely white Lambertian surface, the result should work quite well…

The correct rendering can be realized with off-axis-projection as ZbuffeR mentioned…

Cave rendering is trivial. Simply use glFrustum to define the correct frustum to match the viewer’s relationship with each display surface.
Having seen the video on his site, though, I’m sure a CAVE is not what the poster is asking for.

He needs distortion correction to compensate for the distortion of the projection on the cube regardless of the viewer and projector locations, which is a bit more complex than a simple frustum adjustment for a tracked viewpoint.

To implement distortion correction you need to correct for the projection back to the viewer. So you need to render the scene from the viewer’s position, project that onto the 3D display geometry (your cube), then back-project that into the display projector. Obviously you need to understand the spatial relationships of all the elements. This has been done and there are even libraries to do it, but none specific to your task.

I filed this on related ideas a while ago just by way of illustration.

All other answers in this thread are simply incorrect. I’ve done this: it is NOT CAVE rendering, it doesn’t need multiple renders, and it can’t be done with simple off-axis projection. You have a rather advanced graphics programming problem, which I wouldn’t trust most graphics programmers to implement.

I would do this by modelling the 3D geometry of the display surface, placing the virtual camera at the projector position, and rendering the scene from the tracked view position to a texture. You then use projective texturing to project that virtual texture onto the 3D geometry when you draw the display geometry.


  1. render scene from virtual tracked viewer using render to texture

  2. set up projection of that texture from virtual viewer onto virtual display geometry

  3. render virtual display geometry from virtual projector position with projection in 2) active.

  4. send the result to the projector.

Voilà. The beauty of this is that you can then move anything and it’ll still work, as long as you can get tracking information into your graphics system: move the projector, the viewer or the display and it can all be made to work, and for any display geometry too.

You will need graphics skills to code this even with my description.

All other answers in this thread are simply incorrect
Angus, please, when I was talking about using 3-4 projectors and inverted CAVE rendering, I thought it was clear enough that those projectors would be cube-face aligned, as with the original CAVE.

In that case, there are no surface-geometry distortion or brightness issues to correct for, other than the usual uniformity and edge-matching issues. However, head-tracking and off-axis projection are still issues, and were addressed earlier in the thread. But again, there would be no complex distortion-correcting rendering required.

[btw, there is a way to do the inverted CAVE thing with only one projector using mirrors, but it’s even more complicated, has more focal issues for common lenses, puts mirrors in sight, and I’d rather have increased resolution any day. Not advised.]

In the case of a one-projector moving armature, I wouldn’t personally prefer that route, as the armature would cost as much as some projectors. It was more to illustrate the brightness issues with a moving observer and a stationary projector, vs. the ideal.

BTW, if that one projector is very closely coupled with the observer’s pov (as I’d intended, if it wasn’t clear), I believe you can safely ignore the surface geometry issues as well, though not the reflectance/brightness issues.

But in neither case I mentioned do you need to do what you’re talking about. It’s overkill, unless you’re only talking about something like using one projector to hit three sides of a cube or some more complex surface geometry, which I was trying to suggest was not the way to go, for other reasons…

Anyway, so much for trying to make a simple point.

BTW, I didn’t take your dig at “most graphics programmers” personally, but maybe go out and enjoy the sun for a while.

Thanks everyone, thanks Dorbie, I have been searching far and wide for some knowledgeable responses.

I realise there are some very tricky hurdles to overcome before it is realised. As you can see from my videos, when projecting over 3D objects the virtual cube magnifies the further it is from the projector, so it never truly overlays the cube correctly.

Dorbie, to have a freely moveable projector / cube would be incredible. I am not a programmer, and given the complexity of this I am thinking it may be unrealistic to believe I could achieve it myself anytime soon. Therefore I am strongly considering submitting a proposal for funding to allow for collaboration with a programmer.

Is there anyone that would be interested in designing a flowchart or blueprints describing exactly how this concept can be realised?

Cyranose, it is not overkill; it is the question he is asking. Follow the links and watch the video. Having looked at your answer I think you got it, but you trivialize the significance of the projection distortion. Unless the projector is at the viewer location you need to correct, and correction involves everything I have described; the exception is when the projectors are orthogonal to the cube faces, but then it’s many projectors, which again is not what is being requested, going by the video.

Sorry for inadvertently disparaging your response; I should have paid it the attention it deserved.

virtual mass,

this is quite simple for me to write, and probably easier than explaining it to someone to the point where they can competently write it. It would be difficult to find the developer anyway, IMHO.

I have explained distortion correction via email before; it has taken scores of emails with diagrams and frankly is invariably exhausting.

Since I have a personal interest in this kind of stuff (you have a cool project) I can send you code for this but I do not have time right now.

I will need a very simple description of the dimensions and location of the screens and cube geometry and the default spatial relationship of the projector and viewer and the projection field of view and orientation of the projector.

email me at: angus at dorbie dot com

I am not going to enter into a protracted development, though. At some point in the next few weeks I will write some simple code that demonstrates this, probably on Mesa or raw WGL, and throw it at you; it will be up to you to catch it.

You should understand, with regard to publishing, that this kind of research has been published at SIGGRAPH before, although I don’t know the context of your research.

Any geometry can be handled; occlusion from the projector is the main issue. Multiple projectors could help there, but that would be slightly more advanced.