Non-planar projection


I have a cylindrical display (connected to my PC) and I would like to render a scene with a complete surround view, so that the FOV would be 360°. I think I could do it by rendering the scene 4 times with a FOV of 90° and an aspect ratio of 1.0, each time rotating the scene around the y axis by 90°.

The rendered views would be copied into a texture with glCopyTexSubImage2D so that the result is a tiled image of the scene's surround view. This image will then be transferred to my cylindrical display.

But I think there will be accuracy problems, because the cylinder will be approximated by only 4 faces. I could reduce this problem by rendering 8 times (or more) with an adjusted FOV, but that would mean rendering more than needed (although I could do culling).
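As a sanity check, the per-face parameters generalize simply: with N faces, each face covers 360/N degrees horizontally, and face i is yawed by i·360/N degrees. A minimal sketch, assuming a simple equal-slice layout (the helper and its names are mine, not from the post):

```cpp
// For N planar faces approximating the cylinder, each face covers
// 360/N degrees horizontally and face i is yawed i*360/N degrees
// around the y axis. (Hypothetical helper for illustration.)
struct Face { double yawDeg; double hfovDeg; };

Face faceParams(int i, int n) {
    double hfov = 360.0 / n;   // horizontal FOV per face
    return { i * hfov, hfov };
}
```

One caveat: gluPerspective takes a *vertical* FOV, and with aspect 1.0 the horizontal FOV equals the vertical one, so using more faces also narrows the vertical coverage unless the aspect ratio is adjusted accordingly.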

I can imagine using a vertex program to project the vertices onto a cylindrical 'plane', or using cube maps, but I would like to hear what you think about it.


I don’t know about a cylindrical display specifically, but I have a spherical display, and many of the same considerations should apply. I don’t know the means by which your cylinder works, but if it’s anything like the optically warped projection of my sphere…

The texture map trick can be made more accurate by applying your textures to geometry that matches the shape of your display. Think of it as generating an environment map for the surface of your display. Render a cylinder such that it precisely fills your actual display, and texture filtering will take care of the rest.
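The key to that environment-map approach is that the texture u coordinate runs linearly with the angle around the axis, so the tiled 360° image wraps exactly once around the cylinder. A minimal sketch of one ring of cylinder vertices (the vertex layout and names are my assumptions, not from the post):

```cpp
#include <cmath>
#include <vector>

// One horizontal ring of a unit cylinder. The texture u coordinate
// maps the angle around the y axis onto [0,1] across the tiled
// 360-degree image. (Hypothetical layout for illustration.)
struct Vtx { float x, y, z, u, v; };

std::vector<Vtx> cylinderRing(int segments, float y, float texV) {
    const float kPi = 3.14159265f;
    std::vector<Vtx> ring;
    for (int i = 0; i <= segments; ++i) {      // repeat first vertex to close the loop
        float a = 2.0f * kPi * i / segments;
        float u = float(i) / segments;         // angle -> [0,1] across the image
        ring.push_back({ std::sin(a), y, std::cos(a), u, texV });
    }
    return ring;
}
```

The duplicated seam vertex (i == segments) gets u = 1.0 rather than wrapping back to 0.0, so the texture interpolates correctly across the seam.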

The unfortunate consequence of this is that there is no longer a 1-to-1 correspondence between the pixels you generate and the pixels that are displayed. Depending on where on the cylinder a pixel is, it may be larger or smaller than originally intended. Individual pixels may even be linearly interpolated out of existence. On the bright side, your lines and edges will be straight.

The alternative, vertex programming, may or may not be superior. I have written a spherical projecting vertex program, and I definitely prefer it over texture mapping. In all likelihood it will be faster.

The problem, though, is that the non-linear projection is done per-vertex and not per-fragment. Thus lines and edges may appear to bend. For finely tessellated geometry, the error is too small to be seen, but a single large polygon will blow the whole thing out of the water.
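For reference, here is a CPU-side sketch of the kind of math such a vertex program performs for a cylindrical display: screen x encodes the angle around the y axis, screen y the height over radial distance. The exact scaling and names are my assumptions, not taken from anyone's actual program:

```cpp
#include <cmath>

// Cylindrical projection of an eye-space vertex to normalized device
// coordinates: x = angle around the y axis mapped to [-1,1] over the
// full 360 degrees, y = height over radial distance, scaled by the
// vertical extent. (Hypothetical sketch of a vertex-program's math.)
struct Ndc { double x, y; };

Ndc cylindricalProject(double x, double y, double z, double vertExtent) {
    const double kPi = 3.14159265358979323846;
    double theta = std::atan2(x, -z);       // -pi..pi around the viewer
    double r = std::sqrt(x * x + z * z);    // radial distance
    return { theta / kPi,                   // full turn -> [-1, 1]
             (y / r) / vertExtent };        // height -> [-1, 1]
}
```

This also makes the bending artifact concrete: the mapping is applied only at the vertices, and the rasterizer still interpolates linearly between them, so a long straight edge stays straight on screen instead of following the curved cylindrical image of the line.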

I’ve done some work on this.

The reason you think accuracy is a problem is that you have not considered rendering to a texture and then warping the image back to the framebuffer to correct for the distortion. This is called distortion correction. Some displays have this built into the projectors; others have it built into the video hardware of the computer.

The main problem with the image approach is the resolution, not really the accuracy. The projection produces a tan(θ) distribution of pixel size, so the resolution in the center of each rendered image drops off as a function of the field of view.

So, the reason to add more channels is to improve the resolution in the center of each image.
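The tan(θ) falloff can be made concrete: the pixels of a planar projection are uniformly spaced in tan(θ), so a pixel at viewing angle θ subtends cos²θ times the uniform tan step, and the center pixel is the coarsest. A small sketch restating that formula (the function itself is mine, for illustration):

```cpp
#include <cmath>

// Angular size (radians) of one pixel at viewing angle theta, for a
// planar projection with half-angle halfFov and `width` pixels across.
// Pixels are uniform in tan(theta): d(theta) = cos^2(theta) * step,
// where step = 2*tan(halfFov)/width, so the center pixel is coarsest.
double pixelAngle(double theta, double halfFov, int width) {
    double step = 2.0 * std::tan(halfFov) / width;  // uniform tan step
    double c = std::cos(theta);
    return c * c * step;
}
```

For example, with 4 faces (half-angle 45°), the center pixel of a 256-pixel-wide face subtends 2·tan(45°)/256 ≈ 0.0078 rad; with 8 faces (half-angle 22.5°) that shrinks to roughly 0.0032 rad, which is exactly why adding channels improves the center resolution.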

You have not stated how your display works: how many video images do you send, and what are the display's capabilities (can it perform distortion correction)? Any solution is an integration of the graphics, video, and display systems.
…orbie&RS=dorbie

This is the most relevant diagram, but it may take some reading to understand:…26u=/netahtml/search-bool.html%2526r=1%2526f=G%2526l=50%2526co1=AND%2526d=ft00%2526s1=dorbie%2526OS=dorbie%2526RS=dorbie


First of all, thank you; you gave me useful hints.

The display is currently under development at our school (I'm a computer science student). It is a line of 32 LEDs that will rotate around a vertical axis. With a microcontroller we will switch the LEDs on and off so that we can display a non-moving image. The first version will just have LED on/off; later there will be gray shading.
The resolution will probably be around 32×256 pixels. You look at the cylinder from the outside. Later we will use a camera to display the real surroundings; the effect would be a kind of low-resolution mirror of the scene.

We are still thinking about how to transfer data to the display, but at first, for testing, a serial interface will probably be used, so there is no real need for a high frame rate yet. The display's controller has only the task of receiving a 'framebuffer' from the PC and 'rendering' it.

The images I would like to generate would be for testing the display.


I also thought about first generating the environment map (4-8 faces), then rendering a cylinder with that map, reading back 4-8 faces again, and assembling them into one image.

I looked at your links; interesting, but I think it will take some time to fully understand them. They seem to handle nearly all cases, and I only need one simpler version. By the way, I wasn't able to view the images.

I will play around with it and let you know if anything worthwhile comes of it. It will probably take some time, because the next exams are just on the horizon.


The images appear to require quicktime. Don’t ask me why.

Originally posted by dorbie:
The images appear to require quicktime. Don’t ask me why.


Thanks for the hint, I will check them as soon as I have QuickTime installed.

Bye ScottManDeath