Perspective/Ortho Projection for Volume Rendering

OpenGL experts,

I am developing a GPU-based volume renderer using ray casting techniques.

Initially I defined a perspective view volume and cast rays through it. The result was what I desired (i.e. a rendered 3D volume image).

Later, when I changed the view volume from perspective to ortho, the generated image was quite ambiguous; it looks as if it has been torn apart.

My question is whether changing the view volume affects the ray casting algorithm.

Thanks in advance.

When you cast rays in a perspective projection, you start at the eye point and shoot one ray through every pixel, which automatically produces perspective foreshortening with distance.
For an ortho projection, all rays must run parallel through the screen pixels; they must not converge at the eye point.

Also mind that ortho projections can have negative zNear and/or zFar values (e.g. the view volume may contain what would be the eye point in your perspective projection). Starting perspective ray casts from there would give completely weird images.
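
A minimal sketch of the per-pixel ray setup for both cases, assuming the inverse view matrix and camera parameters arrive as uniforms (all names here are placeholders, not a fixed API):

#version 120
uniform mat4 uInvView;       // camera-to-world transform
uniform float uTanHalfFov;   // tan(vertical FOV / 2), perspective only
uniform float uAspect;       // viewport width / height
uniform vec2 uOrthoHalfSize; // half-extents of the ortho view volume
uniform bool uOrtho;
varying vec2 vNdc;           // pixel position in [-1,1]^2

void computeRay(out vec3 origin, out vec3 dir)
{
    if (uOrtho) {
        // Ortho: one constant direction for all pixels; the origin
        // slides across the view plane, so the rays stay parallel.
        origin = (uInvView * vec4(vNdc * uOrthoHalfSize, 0.0, 1.0)).xyz;
        dir    = (uInvView * vec4(0.0, 0.0, -1.0, 0.0)).xyz;
    } else {
        // Perspective: every ray starts at the eye and fans out
        // through its pixel, so rays diverge with distance.
        origin = (uInvView * vec4(0.0, 0.0, 0.0, 1.0)).xyz;
        vec3 d = vec3(vNdc.x * uAspect * uTanHalfFov,
                      vNdc.y * uTanHalfFov,
                      -1.0);
        dir    = normalize((uInvView * vec4(d, 0.0)).xyz);
    }
}

void main()
{
    vec3 ro, rd;
    computeRay(ro, rd);
    // ... march the volume from ro along rd ...
    gl_FragColor = vec4(rd * 0.5 + 0.5, 1.0); // debug: visualise directions
}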

Thanks, Relic.

Will ray casting miss (or skip) a few voxels in the case of perspective projection, since the rays start from the eye point?

Is the number of rays cast the same in both cases?

And you mentioned that rays run parallel through the screen pixels in the case of orthographic projection. Does that mean the rays are cast simultaneously (independently of each other)?

Please explain more. I need to improve the performance of my volume rendering app.

Thanks in advance.

What Relic meant was that with an orthographic projection the rays are geometrically parallel:
Ortho:


| | | | | |
| | | **| |    <- object
| | | | | |
| | | | | |
| | | | | |
----------    screen
 

Perspective:


\     |     /
 \    |**  /  object
  \   |   /
   \  |  /
    \ | /
     ---      screen
 

What this means is that if you cast rays at fixed intervals in an orthographic projection, you won’t miss any object larger than the ray spacing.
But if you cast rays at fixed intervals in a perspective projection, the further away an object is, the larger it has to be to be hit.
Since you mention voxels, this may be a problem for distant voxels, assuming a regular grid.
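
To put a number on it: the world-space footprint of one pixel grows linearly with distance under perspective, but stays constant under ortho. A hedged sketch of the two footprints (parameter names are just for illustration):

// World-space width covered by one pixel at distance d from the eye.
// Perspective: grows linearly with d, so distant voxels can fall
// between neighbouring rays.
float pixelFootprintPerspective(float d, float fovY, float screenHeightPx)
{
    return 2.0 * d * tan(fovY * 0.5) / screenHeightPx;
}

// Ortho: constant, so a fixed ray spacing never skips anything
// larger than that spacing.
float pixelFootprintOrtho(float viewHeight, float screenHeightPx)
{
    return viewHeight / screenHeightPx;
}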

Rasterisation does not cast rays; it does pretty much the opposite, so it has no way of missing an object.
If you do volume rendering by means of ray casts, any parallel execution has to be done the hard way: by your own code.

If your voxels are all overlapping spheres, you may be able to use point sprites - intersections and frustum clipping won’t be correct, but it takes only a single vertex to render a voxel.
And you should be using VBOs, if you aren’t already. The VBO would contain only the ‘filled’ voxels.
If your voxel shapes don’t change very often, you won’t have to update the VBO very often.
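
As a rough sketch of the point-sprite idea, assuming point sprites are enabled with glEnable(GL_POINT_SPRITE); the colour varying is hypothetical:

#version 120
// Sphere impostor for point-sprite voxels: the sprite is a screen-aligned
// square, so we cut a disc out of it by discarding the corners.
varying vec4 vColor;  // per-voxel colour from the vertex stage (assumed)

void main()
{
    vec2 p = gl_PointCoord * 2.0 - 1.0;  // map [0,1]^2 to [-1,1]^2
    if (dot(p, p) > 1.0)
        discard;                         // outside the disc
    gl_FragColor = vColor;
}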

Another possibility may be to make use of geometry shaders - if supported by the hardware.

Yes, thanks for the ASCII art.

You basically only need to render a frontmost plane (actually the front faces of your volume’s cube(s)) and iterate through the volume from there. No need for advanced methods to send that geometry; the fragment shader performance is what counts.
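
For concreteness, a minimal sketch of such a raycast fragment shader, assuming the volume sits in a 3D texture, the cube’s front faces are rasterised, and the interpolated entry point arrives in texture space (all uniform and varying names are placeholders):

#version 120
uniform sampler3D uVolume;  // volume data, addressed in [0,1]^3
uniform vec3 uEyePosTex;    // eye position transformed into texture space
uniform float uStepSize;    // sampling distance along the ray
varying vec3 vEntryPoint;   // rasterised front-face position, texture space

void main()
{
    // Perspective: rays diverge from the eye. For ortho you would use
    // one constant direction (the view direction in texture space).
    vec3 dir = normalize(vEntryPoint - uEyePosTex);
    vec3 pos = vEntryPoint;
    vec4 accum = vec4(0.0);
    for (int i = 0; i < 512; ++i) {
        float density = texture3D(uVolume, pos).r;
        vec4 src = vec4(density);  // trivial transfer function for the sketch
        // Front-to-back compositing.
        accum.rgb += (1.0 - accum.a) * src.a * src.rgb;
        accum.a   += (1.0 - accum.a) * src.a;
        pos += dir * uStepSize;
        // Stop when opaque enough or outside the volume.
        if (accum.a > 0.95 ||
            any(lessThan(pos, vec3(0.0))) ||
            any(greaterThan(pos, vec3(1.0))))
            break;
    }
    gl_FragColor = accum;
}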

The number of rays cast per pixel is your choice. Normally you start with one per pixel, which means ortho and perspective cast about the same number. A cube in a perspective view covers fewer pixels the farther away it is. If the volume fills the entire screen in both cases, the number of rays is identical, of course.

Rendering 3D volume textures by means of raycasting has been done before. Just search the internet for “volume rendering” and “volume raycasting”. (One of the experts is on this forum as well and has helped you before.)

I’d recommend you look for a software package that already implements the basics, so you can concentrate on the raycast shader implementation.

Performance optimizations for the raycast are possible; for example, search for “empty space skipping”, which really is an advanced topic.
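
Just to give a flavour of the idea (a hypothetical sketch, not a drop-in implementation): you precompute a downsampled volume holding, say, the maximum density per block, and test it inside the marching loop before doing the fine samples:

uniform sampler3D uMaxVolume;  // hypothetical max-density-per-block texture
uniform float uBlockSize;      // block extent in texture space (assumed)

// Returns true (and advances the position past the block) when the
// coarse volume says the whole block is empty; the marching loop can
// then `continue` without sampling the full-resolution volume.
bool skipEmptyBlock(inout vec3 pos, vec3 dir)
{
    float blockMax = texture3D(uMaxVolume, pos).r;
    if (blockMax <= 0.0) {
        pos += dir * uBlockSize;  // leap over the empty block at once
        return true;
    }
    return false;
}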