volume rendering suggestions needed
I’ve been experimenting with a few different volume rendering techniques over the past week or so and find myself realizing how little I know. While I’ve learned a lot, I’m still without a solution to my problem and really don’t know which way to turn.
Here’s a representation of the problem I’m trying to visualize. Suppose I have data representing ocean temperatures from the surface down to 1280 meters below the surface, covering a surface area of 256 kilometers by 256 kilometers. From that I can build slices of scalar values; each slice is 256 x 256 samples, and there are 128 slices, yielding a volume of 256 x 256 x 128. So the horizontal resolution works out to 1 kilometer per sample, and the vertical spacing between slices is 10 meters. One note to add: I can resample (is that the right word?) the data to just about any resolution I want. If I need a grid that is 256 x 256 at 100-meter resolution, I can do that as well. I can also sample at an arbitrary depth, meaning I can create a volume of size X x Y x Z representing any resolution. If I want to extract an area that is 256 x 256 x 256 at a resolution of 1 meter x 1 meter x 1 meter, I can do that. That part of my code is very flexible.
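(To make the resampling idea concrete, here is a minimal sketch of how a volume could be resampled to an arbitrary grid size with trilinear interpolation. The function names and list-of-lists layout are purely illustrative; they are not from the actual code.)

```python
# Illustrative sketch: resample a scalar volume (nested lists, indexed
# vol[x][y][z]) to an arbitrary output resolution via trilinear interpolation.

def trilinear_sample(vol, x, y, z):
    """Sample the volume at fractional coordinates (x, y, z)."""
    nx, ny, nz = len(vol), len(vol[0]), len(vol[0][0])
    x0, y0, z0 = int(x), int(y), int(z)
    x1 = min(x0 + 1, nx - 1)
    y1 = min(y0 + 1, ny - 1)
    z1 = min(z0 + 1, nz - 1)
    fx, fy, fz = x - x0, y - y0, z - z0
    # Interpolate along x, then y, then z.
    c00 = vol[x0][y0][z0] * (1 - fx) + vol[x1][y0][z0] * fx
    c10 = vol[x0][y1][z0] * (1 - fx) + vol[x1][y1][z0] * fx
    c01 = vol[x0][y0][z1] * (1 - fx) + vol[x1][y0][z1] * fx
    c11 = vol[x0][y1][z1] * (1 - fx) + vol[x1][y1][z1] * fx
    c0 = c00 * (1 - fy) + c10 * fy
    c1 = c01 * (1 - fy) + c11 * fy
    return c0 * (1 - fz) + c1 * fz

def resample(vol, out_nx, out_ny, out_nz):
    """Build a new volume of size out_nx x out_ny x out_nz."""
    nx, ny, nz = len(vol), len(vol[0]), len(vol[0][0])
    sx = (nx - 1) / max(out_nx - 1, 1)
    sy = (ny - 1) / max(out_ny - 1, 1)
    sz = (nz - 1) / max(out_nz - 1, 1)
    return [[[trilinear_sample(vol, i * sx, j * sy, k * sz)
              for k in range(out_nz)]
             for j in range(out_ny)]
            for i in range(out_nx)]
```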
OK, that’s the data. What I want to do is render a volume from it. Specifically, I want to extract iso-surfaces. For example, suppose the value 1 (degree Celsius) is distributed within my volume in such a way that it forms a cylinder. I’m trying to get to the point where I can extract this information and visualize it as a polygonal cylinder, complete with normals.
I’ve experimented with a few techniques with little success. I haven’t gotten marching cubes to work well, and I’m hesitant to spend a lot of time on any one technique because I’m really not sure which direction is best. I’m not attached to any given method. I don’t even have to generate polygonal models, I suppose, but I do need some way to drill down to a given iso-surface and visualize it.
The hardware I have to run this sort of sim on is what I’d consider high-end: dual Xeons with NVIDIA Quadro FX 3400 PCI Express cards. My volumes aren’t that large either; my largest would be 512 x 512 x 256, and most will be 256 x 256 x 256 or 256 x 256 x 128.
So, that’s it. I’m not up to date on the latest techniques in this sort of thing. If anyone is willing to offer some advice, I’m listening.
Have you checked VTK (the Visualization Toolkit)? You can find source code for its isosurface algorithms there; in particular, see the vtkVolumeRayCastIsosurfaceFunction class.
I have not tried to implement any volume rendering algorithm myself, but here is a screenshot of my program using VTK. The upper-right window shows the isosurface:
I have done iso-surfaces before by rendering the volume as a series of screen-aligned quads, drawn from front to back.
Then use glAlphaFunc (or a shader with discard plus a uniform) to clip out intensities below the target value you want; the result is an iso-surface.
Computing a normal map for the volume also gives you nice per-pixel lighting on the iso-surface.
The nice thing is that you can change the iso-surface at runtime just by altering the alpha-func reference value or the uniform controlling your shader’s discard path.
There is no computation or update on the CPU, which keeps it lightweight, but if you need the geometry itself for other work you’ll have to fall back to a conventional CPU-based iso-surface computation.
The code to do this is in a simple example called osgvolume in the OpenSceneGraph distribution.
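(As a side note on the normal-map step mentioned above: one common way to get normals for lighting an iso-surface is the normalized negative gradient of the scalar field, computed with central differences and baked into a texture. Here is a hedged sketch of that computation; the function name and voxel layout are my own, not from osgvolume.)

```python
# Illustrative sketch: per-voxel normal from the scalar field's gradient,
# using central differences (one-sided at the volume boundary).
import math

def gradient_normal(vol, i, j, k):
    """Normalized negative gradient of vol[x][y][z] at voxel (i, j, k)."""
    nx, ny, nz = len(vol), len(vol[0]), len(vol[0][0])
    gx = vol[min(i + 1, nx - 1)][j][k] - vol[max(i - 1, 0)][j][k]
    gy = vol[i][min(j + 1, ny - 1)][k] - vol[i][max(j - 1, 0)][k]
    gz = vol[i][j][min(k + 1, nz - 1)] - vol[i][j][max(k - 1, 0)]
    length = math.sqrt(gx * gx + gy * gy + gz * gz)
    if length == 0.0:
        return (0.0, 0.0, 1.0)  # arbitrary fallback in flat regions
    # Negate so the normal points from high values toward low values.
    return (-gx / length, -gy / length, -gz / length)
```

Packing these vectors into an RGB texture (scaled from [-1, 1] to [0, 255]) gives the normal map the shader samples for lighting.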
If you do not need a polygonal representation of the isosurfaces you could use pre-integrated isosurface volume rendering:
You can download a demo from this page that shows the technique (start runme3.bat). The technique works for texture-based volume slicing and for texture-based volume raycasting.
This gallery shows the lookup table that is used and the results:
If you want more info about volume rendering techniques you can download slides from our course here:
Unfortunately, there’s no source code online. Our book “Real-Time Volume Graphics” contains some source code for pre-integrated volume rendering.
For isosurfaces it’s quite simple. Sample the volume textures by raycasting or texture slicing. Take two neighboring samples along the viewing direction and use them as 2D texture coordinates into a 2D lookup texture. The lookup texture contains the isosurface color and opacity for each combination of the two samples, e.g.
sample1 <= isovalue <= sample2: store the color of the front side of the isosurface, alpha = 1
sample2 <= isovalue <= sample1: store the color of the back side of the isosurface, alpha = 1
otherwise (the isovalue lies outside both samples): no isosurface, alpha = 0
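(The three cases above can be sketched directly as code; a real implementation would bake this classification into the 2D lookup texture rather than evaluate it per ray step. The names below are placeholders, not from any of the referenced sources.)

```python
# Illustrative sketch of the classification the 2D lookup table encodes.
FRONT, BACK, NONE = "front", "back", "none"

def classify(sample1, sample2, isovalue):
    """Return (color, alpha) for two neighboring samples along the ray."""
    if sample1 <= isovalue <= sample2:
        return (FRONT, 1.0)  # ray crosses the isosurface front-facing
    if sample2 <= isovalue <= sample1:
        return (BACK, 1.0)   # ray crosses the isosurface back-facing
    return (NONE, 0.0)       # isovalue not crossed between the samples

def build_lookup(n, isovalue):
    """Bake the classification into an n x n table over quantized samples."""
    return [[classify(i / (n - 1), j / (n - 1), isovalue)
             for j in range(n)]
            for i in range(n)]
```

At render time the two samples are simply used as (s, t) coordinates into the baked table, so changing the isovalue only means regenerating the small 2D texture.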
Here’s another very good paper on advanced isosurface rendering on GPUs:
Real-Time Ray-Casting and Advanced Shading of Discrete Isosurfaces
Great hint about that Austrian site; lots of cool stuff over there.
Thanks for the replies everyone.
Also, I’d like to recommend a book I purchased:
Real-Time Volume Graphics
Very good book.