3D image compounded from series of 2D images

Hello all,

I’m brand new to OpenGL. I need to develop a program that takes in a series of 2D images (raw binary format) of a sample, captured one after another in parallel planes, and constructs a 3D image out of them.

For this purpose, is OpenGL the right tool? (I’ve used Intel’s Performance Primitives (IPP) for working with my 2D images, e.g. interpolation, edge detection, filtering, etc.)

Volume rendering?

Great book, tutorials, and some code samples on GPU-based volume rendering here


There was an awesome SIGGRAPH 2004 course by the book’s authors online, but I can’t seem to find it. (Superseded by the Eurographics 2006 tuts anyway.)

State-of-the-art course here


Not exactly beginner stuff, but if you search for GPU-based volume graphics/rendering you’ll hit paydirt.

Thank you for the great links. I’ll give it a read.

But as I can see from those sites, it is pretty advanced graphics. I don’t need any shading, lighting, etc. for my project.

For example, I have two grey-scale image slices at positions 0mm and 5mm. I just need to take pixel #X of image1, compare it to pixel #X of image2, and do a linear or cubic interpolation to “guess” what value I should put into pixel #50 in the empty space in between image1 and image2. And lastly display the whole volume.
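The linear case you describe can be sketched in plain C++ with no OpenGL at all; this is only a sketch of the per-pixel blend (the function name and the use of 8-bit grey values are assumptions, not from the thread):

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Linearly interpolate an intermediate slice between two parallel
// grey-scale slices of identical size. t = 0 gives slice1, t = 1 gives
// slice2; e.g. a slice at 2.5mm between slices at 0mm and 5mm uses t = 0.5.
std::vector<uint8_t> interpolateSlice(const std::vector<uint8_t>& slice1,
                                      const std::vector<uint8_t>& slice2,
                                      float t)
{
    std::vector<uint8_t> out(slice1.size());
    for (size_t i = 0; i < slice1.size(); ++i) {
        float v = (1.0f - t) * slice1[i] + t * slice2[i];
        out[i] = static_cast<uint8_t>(std::lround(v));
    }
    return out;
}
```

Cubic interpolation would need four consecutive slices per output pixel instead of two, but the structure is the same: one weighted sum per pixel.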

I know VC++ has CBitmap for displaying 2D images (which I’m using so far), but will I be able to display a 3D object without the help of OpenGL?

Thank you!!

If I understood your needs correctly, you only need to fill a 3D texture as a stack of 2D images. Then draw quads with the appropriate 3D texture coordinates; the interpolation will be done by the hardware.
Something along the lines of:
This is very simple; the only limitation will come from the size of the 3D texture compared to your hardware. Typically 512×512×512 can be used; for larger values it will depend on your hardware, and you may need to split the rendering into multiple passes.
But in your case, it looks like you only need X×Y×2. You could use a simple shader to interpolate between 2 classic 2D textures, each of which can be much larger, such as 4096×4096.

What is the typical size of the datasets you work with ?
What is your 3D hardware by the way ?

Other documentation about hardware-accelerated volume rendering:
(especially Introduction to GPU Volume Rendering 1 & 2)
http://www.cs.utah.edu/~jmk/sigg_crs_02/courses_0067.html (some links seem down though)