Efficient way to slice an object

I have an object that I am cutting up into very thin 2D slices. However, it is taking longer than I’d like, and I can’t help but think there may be a more efficient way.

The following are the steps I’m taking:

  1. Create VBO of object
  2. Set projection to orthoscopic with a very thin clipping window
  3. gluLookAt() to the edge of the object (curX = edge)
  4. Render image with glDrawElements()
  5. Swap buffer with glutSwapBuffers()
  6. gluLookAt() to the next slice of object (curX = curX + 0.1)
  7. Repeat step 4-6 until it reaches the other end of the object

In summary:

  1. I need to display every single slice
  2. The object is not changing, only what’s being clipped.


My questions:

  1. Is setting the projection to orthoscopic the best way to “slice” the image?
  2. Is there a better way to slice the next image? For instance, without translating the object, or utilizing VBO with some sort of matrix buffer for each slice.
  3. Is it possible to render the image once and do post-processing to extract each slice? I was thinking the depth buffer may be a clue.

First, “orthoscopic” is for surgeries, not projections! You mean “orthographic.”

You should be able to use whatever projection transform you want. The question is, what do you want?

I haven’t yet experimented with this OpenGL feature, but I think it may be tailor-made for what you want to do: user-defined clipping planes. To use them, load all your object’s vertices onto the GPU once, and then for each frame send a uniform with the characteristics of your desired clipping planes. In your case, I think you will want two user-defined clipping planes, and they will be parallel to each other. I think it should be very easy to do, and very efficient.

Whoops lol, yeah, orthographic! I am currently clipping the image whenever I call the glOrtho() function, and then translating the image and rendering for each slice. My question is: is there a more efficient way to do this?

You can try to draw only what you’re looking at. If you do so, OpenGL won’t have to reject (by clipping/culling) all what’s not seen.

Can you elaborate on what you mean, arts? Do you mean only passing the vertices that are visible to the slice? If that’s what you mean, isn’t that the same effect as clipping?

I do need to repeatedly slice the image and render the image to the screen. But, I was hoping I could do it without render, translate, render, translate.

It’s not exactly the same effect. If you have many vertices to process, a quick software reject will be faster than using OpenGL clipping/culling.

From what I understood, you need to show your object slice by slice. If so, maybe it’s easy for you to know what vertices you have on each slice, so you’ll only have to render the slice you want. And then, your fill rate will ‘always’ be low, resulting in higher fps.

I’m not sure about what you say in your second paragraph. 3D rendering with GL is all about defining your view then rendering the view and so on.

You seem to have disregarded the user-defined clipping planes suggestion I made previously. Have you looked into what they do and how they work? I can’t think of any more efficient (or simpler) way to do what it seems to me you want. I think their whole purpose for existence is to do just the sort of thing you want to do.

As far as doing clipping/culling on the CPU and then transferring vertices for each slice each frame to the GPU, that may reduce the amount of work the GPU has to do, but at the expense of transferring that work to the CPU and then having to transfer at least twice the original model’s geometric data each and every frame to the GPU. That’s not taking advantage of the GPU’s capabilities well; the idea is to transfer work from the CPU to the GPU, not the other way around.

I’ve just given this a little more thought. The original poster’s approach, and the user-defined clipping planes approach, both require one pass for each slice. If you have a lot of slices, that involves a huge amount of repeated work: you have to process all vertices in the model for each slice, whether on the CPU or on the GPU. If you want the most efficient method, you need to eliminate all multiple passes through your model, and render a single time per frame, rather than once per slice. To do that, you can slice your model on your CPU each frame, but do it in a single pass, building a model for each slice (adding a slice-specific translation term to each vertex in each slice). This is a type of bucket sort algorithm, with one bucket per slice. Once you’ve gone through your entire model once, send all the geometry for all the slices to the GPU en masse in a single VBO, and issue a single render call. The more slices you have, the more efficient that will be.

Sorry, I wasn’t disregarding your clipping plane approach. I was using glOrtho() to achieve the same result.

Your approach makes sense to render only once. However, I’d need to be able to display each slice. If it’s rendered once, it’s flattened and I’ll lose the ‘3D’ data.

For a little more context, I’m trying to render an image but be able to reconstruct a 3D model. The way I’ve come up with is to slice it up into many slices. So, each rendered slice (yz), along with the slice number (z), can reconstruct an xyz model.

I can’t really think of another way to render a model and keep the 3D data. Any ideas?

Btw, I do like the idea of slicing part of the image on the CPU. I may just divide it into halves or thirds so that each clipping pass deals with fewer vertices and less fill.

I’m starting to get a better picture of what I think you want… sort of like an exploded diagram of thin slices from a 3D solid, like what an MRI scan provides, where you have a stack of XY planar images, each slice with a constant Z, but a different Z from each of the other slices?

If so, you could display it like this: slice the data (maybe just once) and send all the geometry defining each of the slices to the GPU in a single VBO. Perhaps the user will be able to rotate or zoom the view to look at the stack of slices from different angles (helping to visualize the 3D sense of the solid data). Maybe the user would like to vary the Z distance between slices interactively, too. You could still make use of user-defined clipping planes to let the user clip various planes, or portions of all planes, to get any cross-sectional view desired, if that’s something that would be helpful. You could use the ModelView matrix, updated each frame, to implement the rotations and so on, and also the variable delta Z distance between the slices (simply by changing the Z scale factor in the ModelView matrix)…

When I wrote “rendered once” (in the second to last sentence) I meant all slices will be rendered with a single glDraw*() call, as opposed to rendering each slice with its own call to glDraw*(). The data will still be 3D as I described it (i.e., slicing it with a bucket sort type of algorithm).

I think what you want to do can be done very efficiently, and very flexibly, without too much trouble (nearly all your programming work will involve slicing the model, since you’ll need to deal with geometry that’s split across slices). On the other hand, if you want to actually change the thickness of the slices, then you would have to redo everything from the start, and that would be slow if you need to do that every frame.

You hit the nail on the head! Thanks for your input on this. I’ll have to figure out the software slicing algorithm now.