Before I rush headlong, some advice please

Hello all, I’m going to start with an apology. I suspect that this may be a long post, and it may take a few minutes to read through to the end. But when you do reach the end, rest assured, your response can be as short as, “Yes, OpenGL can do that”, or “No, I’m sorry, OpenGL won’t do what you describe”. (If you’d like to expand on those example answers, that’d be great.)

So I’m at a point in a software development project where I must decide whether to stick with what I know, or try to solve a particular problem in a new way. Basically what I’m doing is writing a printer driver for a 3D printer. My ultimate output needs to be a series of bitmaps that tell where the printer should add some ‘ink’ on a layer by layer basis to build up the solid model.

In the past, I’ve done this with a combination of third-party software and the Windows GDI+ functions. The user data is normally stored in an STL triangle tessellation. STL is an extremely simple format, and no adjacency data is stored in the file; the only thing you’re guaranteed is that the model is water-tight. The file is loaded into the 3rd-party product, which slices the triangle data using planes projected at defined intervals. The locations where each plane intersects the triangles are collected in a 2.5D file, SLC or CLI.

A DIB is created and scaled so that one pixel represents the size of one drop of ink, and there are enough pixels (voxels, to me) to hold the whole object. The 2.5D contours are read in, and the shapes are drawn onto the bitmaps as closed polygons, in either black or white depending on their winding direction.
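For a concrete picture, the Java2D equivalent of that fill step would look roughly like the sketch below (just an illustration, not my actual driver code; pxX, pxY, and the contour points stand in for real values):

// Rasterize one slice: fill the 2.5D contours into a 1-bit layer image.
BufferedImage layer = new BufferedImage(pxX, pxY, BufferedImage.TYPE_BYTE_BINARY);
Graphics2D g = layer.createGraphics();
Path2D.Double contours = new Path2D.Double(Path2D.WIND_NON_ZERO);
// moveTo()/lineTo() each contour's points here, closePath() after each loop...
g.setColor(Color.WHITE); // pixels inside the part get ink
g.fill(contours);        // the winding rule decides inside vs. outside
g.dispose();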

The bits are deinterlaced and sent to the printhead, where the ink causes a substrate to bind together in layers. That’s the basics anyway, and the particulars of the file formats and loading mechanisms are moot for my question. I’ve already written loaders for all of the file types, so that’s not a problem.

The problems are two-fold. First, the files I see are getting more complex. As people become more familiar with the technology, they push it to do more things. Computer artists are going nuts. Some files contain over 10,000,000 triangles. Second is the time the slicing takes. No matter how you do it, you either have to sort an enormous number of triangles or determine all kinds of adjacency data. It just takes the CPU a long time.

My understanding of GPUs (admittedly light) is that they are optimized for this exact type of thing. That is, you feed them a big set of surface data, and with some clever magic, they convert that data directly to viewable, digital, bitmap-like images. At incredible frame-rates.

I’m kind of new to actually coding OpenGL, though I’ve been reading as much as I can. I have the Red and Blue books, I’ve gone through many of the tutorials at NeHe, but I’m still not sure if I’m on the right path. In many ways, I think what I’m trying to do is much simpler than what the average person using or wanting to learn OpenGL is interested in, as I’m not looking for complex texture rendering, games, and the like.

From what I know, what I think I’m envisioning is something like this. Based on the color plates from the Red Book, I think I want to do the render in flat shading mode, no lights, no edges. I want the background to be black, the front (normal) side of each triangle to be pure blue, and the back (anti-normal) side to be pure red. I want the projection to be orthographic, with no perspective.

What I’m hoping to do, and thus am most unsure about, is to create a method of slicing the files using only OpenGL. My idea is that I would position the camera directly over the object, looking straight down. I’m not sure where to set the height, or the center of the camera. I load all the triangles for the part, and scale them so they fit the image. I’m not sure if I can do that. I need to make sure the size of the output bitmap is scaled to the part and the size of the drop. The one advantage of WinGDI is that the device-independent bitmap lets me scale it however I want. It’s not clear to me that I can do that with OpenGL.

I would then instruct OpenGL to render the scene such that only layers below my current build or slicing height are displayed. My thought is that this should show up in the bitmaps as three basic colors: black for the background, blue for parts of the file that have already been printed, and red for the interior of the current cross-section. Since I know the model is water-tight, there should always be a closed red region.
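To make that concrete, here is the rough shape of what I’m picturing, written LWJGL-style since that’s the direction I’ve been reading in (maxX, maxY, and sliceZ stand in for my part’s bounds and the current slice height; I have no idea yet if this is right):

// Orthographic view looking straight down the Z axis; the near plane does the slicing.
GL11.glMatrixMode(GL11.GL_PROJECTION);
GL11.glLoadIdentity();
// Visible z range becomes [0, sliceZ]: everything above the slice height is cut away.
GL11.glOrtho(0.0, maxX, 0.0, maxY, -sliceZ, 0.0);
GL11.glMatrixMode(GL11.GL_MODELVIEW);
GL11.glLoadIdentity();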

I’m not asking for a solution of how to do this. Like I say, I’m learning OpenGL solely for this purpose, not for a hobby, so I need to know if it’s worth continuing down this path. From what I think I’ve read, it seems to me I should be able to do it, I just don’t know. Any advice is appreciated, and I thank you in advance.

DTB

Some comments:

  1. Make sure you understand the change from the legacy immediate mode to the newer shader-based approach. The ability to tailor the shader program is powerful, and it becomes even more so when you have large amounts of data (see the sketch just after this list).
  2. It may be that OpenCL is what you want.
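To give a flavour of point 1: with millions of triangles you want the data living on the GPU, not re-sent every frame. A minimal LWJGL-style sketch (triangleData and triangleCount are placeholders for your loaded STL data):

// Upload the triangle soup to a vertex buffer object once...
int vboId = GL15.glGenBuffers();
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vboId);
GL15.glBufferData(GL15.GL_ARRAY_BUFFER, triangleData, GL15.GL_STATIC_DRAW); // FloatBuffer of xyz triples

// ...then drawing each frame is a single call.
GL11.glEnableClientState(GL11.GL_VERTEX_ARRAY);
GL11.glVertexPointer(3, GL11.GL_FLOAT, 0, 0);
GL11.glDrawArrays(GL11.GL_TRIANGLES, 0, triangleCount * 3);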

I think the biggest area where OpenGL would fail for this is in terms of resolution. When doing a render-to-texture setup (which is what it sounds like you need, rather than direct rendering to the screen) you’ll be limited to the max texture size supported by your user’s hardware - which may be as low as 2048x2048 or 4096x4096. That might be large enough, but then again - it might not.

You’ll also be limited by available video memory, but that’s probably not going to be as significant a factor.
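Both limits are easy to check up front, something like:

// Largest texture dimension this driver will accept (often 2048-16384).
int maxTexSize = GL11.glGetInteger(GL11.GL_MAX_TEXTURE_SIZE);

// The default framebuffer has its own limit too.
IntBuffer maxViewport = BufferUtils.createIntBuffer(16);
GL11.glGetInteger(GL11.GL_MAX_VIEWPORT_DIMS, maxViewport);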

I don’t think that is a problem, since he can use multiple textures.
The more serious problem is how to slice the model. I have no idea how that can be done efficiently using OpenGL.

For the slicing, I naively think that simply using one clip plane on top, with red for the inside and blue for the outside, will be enough.
However, that assumes the model faces are already correctly oriented, and it sounds like that is not the case.

I don’t understand how the coloring should be done. The clipping will do its job, but it would create a hollow slice. Some vertical edges would be totally invisible. I cannot think of any simple solution.

If you have only one clip plane it will not be a hollow slice.

Warning, ASCII art; example for a sphere:


side view:

    * camera looking down

   ____
--/----\----- clip plane
 /      \
 \      /
  \____/


view from camera:
B=blue
   ____
  / __ \
 /B/  \B\
|B|RED |B|
 \B\__/B/
  \____/  
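In code, that single clip plane is only a few lines (sliceZ being the current slice height; note the plane is transformed by whatever modelview matrix is in effect when you set it):

// Keep everything with z <= sliceZ; plane equation 0*x + 0*y + (-1)*z + sliceZ >= 0.
DoubleBuffer planeEq = BufferUtils.createDoubleBuffer(4);
planeEq.put(new double[] { 0.0, 0.0, -1.0, sliceZ }).flip();
GL11.glClipPlane(GL11.GL_CLIP_PLANE0, planeEq);
GL11.glEnable(GL11.GL_CLIP_PLANE0);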


Wow, I’m so glad to see people already responding, and thanks so much. I’d like to sharpen up some of my descriptions based on your feedback and try to find the next pit to fall into. I’ll start from the top, I suppose.

Kopelrativ, about shading, I’m not going to have any. I really want the final output to be as ‘flat’ as possible, I think, so that the bitmap deinterlace routine can simply look at the R,B pixel values to determine ink-drop or no. I don’t even want triangle edges or shadows, so what I really need to do is make sure I don’t have them. I know nothing of OpenCL, so I looked at the wiki page for it, and all I can say is I hope you’re wrong. :wink: That looks even harder to learn. This dog’s brain is maybe a few years too old to learn that new trick.

Mhagain, that’s my biggest fear: resolution and accurate scaling. In GDI it’s simple: I can tell the bitmap drawing routines what I want my user-units to be, and exactly how many pixels I want. But you are right - I don’t necessarily want to render to the screen, just a buffer that I can query after the fact. And I don’t want a view frustum; I just want a ‘square view’, with no vanishing points. In SolidWorks I would say ‘NOT Perspective’, but in OpenGL I’m not sure what you call it.

And Texture must mean more than what I think it does. You mention it, Kopelrativ does, as does Aleksandar. I’m going to have to read more about it, because I keep thinking I want no textures, just flat colors. Maybe I can’t have that.

And you’re spot-on with the size concern too, as I have the same one! I suppose I could draw small bits at a time and collect up the resulting bitmaps manually, and indeed I might have to. But that brings up the question: can I do that with accuracy? Can I place the part or parts in the viewing area such that I can move the camera around (or the objects), take snapshots, and stitch it all back together? In a typical application, you have objects around 50mm in size being printed with 50um pixels - maybe 1000x1000 pixels. An actual build volume can be hundreds of millimeters, so at what point will it stop working? Probably a tough question, and like you say, I suppose it could be hardware-dependent. Another separate question: can I read the bitmap data back directly? I’m not clear on that.

Aleksandar, again, I have to learn more about textures, clearly. As for the actual slicing, I’m hoping that ZbuffeR is suggesting the right track. You see (and now I’m grouping those three posts), I think what I actually want IS the hollow slice. I don’t want to actually know (what I would call) the contour data. I just want the pixels. I don’t think I want edges, but I’m assuming that I would still ‘see’ the borders between the black background, the blue exterior, and the red interior.

To pick up that last statement, ZbuffeR: again, I think you’re right. The STL file is generally in all-positive space, going from 0->MaxX, 0->MaxY, 0->MaxZ. My impression is that OpenGL is not this way. Almost everything seems (SEEMS, I’m new) to take parameters between -1 and 1. Will I need to move the model data around to center it? Do I do that beforehand in my data class, or do I do that through transformations in OpenGL? (I’m not expecting an answer, these are just some of the questions I’m asking myself about this.)

So Aleksandar, to answer your last question about how the coloring should be done, I’ll try answering with an example. Say I have a square-based pyramid with vertices of (-1,-1,-1), (1,-1,-1), (1,1,-1), (-1,1,-1), and (0,0,1). Say it’s time to build the cross-section at layer height z=0. I would set (I think) my zFar to be -1, and my zNear to be 0. Also, my camera is right on the Z-axis, looking straight down at the object. I would hope the output would be, say, a big blue region, essentially a blue square with its center removed. And in the center of this blue square I would expect to see a smaller red square, and that red region represents the area I need to print for this particular layer. At least, that’s how I’m envisioning it working. (Kind of like Figure 1-1 in the Red Book, but with blue and red. And I know, in that example they actually draw the square. For mine, it would be more like an artifact.)

Thanks again! - DTB

ZbuffeR, we must have been typing at the same time. That is exactly what I mean, thank you for the example.

Texture: it is simply a 2-dimensional array of “texels”. A texel could be a single value, or 2, or 3 (e.g. RGB), or 4 (RGBA). This array is easily transferable to and from the OpenGL hardware.

So when saying “render to texture”, it means the actual result of rendering will not need to be visible on screen, but will be in an easy-to-get array. Rendering on screen is more fragile (limited to the actual screen size, for example). Query your hardware for the max texture size, then render in blocks of, say, 2K by 2K.
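And reading a rendered block back into your own memory is a single call, something like this (tileW/tileH being your block size):

// Pull the rendered block back as raw RGB bytes, bottom row first.
ByteBuffer pixels = BufferUtils.createByteBuffer(tileW * tileH * 3);
GL11.glReadPixels(0, 0, tileW, tileH, GL11.GL_RGB, GL11.GL_UNSIGNED_BYTE, pixels);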

You will not need it, but a texture can also be applied to a 3D object and sampled in very creative ways when using shaders.

A no-perspective projection is called ortho or orthographic in OpenGL. You can also easily specify the glOrtho projection to view your 0->maxX coordinates directly:
http://www.opengl.org/sdk/docs/man/xhtml/glOrtho.xml
glOrtho(0, maxX, 0, maxY, 0, maxZ);
You may have to tweak the z parts, left-handed to right-handed or something.
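For example, to render block (i, j) of a larger image, just slide the ortho window across the part (all names are placeholders; fullW/fullH is the complete image in pixels, tileW/tileH the block size):

// World-space size covered by one block of pixels.
double tileWorldW = maxX * tileW / (double) fullW;
double tileWorldH = maxY * tileH / (double) fullH;

GL11.glMatrixMode(GL11.GL_PROJECTION);
GL11.glLoadIdentity();
// One possible z tweak: the default camera looks down -z, so negate the range.
GL11.glOrtho(i * tileWorldW, (i + 1) * tileWorldW,
             j * tileWorldH, (j + 1) * tileWorldH,
             -maxZ, 0.0);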

Great stuff, thanks again! I think I found my project for this dreary weekend (where I am, anyway). I should probably budget more than a weekend to learn Eclipse, Java, applets, OpenGL, and LWJGL, but I have to start sometime! I think I’m going to have more fun with programming and stretching the old brain than I’ve had in a long time…

Hi,

I would suggest the following algorithm (do it multiple times and stitch if the desired resolution is larger than the framebuffer):

  • input: a water-tight mesh; output: one image per slice, with black for all outside voxels and white for all inside voxels (which need glue)

  • (assuming the slices are along the z axis) for each slice:
    ** clear color is black -> clear the framebuffer
    ** calculate the z-min and z-max for the slice
    ** render the mesh with an orthographic projection
    ** send the world-space z-coordinate from the vertex shader to the fragment shader as a varying
    ** upload the z-min and z-max as uniforms to the fragment shader
    ** use the fragment shader below:

// camera
// (large Z values)
// ---------
// |       | <- maxZ
// |       | <- minZ
// --------- <- mesh
// (small Z values)

#version 130

in float worldZ;      // world-space z, passed down from the vertex shader
uniform float minZ;   // bottom of the current slice
uniform float maxZ;   // top of the current slice
out vec4 color;

void main() {
    if (worldZ > maxZ) discard;   // anything between the cam and the 'upper' clipping distance
    if (worldZ > minZ) {
        color = vec4(1.0);        // anything in this slice is white
    } else {
        // everything below the slice:
        if (gl_FrontFacing) {
            color = vec4(0.0);    // the outside surface shows as black
        } else {
            color = vec4(1.0);    // the inside surface shows as white
        }
    }
}
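For completeness, the matching vertex shader could be as simple as the sketch below (it assumes the mesh is already in world space; the attribute and uniform names are just illustrative):

#version 130

in vec3 position;        // mesh vertex, assumed already in world space
uniform mat4 projection; // the orthographic projection (times any view transform)

out float worldZ;

void main() {
    worldZ = position.z; // hand the world-space z down to the fragment shader
    gl_Position = projection * vec4(position, 1.0);
}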

The last part of the shader shades the visible ‘inside’ of the model (the mesh gets cut open by the discards above the clip plane maxZ) in white as well, but overdraws that with black wherever the outside is visible (with depth testing enabled, the nearer, front-facing fragments win):

camera

 |   |
 |   |
 -----

BBWWWWWBB <- output image

The above clipping can also be done with the OpenGL near clipping plane.

PS: The ASCII art got corrupted as the forum software ate my spaces; hope you can get the idea nevertheless.

Use [code] tags to preserve spaces.

Let me start by thanking everyone for their input so far; I’ve learned a lot. I decided to make my first step in learning how to use OpenGL and Java the creation of a custom file-select dialog with part preview. Though it took longer than I might have otherwise hoped, I’m satisfied for now.

I still don’t know how to do off-screen work, but that’s OK. I’ll save that for later. I would like to be able to do slicing previews on the screen first anyway, so the machine operator can see what the machine will be doing, and she can then make some adjustments to the operating parameters for different build regions.

And that’s where the trouble starts. I’ve figured out how to load and position everything for viewing, and I can use glOrtho to cut open the parts. My current problem is that the inside and outside are the same color! Well, not exactly the same. The inside is a darker shade of blue, but it isn’t red. I have code like this:


// Material colors: blue for front faces, red for back faces, white specular for both.
float[] mat_spec_blue  = { 0.0f, 0.0f, 1.0f, 1.0f };
float[] mat_spec_red   = { 1.0f, 0.0f, 0.0f, 1.0f };
float[] mat_spec_white = { 1.0f, 1.0f, 1.0f, 1.0f };
float[] light_pos      = { 0.0f, 0.0f, 0.0f, 0.0f };
float[] mat_shine      = { 50.0f, 0.0f, 0.0f, 0.0f };

// Scratch buffer for handing the float arrays to OpenGL (declared here for completeness).
ByteBuffer temp = BufferUtils.createByteBuffer(16 * 4);

GL11.glMaterial(GL11.GL_FRONT, GL11.GL_AMBIENT_AND_DIFFUSE, (FloatBuffer) temp.asFloatBuffer().put(mat_spec_blue).flip());
GL11.glMaterial(GL11.GL_FRONT, GL11.GL_SPECULAR, (FloatBuffer) temp.asFloatBuffer().put(mat_spec_white).flip());
GL11.glMaterial(GL11.GL_FRONT, GL11.GL_SHININESS, (FloatBuffer) temp.asFloatBuffer().put(mat_shine).flip());

GL11.glMaterial(GL11.GL_BACK, GL11.GL_AMBIENT_AND_DIFFUSE, (FloatBuffer) temp.asFloatBuffer().put(mat_spec_red).flip());
GL11.glMaterial(GL11.GL_BACK, GL11.GL_SPECULAR, (FloatBuffer) temp.asFloatBuffer().put(mat_spec_white).flip());
GL11.glMaterial(GL11.GL_BACK, GL11.GL_SHININESS, (FloatBuffer) temp.asFloatBuffer().put(mat_shine).flip());

// Put the light above the center of the part.
light_pos[0] = previewingFile.getLengthX() / 2.0f;
light_pos[1] = previewingFile.getWidthY() / 2.0f;
light_pos[2] = previewingFile.getMaxZ() + 1;

GL11.glLight(GL11.GL_LIGHT0, GL11.GL_POSITION, (FloatBuffer) temp.asFloatBuffer().put(light_pos).flip());
GL11.glEnable(GL11.GL_LIGHTING);
GL11.glEnable(GL11.GL_LIGHT0);
GL11.glEnable(GL11.GL_DEPTH_TEST);

I’m sorry this is all Java and the declarations look funny. Now, when I use glOrtho to scroll through the parts, I don’t get any red. Any suggestions? Do I just not have all the right things enabled? I tried turning on culling, and that made it much worse - the backfaces all disappeared! Special note: I don’t use the glColor* commands at all. Should I be?

Thanks again,
Dan B.

I’m sorry, please disregard. I saw a post that mentioned ‘GL_LIGHT_MODEL_TWO_SIDE’; I enabled that in the light model, and now I get what I expect. Sorry I missed that before. I’ll be back when I’m ready to start asking questions about off-screen ops! But as always, thanks again!
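For anyone who hits the same thing, the line in question is just:

GL11.glLightModeli(GL11.GL_LIGHT_MODEL_TWO_SIDE, GL11.GL_TRUE); // light back faces with their own (GL_BACK) material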
Dan B.