Hello all, I’m going to start with an apology. I suspect that this may be a long post, and it may take a few minutes to read through to the end. But when you do reach the end, rest assured, your response can be as short as, “Yes, OpenGL can do that”, or “No, I’m sorry, OpenGL won’t do what you describe”. (If you’d like to expand on those example answers, that’d be great.)
So I’m at a point in a software development project where I must decide whether to stick with what I know, or try to solve a particular problem in a new way. Basically, I’m writing a printer driver for a 3D printer. My ultimate output needs to be a series of bitmaps that tell the printer where to add ‘ink’ on a layer-by-layer basis to build up the solid model.
In the past, I’ve done this with a combination of third-party software and the Windows GDI+ functions. The user data is normally stored as an STL triangle tessellation. STL is an extremely simple format, and no adjacency data is stored in the file; the only thing you’re guaranteed is that the model is water-tight. The file is loaded into the third-party product, which slices the triangle data using cutting planes at defined intervals. The locations where each plane intersects the triangles are collected in a 2.5D file (SLC or CLI).
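To make that step concrete, here’s a rough sketch of what the slicer has to do per triangle, assuming a horizontal cutting plane z = h (the function name and layout are just my own illustration, not anything from the third-party product):

```python
def slice_triangle(tri, h, eps=1e-9):
    """Intersect one triangle with the plane z = h.

    tri is a sequence of three (x, y, z) vertices; returns the
    intersection segment as a list of (x, y) points (0 or 2 points
    in the generic case).
    """
    points = []
    for i in range(3):
        (x0, y0, z0), (x1, y1, z1) = tri[i], tri[(i + 1) % 3]
        # This edge crosses the plane iff its endpoints straddle z = h.
        if (z0 - h) * (z1 - h) < -eps:
            t = (h - z0) / (z1 - z0)          # interpolation parameter
            points.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return points

# One triangle spanning z = 0..1, sliced at half height:
seg = slice_triangle([(0, 0, 0), (2, 0, 1), (0, 2, 1)], 0.5)
```

(Vertices lying exactly on the plane are a degenerate case this sketch just skips via eps; a real slicer has to handle them, which is part of why doing it on the CPU is fiddly and slow.)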
A DIB is created and scaled so that one pixel represents the size of one drop of ink, with enough pixels (voxels, to me) to cover the extent of the object. The 2.5D contours are read in, and the shapes are drawn as closed polygons onto the bitmaps, in either black or white depending on their winding.
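The pixel-per-drop scaling itself is simple arithmetic; here’s a sketch (helper names are my own) of how I size the bitmap from the part’s XY bounding box and map a model-space point into it:

```python
import math

def bitmap_size(xmin, xmax, ymin, ymax, drop):
    """Width and height in pixels so one pixel covers one drop of ink."""
    width = math.ceil((xmax - xmin) / drop)
    height = math.ceil((ymax - ymin) / drop)
    return width, height

def to_pixel(x, y, xmin, ymin, drop):
    """Map a model-space point to the (column, row) pixel it lands in."""
    return int((x - xmin) // drop), int((y - ymin) // drop)

# A 10 x 5 part with a 0.5 drop needs a 20 x 10 bitmap:
size = bitmap_size(0, 10, 0, 5, 0.5)
```

This is the mapping GDI+ effectively does for me today; my question is whether I can get OpenGL’s viewport/projection to give me the same one-pixel-per-drop guarantee.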
The bits are deinterlaced and sent to the printhead, where the ink causes a substrate to bind together in layers. That’s the basics anyway, and the particulars of the file formats and loading mechanisms are moot for my question. I’ve already written loaders for all of the file types, so that’s not a problem.
The problems are two-fold. First, the files I see are getting more complex. As people become more familiar with the technology, they push it to do more things. Computer artists are going nuts. Some files contain over 10,000,000 triangles. Second, there’s the time slicing takes. However you do it, you either have to sort a large number of triangles or build all kinds of adjacency data, and that just takes the CPU a long time.
My understanding of GPUs (admittedly light) is that they are optimized for exactly this type of thing: you feed them a big set of surface data, and with some clever magic they convert that data directly to viewable, digital, bitmap-like images, at incredible frame rates.
I’m kind of new to actually coding OpenGL, though I’ve been reading as much as I can. I have the Red and Blue books, I’ve gone through many of the tutorials at NeHe, but I’m still not sure if I’m on the right path. In many ways, I think what I’m trying to do is much simpler than what the average person using or wanting to learn OpenGL is interested in, as I’m not looking for complex texture rendering, games, and the like.
From what I know, here’s roughly what I’m envisioning. Based on the color plates from the Red Book, I think I want to render with flat shading, no lights, no edges. I want the background to be black, the front (normal-facing) side of each triangle to be pure blue, and the back (anti-normal) side to be pure red. I want the projection to be orthographic (glOrtho rather than a perspective projection, both of which, as I understand it, go on the GL_PROJECTION matrix).
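As I understand it, the front/back distinction OpenGL makes is just the sign of the triangle’s winding as seen on screen; with the camera looking straight down the Z axis, that reduces to the sign of the z component of the face normal. A tiny sketch of the blue/red classification I have in mind (the helper name is my own):

```python
def facing_color(v0, v1, v2):
    """Return 'blue' for a triangle wound counter-clockwise as seen
    from above (+Z looking down), 'red' for the opposite winding."""
    # z component of the cross product (v1 - v0) x (v2 - v0)
    nz = ((v1[0] - v0[0]) * (v2[1] - v0[1])
          - (v1[1] - v0[1]) * (v2[0] - v0[0]))
    return "blue" if nz > 0 else "red"
```

In GL itself I assume this falls out of the front-face/back-face machinery (glFrontFace and per-face colors) rather than anything I compute by hand; the point is only that the information is in the winding, which STL’s right-hand rule guarantees.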
What I’m hoping to do, and thus am most unsure about, is to create a method of slicing the files using only OpenGL. My idea is that I would position the camera directly over the object, looking straight down. I’m not sure where to set the height, or the center of the camera. I would load all the triangles for the part and scale them so they fit the image; I’m not sure if I can do that. I need to make sure the size of the output bitmap is scaled to the part and the size of the drop. The one advantage of WinGDI is that the device-independent bitmap lets me scale it however I want; it’s not clear to me that I can do that with OpenGL. I would then instruct OpenGL to render the scene such that only geometry at or below my current build (slicing) height is displayed. My thought is that this should show up in the bitmaps as three basic colors: black for the background, blue for parts of the file that have already been printed, and red for the interior of the current cross-section. Since I know the model is water-tight, there should always be a red surface wherever the slice cuts through solid material.
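If it helps to make the idea concrete, here are the numbers I think the approach needs, computed outside of GL (helper names are my own, and this is just my sketch of the setup, not tested code). It assumes the only modelview transform is glTranslated(0, 0, -eye_z), i.e. the camera sits on the Z axis at eye_z looking straight down -Z over the part:

```python
def ortho_bounds(xmin, xmax, ymin, ymax, zmin, zmax):
    """Arguments for glOrtho(left, right, bottom, top, near, far)."""
    eye_z = zmax + 1.0                # camera 1 unit above the part
    # left/right/bottom/top come straight from the XY bounding box;
    # near/far are positive distances from the eye along -Z.
    return (xmin, xmax, ymin, ymax, eye_z - zmax, eye_z - zmin)

def slice_clip_plane(h):
    """Coefficients (a, b, c, d) I'd hand to glClipPlane with
    GL_CLIP_PLANE0: it keeps the half-space a*x + b*y + c*z + d >= 0,
    which for (0, 0, -1, h) is everything with z <= h."""
    return (0.0, 0.0, -1.0, h)

bounds = ortho_bounds(0, 10, 0, 5, 0, 4)
plane = slice_clip_plane(2.5)
```

(If I’ve read the Blue Book right, glClipPlane transforms the coefficients by the inverse of the modelview matrix current at the time of the call, so I’d set the plane with that same modelview in place.)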
I’m not asking for a complete solution here. Like I said, I’m learning OpenGL solely for this purpose, not as a hobby, so I need to know whether it’s worth continuing down this path. From what I’ve read, it seems I should be able to do it; I just don’t know for sure. Any advice is appreciated, and I thank you in advance.