Okay, so I’m not terribly experienced with OpenGL, but I’ve done some work, and I have been reading articles on 3D graphics principles. I’ve been planning to implement an idea in a sort of test run, but I wanted to run the idea by the forum first. Here goes; please try to bear with me…
The most common (and hardware-friendly) method for cel-shading (cartoon-like shading/coloring) is to compute E dot N and L dot N for each vertex, then assign texture coordinates based on them. First, L dot N indexes into a 1D texture with 3 color shades for highlighted, medium, and shaded: where the angle between L and N is small, the vertex is highlighted; where it exceeds 90 degrees (L dot N negative), it is shaded; everything else is medium.
Then a second pass is rendered, referencing a 1D texture that is black at one end and transparent elsewhere. For all vertices where the angle between E and N is near 90 degrees (E dot N near zero), the texture coordinates fall on the black end; all other vertices come out clear. This produces an edge outline on top of a 3-color model.
Okay, now my idea: wouldn’t it be considerably faster to simply make a 2D texture consisting of the color texture (oriented horizontally) stretched vertically, with a bar of black on top? That way, moving in the texture’s y direction you reference either the colors or the black bar, and moving in the x direction the colors vary. Then simply generate the x texture coordinate from the L dot N calculation and the y coordinate from the E dot N calculation. Wouldn’t this produce a similar effect with only one pass? I’d appreciate any insight from you (the reader who managed to fight through this long-winded explanation). Thanks a lot.
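The one-pass lookup described above can be sketched on the CPU side. Here is a minimal sketch in C, assuming unit-length normal, light, and eye vectors per vertex; the function name, the [-1, 1] to [0, 1] remapping, and the convention that the black bar sits at the t = 1 edge of the texture are my assumptions, not anything specified in the post:

```c
#include <math.h>

typedef struct { float x, y, z; } Vec3;

static float dot3(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Generate the (s, t) lookup into the combined 2D toon texture.
   s comes from L.N and selects the shade band (remapped from [-1, 1]
   to [0, 1]); t comes from E.N and approaches 1 (the black bar at the
   top of the texture) as the vertex nears the silhouette, where E.N
   is close to zero. */
void toon_texcoords(Vec3 n, Vec3 l, Vec3 e, float *s, float *t)
{
    float ln = dot3(n, l);          /* diffuse facing term */
    float en = dot3(n, e);          /* silhouette term     */
    *s = 0.5f * (ln + 1.0f);
    *t = 1.0f - fabsf(en);
}
```

Each vertex would then be emitted with glTexCoord2f(s, t), with the shade bands running along the texture’s x axis and the black bar along the top rows, so a single textured pass covers both the shading and the outline.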
[This message has been edited by PHoRD (edited 04-26-2001).]
Sounds like you’ve solved it. I can’t see any reason why it wouldn’t work as you’ve suggested.
How are you going to make sure that the border lines have a constant width, say 1 or 2 pixels? Could this be done by calculating the width of the span and then forcing the black texture coordinate to be slightly greater than 0 so that the border has a fixed width?
The border lines won’t actually have a fixed width. They should come out thicker where the model’s surface stays close to tangent with the eye vector for longer. So if the model has interior curves that just barely touch the 90-degree threshold, it will make only a thin black line, while a large, well-defined curve that marks the silhouette should come out thicker. But thanks to the interpolation, the line widths should be more or less continuous, without gaps or large jumps in width.
If you’re toon shading, it might actually look better (crisper) if you use GL_NEAREST instead of GL_LINEAR.
However, I’m not sure I would do contours using normal-dependent textures. Instead, I’d probably walk the geometry to find the outline edges and draw them with a fixed-width line. Finding the edges is the same algorithm as the one used for generating shadow volumes: in short, you find edges where the triangle on one side has a positive or zero dot product between its normal and the light (or, for contours, the eye direction), and the triangle on the other side has a negative one.
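The edge test described above can be written as a small predicate. A minimal sketch in C, assuming you already have the two face normals adjacent to each edge; the function name and sign convention are mine:

```c
#include <stdbool.h>

typedef struct { float x, y, z; } Vec3;

static float dot3(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* An edge shared by two triangles is an outline (silhouette) edge when
   the facing term flips sign across it: one face has a positive or
   zero dot product with the direction of interest (the light for
   shadow volumes, the eye for contours) and the other is negative. */
bool is_outline_edge(Vec3 n0, Vec3 n1, Vec3 dir)
{
    return (dot3(n0, dir) >= 0.0f) != (dot3(n1, dir) >= 0.0f);
}
```

A full implementation would iterate over a precomputed list of edges with their two adjacent face indices, collecting every edge for which this predicate holds and drawing those as lines.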
Now, if lines weren’t soo durn slow on consumer hardware…
Your method (PHoRD) is the fastest I’ve seen… simply because you can do it in one pass… and it looks nice… I’ve done outline rendering with a simple cubemap and GL_NORMAL_EXT texgen… it looked great… (I didn’t plan it that way… I just wanted to get a cubemap to work.)
I would choose GL_NV_vertex_program to set the shading up… that’s fairly easy…
But the outline stuff looks more “cartoon”… so my tip: in a first pass, translate each vertex a bit along its normal and render the model at that position, fully black, with inverted culling enabled… then reset everything and draw the shading as before… that’s how a Q3 model I found somewhere does it… and it looks very cartoonic… no hard work searching for outline edges that way.
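The extrusion step described above is just a push along the vertex normal. A minimal sketch in C, assuming unit normals; the function name and offset value are illustrative, not from any actual Q3 source:

```c
typedef struct { float x, y, z; } Vec3;

/* First pass: offset each vertex along its unit normal so an all-black
   shell sticks out slightly past the model. With front-face culling
   enabled (glCullFace(GL_FRONT)), only the shell's back faces survive,
   leaving a black rim around the silhouette once the normally shaded
   model is drawn on top in the second pass. */
Vec3 extrude_along_normal(Vec3 v, Vec3 n, float offset)
{
    Vec3 out = { v.x + n.x * offset,
                 v.y + n.y * offset,
                 v.z + n.z * offset };
    return out;
}
```

The offset controls the outline thickness in object units; after the black pass you would restore glCullFace(GL_BACK) and normal colors before drawing the shaded model.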
Although walking through the geometry to find all silhouette edges might yield the best results, it probably isn’t going to be a good method for the next few years, when GPUs can process far more vertices than the CPU can… When you draw silhouette edges you highlight the outline of your model data more than usual, so a large amount of the “jaggedness” that Gouraud shading and per-pixel whatever hide is still visible around the edges of your models. Therefore models to be outlined often have to be smoother, or at least have higher polygon densities, than “realistically” shaded models. For this reason it is not a good idea to have to perform per-frame searches through an entire model. Although there are coherence-based and precalculation-based ways of speeding up the search, few of them work well on highly dynamic or skinned models (where transforming vertices on the CPU just to get the eye-space position is a colossal waste of time), and even on static geometry most algorithms are only fast enough for “demo” purposes…
In the end for outlining it is better to stick with either an image-space algorithm that leaves your geometry alone (like this stuff with textures), or something you can get your hardware to do for you (like extending vertices along their normal on programmable GPUs).
The entire point was to develop an algorithm that 1) does it in one pass/texture unit and 2) does not require any special hardware (i.e. the NVidia extension davepermen suggested). To me, my algorithm seems like it should work fairly quickly, though I admit it probably wouldn’t create the most perfect outline. My guess is it would approximate NVidia’s toon shading demo, and that would be fine by me. And I agree with jwatte about the GL_NEAREST thing. I plan to experiment with that if I ever find time to implement this system. But first I need to come up with a model I can use, be it by model loaders, cut-and-paste, or whatever. Thanks to everybody for all the input though.