Blending problem

Hi,
I’m making a little UI where the user can open an image and then draw over it by clicking and dragging. When the image is opened it’s displayed in the background (it’s made into a texture and applied to a quad). A (not very accurate) depth map is available for the image, and the aim is to make the brush strokes (created by the user ‘drawing’) appear to follow the shape of the object. In essence: open a picture and paint a new material onto the depicted object.

In this case I’m trying to simulate fur… Each hair is drawn as 3 line segments, and there’s one hair per pixel covered by the user’s brush stroke. Each hair starts at 0.6 alpha at the root and fades to 0 alpha at the tip. A bit of randomness is also involved when generating each hair to make it look more realistic, as real fur/hair isn’t perfect.
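
Roughly, each hair is drawn something like this (a simplified sketch with made-up names, not my exact code):

    #include <cstdlib>
    #include <GL/gl.h>

    // Uniform random value in [-amount, amount], used to perturb the joints.
    static float jitter(float amount)
    {
        return amount * (2.0f * std::rand() / RAND_MAX - 1.0f);
    }

    // One hair rooted at (x, y, z): 3 line segments, alpha fading 0.6 -> 0.
    void drawHair(float x, float y, float z, float r, float g, float b)
    {
        const int   kSegments = 3;
        const float kStep     = 0.01f;   // assumed segment length in ortho units

        glBegin(GL_LINE_STRIP);
        for (int i = 0; i <= kSegments; ++i) {
            float t  = (float)i / kSegments;            // 0 at the root, 1 at the tip
            float jx = (i == 0) ? 0.0f : jitter(kStep); // keep the root on its pixel
            float jy = (i == 0) ? 0.0f : jitter(kStep);
            glColor4f(r, g, b, 0.6f * (1.0f - t));      // 0.6 alpha down to 0 at the tip
            glVertex3f(x + jx, y - i * kStep + jy, z);
        }
        glEnd();
    }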

Currently I’m initialising with

    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glClearDepth(1.0f);         // Depth Buffer Setup
    glEnable(GL_DEPTH_TEST);    // Enables Depth Testing
    glDepthFunc(GL_ALWAYS);     // The Type Of Depth Testing To Do

    glShadeModel(GL_SMOOTH);

    glViewport(0, 0, m_w, m_h);
    glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
    glLineWidth(1.5f);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(-1, 1, 1, -1, 1.0, 10.0);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);
    glEnable(GL_POLYGON_SMOOTH);
    glEnable(GL_LINE_SMOOTH);

    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

However, as the depth test is set to always pass, new hairs cover large parts of the previous ones (apart from the root), so in the middle of each brush stroke only the root bits remain visible and it just looks like random pixels. The effect I’m trying to achieve is the way it looks at the end of each brush stroke, i.e. furry and fluffy :slight_smile:

If I set the depth test to LEQUAL it goes all horrible because (from what I understand at least) the transparent parts of each hair still write to the depth buffer, so things underneath them that should be visible fail the depth test and never get rendered.

The only possible ways I can think of so far are: render just the fur into a separate buffer and blend new hairs against what’s already there, so that an incoming pixel’s contribution is weighted by 1 - alpha_old (or something along those lines), meaning new hairs overlapping old ones end up underneath them.
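
If I went down that route, I guess the blending would be set up something like this (untested sketch; it assumes the buffer the fur is drawn into has an alpha channel and that hair colours are premultiplied by their alpha):

    // Clear the fur buffer to fully transparent once.
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT);

    // "Under" compositing: a new fragment is weighted by 1 - destination alpha,
    // so it only shows through where the buffer is still (partly) transparent
    // and earlier hairs stay on top of later ones.
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_ONE);

    // ... draw the hair segments here, with glColor4f(r*a, g*a, b*a, a) ...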

Or keep a tree with all the hairs, sorted by depth, in memory, and whenever a new hair is drawn redraw the area it affects in the correct (depth-sorted) order. That sounds sensible, but in most cases line segments of the old and the new hair would intersect, and I’m not sure how I should handle that.
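
Something like the following is what I’m picturing for that second idea (a very rough sketch; I’ve used a sorted vector instead of a tree here, each hair gets a single representative depth such as the root’s value from the depth map, and segment intersections are just ignored):

    #include <algorithm>
    #include <vector>
    #include <GL/gl.h>

    struct Hair {
        float rootDepth;   // depth sampled from the depth map at the root pixel
        // ... the 3 line segments, colour, etc. ...
    };

    std::vector<Hair> hairs;   // every hair drawn so far

    void redrawFur()
    {
        // Back-to-front (painter's algorithm), so plain alpha blending gives a
        // sensible result without relying on the depth test at all.
        std::sort(hairs.begin(), hairs.end(),
                  [](const Hair& a, const Hair& b) { return a.rootDepth > b.rootDepth; });

        glDisable(GL_DEPTH_TEST);
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

        for (size_t i = 0; i < hairs.size(); ++i) {
            // emit hairs[i]'s 3 line segments here (e.g. with a drawHair-style helper)
        }
    }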

Any suggestions on how I could start with either of the above approaches, or any other possible solutions/hacks, would be greatly appreciated… The end result doesn’t have to be accurate as long as it looks right…

Apologies for the long post

Thanks
Tania

Perhaps have a look at the GPU Gems 2 article: Hair Animation and Rendering in the Nalu Demo

http://developer.nvidia.com/object/gpu_gems_2_home.html

I don’t have the article on hand, but I believe they use an alpha-to-coverage multisample trick to render the hairs.

(If you don’t know what this is, see this Humus demo:
http://www.humus.ca/index.php?page=3D&ID=61 )
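
Roughly, the GL side of alpha-to-coverage looks like this (untested, and it only does anything if the context was created with a multisampled framebuffer, e.g. 4x MSAA):

    // Each fragment's alpha is converted into a per-sample coverage mask, so
    // partly transparent hairs resolve without worrying about blending order.
    glEnable(GL_MULTISAMPLE);
    glEnable(GL_SAMPLE_ALPHA_TO_COVERAGE);

    // Normal depth testing works again, since coverage replaces blending here.
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LEQUAL);

    // ... draw the hairs with per-vertex alpha as before ...

    glDisable(GL_SAMPLE_ALPHA_TO_COVERAGE);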

Thanks, I’m trying to make sense of that… I’m new to OpenGL so I’m not sure where to start…
But just to clarify: if I had some sort of offscreen buffer that starts out fully transparent, would it not be possible to render all the fur into it, blend it properly (i.e. if something is already visible in a pixel of the buffer, reduce the alpha of the incoming pixel by the alpha of the existing pixel, or something like that), and then render the result as a texture on top of the background image?
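
I imagine it would look vaguely like this (a completely untested guess on my part; names like furTex and drawFullScreenQuad are just placeholders, and it assumes framebuffer object support, i.e. GL 3.0 or ARB_framebuffer_object):

    GLuint furTex, furFbo;

    // RGBA texture that accumulates the fur, initially fully transparent.
    glGenTextures(1, &furTex);
    glBindTexture(GL_TEXTURE_2D, furTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, m_w, m_h, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glGenFramebuffers(1, &furFbo);
    glBindFramebuffer(GL_FRAMEBUFFER, furFbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, furTex, 0);

    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT);

    // ... render the fur into furFbo here, blending against what's already there ...

    // Back to the window: draw the background image, then the fur texture over it.
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    // drawBackgroundQuad();                       // the existing textured quad
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);   // assuming premultiplied fur colours
    glBindTexture(GL_TEXTURE_2D, furTex);
    // drawFullScreenQuad();                       // quad covering the image area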