How to use texturing as a base for these algorithms?

To start off, I want to say that while I have some reasonable OpenGL experience, I am completely incompetent when it comes to using textures, so please use an appropriate level of detail in any responses. :slight_smile:

The user will draw a closed curve on the screen (the curve may self-intersect, and the last point is set to equal the first in the list of discrete points drawn), and I want to generate a series of random points that are contained within this curve. The random point generation algorithm is already solved using some other tricks specifically related to my application. However, to do this, I operate in the space defined by the bounding box of the curve, and then I have to apply a second processing step to eliminate those points which are outside the curve.

Currently, I just perform the standard even/odd intersection test, but I do it naively (that is, for each random point, I test a horizontal ray from that point for intersection against every line segment of the discrete closed curve). When the closed curve has 200 segments and the number of random points reaches 20,000 or more, this is obviously too slow. I had thought about implementing a smarter algorithm for this (BSP-tree related), but perhaps I can use some sort of texturing algorithm to help here?
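For reference, the naive even/odd test described above can be sketched on the CPU like this (a sketch only; the function and container names are mine, not from any particular library):

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Naive even/odd test: cast a horizontal ray from the point toward +x and
// count how many curve segments it crosses; an odd count means "inside".
// The curve is the list of discrete points, with the last point equal to
// the first, so segment i runs from poly[i] to poly[i + 1].
bool insideEvenOdd(const std::vector<std::pair<double, double>>& poly,
                   double px, double py)
{
    bool inside = false;
    for (std::size_t i = 0; i + 1 < poly.size(); ++i) {
        double x1 = poly[i].first,     y1 = poly[i].second;
        double x2 = poly[i + 1].first, y2 = poly[i + 1].second;
        // Only segments that straddle the ray's y can cross it.
        if ((y1 > py) != (y2 > py)) {
            double xCross = x1 + (py - y1) * (x2 - x1) / (y2 - y1);
            if (xCross > px)    // crossing lies to the right of the point
                inside = !inside;
        }
    }
    return inside;
}
```

The overall cost is O(points × segments): at 20,000 points against 200 segments, that is roughly 4 million segment tests, which is exactly the slowness described.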

Is it possible for me to create a texture where everything outside the closed curve is black, and everything inside is transparent (within the curve’s bounding box)? Then I could simply render each point with a different color, apply the texture (which will wipe out everything outside the curve, but keep the color info of all points inside), and then use glReadPixels to pick out the non-black, non-white colors? I suppose this may not be all that much faster, because if the point density is high, two points may render to the same pixel and only the last one drawn will be recognized. (I suppose this could be solved by simply taking each random point and indexing it into the texture instead of needlessly rendering it and reading it back.) Nonetheless, is my notion of texturing correct here? Is this possible, and is it fast to create such a texture?
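The "index each point into the texture" variant mentioned above could look something like the sketch below: classify each random point by a direct lookup into an inside/outside image covering the bounding box, instead of rendering the points. The mask layout and all names here are my own assumptions.

```cpp
#include <cstddef>
#include <vector>

// One-byte-per-pixel inside/outside mask covering the curve's bounding box,
// e.g. as read back with glReadPixels (255 = inside, 0 = outside). Instead
// of rendering each random point and reading it back, map the point into
// pixel coordinates and look it up directly.
bool lookupInside(const std::vector<unsigned char>& mask, int w, int h,
                  double minX, double minY, double maxX, double maxY,
                  double px, double py)
{
    // Scale the point from bounding-box space into pixel indices.
    int ix = static_cast<int>((px - minX) / (maxX - minX) * (w - 1) + 0.5);
    int iy = static_cast<int>((py - minY) / (maxY - minY) * (h - 1) + 0.5);
    if (ix < 0 || ix >= w || iy < 0 || iy >= h)
        return false;                       // outside the bounding box
    return mask[static_cast<std::size_t>(iy) * w + ix] != 0;
}
```

With this scheme, two points that fall into the same pixel simply read the same mask value, so the overdraw concern above disappears.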

Additionally, I want to know the distance from each random point (on the viewing plane) to the closed curve. For this, I will be using a distance transform algorithm, but again, this requires a 2D image with 0 pixels (black) on the boundary of the closed curve (and outside), and 1 pixels (white) in the interior of the closed curve… which is basically what I’m generating by the texture above, so I could potentially speed up TWO algorithms.
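As a reference point for that second goal: the exact distance from a point to the closed polyline is just the minimum over all segments of the point-to-segment distance. A brute-force version (sketched below, names illustrative) is useful for validating whatever distance transform ends up being used.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <limits>
#include <utility>
#include <vector>

// Exact distance from (px, py) to the closed polyline: the minimum over all
// segments of the point-to-segment distance. A distance transform on the
// rasterized binary image approximates this per pixel.
double distanceToCurve(const std::vector<std::pair<double, double>>& poly,
                       double px, double py)
{
    double best = std::numeric_limits<double>::max();
    for (std::size_t i = 0; i + 1 < poly.size(); ++i) {
        double ax = poly[i].first,     ay = poly[i].second;
        double bx = poly[i + 1].first, by = poly[i + 1].second;
        double dx = bx - ax, dy = by - ay;
        double len2 = dx * dx + dy * dy;
        // Parameter of the projection of p onto segment ab, clamped to [0, 1].
        double t = (len2 > 0.0) ? ((px - ax) * dx + (py - ay) * dy) / len2 : 0.0;
        t = std::max(0.0, std::min(1.0, t));
        double cx = ax + t * dx, cy = ay + t * dy; // closest point on segment
        best = std::min(best, std::hypot(px - cx, py - cy));
    }
    return best;
}
```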

Please let me know ways of implementing these ideas using OpenGL, or if I’m way off base. :slight_smile:

tl;dr - I would like to get an image (texture?) where all points outside a 2D closed curve (defined by a set of discrete points which define a polyline) are black and all points inside are white.

If the inside of your curve is a convex or concave polygon and the contour of your polygon is made of line segments (no Bézier curves), you can use the stencil test: write a value (1) into the stencil buffer where you are inside and zero outside. The technique uses a triangle fan built from the points of the contour of your curve.

See this thread for explanation of the technique.

How does that work if the polygon is concave, though? In my case it could be convex or concave, but if you pick a random point on your polyline and then draw a triangle fan to every other pair of points, you will include areas outside of your curve.

I think the thing I need to do is just render to a texture (which I should be able to find tutorials online to do). Then I can just query the texture’s color at various 2D coordinates to see if the points are inside-outside, plus I have my color map for the distance transform.

This would work great if the polygons were convex… but how is the triangle fan method supposed to work on concave polygons?

I tested the algorithm on both convex and concave polygons and it works. The goal of the algorithm is to be able to fill a concave polygon with a color (see the OpenGL Red Book, 5th edition, if you have one).

Better still, you can choose any random point on the plane of the polygon as the first point of the triangle fan and the algorithm should still work.
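To see why this handles concave polygons: with the stencil op set to GL_INVERT, every fan triangle toggles the stencil bit of each pixel it covers, so a pixel ends up with the parity of its coverage count, odd inside the polygon and even outside, no matter where the fan's apex sits. Below is a CPU simulation of that parity argument (a sketch with made-up names, not actual GL code):

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Signed area test: which side of directed line a->b does c lie on?
static double cross2(double ax, double ay, double bx, double by,
                     double cx, double cy)
{
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
}

// Point-in-triangle: p is inside iff it is on the same side of all edges.
static bool inTriangle(double px, double py,
                       double ax, double ay, double bx, double by,
                       double cx, double cy)
{
    double d1 = cross2(ax, ay, bx, by, px, py);
    double d2 = cross2(bx, by, cx, cy, px, py);
    double d3 = cross2(cx, cy, ax, ay, px, py);
    bool hasNeg = (d1 < 0) || (d2 < 0) || (d3 < 0);
    bool hasPos = (d1 > 0) || (d2 > 0) || (d3 > 0);
    return !(hasNeg && hasPos);
}

// Simulated stencil classification: XOR over every fan triangle
// (apex, poly[i], poly[i+1]) -- like GL_TRIANGLE_FAN drawn with an
// inverting stencil op. The apex may be any point on the plane.
bool stencilFanInside(const std::vector<std::pair<double, double>>& poly,
                      double apexX, double apexY, double px, double py)
{
    bool bit = false;
    for (std::size_t i = 0; i + 1 < poly.size(); ++i)
        if (inTriangle(px, py, apexX, apexY,
                       poly[i].first, poly[i].second,
                       poly[i + 1].first, poly[i + 1].second))
            bit = !bit;
    return bit;
}
```

Run on a concave L-shaped polygon, the classification comes out the same whether the apex is a polygon vertex or a point well outside the shape, which is the property the poster above is relying on.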

Then, to check whether you are inside or outside the polygon, use glReadPixels to read the stencil buffer.

Beware that the depth buffer and stencil buffer are often packed together, so some manipulation may be required.

good luck.

Edit: After reading the stencil buffer, you can copy the data into a texture.

Re-edit: If your background is black and you fill your polygon with white, you can read the color buffer to see if you are inside or outside.

It also seems like I can just tessellate the object using the GLU routines to draw my self-intersecting, possibly concave polygon. I can even draw it to the back buffer and call glReadPixels to get whatever I need out.

I don’t think I am ever going to “render” this texture, because the density of the points is going to be problematic when trying to pick out which points are on screen. As such, all I really need is a rectangular image with color info (i.e., a texture, but not one that will ever be drawn; I’ll only ever use it for 2D lookups).

As a result, it looks like I can just draw my complex polygon to the back buffer using the GLU tessellation routines, then pull it out using glReadPixels for all future lookups. Hopefully I’m not way off base. :slight_smile:

Okay, so after looking up how to do tessellation of a non-convex polygon using GLU (and fully understanding the C++ implementation), I’m having problems converting it to C#.

When I provide any polygon to tessellateClosedPolygon in my class below, it just outputs a black screen. What am I missing here? (I am using Tao’s implementation of OpenGL/GLU for C#.)

public unsafe class Tessellator
{
    Glu.GLUtesselator tess;

    public Tessellator()
    {
        tess = Glu.gluNewTess();
        Glu.gluTessProperty(tess, Glu.GLU_TESS_WINDING_RULE, Glu.GLU_TESS_WINDING_POSITIVE);

        // set up callback functions
        Glu.gluTessCallback(tess, Glu.GLU_TESS_BEGIN, new Glu.TessBeginCallback(Gl.glBegin));
        Glu.gluTessCallback(tess, Glu.GLU_TESS_END, new Glu.TessEndCallback(Gl.glEnd));
        Glu.gluTessCallback(tess, Glu.GLU_TESS_VERTEX, new Glu.TessVertexCallback(Gl.glVertex3dv)); // (vertexCallback));
        Glu.gluTessCallback(tess, Glu.GLU_TESS_COMBINE, new Glu.TessCombineCallback(combineCallback));
        Glu.gluTessCallback(tess, Glu.GLU_TESS_ERROR, new Glu.TessErrorCallback(errorCallback));
    }

    private void vertexCallback(IntPtr vertexData)
    {
        //double* vertex = (double*)vertexData.ToPointer();
        //Gl.glVertex3d(vertex[0], vertex[1], vertex[2]);
    }

    // huh? what? when this was called, weight and coordinates were length 1 each!
    private void combineCallback(double[] coordinates, IntPtr[] vertexData, float[] weight, IntPtr[] outData)
    {
        double* d = (double*)outData[0].ToPointer();

        for (int i = 0; i < 3; i++)
            d[i] = coordinates[i];
    }

    private void errorCallback(int code)
    {
        string err = Glu.gluErrorString(code);
    }

    public void tessellateClosedPolygon(List<C3DPoint> curve)
    {
        Gl.glClearColor(0, 0, 0, 1); // clear to black temporarily
        Gl.glClear(Gl.GL_COLOR_BUFFER_BIT | Gl.GL_DEPTH_BUFFER_BIT);
        Gl.glColor4f(1, 1, 1, 1);
        Glu.gluTessBeginPolygon(tess, (void*)null);
        Glu.gluTessBeginContour(tess);
        double[] data = new double[3];
        for (int i = 0; i < curve.Count - 1; i++) // last point is duplicated as the first, don't need to include it
        {
            data[0] = curve[i].X;
            data[1] = curve[i].Y;
            data[2] = curve[i].Z;
            Glu.gluTessVertex(tess, data, data);
        }
        Glu.gluTessEndContour(tess);
        Glu.gluTessEndPolygon(tess);
        Gl.glClearColor(1, 1, 1, 1); // clear to white as normal
    }
}

Found my problem: each data array passed to gluTessVertex has to be its own pointer, not one shared array. I had assumed the implementation would copy the data, so that once a vertex had been processed I could reuse its structure. Clearly not the case! :slight_smile:

Still don’t understand the proper C# use of the combine callback, but now that things are working for non-self-intersecting curves, maybe that function will start receiving correct arguments, so I’ll tinker with it for a bit.