NURBS vs glMap

I'm trying to draw some custom surfaces whose control points are interactively modifiable. My problem is that I'm not comfortable with the knots of NURBS surfaces. The glMap functions let me use a control point's weight (w) to pull the surface, which I find much more intuitive than manipulating NURBS knots.

But a NURBS surface is smooth and fast compared to my glMap usage. Is there any way to convert my glMap control points (with weights) to NURBS control points and knots?


It may not seem appropriate to reply to my own question, but as I continued testing I found that glMap produced better surfaces. gluNurbs seems to use a different algorithm to tessellate the polygons: it produces fewer polygons, and it increases or decreases the polygon count appropriately. With glMap, it is left to us to choose the subdivisions. That's all for now.


The answer to your question is to use neither.

gluNurbs takes the NURBS surface data and splits it into Bézier patches; it is those patches that are then drawn. The problem with all of this is that it has to recalculate all the blending factors every frame, which is very, very slow. glMap is what gluNurbs uses under the hood to draw the Bézier patches.

In my experience there are a number of things you need to do to get usable NURBS. First of all, use the Cox-de Boor algorithm to work out your blending factors. There are only two sets of blending factors: one for the u direction and one for the v direction. Every point on the tessellated surface uses those same blending factors, and they only change when the level of detail changes. This is the slowest part of the NURBS calculation, so if you store those values in an array you avoid most of the work and your framerate goes through the roof.

The other thing to think about is how nicely NURBS-tessellated surfaces are structured: all of your points line up for vertex arrays, and the triangles order themselves naturally into triangle strips for you.

Ordering your data nicely in this way means there are only two occasions on which you have to recompute the surface:

  1. A control point is moved. This requires only the vertices and normals to be recomputed.
  2. The surface LOD changes. This requires everything to be recomputed, plus new memory allocations for the vertex array and blending data.

Extra funky stuff can be done with extensions, namely:
Register combiners: calculate the normals at only 16 surface points (4x4) and encode them into a texture. Use linear blending on it and you'll have per-pixel lighting effects.

Vertex shaders: you can just about get a Bézier patch to tessellate in vertex shaders.

There are loads of other optimisations you can do, especially if you write an algorithm to split your NURBS into Béziers. As an example of speed: I've got a fully skinned NURBS character that animates at about 300fps when tessellating to 1500 polys, dropping to about 50fps at 20,000 polys (PII 400, GeForce 256, 256MB).

ps. gluNurbs does allow you to specify a fixed tessellation level. There are three sampling methods (set via gluNurbsProperty); the fixed one is GLU_DOMAIN_DISTANCE, while the variable ones are GLU_PATH_LENGTH and GLU_PARAMETRIC_ERROR.

This only suggests to me that I need to do a lot more reading before trying any of the points you mentioned. Thank you.