I am writing a program that reads an input file and displays the terrain specified by its data.

The input file contains 28,000 vertices, each given as x, y and z coordinates. Each vertex specifies one corner of a 3D block (similar to a Lego block), so there are 28,000 blocks. A block has 8 vertices, but the file stores only one of them per block, together with the block's length, breadth and height. The other 7 vertices of each block can be generated in the program from the stored vertex and the length, breadth and height whenever they are needed.
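Generating the missing 7 corners is a small piece of arithmetic. A minimal sketch in C, assuming the stored vertex is the minimum corner and that length, breadth and height run along x, y and z respectively (adjust the signs if your file stores a different corner):

```c
typedef struct { double x, y, z; } Vec3;

/* Fill out[8] with all eight corners of a block, given one corner
 * and the block's dimensions.  Bit i of the loop index selects
 * whether to offset along x, y or z. */
void block_corners(Vec3 base, double length, double breadth, double height,
                   Vec3 out[8])
{
    for (int i = 0; i < 8; ++i) {
        out[i].x = base.x + ((i & 1) ? length  : 0.0);
        out[i].y = base.y + ((i & 2) ? breadth : 0.0);
        out[i].z = base.z + ((i & 4) ? height  : 0.0);
    }
}
```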

The task is to write a program that renders the terrain image specified by the data in the input file. Since the file describes the terrain as blocks, the easy way would be to render it as blocks, so the terrain would look as if it were built from Lego blocks. However, the rule is that we cannot simply render the terrain as blocks: we must find a way to use the data in the input file to render the terrain using NURBS. OpenGL Optimizer provides opNurbCurve2d, opNurbCurve3d and opNurbSurface.

If anyone can provide some information on how to determine the knot points and the control point array from the vertices given in the input file, many thanks in advance.
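One common starting point (not the only one, and not specific to OpenGL Optimizer) is to treat the block vertices as a regular grid of control points and use a clamped (open) uniform knot vector in each parametric direction: for n control points of order k you need n + k knots, with the first k knots equal and the last k knots equal so the surface passes through the corner control points. A minimal sketch, where `clamped_uniform_knots` is a hypothetical helper name:

```c
/* Build a clamped (open) uniform knot vector for n control points of
 * order k (degree k-1): n + k knots total, the first k are 0.0, the
 * last k are 1.0, and any interior knots are evenly spaced.  This is
 * the usual choice when the control net is a regular grid, as with
 * terrain heights. */
void clamped_uniform_knots(int n, int k, float *knots /* n + k entries */)
{
    int total = n + k;
    int interior = n - k;  /* number of interior knots */
    for (int i = 0; i < total; ++i) {
        if (i < k)
            knots[i] = 0.0f;
        else if (i >= n)
            knots[i] = 1.0f;
        else
            knots[i] = (float)(i - k + 1) / (float)(interior + 1);
    }
}
```

Note that using the vertices directly as control points only approximates the terrain (the surface will not generally pass through interior data points); if you need the surface to interpolate the vertices, you have to solve for the control points, which is the surface-fitting procedure described in "The NURBS Book".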


“The NURBS Book” is a good reference on approximating an array of points with a NURBS surface. If you have trouble, feel free to mail me.

Could you please tell me where I can find more info on OGL Optimizer?

[This message has been edited by walden (edited 12-23-2000).]

Go to the SGI site…