How to apply a modelview matrix to an array of vertices

In my app, I have a shape that is a rectangular height map (a 3D projection of a 2D fractal image)

The user interface allows the user to zoom, pan, and rotate the shape.

I need to be able to calculate the closest and furthest Z value for my shape after it has been panned/zoomed/rotated (in other words, after applying the model view matrix.) The value I calculate doesn’t have to be exact, but should be close.

I would prefer not to have to render the entire shape to calculate this value, since the shape can have millions of vertexes. (My app supports height maps of 4000x4000 vertexes, and a typical height map has 800x800 vertexes.)

I’m thinking of cutting my height map into a grid of rectangles (say 20x20 rectangles), and finding the maximum and minimum Z value for each rectangle. I would then build a grid of bounding cubes that would enclose my shape. I’d store the vertexes of these bounding cubes in an array. (preferably as a vertex buffer object stored in video memory.)

When I need to calculate my min and max Z values, I would:

-Set up my model view matrix for the current display settings
-Create a copy of my array of bounding cube vertexes (probably a VBO)
-Apply the model view matrix to my array of vertexes
-Find the largest and smallest Z value in the resulting array of vertexes.

Some questions:

-How do I create a copy of a VBO without transferring it back and forth to main memory?
-How do I multiply the vertexes in my VBO by the model view matrix and store the results back to the video memory, using the GPU, not the CPU?
-How do I find the largest and smallest values in a VBO, again using the GPU and without transferring the data back to main memory?

I’m guessing that the answer to all 3 questions is to write a shader program. However I’ve never tackled shader programming. This seems like it would involve a pretty steep learning curve, all to calculate 2 Z values.

Is there a simpler way?

In my opinion you have two options: CUDA and shaders.

I think CUDA is the easier way, since you’ll have to learn something in both cases, but your end users will have to have CUDA-class hardware (= expensive).

To my mind the best solution would be to render your scene into a depth FBO, which would thus contain (somewhere) the max and the min of the transformed vertices. You could then iteratively scale the texture down on the GPU to a 2x2 one, download it to the CPU, and read your values.
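To show what I mean by iterative downscaling, here is a CPU-side sketch of one min/max reduction pass. In the real version each pass would be a fragment shader rendering into an FBO of half the size; this plain C function is only for illustration, and all the names are made up:

```c
#include <float.h>

/* One min/max reduction pass: each output cell keeps the smallest and
 * largest value found in the corresponding 2x2 block of input cells.
 * Repeat, halving the size each time, until you are down to 2x2 (or
 * 1x1), then read that tiny result back. */
static void reduce_pass(const float *in_min, const float *in_max,
                        float *out_min, float *out_max, int out_size)
{
    int in_size = out_size * 2;
    for (int y = 0; y < out_size; y++) {
        for (int x = 0; x < out_size; x++) {
            float mn = FLT_MAX, mx = -FLT_MAX;
            for (int dy = 0; dy < 2; dy++) {
                for (int dx = 0; dx < 2; dx++) {
                    float v_min = in_min[(2*y + dy)*in_size + (2*x + dx)];
                    float v_max = in_max[(2*y + dy)*in_size + (2*x + dx)];
                    if (v_min < mn) mn = v_min;
                    if (v_max > mx) mx = v_max;
                }
            }
            out_min[y*out_size + x] = mn;
            out_max[y*out_size + x] = mx;
        }
    }
}
```

On the first pass you would feed the same depth data as both `in_min` and `in_max`; after that, each pass consumes the previous pass’s two outputs.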

However :
“This seems like it would involve a pretty steep learning curve, all to calculate 2 Z values.”
Sure :stuck_out_tongue: But as far as I can tell, there is no better way …

Bounding boxes are a good idea in my opinion, but why don’t you just build one big bounding box that encloses the whole rectangular height map? For now, I don’t see any problem with doing that.
You want to use OpenGL to perform all the transformations and retrieve the result, but I think you should program your own transformation class instead (or something else; I don’t know what language you are used to). Computing your own rotation, translation, and scale is not difficult and would be fast enough to apply to the few vertices of your bounding box.


A single bounding box would be too inaccurate. The shapes I’m plotting sometimes jump to a peak in one corner and not in the others. Estimates based on a single bounding box could be off by as much as 50% of the length of one side. I need estimates that are as accurate as possible (say, within 10% of the actual max and min height values).

How do I perform transformations and read the result in OpenGL? The calls I can find that apply transformations do so as part of the rendering pipeline, and then throw away the transformed vertex data at the end, leaving only the rendered color and depth buffers.

I agree that computing my own rotation, translation, and scale is not DIFFICULT. It’s a simple matter of building a transformation matrix and then doing a matrix multiply against my vertexes. However, I’m thinking that a 20x20 grid of bounding boxes would be about right for the accuracy I need. That would be 400 vertexes for the tops and 400 for the bottoms of my boxes, or 800 vertexes total. Doing 800 matrix multiplies is certainly going to be much slower on the CPU than on the GPU.

Maybe I’m over-optimizing, and should just do the calculations in main memory on the CPU. I’m not sure. I’m hoping for some guidance from people who have tackled this sort of problem on the solution that offers the best balance of speed, accuracy, and effort on my part. I’m weak at best in linear algebra, and only moderately experienced with OpenGL, so coding up one of these approaches just to benchmark it is a major effort for me.

Much as I’d love a chance to sink my teeth into CUDA development, that’s not an option for this application. My target audience includes any Macintosh that can run OS 10.4, which includes G4 computers, and fairly low end Intel, NVIDIA and Radeon video hardware, especially on most laptops. Tying myself to a high end subset of one vendor’s video hardware is not an option for this project.

I’m not clear how to use shaders to give me intermediate results like I need here, and then continue with the standard OpenGL rendering pipeline.

I’m also not sure how to get my vertex data back and forth from my “regular” OpenGL VBO into a shader, manipulate the data, and then hand the results back to my application.

Can somebody recommend a good introductory book on shader development that covers that sort of topic? I have both the “Red book” and the OpenGL SuperBible, and don’t see the info I need in either text. Is “The OpenGL Shading Language” companion to the red book the one to get?

I have heard about rendering to a VBO. I have never experimented with this myself, but I found an interesting topic about it:

As you can see, it is not widely supported. I am not sure it would be faster than transforming about a thousand vertices on the CPU. IMO, that is not a big deal for current CPUs…

Perhaps you’re right. Building my array of vertexes in main memory and multiplying it by the modelview matrix would certainly be easier than trying to code a vertex shader to do the work.

Oh, yeah, I got it.
This solution may or may not be an option, depending on how much your heightmap varies, but if it’s fixed it will work (it only needs a fairly slow pre-pass).
You should compute the convex hull of your terrain. Qhull can do that for you. Then, instead of transforming every single vertex, you’ll just have to transform the hull’s vertices. If the hull doesn’t have to be recomputed often, this may be a solution.

Another solution I’m just thinking of is to split your terrain into squares of 50x50 vertices (as you said) and keep a bounding box for each (as dletozeun said). Transform your bounding boxes, find the furthest, and you’ll get a rough approximation of the answer.

“Perhaps you’re right. Building my array of vertexes in main memory and multiplying it by the modelview matrix would certainly be easier than trying to code a vertex shader to do the work.”

I got this working. It was a fair amount of work. I essentially create a 20x20 array of bounding boxes for the points in my shape.

To do that, I cut my height map into 20x20 pieces. I allocate two arrays of 20x20x3 floats (one for max heights and one for min heights). I set the x and y values of the vertexes to the x/y boundary of each rectangular piece. I then scan through all the z values for the points in that piece, and save the largest to my max_heights array and the smallest to my min_heights array. I have some special-case code to deal with the last row and column in my array of bounding boxes, because my height map is usually not an even multiple of 20x20: the last row and column of bounding boxes are usually wider than the rest by plot_width % 20 and taller than the rest by plot_height % 20.
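In sketch form, that bucketing step looks like this (plain C with invented names, assuming the heightmap is a flat row-major array of z values; my real code also stores the x/y corner of each cell, which I’ve left out here):

```c
#include <float.h>

#define NUM_RECT_STEPS 20   /* grid size, as described above */

/* Scan a w x h row-major heightmap and record, for each grid cell,
 * the min and max z found in it.  The last row and column of cells
 * absorb the remainder when w or h is not a multiple of the grid. */
static void build_minmax_grid(const float *heights, int w, int h,
                              float *min_z, float *max_z)
{
    int cw = w / NUM_RECT_STEPS, ch = h / NUM_RECT_STEPS;
    for (int gy = 0; gy < NUM_RECT_STEPS; gy++) {
        for (int gx = 0; gx < NUM_RECT_STEPS; gx++) {
            int x0 = gx * cw, y0 = gy * ch;
            /* last row/column run all the way to the edge of the map */
            int x1 = (gx == NUM_RECT_STEPS - 1) ? w : x0 + cw;
            int y1 = (gy == NUM_RECT_STEPS - 1) ? h : y0 + ch;
            float mn = FLT_MAX, mx = -FLT_MAX;
            for (int y = y0; y < y1; y++)
                for (int x = x0; x < x1; x++) {
                    float z = heights[y*w + x];
                    if (z < mn) mn = z;
                    if (z > mx) mx = z;
                }
            min_z[gy*NUM_RECT_STEPS + gx] = mn;
            max_z[gy*NUM_RECT_STEPS + gx] = mx;
        }
    }
}
```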

When I want to figure out the maximum and minimum z values (closest and furthest points in my plot) for a given rotation, shift, and zoom, I use OpenGL calls to set up my modelview matrix as if I were getting ready to draw, then use glGetFloatv to read the modelview matrix into main memory. Then I step through my arrays of max_heights and min_heights and multiply each vertex by the modelview matrix. Actually, since I only care about the resulting z value, I only calculate the z component of each transformed point. This cuts down on the amount of math I have to do. Here’s what the code looks like:

-(void) get_min_z: (GLfloat*) min_z
			max_z: (GLfloat*) max_z
{
	GLfloat the_modelview[16];
	GLfloat result_z;
	int index;

	[(BasicOpenGLView*) theOwningView updateModelView];	// update the modelview matrix for the object
	glGetFloatv(GL_MODELVIEW_MATRIX, the_modelview);	// read the current modelview matrix into the_modelview
	*min_z = FLT_MAX;	// 1e100 overflows a GLfloat; FLT_MAX is a safe sentinel
	*max_z = -FLT_MAX;
	for (index = 0; index < NUM_RECT_STEPS*NUM_RECT_STEPS; index++)
	{
		// z value of the current point from the min_heights array: dot the vertex with the
		// third row of the (column-major) modelview matrix.  Eye-space z is negative in
		// front of the camera, so take the absolute value to get a distance.
		result_z = ABS(the_modelview[2] * min_heights[index*3+0] + the_modelview[6] * min_heights[index*3+1] + the_modelview[10] * min_heights[index*3+2] + the_modelview[14]);
		if (result_z < *min_z) *min_z = result_z;
		if (result_z > *max_z) *max_z = result_z;
		// Now the z value from the max_heights array.  After a rotation, a "max" vertex can
		// end up nearer than a "min" one, so test both against both bounds.
		result_z = ABS(the_modelview[2] * max_heights[index*3+0] + the_modelview[6] * max_heights[index*3+1] + the_modelview[10] * max_heights[index*3+2] + the_modelview[14]);
		if (result_z < *min_z) *min_z = result_z;
		if (result_z > *max_z) *max_z = result_z;
	}
}


I actually cheat a little and don’t calculate every vertex of every bounding box. I store the maximum and minimum height value for each bounding box in the smallest x,y value for the top and bottom of each bounding box. That way I can calculate 2 vertexes for each bounding box instead of 8. The results aren’t quite as accurate as if I calculated all the vertexes of each bounding box, but I only have to transform 1/4 as many vertexes, so the reduced accuracy is worth it.