# Model matrix transformations and scene algorithms: how to organize the data?

Hi there!
I work on my little CAD 3d editor and has a problem how to organize any transformation in the scene with a future work of algorithms: boolean, mouse picker, intersection and other. I have a gizmo wich moves any objects in the scene. As I already know, is the usual way of storing any local transformations is store that transformation in the model matrices of this objects and execute any local transformation directly in the shader. BUT, for example, in my program I implement a classic ray-picking algorythm: the ray is in the world space and detect any intersection with the real (transformed in world space, locally) vertex positions. For example, I have a box and a sphere and I move it by gizmo (I edit their model matrix) on 1,0,0 and 0,1,0 respectively. His model matrices is now different. HERE I get data that I need for ray-picking ant another algorithms - ever objects has own local individual places.

My question is: how do I interact with objects whose real positions are unknown until they are transformed in the shaders? What is the usual way to store data in a CAD or 3D editor program, where the real positions of objects are the basis of every algorithm?

I gathered some ideas:

1. Apply every local transformation immediately on the CPU and store the already-transformed data. I think it's a clean way, but it's expensive: each frame I would convert the movement delta into a matrix and multiply the whole vertex data by it on the CPU.

2. Apply transformations by changing the vertex data directly. No matrix multiplication, and much simpler, at least for translation. But how would I implement rotation that way? Hmm… that's another story. ))

3. Store transformations in the model matrices until mouse picking starts, then quickly multiply the vertices by the matrix on the CPU to prepare the data for picking. This sounds fancy, and there are ways to optimize it.

4. Store the inverse of each object's model matrix and multiply the ray by it when picking runs. This seems helpful only for the picking algorithm, though.

5. Run picking and the other algorithms in shaders, or with CUDA/OpenCL.
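To illustrate idea 4: instead of transforming every vertex into world space, you can transform the world-space ray into the object's local space with the inverse model matrix and intersect against the untransformed geometry. Here is a minimal sketch under simplifying assumptions (all names are mine): the model matrix is a pure translation, so its inverse is just the negated offset, and the local geometry is a unit sphere.

```cpp
#include <cmath>
#include <cassert>

struct Vec3 { float x, y, z; };

Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Ray vs. unit sphere centered at the local-space origin:
// solve |origin + t*dir|^2 = 1 for t >= 0.
bool rayHitsUnitSphere(Vec3 origin, Vec3 dir) {
    float a = dot(dir, dir);
    float b = 2.0f * dot(origin, dir);
    float c = dot(origin, origin) - 1.0f;
    float disc = b * b - 4.0f * a * c;
    if (disc < 0.0f) return false;
    float t = (-b - std::sqrt(disc)) / (2.0f * a);
    return t >= 0.0f;
}

// Pick against an object whose model matrix translates it by `offset`:
// apply the inverse translation to the ray origin (directions are
// unaffected by a translation) and test in local space.
bool pick(Vec3 rayOrigin, Vec3 rayDir, Vec3 offset) {
    Vec3 localOrigin = sub(rayOrigin, offset);
    return rayHitsUnitSphere(localOrigin, rayDir);
}
```

With a full model matrix (rotation/scale included) you would multiply both the ray origin and direction by the inverse matrix instead, but the idea is the same: one matrix-vector multiply per object instead of one per vertex.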

Which way is the most common in CAD programs? Which method is more convenient for future work on, say, a boolean algorithm? Or maybe I'm not on the right track and there is some other method?
Thank you!

If you're writing a CAD program, then you'll certainly have to move, rotate, stretch, and delete individual polygons of a mesh/model. You may also want to bend a whole model, for example.
So I would do all these transformations on the CPU.
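Doing the transform on the CPU amounts to baking the model matrix into the vertex positions, so that picking, booleans, etc. all see world-space data. A minimal sketch (column-major matrices as in OpenGL; the types and names here are mine, not from any particular library):

```cpp
#include <array>
#include <vector>
#include <cassert>

using Mat4 = std::array<float, 16>;   // column-major 4x4, OpenGL convention
struct Vertex { float x, y, z; };

// Transform a point (w assumed to be 1) by a column-major 4x4 matrix.
Vertex transformPoint(const Mat4& m, Vertex v) {
    return {
        m[0] * v.x + m[4] * v.y + m[8]  * v.z + m[12],
        m[1] * v.x + m[5] * v.y + m[9]  * v.z + m[13],
        m[2] * v.x + m[6] * v.y + m[10] * v.z + m[14],
    };
}

// Apply a gizmo delta to a whole mesh in place, so the transform
// accumulates in the vertex data instead of in the model matrix.
void bakeTransform(std::vector<Vertex>& mesh, const Mat4& delta) {
    for (Vertex& v : mesh) v = transformPoint(delta, v);
}
```

This is the expensive path the original post calls option 1, but for editing operations like bending, which aren't expressible as a single matrix anyway, you end up touching every vertex on the CPU regardless.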

You could also consider another approach to picking, which is pixel-precise and usually called color picking. Here is a tutorial about it. The advantages over ray-based picking are that it requires no mathematics, it doesn't need to know the geometry (transformed or not), and it is precise at the pixel level. The disadvantages are that it might be slow (though it generally performs faster than casting rays against every polygon of many models), it requires a modification of your rendering workflow, and you have to store one extra piece of information for each of the polygons (a unique color).
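The bookkeeping side of color picking can be sketched as follows: each pickable object (or polygon) gets a unique ID, the ID is encoded as an RGB color for the picking pass, and the color read back under the cursor (e.g. with glReadPixels) is decoded into the ID again. The GL rendering part is omitted here; this only shows the encode/decode, and the 8-bits-per-channel layout giving 2^24 distinct IDs is an assumption.

```cpp
#include <cstdint>
#include <cassert>

struct RGB8 { uint8_t r, g, b; };

// Pack the low 24 bits of an ID into the three color channels.
RGB8 idToColor(uint32_t id) {
    return { uint8_t(id & 0xFF),
             uint8_t((id >> 8) & 0xFF),
             uint8_t((id >> 16) & 0xFF) };
}

// Recover the ID from a color read back from the picking framebuffer.
uint32_t colorToId(RGB8 c) {
    return uint32_t(c.r) | (uint32_t(c.g) << 8) | (uint32_t(c.b) << 16);
}
```

One thing to watch for: the picking pass must render with lighting, blending, multisampling and dithering disabled, otherwise the read-back color won't exactly match the encoded one.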

Note that if you’re using a fairly recent version of desktop OpenGL, there’s no reason to use colours. You can use a 16-bit or 32-bit integer (GL_R16UI or GL_R32UI) framebuffer to store object IDs directly.

Also, if you want to perform picking on the CPU, you can still use the GPU to do the transformations, using transform feedback (glBeginTransformFeedback etc.) to capture the transformed vertex positions. Note, though, that unlike legacy feedback mode (glRenderMode), the capture happens prior to clipping.