Supposing you have the world coordinates, you need to transform each position by the view and projection. You can create a single combined Projection*View matrix in advance. That means you only need one transformation per square (one corner), and the other corners can be computed cheaply (assuming all squares have equal size and orientation on screen).
I would recommend the glm package for easy and quick matrix manipulations, and I don’t think a few hundred transformations should be a problem. If you are uncertain, make a simple test program just to measure the time. Notice also that much of the drawing will be going on in parallel with your calculations.
So you will draw ~400 boxes on the screen, each with information in it? Sounds quite crowded to me. If you are not showing all boxes at the same time, there are culling algorithms that can help you minimize the list of necessary calculations. It also depends on what timing requirements you have for the drawing operation.
Ok, this is how I was thinking of doing it.
Compute a projection matrix and project every vertex with it. Once every vertex is projected, I can easily compute the squares’ corner positions.
But isn’t this method still slow to compute? The CPU (even multithreaded) has to project each vertex (a matrix computation) to find its position on the screen and generate the squares’ coordinates…
Are we talking about the client side (I am)? If not, I really don’t know how to do this on the server side, and any help with that would be really appreciated.
Actually, I was thinking about computing the vertex projections (and so the squares’ corner coordinates) on the server side, into a map ((four square corners * number of vertices) * pixels in a 1D map).
So, stop the draw once the vertex shader has run (because I don’t need the fragment shader), then “redraw” the whole, final scene using the real shader.
I would like to use the video card’s ability to quickly compute the projections, store them in a map, and use that map right after. This would avoid CPU-GPU transfers: the vertex buffer could be generated into a map by the GPU in a non-draw operation and used immediately afterwards to draw the final indexed geometry on screen.
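What you are describing sounds like OpenGL’s transform feedback (available since GL 3.0): run the vertex shader with rasterization disabled, capture its outputs into a buffer object that stays on the GPU, then use that buffer as the vertex source for the real draw. A rough sketch, assuming an existing GL context, a program `prog` whose vertex shader writes a varying named `projectedPos`, and a known `vertexCount`:

```cpp
// Declare which vertex-shader output to capture; must relink afterwards.
const char* varyings[] = { "projectedPos" };
glTransformFeedbackVaryings(prog, 1, varyings, GL_INTERLEAVED_ATTRIBS);
glLinkProgram(prog);

// Buffer that will receive the projected positions (stays on the GPU).
GLuint tfBuffer;
glGenBuffers(1, &tfBuffer);
glBindBuffer(GL_TRANSFORM_FEEDBACK_BUFFER, tfBuffer);
glBufferData(GL_TRANSFORM_FEEDBACK_BUFFER,
             vertexCount * 4 * sizeof(float), nullptr, GL_DYNAMIC_COPY);

// Pass 1: vertex shader only, no rasterization, no fragment shader.
glEnable(GL_RASTERIZER_DISCARD);
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, tfBuffer);
glBeginTransformFeedback(GL_POINTS);
glDrawArrays(GL_POINTS, 0, vertexCount);
glEndTransformFeedback();
glDisable(GL_RASTERIZER_DISCARD);

// Pass 2: bind tfBuffer as a vertex buffer and draw the squares with the
// "real" shader; the projected positions never leave the GPU.
```

This is only a sketch of the call sequence, not a complete program; error checking, the shaders themselves, and the second pass’s vertex setup are omitted.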
Maybe I’m completely wrong, but it would surprise me if this weren’t possible to do…
Anyone have an idea on this?
Yes, but it’s a CG app and it can potentially display a lot of triangles. That’s why I would like an efficient and fast way to do this.
Yes, if I need to use the “CPU way” for projection, I could easily remove hidden vertices/squares/indices from the computation. But I would like to see if a “GPU-only way” is possible.
I don’t really have formal “timing requirements” (this is not a video game).