I want to create a simple 3D CAD-like application with OpenGL. One of the main features of a CAD application is cursor snapping. Let's say I have a scene full of lines and I want to snap the cursor to the nearest line endpoint. Since the camera doesn't move constantly the way it does in a game engine, I thought I would calculate the screen coordinates of each point, store them in a vector of vec2s, and then compute each point's distance from the cursor's screen coordinates. However, this means the transformation from world coordinates to NDC is handled by the CPU rather than the GPU. Is this considered poor programming practice? What do you think is the most efficient way to do this?
You can use transform feedback mode to capture vertex shader outputs. However, that’s going to produce a value for each vertex, even if the primitive is entirely outside the view frustum. So you may want to consider culling vertices which are far from the cursor, either before or after transformation, either on the CPU or GPU. The most efficient solution is going to depend upon what data you have available and the hardware.
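The CPU-side version of the culling idea above might look something like this: transform each endpoint once, keep only points that pass the standard clip-space frustum test, and run the screen-space distance check on the survivors. This is a sketch; the function name and the index-list return type are just illustrative choices:

```cpp
#include <array>
#include <cstddef>
#include <vector>

struct Vec3f { float x, y, z; };
using Mat4f = std::array<float, 16>; // column-major, as OpenGL expects

// Keep only endpoints inside the view frustum, using the clip-space test
// -w <= x, y, z <= w. Returns indices of surviving points so the later
// screen-space distance test only touches visible candidates.
static std::vector<size_t> cullToFrustum(const std::vector<Vec3f>& pts,
                                         const Mat4f& m) {
    std::vector<size_t> visible;
    for (size_t i = 0; i < pts.size(); ++i) {
        const float v[4] = { pts[i].x, pts[i].y, pts[i].z, 1.0f };
        float c[4] = { 0, 0, 0, 0 };
        for (int row = 0; row < 4; ++row)
            for (int col = 0; col < 4; ++col)
                c[row] += m[col * 4 + row] * v[col];
        const float w = c[3];
        if (w > 0 && c[0] >= -w && c[0] <= w &&
                     c[1] >= -w && c[1] <= w &&
                     c[2] >= -w && c[2] <= w)
            visible.push_back(i);
    }
    return visible;
}
```

On the GPU side, the same rejection can happen in the vertex shader before transform feedback captures the outputs, but for a few thousand endpoints the CPU version is usually fast enough.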
Thank you for your reply! I will try to implement the vertex culling, sounds efficient.
I can’t tell for sure, but I would do it that way … depending somewhat on what other tasks are at hand. Say you want to move an endpoint of a line in 3D space: you’ll face problems that you don’t meet in 2D. Since you’re doing CAD, you might consider using orthographic projections and, voilà, you’re more or less back in (or at least closer to) 2D. Don’t be surprised to find that, in terms of accomplishing practical tasks on objects/models, 2D is ever so sharp and simple to deal with compared to 3D.
In my work, it feels natural that the PC splits the load roughly half and half between CPU and GPU … the CPU dealing with mouse dragging, hit testing and such, the GPU doing the drawing. And I’m surprised by how much (valuable) work the CPU can accomplish!