I've been taking my first steps and there's plenty I don't know, including what tools are available to attack the problem in question. Also, pretty much all my knowledge of programming and graphics comes from the net, so I may very well be ignorant of established techniques.
At the moment I'm trying to recreate a program I had written in another language to visualize conic sections. It displayed a cone, a plane represented by a square that the user could move around, and of course their intersection. It worked like this:
(i) Four points were used for the plane: connecting them with lines produced the square that visualized the plane, and the same points were used to derive the plane's equation.
(ii) The cone, a double-napped one (so two cones, in fact), was represented by a parametric equation. By running a double loop over the parameters, I plotted many of the cone's points and connected them with lines, obtaining a pretty satisfactory visualization.
(iii) Their intersection: by running a double loop again, the program checked whether each point of the cone satisfied the plane equation; if so, it plotted the point, and that produced a correct display of the conic sections.
Now I'm rewriting this in C++ using OpenGL (SFML to set up the context and GLM to handle the matrices). So far I've done (i): I have a square that I can move around. The catch is that these movements, the translations and rotations, are applied in the shader. Hence my question: what's the best way to go about (iii)? A "silly" way would be to do all the calculations in my program as well, so that I always know what the plane equation is, but wouldn't that defeat the point of using OpenGL? I'm wondering: can the shader pass info back to the program? Could the shader itself execute (iii)? Is there some other technique I could use, or should I perhaps rethink the entire approach?
I can provide code if it'd help clarify what I mean. I'd appreciate a nudge in the right direction. Thanks.