I am starting to use GLM in my project, and I saw “fast_trigonometric.inl”, which computes the trigonometric functions by itself, using approximations.
However, I am wondering if there is a way (I think there is not for the moment) to use precomputed values stored in some arrays for trigonometric functions.
Something like “glmPrecomputeValues(…parameters for number of values, with a default being given…)” at the beginning of the program would be nice
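A minimal sketch of what the requested feature might look like (note that `glmPrecomputeValues` and `fastSinLookup` are hypothetical names from this request, not part of GLM's actual API):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical sin table over [0, 2*pi), filled once at program startup.
static std::vector<float> g_sinTable;

// Sketch of the proposed glmPrecomputeValues(): sample sin at 'count'
// evenly spaced angles (the default of 1024 is arbitrary here).
void glmPrecomputeValues(std::size_t count = 1024)
{
    const float twoPi = 6.2831853f;
    g_sinTable.resize(count);
    for (std::size_t i = 0; i < count; ++i)
        g_sinTable[i] = std::sin(twoPi * static_cast<float>(i)
                                       / static_cast<float>(count));
}

// Nearest-sample lookup; the angle is wrapped into [0, 2*pi) first.
float fastSinLookup(float angle)
{
    const float twoPi = 6.2831853f;
    float t = std::fmod(angle, twoPi);
    if (t < 0.0f)
        t += twoPi;
    std::size_t i = static_cast<std::size_t>(
        t / twoPi * static_cast<float>(g_sinTable.size())) % g_sinTable.size();
    return g_sinTable[i];
}
```

With this shape, `glmPrecomputeValues(4096);` at startup trades memory for a cheaper per-call cost than evaluating the series each time.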
That’s a good idea but it’s not supported yet.
I also wonder whether boost::mpl has such a thing?
And it becomes a feature request:
Well…that’s cool, thanks, I’m looking forward to this feature
As for boost::mpl, I just had a very quick look at the documentation, and it’s a template metaprogramming library.
I do not see the link with precomputed values, which can be of use at runtime, while meta-programming would be for compile-time constants…
Plus, it’s a matter of personal taste, but I do not like Boost ^^
Ohhh, you want a runtime build of the array … huummm, why not. Would it really be useful, though?
I’m afraid by the fact it would imply a global state that I consider bad design actually.
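One way to sidestep the global-state concern, sketched here purely as an assumption (not a proposal for GLM's actual API), is to make the table an ordinary value that the caller owns and passes around:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Sketch: the precomputed table is a plain object owned by the caller,
// so the library keeps no global state at all.
class SinTable
{
public:
    explicit SinTable(std::size_t count)
        : samples(count)
    {
        for (std::size_t i = 0; i < count; ++i)
            samples[i] = std::sin(twoPi() * static_cast<float>(i)
                                          / static_cast<float>(count));
    }

    // Nearest-sample lookup; wraps the angle into [0, 2*pi).
    float operator()(float angle) const
    {
        float t = std::fmod(angle, twoPi());
        if (t < 0.0f)
            t += twoPi();
        std::size_t i = static_cast<std::size_t>(
            t / twoPi() * static_cast<float>(samples.size())) % samples.size();
        return samples[i];
    }

private:
    static float twoPi() { return 6.2831853f; }
    std::vector<float> samples;
};
```

Usage would be `SinTable sine(1024); float s = sine(0.5f);` with the precision chosen per table instance rather than per process.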
You don’t like Boost … really? OO
You have to change your mind and have a look at all the awesome features: threads, filesystem, condition variables, signals, asio, etc.
Yes, maybe global state is not a cool design for GLM, and sure this feature would be different from what is currently done in the library.
If you don’t like the idea of such a “setup” function, what about implementing it as an extension? It could be disabled by default if you really do not like it, in setup.hpp ^^
I see two possibilities for precomputation:
- having a setup function, which precalculates as many values as we ask it to precalculate
- having a static array in a header file, containing the values, as is done for example for the data of the teapot in FreeGLUT. The problem is that we could not choose the precision we want at runtime, and the executable would be bigger, because it would contain the data itself.
Personally, I would go for the 1st option ^^
About Boost, a debate on it could last a long time ^^
Basically, what I do not like about it is:
- its installation is awful, and not at all standardized (bjam and others…)
- each “component” of Boost has a HUGE dependency on other components of Boost
- it clearly slows down compilation
- its internals are very hard to understand. This is a problem when using template-based components (like Boost.Python for example), where compiler errors are really undecipherable. This is also a problem when debugging, for example when the code uses Boost.Bind…
Boost wants to enhance C++ itself; to my mind, if we want to add features to the language, we had better change the language itself. For this purpose, I like the D language, but it is still not mature enough for me (2 different standard libraries, and const-ness sucks in D 1.0).
Boost can be built with CMake now, which is so much better than bjam … even if bjam wasn’t so bad, I think.
With precompiled headers, I don’t think Boost makes the build really slower.
Some parts of Boost aim to improve the C++ language, true, but most of the work is on the library side. Boost is the ground where C++ grows, so anyway it’s a step into the future of the language.
Yeah, CMake support is great
As for compiling/linking time, I experienced it when working for a company that used Boost, and they were using precompiled headers…
As for the “language improvement”: I think things like meta-programming facilities, BOOST_FOREACH, boost::bind, etc. should be built into the language. Boost is “on the limit” of the language, exploiting all its features (in a very clever way, sure), but this is somewhat a way of compensating for the limitations of C++, which is not a good way to go, in my opinion.