When you use a class and instantiate an object, it is only worthwhile if the object wraps some behavioural concept. You then have to call instances and their high-level methods.
No, you’re assuming a particular style of programming. Just using OOP does not entail this:
Sphere->RelativeToLight(2);
Sphere->RenderSelf();
It would be more likely to be organised like this:
GraphicsSystem->GetLightModel()->ActiveLight(GL_LIGHT2);
GraphicsSystem->BeginPrimitive(GL_TRIANGLES);
…
GraphicsSystem->EndPrimitive();
Using C++ or OOP for graphics does not mean that an object represents a renderable thing. Since a C++ version of OpenGL would not want to impose a rendering engine (i.e., it would still be a low-level renderer), there would be no Sphere class or any other renderable object of that nature.
Virtual functions are a run-time thing. So the compiler cannot inline the function at compile time.
So? Unless you put the implementation in a header file, the compiler can’t inline it either. Besides, a lot of the functions that you would want virtualized are too large to inline.
Also, if the compiler is 100% sure of the object's dynamic type (say, you declare an object on the stack and use it without passing it through a polymorphic pointer or reference), then it can inline the call.
You might also take a cache hit when dereferencing that pointer.
True, although I think you mean “cache miss”, not a hit. If it were a hit, the access would only take a cycle or so.
Of course, you correctly point out that you’d probably get more out of optimizing the algorithm than nitpicking at virtual functions.