I read that when implementing huge worlds there is little choice but to resort to integer coordinates. How are these integer coordinates transformed? Using double-precision matrices? What if such matrices are not available (on fixed-function pipeline hardware)? The most significant problem I see is the initial translation toward the origin: many significant bits can be lost. Furthermore, this origin needs to be specified in integers itself.

Just subtract the (X, Y, Z) integer world coordinates of the camera's position from the (X, Y, Z) position of each object (also stored as integers).

The result is converted to float and used as the position when building the Model matrix. Because the subtraction happens in integers, it is exact, and since visible objects are close to the camera, the resulting values are small enough for a float to represent accurately.

As all objects are now positioned relative to the camera, the View matrix no longer contains the camera's translation, only its rotation.