# Eye separation calculation

There are various ways to calculate the eye separation.
For example:

• 1/30 of the focal length
• Interocular / Real Screen Width
• other sources recalculate it for each scene

Why are there such different solutions?
How come the resulting eye separation ends up around 2 cm instead of my original 6.3 cm?
My eye separation stays the same no matter whether I move, sit in front of a bigger screen, or the scene changes. Why does it need to be adjusted when faking 3D?

Two ideas:

1. A big problem with stereo displays is the focus distance.
Our eyes are trained to link focus distance and the convergence angle of the eyes.
With a stereo display, the focus distance is always the same (~60 cm for a computer screen, a few meters for a TV, a few dozen meters for a cinema). If you kept the same 6 cm eye separation for all scenes, you would get very large discrepancies between focus distance and stereo depth.

2. We want a sense of depth for every scene, both small scale (a desk) and huge (a Grand Canyon landscape). In reality, past a few dozen meters, our eye separation is too small to give any sense of depth (“everything is far”), which is why binoculars often have a much larger ocular separation.

For both of these reasons, it is better to tune the stereo depth so that the scene gathers roughly around the real-world screen distance, for example a third of the scene in front of the screen and two thirds behind it. This means the eye separation changes according to the scene and the screen.
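As a rough sketch of that tuning rule (the function name, the parallax budget, and the exact formula are my assumptions, not something stated above): place the convergence plane a third of the way into the scene's depth range, then scale the eye separation so the nearest object's negative (in-front-of-screen) parallax stays within a chosen bound, using the off-axis parallax relation p = e·(d − f)/d.

```python
def tune_stereo(d_near, d_far, max_parallax):
    """Pick a convergence distance and eye separation for a scene.

    d_near, d_far: nearest/farthest scene distances from the camera.
    max_parallax: largest allowed on-screen parallax magnitude for the
    nearest object (same units as the distances).
    This is a hypothetical sketch, not a standard API.
    """
    # Put the convergence (screen) plane a third of the way into the
    # scene, so ~1/3 of the depth range pops out of the screen.
    f = d_near + (d_far - d_near) / 3.0
    # Off-axis parallax of a point at distance d: p = e * (d - f) / d.
    # Solve |p| <= max_parallax at d = d_near for the eye separation e.
    e = max_parallax * d_near / (f - d_near)
    return f, e

# Example: a scene spanning 2..20 units, parallax budget of 0.1 units.
f, e = tune_stereo(2.0, 20.0, 0.1)
print(f, e)  # f = 8.0, e = 0.1 * 2 / 6 ≈ 0.0333
```

Note how the resulting eye separation depends on the scene's depth range, which is exactly why it gets recalculated per scene rather than fixed at the real interocular distance.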

I think a realistic and fixed eye separation would work better with head tracking, so stereo 3D would always be at the correct scale.

You recommend keeping the eye separation such that not more than a third of the scene is in front of the screen. Do I understand that correctly?

Why should head tracking make a fixed eye separation correct?

Do you know how the formulas I mentioned in my first post were derived? Simply by trial and error?
“Interocular / Real Screen Width” gives a ridiculously small result for a big screen.
I also don’t completely understand the “1/30 of the focal length” formula. Say I have objects at z values −5, 0 and 5 (−5 being closest) and I want the object at 0 to appear on the screen plane (focal distance). The camera is located at −8, which gives an eye separation of 8/30. When I move the camera further back, I get, for example, an eye separation of 15/30. This means the parallax would be bigger although the camera is now further away. Why should the parallax increase when moving away from the objects? That would mean the objects seem to come closer to you although you’re moving away.
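A quick numeric check of that scenario (a sketch, assuming the standard off-axis parallax relation p = e·(d − f)/d for a point at distance d with convergence distance f, which the thread itself doesn't spell out) suggests the larger eye separation does not automatically mean larger parallax, because the focal distance and the object distance grow too:

```python
def parallax(e, f, d):
    """On-screen parallax of a point at distance d from the camera,
    with eye separation e and convergence (screen) plane at distance f.
    Negative values mean the point appears in front of the screen."""
    return e * (d - f) / d

# Camera at z = -8: the object at z = -5 is 3 units away, convergence at 8.
p_near_cam = parallax(8 / 30, 8.0, 3.0)
# Camera at z = -15: the same object is 10 units away, convergence at 15.
p_far_cam = parallax(15 / 30, 15.0, 10.0)
print(p_near_cam, p_far_cam)  # ≈ -0.444 vs -0.25
```

Under this model the parallax magnitude of the closest object actually shrinks as the camera backs away, even though the eye separation grows from 8/30 to 15/30.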