Multisample depth buffer precision issues

Does anybody know whether NVIDIA and/or AMD perform any sort of depth compression on multisample depth buffers? I’m implementing a stencil-routed k-buffer, and I’m finding that the depth values I read back don’t always match what was rendered. AMD and NVIDIA hardware also behave differently from each other, so I’m assuming some sort of optimization is going on.

Are you explicitly allocating a specific depth format? If so, which one (GL_DEPTH_COMPONENT24, GL_DEPTH24_STENCIL8, etc.)?

How are you reading back the values?

Here is one thing I found recently that reminds me of what you’re talking about:

It turns out to be a bug in their support for sample masking. I created a new post in the Drivers forum:
http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=306544#Post306544

EDIT
The precision issues still exist. The post I created in the Drivers forum covers a related but separate issue.