Can I use a depth buffer both for reading depth (bound as a texture) and for Z-testing the light fragments in a deferred renderer, both at the same time? Depth writes would be disabled, of course.
I read the spec and couldn’t find anything about this case. It only says that sampling a texture while simultaneously writing depth to it is undefined, because you’d be reading and writing the same memory at the same time. Here, though, it’s just a depth test plus a depth read… no depth writes.
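To make the question concrete, here is a minimal sketch of the lighting-pass setup I mean (names like `gBufferFBO` and `gDepthTex` are placeholders for the G-buffer FBO and its depth attachment; the depth texture stays attached while also being bound for sampling):

```cpp
// Lighting pass of a deferred renderer: the G-buffer's depth attachment
// is used for the hardware depth test AND sampled in the light shader.
glBindFramebuffer(GL_FRAMEBUFFER, gBufferFBO); // depth attachment = gDepthTex
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_FALSE);                         // depth writes disabled: test only
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, gDepthTex);       // same texture bound for reading
// ... draw light volumes; the shader reconstructs position from gDepthTex ...
```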
I believe that it should work but there is a certain risk.
From what I understand, the memory layout of a render-to-texture (RTT) surface is linear for rendering purposes, just as it is for the main framebuffer. For regular textures, a different layout is used for performance (a swizzled/tiled texture?). If a driver keeps the RTT linear at all times, there should be no problem. If a driver reserves some extra memory for the texture and swizzles a copy, again no problem. But if a driver swizzles and then frees the original linear buffer, you might have problems (depth testing not working, or behaving strangely).
It looks like DX11 added a flag to enable read-only access to bound depth-stencil views, so that they can be bound for both reading and depth testing at once. This sort of thing has been requested often enough that I’d bet (going out on a limb here) it’s on the ARB’s todo list in one form or another.
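For reference, the DX11 flag in question is `D3D11_DSV_READ_ONLY_DEPTH` on the depth-stencil view description. A rough sketch (`device` and `depthTex` are placeholders, and the format is just an example):

```cpp
// Create a read-only depth-stencil view so the same resource can also
// be bound as a shader resource view during the same pass.
D3D11_DEPTH_STENCIL_VIEW_DESC dsvDesc = {};
dsvDesc.Format        = DXGI_FORMAT_D24_UNORM_S8_UINT;
dsvDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
dsvDesc.Flags         = D3D11_DSV_READ_ONLY_DEPTH |
                        D3D11_DSV_READ_ONLY_STENCIL;
ID3D11DepthStencilView* readOnlyDSV = nullptr;
device->CreateDepthStencilView(depthTex, &dsvDesc, &readOnlyDSV);
```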
But if a driver swizzles and then deletes the original linear memory layout buffer, then you might have problems (depth testing not working or behaving strangely).
That would be contrary to the specification. The spec only says you get undefined behavior when you both read and write the same layer of the same texture. Reading in two places (the hardware depth test and a shader fetch) is defined behavior. So if some circumstance causes it not to work, it should be a driver bug.