Hmm… another thought: you probably do want to scale the float. Given the range, you aren’t going to use all the bits in the exponent, so scaling would buy you extra precision for the fragment evaluation versus what you store.

I wonder if a simple shift of the exponent bits would do it?

[This message has been edited by dorbie (edited 04-18-2002).]

I can’t remember the floating-point format, other than that there’s a sign bit, exponent bits, and “the rest.” But it seems to me that if hardware vendors decided to internally use a 0…1 floating-point buffer, they could use that sign bit for something else, and shorten the exponent part of the variable, using the extra room for the data, the number itself. Right?

Well, yes. But getting rid of the exponent and the sign bit, so that numbers in the range 0…1 are the only numbers represented, leaves you with just the mantissa. So you have a 32-bit value with the minimum being 0 and the maximum being 1, which is fixed point, and is pretty much what we have right now, if not exactly.

The whole point of floating point, as you point out, is the exponent.

I can see that you might want to lose a few bits of exponent, but goodness, not all of it! This is effectively what I suggested with the exponent bit shift. The LSBs would be zero, but you would get better fragment evaluation accuracy thanks to the larger exponent you start with.

You probably already have IEEE 32-bit floating point from the transformed vertices, so all those old inventive schemes don’t buy you anything. The only reason weird schemes work now is that depth buffers have less precision than the transformed vertex values, whether it’s eye Z or W you’re talking about. When you go from vertex to depth value you start off with more precision; when you store as floating point, there is nothing to be gained through manipulation other than trying to preserve as much of the available precision as possible during fragment evaluation.

Well I never suggested getting rid of the exponent, just shortening it.

Also, dorbie, I was thinking of small fp variables (16 or 24 bits), not 32 bits. I doubt any modifications to the floating-point format will be needed for a 32-bit fp depth buffer, though at that bit depth, fixed point is probably fine.

[This message has been edited by CGameProgrammer (edited 04-19-2002).]

Point taken, I wasn’t trying to be critical of you.

BTW, on the same train of thought perhaps you want to simply move the exponent/mantissa boundary in the fp representation so that the zero MSB of the exponent becomes the zero LSB of the mantissa for fragment evaluation.

[This message has been edited by dorbie (edited 04-19-2002).]

Originally posted by dorbie: BTW, on the same train of thought perhaps you want to simply move the exponent/mantissa boundary in the fp representation so that the zero MSB of the exponent becomes the zero LSB of the mantissa for fragment evaluation.

Well, this is a no-cost hardware operation, and really is only the same as removing the zero MSB (I suppose you’re referring to the zero sign bit?), so e.g. you would only use 31 bits out of the 32 bits of a 32-bit fp representation, since you know the MSB is always zero.

I’m not sure if a 16-bit floating point format would be useful, but a 24-bit floating point format should be able to perform very well:

5 bits exponent + 19 (+1) bits mantissa

Remember, the MSB of the mantissa is always 1, and need not be stored, so you get 20 effective bits of mantissa, and a range equal to a 32-bit fixed point format (the exponent can shift the MSB of the mantissa 31 bits). Numbers in the range: