Does working with GLdoubles improve precision at all?
I’ve been having precision problems (jitter in vertex positions as well as distorted textures) and thought of switching everything to doubles.
After a couple of tests I quickly found it was about twice as slow, but then I started googling about it, and what I found confused me:
Some people say that using doubles is totally useless, since everything is converted back to single precision when sent to the graphics card.
Others talk about OpenGL switching to “software rendering”, saying it was slower; and though they said nothing about it explicitly, you can deduce that precision was effectively improved.
double: AFAIK, GLSL doesn’t support double yet. NVidia hardware has supported it for about a year, I think, in CUDA-land (see this for instance), but I don’t think that’s been pushed up to GLSL.
half: OTOH, while GLSL itself doesn’t support different precision floats, NVidia GLSL has supported half types for many years. See the NVidia GLSL Release Notes. This of course won’t work on anything but NVidia. Also note: if you don’t declare #version ### where ### >= 110, then you can get at the half* types on NVidia GLSL; otherwise you can’t. If you use half, you can use the __GLSL_CG_DATA_TYPES preprocessor symbol to conditionally map them to the float types on non-NVidia hardware.
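That conditional mapping is usually done in the shader preprocessor. A sketch, assuming the usual Cg half vector type names (half2/half3/half4) that NVidia exposes alongside __GLSL_CG_DATA_TYPES:

```glsl
// On NVidia GLSL the Cg data types (half, half2, ...) are available and
// __GLSL_CG_DATA_TYPES is predefined; elsewhere, fall back to float.
#ifndef __GLSL_CG_DATA_TYPES
#define half  float
#define half2 vec2
#define half3 vec3
#define half4 vec4
#endif

varying half4 color;  // genuine half on NVidia, plain vec4 everywhere else
```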
Also, for more NV GLSL goodies, see the NVidia GLSL Compiling Options. In particular, fastmath and fastprecision look interesting. Maybe one of those forces float to half?
No. For some apps, especially HPC/GPGPU number crunching (OpenCL/CUDA), it’s essential.
(skip to 1:08:34): OpenGL SM5 support will add double support for vertex attributes, uniforms, transform feedback, and internal computations in shaders.
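Once that lands, a shader can declare doubles directly. A hypothetical sketch of what that looks like in GLSL 4.00, where fp64 is core (GL_ARB_gpu_shader_fp64 on earlier 4.x-class drivers):

```glsl
#version 400
// fp64 types (double, dvec*, dmat*) are core in GLSL 4.00.

uniform dmat4 modelview;  // double-precision uniform
in vec4 position;

void main() {
    // Do the matrix math internally in double precision, then narrow:
    // gl_Position and the rasterizer still work in single precision.
    dvec4 p = modelview * dvec4(position);
    gl_Position = vec4(p);
}
```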