Windows OpenGL drawing accuracy

Is it restricted to float precision? If not, how can I force it to use double?

Hi!

There is support for both float and double in the API, but I guess you would like to use double internally instead of float?

I think most OpenGL hardware uses float, and you cannot change that. The reason, I guess, is that float is only half the amount of data to send down the pipeline.

I am not sure you would gain much with double anyway; the depth buffer is limited to 24 or 32 bits.
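If it helps, you can check how many depth bits your pixel format actually gave you. A minimal sketch using the legacy GL_DEPTH_BITS query (it assumes a rendering context has already been created and made current):

#include <GL/gl.h>
#include <stdio.h>

/* Print the depth buffer size of the current context.
   GL_DEPTH_BITS is a legacy query available in OpenGL 1.x. */
void printDepthBits(void)
{
    GLint depthBits = 0;
    glGetIntegerv(GL_DEPTH_BITS, &depthBits);
    printf("Depth buffer: %d bits\n", (int)depthBits);
}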

Mikael

((( Very bad news. It is essential for my project to be able to draw with double accuracy…

You can store your data in double format, then convert it to float when drawing.
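A minimal sketch of that idea, assuming immediate-mode drawing and a small hypothetical array of vertex positions kept in double:

#include <GL/gl.h>

/* Vertex data kept in double for your own calculations. */
static double verts[3][3] = {
    { 0.0, 0.0, 0.0 },
    { 1.0, 0.0, 0.0 },
    { 0.0, 1.0, 0.0 }
};

void drawTriangle(void)
{
    int i;
    glBegin(GL_TRIANGLES);
    for (i = 0; i < 3; i++)
    {
        /* Narrow to float only at the point of submission;
           glVertex3dv would also work, but as noted above most
           drivers convert to float internally anyway. */
        glVertex3f((GLfloat)verts[i][0],
                   (GLfloat)verts[i][1],
                   (GLfloat)verts[i][2]);
    }
    glEnd();
}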

A float is more accurate than a double.

Originally posted by Belgrad:
((( Very bad news. It is essential for my project to be able to draw with double accuracy…

Originally posted by nexusone:
A float is more accurate than a double.

nexusone: Huh? Usually float has a 23-bit mantissa and double has a 52-bit or 64-bit mantissa. Look it up.
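If you want to see the sizes your own compiler uses, <float.h> reports them. Note that FLT_MANT_DIG and DBL_MANT_DIG count the implicit leading bit, so they print 24 and 53 rather than the 23 and 52 stored bits:

#include <float.h>
#include <stdio.h>

int main(void)
{
    /* Mantissa width including the implicit leading 1 bit,
       plus the number of reliable decimal digits. */
    printf("float : %2d mantissa bits, %2d decimal digits\n", FLT_MANT_DIG, FLT_DIG);
    printf("double: %2d mantissa bits, %2d decimal digits\n", DBL_MANT_DIG, DBL_DIG);
    return 0;
}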

Belgrad: You must be doing something very unusual if you require the renderer to use double precision. But you can always do the transformations yourself and just give OpenGL the final results.
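One common way to do that, sketched here on the assumption that the precision problem comes from large world coordinates (the struct and function names are made up for the example): keep positions and the camera in double, subtract the camera position on the CPU, and hand OpenGL only the small camera-relative offsets in float.

#include <GL/gl.h>

/* World-space positions kept in double on the CPU side. */
typedef struct { double x, y, z; } Vec3d;

static Vec3d cameraPos;   /* updated by your own camera code */

/* Hypothetical helper: position one object whose location is stored
   in double world coordinates.  The subtraction happens in double,
   so only the small camera-relative offset gets rounded to float. */
void drawObjectAt(const Vec3d *worldPos)
{
    GLfloat rx = (GLfloat)(worldPos->x - cameraPos.x);
    GLfloat ry = (GLfloat)(worldPos->y - cameraPos.y);
    GLfloat rz = (GLfloat)(worldPos->z - cameraPos.z);

    glPushMatrix();
    glTranslatef(rx, ry, rz);
    /* ... submit the object's geometry here ... */
    glPopMatrix();
}

With this scheme the modelview matrix only needs the camera's rotation, so no large translation values ever reach the float pipeline.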