Double Precision

Thank you a lot. That's one very good possible solution to the problem!

Is there anybody with another suggestion? :smiley:
But I think the one from Alfonse is a very good one.

By the way, you can set the depth buffer to be a 32-bit floating-point format as well (usually done via render to texture).
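
For reference, a minimal sketch of what that looks like with an FBO (assuming GL 3.0+ or ARB_framebuffer_object; `width` and `height` are placeholders, error checking omitted):

```
// Sketch: render depth into a 32-bit floating-point texture via an FBO.
GLuint depthTex, fbo;

glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, width, height, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, depthTex, 0);
// ... attach a color target, check glCheckFramebufferStatus(), then render to the FBO.
```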

Lastly, the double support in GL4 hardware is limited to attributes, uniforms, varyings and internal arithmetic of shaders. I have not seen anything about image data being doubles.

There is, though, another very, very twisted way to get “64-bit” image data under NVIDIA, but I suspect it will perform poorly compared to breaking up your passes and doing it the usual way: GL_NV_shader_buffer_load and GL_NV_shader_buffer_store.

The basic idea is to make a buffer object that can hold the image data, i.e. sizeof(something) * width * height bytes. I don’t see double support explicitly mentioned (but that does not mean it is not there); since you are computing values in the range [0,1], you can use 64-bit integers instead.
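
Roughly something like this (just a sketch, assuming an NVIDIA driver that exposes both extensions; `width`/`height` are whatever your image dimensions are):

```
// Sketch: a buffer object big enough for width*height 64-bit values, made resident
// so a shader can read/write it through NV_shader_buffer_load / NV_shader_buffer_store.
// Error checking omitted; the GPU address is then handed to the shader, e.g. as a uniform.
GLuint buf;
GLuint64EXT gpuAddr;

glGenBuffers(1, &buf);
glBindBuffer(GL_ARRAY_BUFFER, buf);
glBufferData(GL_ARRAY_BUFFER,
             sizeof(GLuint64) * width * height,  // one 64-bit integer per pixel
             NULL, GL_DYNAMIC_COPY);

// Make the buffer resident and fetch its GPU address so the shader can
// dereference it like a pointer.
glMakeBufferResidentNV(GL_ARRAY_BUFFER, GL_READ_WRITE);
glGetBufferParameterui64vNV(GL_ARRAY_BUFFER, GL_BUFFER_GPU_ADDRESS_NV, &gpuAddr);
```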

My two cents: the latest CUDA still doesn’t support double-precision floats as a texture format. The only option is to fetch an int2 and cast it to a double.
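
For the record, the usual workaround looks roughly like this (a sketch in the old texture-reference style; `d_doubles`, `texDoubles`, and the kernel name are just placeholders):

```
// Sketch: fetching doubles through an int2 texture in CUDA.
// The 64-bit value is bound as int2 and reassembled with __hiloint2double().
texture<int2, 1, cudaReadModeElementType> texDoubles;

__global__ void readDoubles(double *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        int2 v = tex1Dfetch(texDoubles, i);
        out[i] = __hiloint2double(v.y, v.x);  // .y = high word, .x = low word
    }
}

// Host side: bind the device array of doubles as if it held int2 data, e.g.
//   cudaBindTexture(0, texDoubles, d_doubles, n * sizeof(double));
```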

Damn! …
… is it so hard to make 64-bit textures … :frowning:

… is it so hard to make 64-bit textures …

Besides the facts that:

1: nobody needs them, so there’d be no real point in devoting transistors to something that won’t be used.

2: it would double the bandwidth requirements for accessing such textures.

3: using them for depth buffers would require all the neat tricks used to speed up depth testing (Hi-Z, hierarchical Z, etc.) to use double-precision math, thus slowing them down a lot.

It wouldn’t even solve your problem anyway (assuming you have a problem; again, you’ve never stated why you think you need double precision), because OpenGL still defines the positional output of the vertex shader to be floats, and therefore all the fixed-function transforms from clip space to window coordinates are done with floating-point math.

Because if I could use 64-bit precision, I could transfer the texture to the GPU (with CUDA) and do the calculations (extracting nodes, etc.) in parallel. I think this would be faster than doing it on the CPU only. And I need double precision because I must compare the method against a professional one that is also done in DP.
It is useless to compare 32-bit with 64-bit; I would always come out worse.

Doubles are over-rated.
I learned a lot going from scientific computing (people use doubles everywhere) to graphics/game engine work.
It’s all about how many scales you need to resolve, and there are many tricks to isolate the various scales. Much use of doubles is overkill to avoid taking more care to frame the problem around the appropriate scales.
It is a very worthy pursuit to try to understand exactly when doubles are needed and when floats will do (still working on it, myself)!

Hi, the problem is solved! There is no double-precision support yet for the OpenGL buffers. Thank you for the intensive discussion!

Best regards,
Marko