How can I convert gl_Position?


How can I change the type of the built-in gl_Position
from float (vec4) to double (dvec4)?
Everything I have tried has failed.
Does anyone have a little tip for this problem?


You don’t. gl_Position is a vec4, and you can’t redeclare it as a dvec4.

What is the reason for that limitation in shaders?
In OpenGL it was never a problem to use doubles for object and camera positions.

Believe it or not, you never had the ability to use doubles in legacy OpenGL.
Doubles are very new in GPU architectures, and they have pretty limited support in OpenGL, through two extensions: GL_ARB_vertex_attrib_64bit and GL_ARB_gpu_shader_fp64. Both extensions were approved three years ago, and you need SM5 hardware in order to use them.

BTW, why do you think you need dvec4 for clip-coordinates?

I have always set positions and rotations for objects and the camera with doubles, e.g. for a camera:

sets the rotation matrix (4x4) buf from a quaternion qMem


and loads it into OpenGL with


and sets the position (a vector whose components are doubles):

glTranslated( position.x, position.y, position.z );

Simulations with big distances in space, as in our solar system, need big numbers.
E.g. the 1:1 distance from the Sun to Pluto cannot be held in a float without losing precision.
Scaling these big distances down to fit floats makes everything more complicated.

Yes, and the OpenGL implementation would convert them into floats before actually doing anything with them. You were never really using doubles; you’d have gotten the same exact results with glTranslatef((float)position.x, (float)position.y, (float)position.z);.

Do you know why everything is limited to floats?
Is there a graphics card that can handle doubles?

The more recent graphics cards can handle doubles to a limited degree; usually only a subset of the GPU units support them. I think it is 1 in 8 on a GeForce card, and all units on a Quadro card. The instruction execution time is roughly doubled, unlike on a CPU. You can also emulate double precision in the shader, but I would not recommend it.
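For reference, the usual emulation trick is the "double-single" representation: carry one value as an unevaluated sum of two floats. A minimal CPU-side sketch of just the splitting step (a real GPU version would implement add/mul directly on the pairs using Dekker/Knuth error-free transforms):

```c
#include <math.h>

/* "Double-single": a double carried as a pair of floats.
   hi holds the leading ~24 bits, lo the residual bits. */
typedef struct { float hi, lo; } float2;

float2 ds_split(double d)
{
    float2 r;
    r.hi = (float)d;                  /* leading bits */
    r.lo = (float)(d - (double)r.hi); /* residual */
    return r;
}

/* Recombine in double, for checking on the CPU. */
double ds_value(float2 a)
{
    return (double)a.hi + (double)a.lo;
}
```

The pair recovers far more of the original value than a single float cast does, at the cost of several float operations per emulated one.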

Quite obviously. Why would anyone use doubles where floats serve the purpose, when the die area can be spent more efficiently on doubling the number of SP units?

I’ve already answered this question. But I have to ask you: why do you need doubles?
By the way, even on the cards that support DP, most calculations cannot be done in DP.
For example: ALL trigonometric functions, exp, exp2, log, log2, etc.

While answering this question, I found something interesting in the GLSL 4.30 spec.

How come sqrt and inversesqrt can be calculated in DP, when the same hardware unit (the Special Function Unit - SFU) that serves sin/cos, exp2 and log2 is an SP unit?

You are right, but I use 2 x sli nvidea gtx 690, so performance is not a problem.

But it is the problem. Why would GPU vendors cut performance in half or more just to satisfy 1e-3% of the population? Especially when there is no reason even for them.

You still didn’t answer why you need doubles. But, please, don’t repeat the story about the Universe. The whole galaxy can be drawn with floats just fine.

P.S. Don’t get me wrong. You still have to calculate transformations on the CPU using doubles, but convert them to floats before passing them to the GPU.
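A sketch of what that separation looks like in practice, with assumed solar-system-scale numbers; this is the usual "camera-relative rendering" trick:

```c
/* Physics state kept in double (world coordinates, meters). Assumed
   scenario: camera sitting near Pluto, object 1.5 m away along x. */

/* Naive path: cast absolute world coordinates to float and subtract
   afterwards -- the 1.5 m offset is far below the float spacing at
   5.9e12 m and vanishes entirely. */
float naive_offset(double object_x, double camera_x)
{
    return (float)object_x - (float)camera_x;
}

/* Robust path: subtract in double on the CPU first, then cast the
   small camera-relative coordinate to float for the GPU. */
float relative_offset(double object_x, double camera_x)
{
    return (float)(object_x - camera_x);
}
```

Everything the GPU sees is then a small number near the camera, where float precision is excellent.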

Even the GTX690 has FP64 performance that is 1/24th that of its FP32 performance. That is a lot of performance loss.

If you need to pass a dvec4 position to a tessellation or geometry shader before it’s been projected to clip space, then you can declare your own varying via ‘out dvec4 pos’. But to have double-precision gl_Position which is in the range [-1, 1], that’s like measuring down to the micrometer and then cutting with an axe. The rasterizer simply does not need that kind of precision. You’re going to be limited by the pixel samples of the framebuffer, which even at a 4x supersampled 2560x1600 resolution are easily representable with float precision.

I need doubles to have better accuracy in physical calculations.
Calculations with very big numbers or very tiny numbers in floats are
just too inaccurate. You can see the big difference in a simple “Mandelbrot shader”.
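The Mandelbrot case makes a nice concrete test. With an assumed view centered at c = -0.75 and a pixel spacing of 1e-8 (deep zoom), adjacent pixels collapse to the same float coordinate, so whole regions of the image get identical iteration counts:

```c
/* Deep-zoom Mandelbrot: once the spacing between adjacent pixels in
   the complex plane drops below the float spacing near the view
   center, neighboring pixels map to the same float value of c.
   Assumed numbers: center -0.75, pixel spacing 1e-8. */
int float_tells_pixels_apart(double center, double spacing)
{
    return (float)center != (float)(center + spacing);
}

int double_tells_pixels_apart(double center, double spacing)
{
    return center != center + spacing;
}
```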

Besides that, most calculations, such as for the camera, only need multiplying
some quaternions and storing the resulting position in a uniform dvec3.
If I need trigonometric functions, I use Java’s math lib, which already returns mainly doubles.

The sqrt function is used very often in calculations (e.g. for unit vectors). Isn’t using only floats,
especially in chained calculations, bad practice?
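For the quaternion part, double precision on the CPU is cheap anyway. A minimal Hamilton-product sketch (component order w, x, y, z is my assumption, not from the thread):

```c
/* Double-precision quaternion (w + xi + yj + zk); Hamilton product. */
typedef struct { double w, x, y, z; } quatd;

quatd quat_mul(quatd a, quatd b)
{
    quatd r;
    r.w = a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z;
    r.x = a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y;
    r.y = a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x;
    r.z = a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w;
    return r;
}
```

Composing two 45-degree rotations about z this way yields the 90-degree rotation (0, 0, 0, 1) to within double rounding.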

When I assign values to gl_Position, no value is in the range [-1,1].
E.g. if I use
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
the values of gl_Position.x, gl_Position.y, gl_Position.z, gl_Position.w
are seldom in the range [-1,1].
Only after a perspective divide (gl_Position.x/gl_Position.w, gl_Position.y/gl_Position.w, gl_Position.z/gl_Position.w)
are the values in that range.
Or have I overlooked something?

If I calculate e.g. gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;

are all the values (gl_Position.x, gl_Position.y, gl_Position.z) already in the range [-1,1]?
Doesn’t that come later, with the perspective divide (gl_Position.x/gl_Position.w, gl_Position.y/gl_Position.w, gl_Position.z/gl_Position.w)?
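To make the two spaces concrete: gl_Position holds clip coordinates, and the fixed-function divide by w produces the normalized device coordinates that land in [-1,1] for visible points. A sketch with made-up clip coordinates:

```c
/* Clip-space position (what the vertex shader writes to gl_Position)
   and the NDC that the fixed-function perspective divide produces.
   The example clip coordinates in the test are made up. */
typedef struct { double x, y, z, w; } vec4d;
typedef struct { double x, y, z; } vec3d;

vec3d perspective_divide(vec4d clip)
{
    vec3d ndc = { clip.x / clip.w, clip.y / clip.w, clip.z / clip.w };
    return ndc;
}
```

A clip position like (2, -1, 4, 5) is well outside [-1,1] componentwise, yet divides down to the visible NDC (0.4, -0.2, 0.8).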

[QUOTE=Worker;1249527]I need doubles to have better accuracy in physical calculations.
Calculations with very big numbers or very tiny numbers in floats are
just too inaccurate.[/QUOTE]
I agree, but you asked about gl_Position. Yes, there are problems with the calculations. Unfortunately, working around them requires some gymnastics.

[QUOTE=Worker;1249527]Besides that, most calculations, such as for the camera, only need multiplying
some quaternions and storing the resulting position in a uniform dvec3.[/QUOTE]
It can be downcast to floats without loss of precision in the orientation. Rotations smaller than 1e-4 degrees are not visible.

I was talking about trigonometric function calls in GLSL, not in Java. You are lucky if your shaders are simple and do not require more complicated calculations using trigonometry, logs or exponents. If they did, you would find out how terrible the FP precision is.

That’s what we all tried to explain. Single precision is terrible for many applications, but not for gl_Position.

So gl_Position is a value in “clip space”,
before the perspective divide is made?
E.g. (gl_Position.x/gl_Position.w, gl_Position.y/gl_Position.w, gl_Position.z/gl_Position.w)
With floats and with doubles the values end up in the range [-1,1] (the visible range);
only the accuracy with doubles is better. It’s like a vector that becomes a unit vector, whether it is stored in float or in double.
Do you see what I mean?

Then what if my clip frustum has a far-plane z-value that is bigger than a float can represent,
and a visible point in this clip space is also bigger than a float?
For that, gl_Position would have to be a double.
For the later perspective divide it would not be a problem, in my opinion.

[QUOTE]Rotations smaller than 1e-4 degrees are not visible.[/QUOTE]

Think about a realtime simulation: the Earth rotating, or the Earth orbiting the Sun.
The rotation steps per frame (at 2500-3000 frames per second)
are really tiny, but must be drawn with correct values. Within an hour at the latest
you can see that the sum of these little steps has rotated the Earth a bit.
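This accumulation of tiny steps is exactly where float struggles. Summing one hour of per-frame rotation steps (3000 fps assumed, matching the numbers above) in float drifts measurably more than the same sum in double:

```c
#include <math.h>

/* Earth spins 360 degrees per 86400 s. At an assumed 3000 frames per
   second the per-frame step is about 1.4e-6 degrees; one hour of
   frames should sum to exactly 15 degrees. */
double accumulate_double(long frames, double step)
{
    double sum = 0.0;
    for (long i = 0; i < frames; ++i)
        sum += step;
    return sum;
}

float accumulate_float(long frames, float step)
{
    float sum = 0.0f;
    for (long i = 0; i < frames; ++i)
        sum += step;
    return sum;
}

/* Absolute error versus the exact one-hour answer of 15 degrees. */
double error_vs_15(double sum)
{
    return fabs(sum - 15.0);
}
```

Near the end of the hour the per-frame step is only about one float ulp of the running sum, so each addition rounds badly; the double sum stays essentially exact.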

[QUOTE=Worker]Think about a realtime simulation: the Earth rotating, or the Earth orbiting the Sun.[/QUOTE]

You should be doing your computations for the current position of the Earth in double-precision, then converting to single-precision when rendering. You need a separation between “the values I use in my physics simulation” and “the values I use to draw with”.

Even if that physics simulation is happening on the GPU, you still shouldn’t be writing its results directly to gl_Position.