I am working on the same thing.
I also want to know why there is no opengl64.lib
There is really only one place where I found some decent information, and even that wasn't enough:
community khronos /t/64-bit-opengl-for-64-bit-windows-xp-amd/51198
I want to make sure I'm using OpenGL as fast as possible. If my application feeds the graphics card double-precision floats, will they get truncated to single precision? To optimize, should I always send single-precision floats?
That would be fine for regular graphics rendering, which is what I'll mostly be using it for, but I was also planning to try some 64-bit scientific computing on the GPU. That's why I got a Titan Z: it's really one of the only graphics cards with a high double-to-single performance ratio. Most graphics cards are designed around single precision and can do doubles, but only at about a tenth of the FLOPS or worse; they just don't usually have hardware optimized for doubles. A 1:2 double-to-single ratio is the ideal.
It takes a while to look through this massive list, but if you save that page as a PDF at 25% scale, none of it gets cut off. The best GeForce cards are on p. 11 (1080 Ti, etc.) and the best Quadro/Tesla cards are on p. 21 (Tesla V100, etc.).
The 1080 Ti has about 10,800 GFLOPS single precision, but only 300 double.
The Titan V: 12,300 single, 6,150 double.
The Tesla V100: 14,000 single, 7,000 double.
The only relatively inexpensive ones with a high double-to-single ratio are:
Titan: 4,500 single, 1,500 double
Titan Black: 5,100 single, 1,700 double
Titan Z (two Titan Blacks on one board): 8,100 single, 2,700 double
Can anyone help us understand why there is no opengl64.lib?
What happens when you send double-precision numbers to the GPU using OpenGL functions linked against opengl32.lib? Do they reach the double-precision hardware on the GPU, or does some conversion happen along the way, i.e. is OpenGL a 32-bit bottleneck?
I am also trying out static linking. I haven't really used any functions yet, but I'll let you know if it works for me.