I have some code, written on Windows, using the float buffer extension for doing some maths on the GPU. The code works as expected, so I decided to port everything to Linux/GLX…
I did a straight port to Linux/GLX without major problems, but when running the app I do not really get a float pbuffer, it seems - all float values I read back from the pbuffer are “clamped” to multiples of 1/256, indicating an 8-bit buffer. So my question is: do float buffers work on Linux at all, and is there something special I have to consider to make them work?
Any ideas?
BTW this is on RH 9 using the latest Nvidia driver with a nv30 (ASUS GeForceFX 5950). The Windows version was tested on the same PC…
Sorry to reply to myself, but I have found the problem… Somehow the files glxtokens.h and glxext.h had not been updated correctly when installing the Nvidia driver, resulting in messed-up GLX tokens. Downloading and installing these files from Nvidia’s CVS solved the problem.
[This message has been edited by GnorpH (edited 03-01-2004).]