Currently I’m having weird issues using atomic operations on buffers that are made resident and addressed by a GPU pointer using the NVIDIA NV_gpu_shader5 extension.
While normal read/write access is no problem at all, my compute shader fails to link when using an atomic function with an offset into the buffer - and only when trying to offset it! I don’t get any compilation errors or warnings though.
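To illustrate the pattern, here is a minimal sketch of what I mean (the uniform name and layout are mine for illustration, not the actual shader):

```glsl
#version 430
#extension GL_NV_gpu_shader5 : enable
#extension GL_NV_shader_buffer_load : enable

layout(local_size_x = 64) in;

// Hypothetical uniform holding the GPU address of a resident buffer.
uniform uint *counters;

void main() {
    uint idx = gl_GlobalInvocationID.x;

    counters[idx] = 0u;            // plain write through the pointer: links fine
    atomicAdd(*counters, 1u);      // atomic without an offset: links fine
    atomicAdd(counters[idx], 1u);  // atomic with an offset: the program fails
                                   // to link, with no compile errors or warnings
}
```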
I’m going to try to work around this using a regular buffer binding - but I suspect this might be an NVIDIA compiler issue, since I’m also having issues running Cyril Crassin’s linked-list A-buffer implementation here (compiler error on the atomicExchange functions).
The issue appears with the 340.43 beta drivers as well as with 337.88.
Can anyone confirm this or point out what’s going wrong?
And this is working fine so far. However, I’m currently refactoring my whole project in the spirit of ‘approaching zero driver overhead’ and therefore trying to avoid state changes like buffer binds as much as possible. That’s the reason I was also trying to do the atomic operations on a resident buffer addressed by its GPU address. I know it should work because of the order-independent transparency implementation by Cyril Crassin I posted above - which in fact doesn’t run on my system right now, probably for the same reasons. That’s why I suspect this might be a driver issue.
By the way, the linked zip of the order-independent transparency implementation already contains a compiled binary; it’d be really interesting to know whether other people on Nvidia HW have the same issue running it (it’s Nvidia-only though, because of some vendor-specific extensions).
[QUOTE=nattfoedd;1260221]…the order-independent transparency implementation by Cyril Crassin I posted above - which in fact doesn’t run on my system right now, probably for the same reasons. That’s why I suspect this might be a driver issue.
By the way, the linked zip of the order-independent transparency implementation already contains a compiled binary; it’d be really interesting to know whether other people on Nvidia HW have the same issue running it (it’s Nvidia-only though, because of some vendor-specific extensions).[/QUOTE]
Specifically, the original version that doesn’t support AMD (the 2nd one allegedly does).
Had to make a few tweaks to get the C++ source compiled on Linux, but nothing big. Also, based on this post, I nuked all the "inline " references in the GLSL shader source, as these were causing compile errors with the latest NVIDIA drivers.
With those few mods, Cyril’s OIT demo compiles, links, and runs just fine on Linux with the NV 331.79 drivers (GPU: GTX 760).
I also read about the inline issue in the comments, though I’m kind of confused about it, since I can’t find any "inline" in any of the shader code. I’ve searched everything a couple of times now and only found inline functions in the C++ code. I’m feeling entirely stupid now, but in which files did you remove the "inline"s?
Going to test this at work tomorrow with the Quadro drivers as well - if it works there, it’s probably indeed a GeForce driver issue on Windows.
I just rechecked, and you’re right. And yet I was definitely getting a bunch of errors out of this code complaining about inline not being supported in shaders. I used perl to search/replace all the "inline " refs with "", and that seemed to fix it. But going back, I don’t know what was causing those errors or why that fixed them.
Anyway, recreating exactly what I did, here’s the procedure:
Thanks again for all the instructions! I’m going to try to reproduce this once I’m at home tonight.
Just for quick info: The demo does run flawlessly on a Quadro K6000 with driver version 333.11 here. And it does not run (same error as with the GTX 780) on another system here that has a Titan running driver version 335.23. So now we’ve tested 5 drivers: all Windows GeForce drivers fail to compile/link the shader, and all non-GeForce/non-Windows drivers succeed. So I guess it’s time to file a bug report with NVIDIA.