Strange things with AMD card


Until now I have only tested my application with Nvidia cards. In theory I could get away with that, but recently I bought an AMD (Radeon HD) 5770 card to test my application with.

The application is based on OpenGL 2.1 and GLSL 1.2 and I only use one extension that is Nvidia-specific (and I test for that).

The application does work, but there are some very strange things going on. Sometimes (not too often) it crashes: an exception is raised at some very ‘innocent’ line (or at least execution stops there).

The other thing is that texture upload speed seems really low. I use alternating PBOs for this: the GL thread handles glBufferData, glMapBuffer and glUnmapBuffer, and a separate thread copies from system RAM into the mapped PBO. I use image sequences that need to be updated every frame, and this upload seems to need a lot of CPU intervention. I have not made measurements, but the difference compared to the Nvidia card I previously used in the same PC (a GTX 260) is huge.
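For reference, the alternating-PBO pattern described above looks roughly like this (a simplified sketch, assuming a GL 2.1 context with PBO support; `pbo`, `tex`, `WIDTH`/`HEIGHT` and the format/type are illustrative, and the memcpy is what actually happens in the separate upload thread):

```c
GLuint pbo[2];              /* two pixel-unpack buffers, used alternately */
int cur = 0;

void upload_frame(GLuint tex, const void *frame, GLsizeiptr size)
{
    int next = (cur + 1) % 2;

    /* Kick off the texture update from the PBO that was filled last frame. */
    glBindTexture(GL_TEXTURE_2D, tex);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo[cur]);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, WIDTH, HEIGHT,
                    GL_BGRA, GL_UNSIGNED_BYTE, 0);   /* 0 = offset into PBO */

    /* Orphan the other PBO and map it so the upload thread can fill it. */
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo[next]);
    glBufferData(GL_PIXEL_UNPACK_BUFFER, size, NULL, GL_STREAM_DRAW);
    void *dst = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
    if (dst) {
        memcpy(dst, frame, size);   /* done in the separate thread */
        glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
    }
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);

    cur = next;
}
```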

I realise this may look a little vague but I don’t know what other details may be relevant. I would appreciate some tips from someone with more experience with AMD cards.


Are you using GL_BGRA for your format parameter? If so, try GL_UNSIGNED_INT_8_8_8_8_REV as the type instead of GL_UNSIGNED_BYTE (and make sure that your glTexImage2D and glTexSubImage2D internalformat, format and type parameters match). I had exactly the same texture upload problem recently, and those changes fixed it. My guess is that the combination of format and type I was using caused the driver to pull the texture image data back through system memory for conversion.

You may not even need a PBO with them. :smiley:

I found that ATi cards tend to follow the spec more closely than other cards, but as a result they are seriously pedantic and irritating to work with. For instance, reading the current render-target pixel colour in a shader works on nVidia cards but not on ATi cards; fair enough, the spec does say the result is undefined. Also, unless you explicitly initialise a variable, ATi’s compiler leaves it undefined, while nVidia’s zeroes it out. This led to a lot of weird distortion effects for me, because I was not setting the fragment colour when I had nothing to add to it.
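To illustrate that second point, here is a hypothetical GLSL 1.20 fragment shader (not code from either application): initialising every output on every code path avoids the undefined-value difference between the two vendors.

```glsl
// GLSL 1.20 — hypothetical example.
// On ATi an unwritten variable is undefined; on nVidia it happens to read as 0.
void main()
{
    vec4 colour = vec4(0.0);      // explicit, portable initial value
    // ... accumulate lighting terms into colour here ...
    gl_FragColor = colour;        // written on every code path
}
```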

Thanks, those are easy things to change, and I think I used GL_UNSIGNED_BYTE everywhere.

I changed GL_UNSIGNED_BYTE to GL_UNSIGNED_INT_8_8_8_8_REV everywhere, but there is still a definite difference in texture upload speed between the AMD and the Nvidia equipped PCs. Perhaps there are other texture parameters that AMD does not tolerate well.

Unfortunately there is another mystery, too. I use clip planes in the application by writing to gl_ClipVertex in my shaders. This works well with Nvidia, but with AMD it sometimes does not work at all (and unfortunately I don’t yet know why it does work at other times).
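For reference, this is the standard GLSL 1.20 pattern I mean (a simplified sketch, not my exact shader): fixed-function clip planes are transformed into eye space by the driver, so gl_ClipVertex has to receive the eye-space vertex position.

```glsl
// GLSL 1.20 — simplified sketch of writing gl_ClipVertex.
void main()
{
    vec4 eyePos   = gl_ModelViewMatrix * gl_Vertex;
    gl_ClipVertex = eyePos;   // eye-space position, matching glClipPlane()
    gl_Position   = gl_ProjectionMatrix * eyePos;
}
```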

Are there known issues with clipping planes and AMD?