I’ve got a project that needs SLI to work, and I’m having a heck of a time figuring out what is making NVidia’s driver unhappy. I’ve commented out nearly all my code and added 300 full-screen quads as a test load. SLI works as expected, giving me 45fps with a single card and 90fps with SLI.
I then started slowly commenting small sections of code back in, and it appears that as soon as I make a certain call to glViewport (but not all glViewport calls), or render from a specific offscreen render target, I drop back to 45fps. It seems almost arbitrary, but surely there are some hard-and-fast rules at play here. I know glViewport can cause slowdowns, but I’m still rendering to the entire buffer every frame.
I have of course read the five-year-old slide deck from NVidia, and a thread or two on here from many years ago, but the information is very sparse. All of the “simple” OpenGL tests I’ve written over the years seem to work fine with SLI, so most of the time it does of course ‘just work’.
I’ve tried clearing depth/stencil/color on every render target each time I bind it, thinking that would tell the driver there’s no need to share the last frame’s results with the next frame… no change. I never explicitly use any graphics buffers from the previous frame anywhere in my code, but clearly the driver thinks I do. Is there any way to get insight into what exactly is going on?