So I’m using point sprites, vertex buffers, interleaved arrays, pbuffers, mipmapping, and vertex and fragment programs, and I’m getting 4 seconds per frame on an NVIDIA 6600 GT for about 800000 point sprites. When I run the same code on my ATI 9800 Pro, I get 60 fps!!! WTH? What is the problem? I’ve heard that old ATI drivers had problems with point sprites, but I’ve never heard of NVIDIA problems with them. My 6600 GT is the AGP version, not PCI Express, but I have tested on a different system with a PCI Express 6600 GT and got similar results.
Why am I so sure the problem is the point sprites? I have an implementation where everything is the same except that I use triangles, and there the NVIDIA card is faster. I also tested a version with GL_POINT_SPRITE_ARB disabled (and texturing disabled too, otherwise the screen would be black), and the result was 18 fps for 3x800000 points. The same version with point sprites enabled goes back to 4 seconds per frame…
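To be clear about what I mean by "point sprites on/off", the state I'm toggling boils down to something like this (a minimal sketch, not the exact code from the link below; the point size is a placeholder, and this assumes a GL context and the GL_ARB_point_sprite enums are already available):

```c
/* Sketch of the ARB point sprite path being benchmarked.
   Requires an existing OpenGL context; not standalone-runnable. */
#include <GL/gl.h>

#ifndef GL_POINT_SPRITE_ARB
#define GL_POINT_SPRITE_ARB  0x8861
#define GL_COORD_REPLACE_ARB 0x8862
#endif

static void draw_point_sprites(const GLfloat *verts, GLsizei count)
{
    glEnable(GL_POINT_SPRITE_ARB);
    /* Generate texcoords across each sprite so the texture is visible. */
    glTexEnvi(GL_POINT_SPRITE_ARB, GL_COORD_REPLACE_ARB, GL_TRUE);
    glPointSize(8.0f);  /* placeholder size */

    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, verts);
    glDrawArrays(GL_POINTS, 0, count);
    glDisableClientState(GL_VERTEX_ARRAY);

    glDisable(GL_POINT_SPRITE_ARB);
}
```

The glEnable/glDisable of GL_POINT_SPRITE_ARB (plus texturing) is the only difference between the 18 fps run and the 4 seconds-per-frame run.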
I hope to get a test run on a 6800 next week, but I suspect it will behave the same. I have already tried the latest 77.72 drivers and some older 6x.xx versions as well.
I’ve posted the same question with code here:
http://www.gamedev.net/community/forums/topic.asp?topic_id=326471
Thanks!