NV: Pixel Data Range crash

Has anyone noticed Windows stop errors when reading large pixel buffers with the PDR extension? Specifically, with Matt Craighead’s PixPerf test, my machine crashes when I benchmark buffer sizes much larger than 512x512.
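For context, the read path I’m exercising looks roughly like this. This is just a sketch of typical NV_pixel_data_range usage, not PixPerf’s actual source; it assumes the extension function pointers have already been fetched with wglGetProcAddress, and it needs a current GL context to run:

```c
/* Rough sketch of a PDR readback (fragment; requires a current GL context).
 * Buffer size and the wglAllocateMemoryNV frequency hints are my guesses. */
#define W 1024
#define H 1024

/* Read-optimized memory: high read frequency, low write frequency. */
void *pdrBuf = wglAllocateMemoryNV(W * H * 4, 1.0f, 0.0f, 1.0f);

glPixelDataRangeNV(GL_READ_PIXEL_DATA_RANGE_NV, W * H * 4, pdrBuf);
glEnableClientState(GL_READ_PIXEL_DATA_RANGE_NV);

/* ReadPixels into the PDR region may complete asynchronously; flush
 * (or use a fence) before touching pdrBuf on the CPU. */
glReadPixels(0, 0, W, H, GL_BGRA, GL_UNSIGNED_BYTE, pdrBuf);
glFlushPixelDataRangeNV(GL_READ_PIXEL_DATA_RANGE_NV);
```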

Other weird behavior is mentioned in this thread:
http://www.opengl.org/discussion_boards/ubb/Forum3/HTML/010962.html

…where someone got bandwidth figures that seem unreasonably high compared to the ~41 MB/sec he got at other resolutions.

When I change PixPerf to use fewer iterations (500 instead of 2000), it seems to work.

My configuration is a dual P3 Xeon on an IBM-built Intel i840 motherboard, with a GF FX 5800, Windows XP, and ForceWare 53.03.

Any insights? Does this chipset have a crappy AGP implementation or something?

-Won

Yes, I get the same error, as I mentioned in the post you linked to. I have a VIA chipset and an AMD CPU, so I don’t think it’s related to your chipset.

I’m sure it used to work on older drivers so I suspect a driver bug.

btw, I think you meant 41 Mpixels/sec, not 41 MB/sec.

Yes, you are right, 41 Mpixels/sec.

I’m about to test it on an Athlon64 machine now, myself.

-Won

Well, with the 3Dlabs VP870 128MB installed in this VIA Opteron system, I get a peak PixPerf of 161 MPixels/sec. Pretty good percentage of theoretical AGP 4x bandwidth.

This card’s performance does appear to be chipset-dependent, because I was only able to get 100 MPixels/sec on an older Intel mobo.

Now, to test the GF FX…

-Won

I’m very surprised you get speeds that good.

I’m not seeing the failure you are. Could
you post a command-line that fails for you?

pixperf -read -readpdr -type ubyte -format bgra -size 1024

Basically, any size much larger than 512 causes anomalous behavior. On the i840 chipset (I’m guessing it’s the chipset, or the GART driver, or something), it crashes; it stopped crashing after I lowered the iteration count. What chipset do you have?

On the Opteron, I get around 50 MPixels/sec for -size 512. For bigger sizes, I seem to get implausibly large values (like 225 MPixels/sec).

I’m going to modify PixPerf to test whether the ReadPixels calls are actually producing sensible results.

-Won

I modified PixPerf to push around a bitmap so I can be sure that something sensible is happening. It looks like the GeForce FX is limited to ~50 MPixels/sec, whereas the 3Dlabs card goes up to 165 MPixels/sec. The odd thing is: with the GF FX, when there is nothing in the framebuffer, I get very high apparent bandwidth for larger read sizes.

Does this mean the driver knows when parts of a buffer have undefined values, and short-circuits those parts of the transfer? Maybe the color buffer is decompressed on the client side? Weird…

Anyway, the moral appears to be: don’t trust PixPerf numbers on NVIDIA hardware beyond 512^2. You only really get about 50 MPixels/sec.

-Won