I’m trying to get hardware-supported filtering done on 600x400 pixmaps that are pushed into the GPU (GeForce 6150 on Linux) at 30 FPS or better, using glDrawPixels().
I tried convolution filtering, but that’s done in software on my hardware and is way too slow. Could the MULTISAMPLE feature be used for pixmaps? Or any other hardware-supported technique? Would a better GPU do the trick?
Thanks for any help,
Are you interested in performing an arbitrary convolution on an image, or just in doing some full-scene anti-aliasing (which, by the way, is not to be confused with the former)?
Hi, it’s the latter that I want, the full-scene anti-aliasing.
The pixmaps that I’m copying to the GPU are scaled down, and as a result produce artefacts that need to be filtered out. The problem is the real-time character of my application. I need high bandwidth and was hoping for some hardware help in this matter.
Let me get this straight. You need to texture some quads, and you want full-scene anti-aliasing?
Nope, my application atm is no more than a video player that draws pixmaps stored in computer RAM into Video RAM using glDrawPixels() at 30+ FPS. I optionally use alpha blending and scaling (glPixelZoom()) and as a result of the scaling I need some serious filtering (smoothing) done in hardware on the resulting pixmap.
No 3D stuff at this stage yet. No texture mapping either.
FSAA cannot be applied to images/textures. It is a feature that smooths the appearance of polygon edges. What do you mean by “image filtering” anyway? Usually, in 3D graphics it refers to image scaling (drawing the image larger or smaller than it actually is). If you want to apply a custom image filter to modify the image (a Gaussian filter etc.) then you could try to use shaders (if your card supports them). Otherwise you are bound to software processing. OK, you can probably find some fancy multi-pass algorithm that does it for you.
Try textures with the PBO extension for quick asynchronous texture uploads. Bilinear texture filtering, if specified, is automatic and fast.
Ah, we posted at the same time
Best solution would be to use textured quads, as the filtering will apply automatically. You should use linear filtering with mipmaps.
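As an editorial illustration of the textured-quad suggestion, here is a sketch of the fixed-function setup (untested on the hardware in question; `pixels`, the 600x400 size, and the RGBA format are assumptions, and it assumes a GL context is already current):

```c
/* One-time setup: a texture with trilinear filtering and
 * driver-generated mipmaps (GL_GENERATE_MIPMAP, core since GL 1.4). */
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 600, 400, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);

/* Per frame: update the texture and draw a window-sized quad. */
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 600, 400,
                GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glEnable(GL_TEXTURE_2D);
glBegin(GL_QUADS);
glTexCoord2f(0, 1); glVertex2f(-1, -1);
glTexCoord2f(1, 1); glVertex2f( 1, -1);
glTexCoord2f(1, 0); glVertex2f( 1,  1);
glTexCoord2f(0, 0); glVertex2f(-1,  1);
glEnd();
```

Note that 600x400 is a non-power-of-two size; GeForce 6 class hardware supports that, but older cards may force a fallback or require padding to 1024x512.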
Thanks for the help guys.
As a matter of fact I did try linear mipmap filtering using textures, and it looked great. The problem was loading the pixmap into the texture buffer, which took ages, so I dumped the whole idea since I need high FPS values. I wasn’t aware of the PBO extension; I’ll check it out.
I guessed that FSAA wouldn’t work on images, and it’s off the list of options now, thanks for clearing that up.
The ‘image filtering’ that I need has to correct for dropped vertical or horizontal lines when scaling down a picture. The linear mipmap stuff worked great in this respect.
Could one of you point me to some info about this PBO extension please?
Good point about the mipmaps (trilinear filtering). You might even look at anisotropic filtering, if the aspect ratio of your video is likely to change (still need mips and a specified number of taps). Again this is just another texture filter and is completely automatic once set up properly.
By the way, if you’re new to the whole extension thing, you may want to look at glew or glee or some other extension loader that will automate the process for you.
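For reference, a hedged sketch of the GL_ARB_pixel_buffer_object upload path being recommended here (untested fragment; it assumes the extension entry points are already loaded, e.g. via GLEW, that `tex` is an existing RGBA8 texture, and that `pixels` points at a 600x400 RGBA frame in system memory). Writing into a mapped buffer lets the driver DMA the data asynchronously instead of stalling inside glTexSubImage2D:

```c
/* One-time setup: allocate a streaming pixel-unpack buffer. */
GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
glBufferData(GL_PIXEL_UNPACK_BUFFER, 600 * 400 * 4, NULL, GL_STREAM_DRAW);

/* Per frame: fill the buffer, then source the texture update from it. */
void *dst = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
memcpy(dst, pixels, 600 * 400 * 4);
glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
glBindTexture(GL_TEXTURE_2D, tex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 600, 400,
                GL_RGBA, GL_UNSIGNED_BYTE, (const GLvoid *)0); /* offset into the PBO */
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0); /* back to client-memory uploads */
```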
Ah! Pixel Buffer Object, so that’s what it stands for.
And yes, I’m new to almost everything regarding OpenGL. I’ll check out the glew/glee stuff.
Thanks for the help guys, I owe you one. I’ll let you know how it worked out.
Okay, here’s my little report. Using a PBO when drawing a pixmap directly into the framebuffer actually caused a 50% drop in performance (using an Nvidia 6150 GPU here with shared memory; this thing is dog slow).
When using a PBO to copy a bitmap into texture memory I got a 50% increase in performance. Good, but not as dramatic as I’d hoped.
The moment I start to use mipmapping, even with the so-called hardware-accelerated mipmap option, performance drops to around 2 FPS, lol.
I started experimenting with fragment shaders, hoping to get a nice blurring filter done in a shader. There’s something I don’t understand, and I hope you guys can help me here.
I wrote a super basic fragment shader that sets the output color to a fixed value (just for debugging).
When I run this shader while drawing a pixmap into the framebuffer using glDrawPixels(), performance drops to a near halt.
However, when I draw the same pixmap into the framebuffer, then copy the framebuffer into a texture, and only run the shader when finally mapping the texture onto a quad (the size of the window), performance is much, much better.
How can this be explained? Wouldn’t both methods result in the same amount of fragments that have to be fed through the fragment shader?
glDrawPixels is usually less accelerated than texturing, or not accelerated at all; it is preferable to use textured quads.
Ah, I see. It is that simple. :eek: