I want to do something a bit unusual.
I need to simulate a ghosting effect, and I need to be able to control it so that only certain parts of the image are ghosted. For instance, I might ghost the entire scene but only part of the HUD. By ghosting I mean the effect you get on an old TV when the antenna isn’t tuned right.
I was thinking about rendering the image into a smaller buffer, then blending it into my original buffer, with an offset, for the stuff that is ghosted. Then I can render the non-ghosted material on top of that. Is there a better way?
I also need to simulate horizontal loss of sync. Again, this applies to only part of the image. I was thinking about doing the same thing, but blending with a random offset that changes by line.
These techniques seem expensive, and I need to keep a decent frame rate. Does anybody have any ideas, or know of a good related topic to research?
I’d love to do stuff like this too, but like a lot of cool effects it seems to me you’ll need to be reading pixels from the screen, which in my experience is horribly slow.
Might even be faster to render a second pass with transparencies and distorted vertices.
Either way it should be no problem to keep the effect away from your HUD. For instance if you wanted to use the image buffer method you could simply do this:
render main scene
read pixels (from back buffer) into your image buffer
render image buffer
I guess that should work fine
[This message has been edited by moeko (edited 02-01-2002).]
I’m assuming you’re doing this for some kind of embedded simulation?
Assuming you own the back buffer, you can copy the back buffer to a texture once you’re done rendering, and then use that texture to draw to the screen again. This is usually very fast, as it’s all on the card/chip and doesn’t have to touch the host CPU or bus. The call you need is glCopyTexSubImage2D().
Drawing with ghosting then reduces to drawing a quad in Ortho view with the appropriate location and texture coordinates, with some amount of blending. The extra overhead isn’t really THAT much, it’s on the order of two full-screen overdraws (one for making the texture, one for blending in the ghosting).
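A rough fixed-function sketch of those two steps follows; it can’t run without a current GL context, and `ghost_tex`, `win_w`, `win_h`, `tex_w`, `tex_h` are placeholder names for a pre-created power-of-two texture and the window/texture sizes:

```c
#include <GL/gl.h>

/* Sketch only: copy the finished back buffer into ghost_tex, then re-draw
   it as a blended quad shifted dx pixels to the right in an Ortho view. */
static void ghost_pass(GLuint ghost_tex, int win_w, int win_h,
                       int tex_w, int tex_h, float dx, float alpha)
{
    /* 1. Grab the finished frame from the back buffer into the texture. */
    glBindTexture(GL_TEXTURE_2D, ghost_tex);
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, win_w, win_h);

    /* 2. Set up a pixel-aligned Ortho projection. */
    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    glOrtho(0, win_w, 0, win_h, -1, 1);
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();

    /* 3. Blend the offset copy over the scene. */
    glEnable(GL_TEXTURE_2D);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glColor4f(1.0f, 1.0f, 1.0f, alpha);          /* ghost strength */

    float s = (float)win_w / tex_w;              /* window may only cover  */
    float t = (float)win_h / tex_h;              /* part of the POT texture */
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(dx,         0);
    glTexCoord2f(s, 0); glVertex2f(dx + win_w, 0);
    glTexCoord2f(s, t); glVertex2f(dx + win_w, (float)win_h);
    glTexCoord2f(0, t); glVertex2f(dx,         (float)win_h);
    glEnd();

    glDisable(GL_BLEND);
    glPopMatrix();
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glMatrixMode(GL_MODELVIEW);
}
```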
The loss of sync in parts of the screen is probably most easily done by rendering a black triangle from scanline n, column 0 to scanline m, column x, and then rendering a skewed quad with the appropriate texture coordinates, with the left and right sides running (n,0)(m,x) - (n,w)(m,x+w).
If the implementation supports SGIS_generate_mipmap for this operation, then your mip maps will be generated for you, too. You can use that for filtering effects by setting the LOD bias when rendering again. That’s not really necessary for ghosting, though.
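As a fragment (the enums come from the SGIS_generate_mipmap and EXT_texture_lod_bias extensions; check the extension string before relying on them):

```c
/* Assumes the ghost texture is bound and both extensions are present. */
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP_SGIS, GL_TRUE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                GL_LINEAR_MIPMAP_LINEAR);
/* A positive bias selects blurrier mip levels -> cheap "bad signal" blur. */
glTexEnvf(GL_TEXTURE_FILTER_CONTROL_EXT, GL_TEXTURE_LOD_BIAS_EXT, 1.5f);
```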
Modern graphics chips are all kinds of fun
[This message has been edited by jwatte (edited 02-01-2002).]
Thanks for the replies. Yes, this is an embedded simulation.
This is for a simulation of a remote-controlled device that is operated by a user, who sees information from a camera on the device.
Normally, everything is fine. But when there is interference, the picture the device sends back gets garbled, along with the symbology it is sending. Some of the extra symbology, provided by the remote control itself, does not get messed up, which is why it was hard to do this in hardware.
I should have access to any GL buffer, so your suggestion may come in handy. This looks better than what I would have done.