I’m trying to make my sky glow (as it’s done everywhere in Tron 2.0, or like the fake HDR in Half-Life 2, but only for the skybox, not for objects). I expect big performance problems from heavy pixel reads and copies if I do it my way.
Here is how I intend to do it:
- First render the scene, correctly setting depth buffer.
- Clear the frame buffer to black, but not the depth buffer (actually this is done at the beginning of the next frame).
- Then render the glowing parts of my skybox behind everything, using this z-buffer (with glow = SkyboxRGB * SkyboxA). Pixels will be set only where there’s no geometry. (problem #1)
- Then grab the frame buffer into a 64x64 texture. (problem #2)
- Then blur it 2 times (horizontally, vertically)
- Then at the end of the frame (because at this point I need to draw the geometry and the real skybox), render the blurred texture with additive blending on top of everything (scaling the 64x64 up to full screen size, with a glOrtho projection and so on).
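To make the plan concrete, here is a rough fixed-function OpenGL sketch of those steps. It is untested as written, and `render_scene()`, `render_glow_sky()` and `glowTex` are placeholders for my own code, not anything that exists yet:

```c
GLuint glowTex;                   /* a 64x64 RGB texture, created once */

/* 1. render the scene normally, filling the depth buffer             */
render_scene();                   /* hypothetical                     */

/* 2. next frame: clear color only, keep last frame's depth           */
glClear(GL_COLOR_BUFFER_BIT);

/* 3. draw the glowing sky parts; the depth test rejects pixels that
 *    geometry covered, so glow lands only where the sky is visible   */
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_FALSE);            /* read depth, don't overwrite it   */
render_glow_sky();                /* hypothetical: SkyboxRGB*SkyboxA  */
glDepthMask(GL_TRUE);

/* 4. grab part of the frame buffer into the glow texture             */
glBindTexture(GL_TEXTURE_2D, glowTex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, 64, 64);

/* 5. the two blur passes (horizontal, then vertical) would go here   */

/* 6. end of frame: stretch the glow over the screen additively       */
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);      /* additive                         */
/* ... draw a fullscreen quad textured with glowTex under glOrtho ... */
glDisable(GL_BLEND);
```

Note that `glCopyTexSubImage2D` as written only copies the bottom-left 64x64 pixels of the window, not a scaled-down copy of the whole frame; that is exactly problem #2 below.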
This is supposed to do the trick (tell me if I’m wrong at this point).
Problem #1: I’m rendering at 1024x768 on screen.
Problem #2: I’m reading 1024x768 pixels back (glCopyTexSubImage2D).
I only want a 64x64 glow texture, not a 1024x768 one… but how can I render correctly to a 64x64 frame buffer with the z-buffer set at 1024x768?
A solution would be to scale the z-buffer down to 64x64…
Is there a way to do that?
Start with 64x64 from the beginning,
or at least use a power-of-two viewport, e.g. 1024x1024 (I think this is excessive),
and when you have your texture, use auto mipmap generation + LOD.
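In code, the 64x64 route could look something like this sketch (`draw_glow_pass()` and `glowTex` are hypothetical names; GL_GENERATE_MIPMAP_SGIS comes from the SGIS_generate_mipmap extension):

```c
/* render the glow source directly at 64x64 in the corner of the
 * window, then copy just that region out: no fullscreen read       */
glViewport(0, 0, 64, 64);
draw_glow_pass();                 /* hypothetical glow-only render  */

glBindTexture(GL_TEXTURE_2D, glowTex);
/* with SGIS_generate_mipmap, the driver rebuilds the mip chain
 * automatically whenever the base level changes                    */
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP_SGIS, GL_TRUE);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, 64, 64);

glViewport(0, 0, 1024, 768);      /* restore the full viewport      */
```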
As Vman said, just start with a 64x64 texture, and don’t bother blurring. If you stretch a 64x64 texture over your entire frame buffer it will look blurred anyway.
This means there is no need to read the data back and no need to resize it, so you get a performance boost whether you use render-to-texture or CopyTexSubImage.
Hi, I’m trying to do the same effect… Are you saying that I should first set up my viewport as 64x64, render everything (only to the z-buffer), and then do my tests (in this case, glow the skybox only where the z-buffer still holds the clear depth)? But the precision of the z-buffer (now restricted to 64x64) will produce some errors, right? Or is the z-buffer still 1024x768 (I don’t think so…)? Would it get better if I used a pbuffer?
One point must have been unclear in my explanation: I reuse the z-buffer from the previous frame; there are no two separate mesh rendering passes.
I suppose you’re telling me to do a z-only render pass into a 64x64 frame buffer, then copy that into a texture, then render the real geometry, and then render the 64x64 glow texture stretched over the screen with additive blending (no specific blur passes).
I’d like to avoid rendering the geometry twice (first for the z-buffer, second for real).
Maybe I did not understand your explanation; thanks for clearing that point up.
Tom, yes, I already read that. I based my work on that paper, actually, and I want to achieve exactly this effect; but in my particular case, I only want the skybox to glow.
They don’t fully describe the method: at one point they have to grab the frame buffer, and then they blur it. I don’t know how they grab it: either the whole frame buffer after all glowing parts have been rendered (I suppose this is what they actually do), or a small part of it (for each glowing object, they could grab a rectangle that encloses the glowing object: first render this glowing part with the depth buffer set, then render the real object with additive blending, then switch back to normal blending for the other objects).
With or without blurring it, I can’t grab a full 1024x768 frame buffer. And if possible, even before grabbing anything, I don’t want to render the glows into a 1024x768 frame buffer. If I try to scissor only the glowing part, it may cover the whole screen. So I really want to take my z-buffer from 1024x768 down to 64x64, and I want to do that without rendering the whole scene into a temporary 64x64 frame buffer just for this glow effect.
Actually, doing a z-only first pass into a 64x64 frame buffer could have a lot of advantages (occlusion culling), but I’m not intending to do this at this point; maybe later, depending on the scenes (low-poly scenes fit this very well).
Please, please, please, give me helpful feedback.
Oops, I meant SeskaPeel.
Have you actually tried grabbing a 1024x768 texture? I’ve got an app that does three fullscreen glCopyTexSubImage2D() calls per frame and that still runs fast enough. For me, that is – what’s your target system?
You could try to be smart and only grab subsections of the screen, but I wouldn’t bother unless I were absolutely sure that the render-to-texture stuff was the biggest bottleneck.
The real problem is probably not rendering to the texture, it’s that blurring a 1024x768 image might be painfully expensive. Unless your polycount is really high, you will be fillrate-limited. In this case, simply doing an extra pass to a low-res texture is almost guaranteed to be a win.
For texture animation purposes, I was grabbing a 512x512 bump map, and it killed my frame rate, from 60 down to 20-25.
[edit] Wrong, it was a 1024x1024 texture (so I was running at 1280x1024). [/edit]
I am definitely fill-rate limited (even fragment-program limited), but adding a z-only pass won’t be free: I will be transfer limited in the first half of the frame, and fill-rate limited in the second half. It will add a second bottleneck to the first half of the frame.
I could render rough geometry if I had LOD implemented, but I haven’t; I could try to manually define “glow occluding” geometry… and so on.
I just wanted to know if there was some way to get a 64x64 z-buffer without adding a z-only geometry pass.
And as for blurring a 1024x768 texture, I never intended to do that; I didn’t even intend to grab it.
My target system is a GeForce FX 5900 Ultra on a P4 2.4 GHz… When you say you were doing three full-frame texture copies, what was your screen resolution?
[This message has been edited by SeskaPeel (edited 11-10-2003).]
640x480 on a Quadro FX 1000, P4 2.8 GHz. But since it’s three times 640x480, I figured doing 1024x768 once wouldn’t hurt either.
If you want to carry your Z buffer over from the 1024x768 viewport, one way to do it would be to grab it into a texture, and then use shadow mapping functionality to replace the depth test in the small view.
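If I understand that trick correctly, a sketch of it could look like the following (assuming the ARB_depth_texture and ARB_shadow extensions are available; untested, and the final glow-quad drawing is only described in the comment):

```c
/* grab the 1024x768 depth buffer into a depth texture (allocate a
 * power-of-two 1024x1024 surface, copy the 1024x768 region into it) */
GLuint depthTex;
glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24_ARB, 1024, 1024,
             0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, 1024, 768);

/* set up the shadow compare: sampling yields 1 where the stored
 * depth passes the comparison against the r texture coordinate     */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_ARB,
                GL_COMPARE_R_TO_TEXTURE_ARB);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC_ARB,
                GL_LEQUAL);

/* then, in the 64x64 viewport, draw the glow quad with screen-space
 * (s,t) texcoords and r set near the far plane: the compare result
 * stands in for the full-resolution depth test, masking out pixels
 * where geometry was in front of the sky                           */
```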
Does all that make sense to you, or do you think I’m banging my head against the wall?
Maybe that z-only pass won’t cost so much… maybe grabbing the 1024x768 z-buffer would even cost less… I don’t have time to test, so unless somebody tells me about a perfect (or nearly perfect) solution, I’ll go for the easiest implementation, and that would be the z-only pass in my case.
I have to say, 64x64 is going to produce quite bad image quality unless you have very heavy blur. At least antialiasing is required to make it look good.
I also do lots of fullscreen CopyTexSubImages in my engine and it’s really not that bad; I suggest you at least try it. Just make sure your texture’s internal format is identical to the framebuffer’s; maybe that’s the reason you’re getting bad performance.
I blur my images by copying the screen to a texture and blending (and/or multitexturing) slightly offset versions together at half resolution. Then I copy the result back to a texture and scale it up to fullscreen.
JustHanging, I’ll check that texture format. I’ll give the fullscreen copy a try, even if I feel it will kill the frame rate. And I’ll go for a 128 or 256 texture if fullscreen is too slow, so that I get a correct blur.
Of course, all this only if I can find the time… and I can’t seem to find any these days.