Hey people, I’m currently working on a little OpenGL experiment, and I could use some help with it. I’m trying to synchronize with the refresh rate in a somewhat unorthodox way, and I’m not sure it’s possible, since vsync seems to be an extremely rigid, hardcoded thing.
So here’s what I’m trying to do: I have a frame in the framebuffer that I don’t want displayed until the next screen refresh. However, I want to be able to keep accumulating samples into the buffer while waiting for the swap.
As you may have guessed, the idea is to create some kind of antialiasing that gives you the best possible quality without skipping any screen refreshes.
I’m not really an advanced OpenGL coder, but I’m posting this on this forum because the problem does seem like an advanced one. Sorry for the newbieness if I’m mistaken.
Sounds like a good idea.
Where exactly do you need help?
If vsync is turned on, the application waits for the vertical refresh before swapping buffers. Let’s say your card runs at 72 Hz; that gives you 13.9 ms to render your frame.
If you measure how long the rendering takes (for example 6.0 ms), you know you can render a second pass for accumulation purposes and still make it in time for the next refresh. If the rendering took only 2.0 ms, you can render 5 more passes in time… etc.
If your rendering times vary a lot, I’m sure the change in “quality” from frame to frame will be more disturbing than any overall antialiasing improvement.
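A quick sketch of that budgeting (the function name and the 1 ms safety margin are my own assumptions, not part of any real API):

```cpp
#include <cassert>

// Hypothetical helper: given the measured cost of one pass (ms) and
// the refresh interval (ms), estimate how many *extra* accumulation
// passes fit before the next vblank, keeping a small safety margin.
int extraPassesThatFit(double passMs, double refreshMs, double marginMs = 1.0)
{
    if (passMs <= 0.0)
        return 0;
    int total = static_cast<int>((refreshMs - marginMs) / passMs);
    return total > 1 ? total - 1 : 0; // the first pass is already rendered
}
```

With the numbers above: a 6.0 ms pass at 72 Hz leaves room for 1 extra pass, a 2.0 ms pass for 5 more.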
If I understand correctly, your engine runs at a much higher fps than the monitor refresh rate, and you want to mix 2 or more images in the final framebuffer and display the result on vsync.
I don’t think it is a good idea, because you’ll get a doubled/tripled final image.
Try doing something similar to change the fps of an AVI file with your technique (for example, change 25 fps to 20 fps) and the result will be bad.
Even more… try to combine two successive frames in a fast action game…
Thanks a lot for the help, def. I hadn’t thought of measuring rendering time, and that sounds like the best idea; it also has the advantage of actually being doable, unlike what I had in mind.
yooyo, I find it interesting that you predict bad results, I actually think that this method would result in much higher quality.
If I use this for spatial anti-aliasing, then I don’t see how the results could look worse than without it. As def pointed out there might be problems if the rendering time varies a lot; but this usually only happens when things on screen are moving, and it’s harder to actually notice aliasing artefacts on moving objects.
If I use it for temporal AA, then the results would be even better since it would produce true natural motion blur. Movies taken with a camera can run at 24fps because a single frame is an integration of the scene over an interval of time instead of on a single point in time like for video games.
If you want real motion blur you have to render the scene N times, and for each render pass you have to slightly move & rotate the objects in the scene. Then accumulate all passes into one final frame. For a better result, N should be larger than 10.
So… your idea can work only if your engine runs 10+ times faster than refresh rate.
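As a sketch of what yooyo describes (the names are mine): each of the N passes renders the scene at a slightly different moment inside the refresh interval, and the passes are then accumulated:

```cpp
#include <cassert>
#include <vector>

// Spread N motion-blur samples evenly across one refresh interval,
// so each accumulation pass can render the scene at a slightly
// different simulation time. (Sketch only; in GL each pass would be
// followed by something like glAccum(GL_ACCUM, 1.0f / n).)
std::vector<double> sampleTimes(double frameStartMs, double refreshMs, int n)
{
    std::vector<double> times;
    for (int i = 0; i < n; ++i)
        times.push_back(frameStartMs + refreshMs * i / n);
    return times;
}
```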
Camp Koala is talking about supersampling his scene, probably by jittering the viewing volume and using an accumulation buffer. No motion blur…
I would be interested to hear how it worked out, Camp Koala.
yooyo, I know that I need to render a lot of samples per frame to get either good AA or good motion blur, but the idea here is to synchronize with the refresh rate so that I get exactly as many samples as possible without skipping frames. This way, if the engine cannot render enough samples, it simply won’t.
I’ve implemented your suggestion, def, and everything works super fine now. I’ve tried it at different refresh rates and tried artificially slowing down the engine; everything is smooth.
Also, I am actually not using my “graceful degradation supersampling” for anti-aliasing yet; at the moment I am using it for motion blur, recalculating the scene for each sample (I must have spent half an hour zooming past boxes staring at the effect).
I do plan on adding camera jittering to the samples to allow for anti-aliasing, probably today; that shouldn’t be too difficult now.
Also, by changing the angle of the camera in addition to jittering, I would get a depth of field blur effect. Isn’t that great? I get motion blur, AA, and depth of field in one package, all while preserving frame rates!
I’d love to show off a demo but I don’t have any hosting space yet…
Anyways thanks again for the help guys, this forum is great, I’ll try to come here more often.
A depth of field ‘effect’, certainly, but only in that the pixels around the rotation point will be blurred, which isn’t anywhere near proper depth of field… but I get what you mean.
By the way, what timing method are you using?
Also, you’ll have to issue a glFinish to ensure your timing of a render pass is accurate, which then introduces latency and will impact your overall performance. If you don’t issue a glFinish, all you’ll be measuring is how quickly the driver queued the commands for the card, not how long it took to execute them.
Still, if you say it works then I guess it works.
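The measurement pattern knackered is describing might look like this (a sketch using std::chrono as a portable stand-in for the Windows timers; the helper name is made up):

```cpp
#include <cassert>
#include <chrono>

// Time one render pass. In real GL code you would call glFinish()
// just before the second timestamp, so the GPU has actually finished
// executing the queued commands rather than merely received them.
template <typename Fn>
double timePassMs(Fn renderPass)
{
    auto t0 = std::chrono::steady_clock::now();
    renderPass();   // issue the GL commands for one pass
    // glFinish();  // <- needed here for an honest GPU measurement
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}
```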
I’m calling a GetFrameTime function in my rendering function. It returns the time since the last time it was called, using GetTickCount.
Here’s a bit of code from the end of the rendering function, after everything is done in the back buffer; it probably explains it better than I would:
glutSwapBuffers();
I have absolutely no idea whether this is the best way to do it or not, but as you said, if it works it works.
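For what it’s worth, the “graceful degradation” loop described in this thread could be sketched like this (pure logic only; the function names, the fixed per-pass cost, and the sample cap are my assumptions):

```cpp
#include <cassert>

// Keep taking samples while another pass still fits in the refresh
// budget, then stop (and, in the real renderer, swap buffers).
// budgetMs   = time remaining until the next refresh
// lastPassMs = measured cost of the previous pass, assumed stable
int samplesThisFrame(double budgetMs, double lastPassMs, int maxSamples)
{
    double spent = 0.0;
    int samples = 0;
    while (samples < maxSamples && spent + lastPassMs <= budgetMs) {
        // renderSceneSample();        // hypothetical scene pass
        // glAccum(GL_ACCUM, weight);  // accumulate the pass
        spent += lastPassMs;
        ++samples;
    }
    return samples;
}
```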
And you’re right about the depth of field thing; changing the angles won’t suffice. I’d probably need to make the jittering a lot more pronounced than for AA, and have the angle for each sample converge towards a focus point.
I may be wrong (it’s been a long time since me and GetTickCount had a dance), but I think the minimum interval you can measure with GetTickCount is 10 milliseconds on MSWindows, unless you call timeBeginPeriod at application startup.
You should really use QueryPerformanceCounter.
Also, I think I’m right about the glFinish point, so if you’re observing your technique working then you’re not observing it correctly.
As far as depth of field is concerned, it is not an effect observed around a single point; it occurs around a distance-from-viewer, a focal plane you might say. So don’t bother pursuing your approach, as it will be totally wrong; it may look weird and wacky, but it won’t be depth of field.
Finally, you seem to be using the accumulation buffer, which has wildly varying performance implications on different cards. Best to avoid it; it’s essentially a legacy GL feature.
Try timeGetTime() instead of GetTickCount.
Also, QueryPerformanceCounter ain’t good, ’cause it’s unreliable with CPU speed changing all the time (throttling), e.g. in notebooks and new desktop CPUs.
Why they didn’t design the whole timing thing better is beyond me; it’s not like it’s an esoteric query.
Here are a few screenshots:
This is a screenshot of the motion blur (some boxes rotating really fast around the viewer). I am still not using glFinish, since it really looks correct to me, but as I said before I am really a newbie and I may be mistaken. I tried adding glFinish at the end of my rendering instructions for each sample, just before GetFrameTime(), but I didn’t notice any difference.
Also, I don’t know if it is noticeable, but I changed the code so that a little bit of the previous frame gets blended with the current one (like “cheap” motion blur); I found that it made things look smoother when the rendering is slowed down and only a few samples can be taken each frame.
Here is the depth of field screenshot:
You are right again about the DOF, knackered. I realised that I didn’t fully understand how it worked when I re-read the passage in the OpenGL Programming Guide, since you also have to change the view frustum. I think the technique I am using now (the one from the book) is correct, however.
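For reference, the jittered-frustum offsets in the style of the Red Book’s accFrustum() look roughly like this (a sketch from memory; the struct and function names are mine). Pixel-sized jitter gives antialiasing, while jittering the eye point and shearing the frustum so the focal plane stays put gives depth of field:

```cpp
#include <cassert>

struct Offset { double dx, dy; };

// Frustum offsets for one jittered pass: pixdx/pixdy jitter in pixels
// (antialiasing); eyedx/eyedy jitter the eye point, scaled by
// zNear/focus so geometry at the focus distance stays sharp (DOF).
// The caller would then use glFrustum(left+dx, right+dx, bottom+dy,
// top+dy, zNear, zFar) and translate the eye by (-eyedx, -eyedy).
Offset accFrustumOffset(double left, double right, double bottom, double top,
                        double zNear, double focus,
                        int viewportW, int viewportH,
                        double pixdx, double pixdy,
                        double eyedx, double eyedy)
{
    Offset o;
    o.dx = -(pixdx * (right - left) / viewportW + eyedx * zNear / focus);
    o.dy = -(pixdy * (top - bottom) / viewportH + eyedy * zNear / focus);
    return o;
}
```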
Finally Here’s a screenshot showing AA when depth of field is disabled: http://i20.photobucket.com/albums/b205/Camp_Koala/aa.jpg
(you can see that the jittering values are a bit strange and only temporary…)
I’m using timeGetTime now, but I didn’t notice any difference whatsoever (with timeBeginPeriod at resolution 1). I also tried the performance counter, but what I got was some really strange noise on the screen, which is strange (I don’t see how an error in measuring time intervals could result in a black screen with hypersaturated random pixels).
Also, knackered, I was using the accumulation buffer because I saw it used everywhere and thought “hey, if they have a buffer made only for accumulating frames, then it must be the way to go!”. But I don’t have the tiniest clue about whether one way of doing things is faster than the other. What are you suggesting I use instead?
Good work, I love that! I am really fond of the idea of motion blur instead of more-than-refresh-rate fps.
Instead of the accum buffer, you could use pbuffers with render-to-texture or texture copy (I tried both and others; check my homepage http://www.chez.com/dedebuffer/ ). Then blend the textures.
A new extension exists to replace ugly pbuffers: framebuffer_object.
Did you think about using the multisample antialiasing features of modern GPUs? It won’t help you with motion blur and depth of field, but antialiasing is possible. See the GL_ARB_multisample extension or the corresponding chapter of the GL 2.0 specification.
I think he’s looking for free antialiasing, using up any spare GPU cycles each frame - multisampling is not free, and can’t be dynamically enabled/disabled on a per-frame basis.
It’s a shame there isn’t a gl mechanism to do this automatically - a kind of automatic lazy antialiasing - when you hit the swap buffers with vsync enabled the driver could do what Camp Koala is doing.