I think every one of you knows Quake II's underwater effect (maybe Q3 also has it, not sure). I would like to know how this was achieved. I think it is possible with vertex shaders, but there were no vertex shaders in Q2's time, and I am not familiar with them.
So I would either like to know how it can be done without vertex shaders, or (even better) whether anyone of you knows a good tutorial on them (I googled for it, but all I found were tutorials one had to buy on a CD).
It’s called “caustics”… normally pretty easy to do, and you don’t need vertex shaders. It’s generally an animated caustic texture map applied in a second pass over the geometry with blending enabled, and/or some texture coordinate transformations. I don’t have a URL, but if you do a search on Google, I’m sure you’ll find plenty.
I don’t remember Q2, but in Q1 they used a sine wave to distort the screen along x/y. It was pretty easy in the software rendering days.
Nowadays you could do it easily with a dependent texture lookup (EMBM). First render your scene to an offscreen buffer, then render it to your front buffer with a bump map to move the pixels around a bit.
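The dependent-lookup idea can be sketched on the CPU like this: for each output pixel, read a signed offset from a bump map and fetch the already-rendered scene at the offset position. This is only an illustration of the technique; the buffer layout and function name are made up, and real EMBM does this per-fragment in hardware.

```c
#include <stddef.h>

/* CPU sketch of an EMBM-style dependent texture lookup: for every pixel,
 * read a (du, dv) offset from a bump map and fetch the scene texture at
 * the offset position. All buffers are w*h, single channel; bump_u/bump_v
 * store signed offsets in pixels. Names are illustrative. */
void distort(const unsigned char *scene, const signed char *bump_u,
             const signed char *bump_v, unsigned char *out, int w, int h)
{
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            int i = y * w + x;
            int sx = x + bump_u[i];
            int sy = y + bump_v[i];
            /* clamp so the offset lookup stays inside the texture */
            if (sx < 0) sx = 0; else if (sx >= w) sx = w - 1;
            if (sy < 0) sy = 0; else if (sy >= h) sy = h - 1;
            out[i] = scene[sy * w + sx];
        }
    }
}
```

Animating the bump map (e.g. scrolling a sine pattern through it) gives the moving-water look.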
Well, if you’re going via a texture you’d probably just want to use a poly mesh with a few sine waves on the x & y.
To answer directly, a vertex shader could do this. The Quake software engine applied the distortion to the pixels, but since the hardware (OpenGL / MiniGL) version didn’t have this option, Carmack distorted the vertex values (pre-transform, I think). A vertex shader could do better and distort post-transform, but there are issues with straight-edge intersections and cracking because of a lack of subdivision (which could be seen in the OpenGL port of the software version).
Render to texture and drawing onto an eye-space mesh with some wobble would be the best option, IMHO. EMBM could also do this, but it’s a bit of a waste for a simple wave.
[This message has been edited by dorbie (edited 10-07-2002).]
I don’t know what you mean by the underwater effect exactly, but:
A/ volumetric fog simulates the underwater haze; also, caustics textures give the ‘swimming pool’ effect, e.g. http://uk.geocities.com/sloppyturds/caustics1.jpg http://uk.geocities.com/sloppyturds/caustics2.jpg
B/ if you mean the screen looks like it’s distorting, this is very easy to do.
Basically, before you pass the vertices to GL, first apply some calculation to them. The calculation is up to you, e.g. vertex += sin(distance from screen center)*T
Sorry that I was not more precise; I didn’t know what to call it. Now I am not sure whether it is “caustics” or something else.
What I meant was the effect that when you dive under water in Quake II, the world around you gets stretched and moves around a bit. It is a simulation of light being bent by the moving water.
Games tend to do some effects that are not real. I never noticed this effect in reality, because it can only be seen when you are OUTSIDE of the water (the light gets bent at the boundary between water and air). But that is very hard to do on computer hardware, so games use the effect when you are INSIDE the water and apply it to everything (all geometry, no matter whether it is inside or outside the water), because that way it can be done.
BTW: the lens flare effect is similar. It does not occur with human eyes, only with cameras, etc.; however, it is used in every game.
I hope you now know what I mean.
I think it is exactly the thing that can be done by rendering the scene to a texture, applying that texture to a screen-aligned grid, and moving the grid around a bit. Some of you mentioned this method, and I will try it today.
Thanks for all those links; some of this stuff is very interesting.
That’s method B I wrote above. Like I said, it’s pretty easy to do; just modify each vertex before you send it to GL. It’s very fast even with tens of thousands of vertices (unless you’re doing some major calculations).
(You could do this with a vertex program as well.)
Then you’re looking for a distortion effect, and you’d do it with either a vertex shader (or zed’s geometry warp) or render to texture and draw the texture on a mesh. As I said, the problem with distorting the vertices is the poor quality due to a lack of geometry subdivision. You can see these problems in the Quake MiniGL version. Back when Quake was ported to OpenGL there was no render to texture, and copy or read commands were slow. Today these operations are viable on most cards and don’t suffer from the lack of subdivision of vertex-based approaches.
[This message has been edited by dorbie (edited 10-09-2002).]