I guess most of you know the relief-mapping algorithm:

f2s main_frag_relief( v2f IN,
    uniform sampler2D rmtex : TEXUNIT0,    // rm texture map
    uniform sampler2D colortex : TEXUNIT1, // color texture map
    uniform float4 lightpos,  // light position in view space
    uniform float4 ambient,   // ambient color
    uniform float4 diffuse,   // diffuse color
    uniform float4 specular,  // specular color
    uniform float2 planes,    // near and far planes info
    uniform float tile,       // tile factor
    uniform float depth)      // scale factor for height-field depth
{
    f2s OUT;
    float4 t,c; float3 p,v,l,s; float2 dp,ds,uv; float d;

    // ray intersect in view direction
    p = IN.vpos;       // pixel position in eye space
    v = normalize(p);  // view vector in eye space

    // view vector in tangent space
    s = normalize(float3(dot(v,IN.tangent.xyz),
        dot(v,IN.binormal.xyz),dot(IN.normal,-v)));

    // size and start position of search in texture space
    ds = s.xy*depth/s.z;
    dp = IN.texcoord*tile;
    ...

I’m just wondering about the depth parameter, because I don’t get why it controls the height of the heightmap details. E.g. if you feed in a terrain map, a bigger depth parameter results in higher mountains and so on.

This s.xy*depth/s.z looks like the calculation of the parallax vector described in Welsh’s paper (the projection of the eye vector onto the surface), which you can scale with the depth parameter. That means: smaller depth parameter = shorter parallax vector = smaller sampling step size. So how does this scale the heightmap at all?
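To see what that scaling does numerically, here is a small Python sketch of the ds = s.xy*depth/s.z term. This is my own illustration, not shader code; the function name and sample vector are made up for the example:

```python
def parallax_offset(view_ts, depth):
    """Texture-space vector covered by the full search ray.

    view_ts: normalized view vector in tangent space (x, y, z),
             with z pointing into the surface.
    depth:   the shader's height-field depth scale factor.
    """
    sx, sy, sz = view_ts
    # ds = s.xy * depth / s.z
    return (sx * depth / sz, sy * depth / sz)

# Same view direction, depth doubled: the whole search vector doubles,
# so any given height value maps to twice the texture-space shift.
v = (0.6, 0.0, 0.8)
print(parallax_offset(v, 0.05))  # ~ (0.0375, 0.0)
print(parallax_offset(v, 0.10))  # ~ (0.0750, 0.0)
```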

The depth parameter determines how much the texture coordinate is shifted as the viewing angle changes, which is basically the same as changing the height of the imaginary height map (no offset = flat surface, larger offset -> more surface height variation).

Yeah sure, but I don’t get how exactly this works. For instance, I also found this explanation:

So let’s say my eye vector hits the top of a mountain. By scaling the depth parameter, the tip of my eye vector remains at the top of the mountain (or what does he mean by height limit?) whereas its length increases backwards.

Given that, I have no idea why scaling the depth parameter has any effect on the surface height, since the intersection point between the eye vector and the surface height profile always remains the same.

If you want a real parallax effect, you should perform a full ray-vs-surface intersection test.

In the rough approximation you posted (I’m using a similar one), the actual intersection is not tracked; the depth parameter just scales the offset we predict.
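For contrast, this is roughly what such an untracked, single-sample approximation looks like, sketched in Python with names of my own choosing (not anyone’s actual shader): the height sampled at the original coordinate is simply pushed along the projected eye vector, and depth only scales how far it is pushed.

```python
def parallax_approx(uv, view_ts, height_at, depth):
    """Single-sample parallax offset: no intersection search at all.

    uv:        original texture coordinate (u, v)
    view_ts:   normalized tangent-space view vector (x, y, z)
    height_at: height-field lookup, returns a value in [0, 1]
    depth:     scale factor for the offset
    """
    sx, sy, sz = view_ts
    h = height_at(uv)  # height at the *unshifted* coordinate
    # Shift along the projected eye vector; a bigger depth just
    # predicts a bigger offset, no ray/surface test is performed.
    return (uv[0] + h * sx * depth / sz,
            uv[1] + h * sy * depth / sz)
```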

It does, but that happens later in the algorithm. Relief mapping uses a combination of linear and binary search to sample the height profile, but I really don’t get how this depth parameter works, or why my results look the way they do when I manipulate it.
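The search that comes later can be sketched in plain Python like this (a 1-D toy version with invented names, not the actual shader). The point is that the depth uniform is already baked into ds, so scaling depth stretches the whole search ray, and the intersection lands at a different texture offset even though the search logic itself never sees depth:

```python
def relief_intersect(height_at, ds, linear_steps=16, binary_steps=8):
    """Relief-mapping-style ray/height-field intersection (1-D toy).

    height_at(u): stored depth in [0, 1] at texture offset u
                  (0 = surface, 1 = deepest point).
    ds:           texture-space distance covered over the full depth
                  range, i.e. the shader's s.xy * depth / s.z.
    Returns (texture offset, normalized ray depth) at the hit.
    """
    # Linear search: march down until the ray depth passes the
    # stored height profile.
    d, step = 0.0, 1.0 / linear_steps
    for _ in range(linear_steps):
        if d >= height_at(d * ds):
            break
        d += step
    # Binary search: refine between the last two linear samples.
    lo, hi = d - step, d
    for _ in range(binary_steps):
        mid = 0.5 * (lo + hi)
        if mid >= height_at(mid * ds):
            hi = mid
        else:
            lo = mid
    d = 0.5 * (lo + hi)
    return d * ds, d

# A flat floor halfway down: both rays hit at depth ~0.5, but with
# ds doubled (i.e. depth doubled) the hit lies twice as far away in
# texture space -- the same feature reads as twice as deep.
floor = lambda u: 0.5
off1, d1 = relief_intersect(floor, ds=0.1)
off2, d2 = relief_intersect(floor, ds=0.2)
```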