I have been trying to understand image-based lighting, specifically how the prefiltered environment map is computed. I have been following the paper by Brian Karis here and the article on Learn OpenGL here. I understand the process for the most part, except for the following two things.
Firstly, why do we reflect the sampled vector calculated using importance sampling? This is done with the following piece of code:
vec3 L = normalize(2.0 * dot(V, H) * H - V);
where H is the sampled vector and V is the view (or normal) vector for the sample. As far as I understand, the prefiltered map accumulates the incoming light along the normal and is sampled in the PBR shader using the reflection vector. In that case, why are the sample vectors reflected about the normal at all?
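To make my confusion concrete, here is a small Python transcription of that line (the helper names are mine). It mirrors V about the sampled half-vector H, which at least shows me the geometric effect: the angle to H is preserved, and when H coincides with V the sample stays on the normal.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

def reflect_about(V, H):
    # L = 2 * dot(V, H) * H - V : mirror V about the half-vector H
    d = dot(V, H)
    return normalize(tuple(2.0 * d * h - v for h, v in zip(H, V)))

# With V fixed along the normal (the N = V = R assumption from the
# Karis paper), a sampled half-vector H is mirrored into a light
# direction L on the other side of H.
V = (0.0, 0.0, 1.0)             # view / normal direction
H = normalize((0.2, 0.0, 1.0))  # an importance-sampled half-vector
L = reflect_about(V, H)

# Reflection preserves the angle to H, so angle(V, H) == angle(L, H),
# and reflect_about(V, V) returns V itself.
print(L)
```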
Secondly, I do not understand how the article calculates which mip level to use based on the roughness. The code for it is:
float resolution = 512.0; // resolution of source cubemap (per face)
float saTexel  = 4.0 * PI / (6.0 * resolution * resolution);
float saSample = 1.0 / (float(SAMPLE_COUNT) * pdf + 0.0001);
float mipLevel = roughness == 0.0 ? 0.0 : 0.5 * log2(saSample / saTexel);
Here, pdf is the probability density function used to importance-sample the sample vectors. Specifically, I don't understand what saTexel and saSample are and how they are being calculated.
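To check my reading of the snippet, here is a Python sketch of what I think it computes (the function names are mine, and I've assumed the GGX pdf from the same article, pdf = D * NdotH / (4 * HdotV)). My understanding is that saTexel is the solid angle of one source-cubemap texel and saSample is the solid angle one importance sample covers, so their ratio says how many texels a sample should average over, hence the log2 for a mip level:

```python
import math

def d_ggx(n_dot_h, roughness):
    # GGX / Trowbridge-Reitz normal distribution function
    a = roughness * roughness
    a2 = a * a
    denom = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * denom * denom)

def mip_from_roughness(n_dot_h, h_dot_v, roughness,
                       sample_count=1024, resolution=512.0):
    if roughness == 0.0:
        return 0.0  # mirror reflection: always read the sharpest mip

    # pdf of the sampled direction (from the same article, assumed here)
    pdf = d_ggx(n_dot_h, roughness) * n_dot_h / (4.0 * h_dot_v) + 0.0001

    # saTexel: solid angle covered by ONE texel of the source cubemap.
    # A cubemap covers the full sphere (4*pi steradians) with
    # 6 faces * resolution^2 texels.
    sa_texel = 4.0 * math.pi / (6.0 * resolution * resolution)

    # saSample: solid angle this ONE sample is "responsible" for.
    # N samples drawn from the pdf tile the sphere, and a sample landing
    # where the density is pdf covers roughly 1 / (N * pdf) steradians.
    sa_sample = 1.0 / (sample_count * pdf + 0.0001)

    # If a sample must cover saSample/saTexel texels' worth of solid
    # angle, reading 0.5 * log2 of that ratio picks a mip whose texels
    # are about that big (each mip level quadruples texel solid angle).
    return 0.5 * math.log2(sa_sample / sa_texel)

# At the lobe peak (NdotH = HdotV = 1): rougher surface -> broader pdf
# -> lower peak density -> larger saSample -> higher mip level.
print(mip_from_roughness(1.0, 1.0, 0.1))
print(mip_from_roughness(1.0, 1.0, 0.8))
```

If this reading is right, the heuristic just matches the footprint of each sample to a mip whose texel size has roughly the same solid angle, which also explains why the sparse tail samples get blurred.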