I am trying to emulate a spherical camera projection from inside a vertex shader. What I am really after is something like this: in a path tracer like Arnold or RIS, you would render a lat-long image by using a spherical camera/projection. The idea behind this render is that for every pixel in the output image, the renderer computes a corresponding 3D ray direction using spherical/polar math driven by that pixel's NDC coordinate. This way I get a lat-long-style render of the entire scene with offline rendering.
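For context, the per-pixel mapping such an offline spherical camera performs could be sketched like this (the function name and the angle conventions are my own assumptions; Arnold's actual convention may differ):

```glsl
// Hypothetical helper: turn an NDC coordinate in [-1,1]^2 into the
// unit ray direction a lat-long (equirectangular) camera would trace.
// ndc.x sweeps longitude over [-pi, pi], ndc.y sweeps latitude over
// [-pi/2, pi/2], with +Z as the image center.
vec3 latlongDirection(vec2 ndc) {
    float lon = ndc.x * 3.14159265;   // longitude
    float lat = ndc.y * 1.57079633;   // latitude
    return vec3(cos(lat) * sin(lon),
                sin(lat),
                cos(lat) * cos(lon));
}
```

This is the forward mapping (pixel to direction) that a path tracer evaluates per pixel; a vertex shader would have to apply its inverse (direction to pixel) per vertex.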
I was wondering if this is possible in a GLSL vertex shader context? I have been able to project the incoming vertices in a spherical manner, but the result is not truly a lat-long image. I understand the vertex shader only operates on the incoming vertices, and post-MVP I can get screen-space coordinates. But I suppose I would need access to every screen-space coordinate, and also the render resolution, to do what I am after?
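In case it helps frame the question, here is roughly what my per-vertex attempt looks like, as a sketch (uniform names and the near/far encoding are assumptions, not working production code):

```glsl
#version 330 core
// Hypothetical lat-long "projection" done per vertex: instead of a
// perspective matrix, each view-space vertex is converted to
// longitude/latitude and written straight into NDC.
uniform mat4 uModelView;   // assumed to place the camera at the origin
in vec3 aPosition;

void main() {
    const float PI = 3.14159265;
    vec3 p = (uModelView * vec4(aPosition, 1.0)).xyz;
    float r = length(p);
    vec3 d = p / r;                                // unit direction to vertex
    float lon = atan(d.x, d.z);                    // [-pi, pi]
    float lat = asin(clamp(d.y, -1.0, 1.0));       // [-pi/2, pi/2]
    // Map longitude/latitude to NDC x/y in [-1, 1].
    float x = lon / PI;
    float y = lat / (0.5 * PI);
    // Encode distance as depth; near/far chosen arbitrarily here.
    float zNear = 0.1, zFar = 1000.0;
    float z = clamp((r - zNear) / (zFar - zNear), 0.0, 1.0) * 2.0 - 1.0;
    gl_Position = vec4(x, y, z, 1.0);              // w = 1: no perspective divide
}
```

The catch, as far as I can tell, is that this only repositions vertices: the rasterizer still draws straight edges between them in screen space, so coarse triangles do not curve the way a true lat-long projection requires, and any triangle that straddles the longitude seam at ±π gets stretched across the whole image.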
I know there are techniques like two-pass, cube-map-based rendering, but I am trying to avoid those and get the lat-long projection via a vertex shader alone.
Can the GLSL gurus please help?