I can’t seem to find any information on this online.
You cannot get answers to these questions online because the answers are kind of obvious once you understand what shaders do. I would advise you to focus less on trying to make shaders fit your idea of them and focus more on understanding what they do.
Does any type of shader remove the fog property of any object using it? If I have a scene using glFog and one object using a simple shader, that object seems to lose the fog effect and become totally visible even when other objects aren't.
Broadly speaking, you should not be trying to get fixed-function operations like glFog to work with shaders. You can, but you really shouldn't.
Fog is part of fixed-function per-fragment processing, so if a fragment shader is active, you lose all fixed-function per-fragment processing. The fixed-function fog computations are also ultimately based on values computed during vertex processing, and an active vertex shader overrides all fixed-function vertex processing. So you lose your fog coordinates too.
You have two options, though they are really two spellings of the same thing. One is to read the OpenGL compatibility specification, see how it computes fog, and reimplement that yourself using the compatibility-profile built-ins. That is, vertex shaders get a gl_FogCoord attribute holding the value set by glFogCoord (depending on GL_FOG_COORD_SRC, fixed-function fog uses either that or the eye-space distance to the vertex). You would use it, along with the gl_Fog built-in uniform struct, to do the same per-vertex fog computations the fixed-function pipeline would have done, writing the result to gl_FogFragCoord. Then your fragment shader takes gl_FogFragCoord and the gl_Fog parameters and does the same per-fragment fog computations the fixed-function pipeline would have done.
Alternatively, you can implement it entirely yourself: work out a scheme for fog, then have your shader compute the fog value and incorporate it into your lighting model.
Either way, you're doing the work yourself. It's simply a matter of whether you're working within the OpenGL model or freeing yourself from what OpenGL wants and doing what your application needs.
If I have multiple shaders for one object, how can I accomplish this without joining all the effects into one shader and adapting the code?
Broadly speaking, you don’t. You might be able to build some kind of system based on SPIR-V or something, but shaders are not things you just paint on an object.
If I’m writing a light-based shader is there any way to access the light sources and their exact position/intensity/fading properties without the need to pass them as parameters?
… huh? A shader cannot access anything without passing it as a “parameter”. Shaders execute in a very limited environment; they only have access to the information you give them.