Some questions about GLSL and its implementation in code

I don’t have many technical words to explain my problem, but I will try to explain it in a simple way.

I started studying OpenGL (OpenGL ES, actually, to achieve something more multiplatform, but that is not the point) and GLSL (GLSL ES), and I have some doubts regarding the use of it in C++ code.

I’ve been looking at some code on GitHub showing how to use shaders with a graphics library like SDL2, and I came to understand a little just by looking (how to bind a texture, etc.), but I haven’t practiced or tried it yet.

At first, to study and understand the basics of GLSL, I am using some apps from Google Play that can run GLSL code, like “Shader Editor” by Markus and other GLSL editors.

My questions are:

  • When should I use .glsl versus .vert and .frag? What is a pixel shader (.ps?), and when is one used? Why do some editors only ask for a vertex shader, while others ask for vertex and fragment, or vertex and pixel? Is there any way to combine the two into one? I read something about `fragment() {}`; would it be okay to do this?

  • Sometimes in some GLSL editors on my Linux desktop I get, for example, “gl_Vertex undefined”, yet I see people using it normally on the internet.
    Is there any difference between a GLSL editor’s implementation of some OpenGL function and the OpenGL API?

  • I tried and managed to make a GLSL shader read a sampler2D with texture2D(), but the texture (256x256 and 1024x1024) came out completely pixelated. Why?

  • Why, in some examples on the internet using GLSL web editors (WebGL reading GLSL?), do 3D models appear from shaders alone? I don’t see the loaded objects, just code; where is the 3D object?

  • Is illumination an OpenGL implementation or a GLSL implementation? How do I apply a shader globally to every object/map? (I wonder about this because in 3D modellers such as Blender, a light acts like an object and I can move it.)

Where can I really learn about OpenGL and GLSL in a clear, error-free way, without confusion between implementations or function errors? It seems so easy to learn and at the same time so difficult, confusing and complicated.

Sorry for so many questions about it, I want to learn this. xD
Graphics are very awesome.
I’m not a professional at C++, I’m studying it too.

You’ll answer most of this yourself soon with a bit more reading on OpenGL (GL) and OpenGL ES (GL-ES).

  • Shaders define code that runs on the GPU.
  • Shader Stages
    • These are places in the rendering pipeline where you can “plug in” shaders.
    • One is Vertex.
    • One is Fragment.
    • Vertex shaders run once per vertex.
    • Fragment shaders run once per fragment (which is usually per pixel).
    • “Pixel shader” is Direct3D parlance for “Fragment shader”.
  • Shader Stage Permutations
    • In the olden days, there were no shaders so you didn’t need to plug in any. The OpenGL driver built and handled these behind-the-scenes (if needed).
    • This “behind-the-scenes” behavior is referred to as the legacy “fixed-function pipeline”.
    • A bit later, you could plug in your custom vertex shaders, but at the same time use the “fixed-function” (implicit) fragment shader generated by the driver.
    • Nowadays, you’d just plug in both a custom vertex and a custom fragment shader, and not use the “fixed-function pipeline” shaders at all.
    • The old fixed-function pipeline is still available in the GL “compatibility profile”, but not in the GL “core profile”.
  • Shader Programs
    • In the GL (and GL-ES) API, you provide GLSL source for 1 or more shader stages separately.
    • After using GL to compile them, you then tell GL to “link” these into a single shader program.
    • This is what’s run on the GPU.
    • So from GL’s perspective, these shaders are combined into one program before use (typically).
    • However, they’re often authored and defined separately because they are compiled separately and have their own interfaces.
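As a concrete sketch, here is a minimal GLSL ES vertex/fragment pair of the kind that gets compiled separately and then linked into one program. The names `aPosition`, `uMvp`, and `uColor` are just illustrative, not anything standard:

```glsl
// Vertex stage: runs once per vertex.
attribute vec4 aPosition;   // custom vertex input, fed via glVertexAttribPointer()
uniform mat4 uMvp;          // model-view-projection matrix, fed via glUniformMatrix4fv()

void main() {
    gl_Position = uMvp * aPosition;
}
```

```glsl
// Fragment stage: runs once per fragment (usually per pixel).
precision mediump float;    // a default float precision is required in GLSL ES fragment shaders
uniform vec4 uColor;

void main() {
    gl_FragColor = uColor;
}
```

Each stage has its own inputs and outputs (its “interface”), which is one reason they’re authored as separate source strings or files even though GL links them into a single program.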

GL doesn’t deal with files or the file system, so it doesn’t care. Use whatever filenames and extensions you want (if any).

That said, file extensions (especially on Windows) are often used by program launchers and text editors to try and infer what format the data in the file is (to launch the right program to execute/interpret it, or provide the correct syntax highlighting). But this is a function of those launchers and editors, not GL.

As far as what extensions are commonly used, I’ve seen “.vert” and “.glslv” used for GLSL vertex shaders, “.frag” and “.glslf” used for GLSL fragment shaders, “.glsl” used for GLSL source for some shader stage, etc. But there’s no official standard.

Where you see “.ps” (pixel shader) is more often in the Direct3D world. Sometimes it contains pixel shader (aka fragment shader) source, but in a low-level assembly language rather than GLSL.

  • In the olden days, gl_Vertex was used to pull in vertex position attribute values in the vertex shader, which were set on the C++ side via glVertexPointer().
  • Nowadays, you’d define your own custom vertex shader inputs with your own custom names, and populate them with glVertexAttribPointer().
  • gl_Vertex and similar olden-days GLSL built-ins are still available in the “compatibility profile” but not the “core profile”.
  • You’d explicitly use the “core profile” if you want the implementation to prevent you from using “the old stuff” (e.g. legacy vertex attributes, fixed-function pipeline, etc.).
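To make the contrast concrete, here is roughly the same trivial vertex shader written both ways (the modern names are illustrative):

```glsl
// Compatibility profile ("olden days"): position comes from the
// built-in gl_Vertex, fed by glVertexPointer() on the C++ side.
void main() {
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
```

```glsl
// Core profile (modern): you declare your own input and matrix,
// fed by glVertexAttribPointer() and glUniformMatrix4fv().
#version 330 core
in vec4 aPosition;
uniform mat4 uMvp;

void main() {
    gl_Position = uMvp * aPosition;
}
```

Compiling the first shader under a core profile (or in an editor that requests one) is what produces errors like “gl_Vertex undefined”.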

Not nearly enough info provided to know. Break this out into a separate forum thread, and provide a lot more info.

You need to ask the tool authors or read their docs. There is definitely an object being shoved down the pipeline (comprised of vertices and primitives), to which your shaders are being applied. But whether or not they expose that object definition to the user is up to their tool.

  • In the olden days, illumination was an OpenGL implementation, part of the “fixed-function pipeline”.
  • In modern days, it’s user code, typically written and provided in GLSL shaders.

There are more, but here’s a starter list…


Thanks for the tips! I was wondering how I can use one shader per object; all I can find is glUseProgram, but it just changes the shaders. How can I do this: same screen/same renderer/same window, two objects, two different shaders at the same time, something like a character’s skin shader and a sword shader?
Is there somewhere I can read about it?
If not, thanks anyway, and thanks again! :smiley:

Only one shader program can be active at a time. So you can either implement this as:

  1. Separate draw calls each rendering with their own specialized shader, or
  2. 1 or more draw calls rendering with a single “unified” shader that can shade both appearances.
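Option 1, as a rough, non-compilable C++-style sketch (it assumes a working GL context, and names like `skinProgram`, `characterVao`, etc. are hypothetical):

```cpp
// One frame: switch programs between draw calls.
glUseProgram(skinProgram);             // activate the character-skin shader program
glBindVertexArray(characterVao);
glDrawArrays(GL_TRIANGLES, 0, characterVertexCount);

glUseProgram(swordProgram);            // only one program is active at a time,
glBindVertexArray(swordVao);           // so switch before the next draw call
glDrawArrays(GL_TRIANGLES, 0, swordVertexCount);
```

Both objects end up in the same frame on the same window; “one shader program at a time” only means one per draw call, not one per frame.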

If you’re just changing things like transform, texture, and material attributes between objects, that can easily be done using the same shader and even within the same draw call.

Where you’d want to consider breaking shaders apart is when they use big blocks of largely distinct transform or shading logic. While you can often unify these into one big ubershader, there’s a performance cost: the register footprint of the unified shader can be much larger than that of each specialized shader individually. This can reduce the number of shader instances that can run in parallel on the GPU, and so potentially reduce performance.
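Option 2 might look like the following fragment-shader sketch, where a uniform selects the appearance per draw call (the names `uMaterialId`, `uTexture`, and `vUv` are assumptions, and the “shading” here is deliberately trivial):

```glsl
precision mediump float;
uniform int uMaterialId;      // 0 = skin, 1 = sword (illustrative convention)
uniform sampler2D uTexture;
varying vec2 vUv;

void main() {
    vec4 base = texture2D(uTexture, vUv);
    if (uMaterialId == 0) {
        // plain skin shading
        gl_FragColor = base;
    } else {
        // slight metallic tint for the sword
        gl_FragColor = base * vec4(1.2, 1.2, 1.3, 1.0);
    }
}
```

When the two branches are this small, the unified shader is harmless; the register-footprint concern above applies when each branch is a large, distinct block of logic.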


This topic was automatically closed 183 days after the last reply. New replies are no longer allowed.