Facing issues updating my OpenGL version

I am trying to update my OpenGL version to 4.0. Currently, I am using version 2.0 with GLSL ES 100 in my shaders (using varying, attribute, and gl_FragColor). To update my OpenGL version, I requested version 4.0 and the core profile on the CPU side like this:

QSurfaceFormat defaultFormat = QSurfaceFormat::defaultFormat();
defaultFormat.setVersion(4, 0);
defaultFormat.setProfile(QSurfaceFormat::CoreProfile);
defaultFormat.setSamples(4);

When I requested version 4.0, my software stopped rendering anything and showed only a blank screen. I could still change the background color and other things, but nothing rendered on the screen.

I then tried updating my shaders to version 400 using #version 400 and other changes like in and out, but that didn’t work either.

I thought my shaders were not rendering correctly, so I removed the request for version 4, and as soon as I did that, my shaders started working with #version 400. I don’t understand what’s going wrong. Can anyone help me figure out what I am doing wrong in these steps that prevents me from updating my version?

Btw, I am also planning to implement glbindings later, but at this step I feel like the shaders should be working.

For reference, I am updating the version in OpenChemistry: GitHub - OpenChemistry/openchemistry (supermodule containing submodules and external projects to build all components).

Are you examining the compilation and linking logs (glGetShaderInfoLog and glGetProgramInfoLog)? If the shader doesn’t compile, nothing is going to be rendered. Note that compilation and linking failure aren’t errors (in the sense that glGetError will report them), although draw calls will generate an error if the current program object is invalid.
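
For reference, a minimal sketch of the kind of log checks I mean (the checkCompile/checkLink helper names are just illustrative; the shader and program handles are assumed to come from your own setup code):

#include <iostream>
#include <vector>

bool checkCompile(GLuint shader)
{
  GLint ok = GL_FALSE;
  glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
  if (ok != GL_TRUE) {
    GLint len = 0;
    glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &len);
    std::vector<char> log(len > 0 ? len : 1);
    glGetShaderInfoLog(shader, GLsizei(log.size()), nullptr, log.data());
    std::cerr << "Shader compile failed:\n" << log.data() << "\n";
  }
  return ok == GL_TRUE;
}

bool checkLink(GLuint program)
{
  GLint ok = GL_FALSE;
  glGetProgramiv(program, GL_LINK_STATUS, &ok);
  if (ok != GL_TRUE) {
    GLint len = 0;
    glGetProgramiv(program, GL_INFO_LOG_LENGTH, &len);
    std::vector<char> log(len > 0 ? len : 1);
    glGetProgramInfoLog(program, GLsizei(log.size()), nullptr, log.data());
    std::cerr << "Program link failed:\n" << log.data() << "\n";
  }
  return ok == GL_TRUE;
}

Call checkCompile() right after glCompileShader() and checkLink() right after glLinkProgram().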

Using a #version directive to specify a version greater than or equal to 1.5 will default to the core profile unless you explicitly specify a different profile. If you’re using deprecated features of GLSL, selecting the core profile is likely to result in compilation failure.

Yes, the shaders compile correctly and there’s no issue there. They are also linked properly; we handle those checks correctly in our project.

This could be a problem. After requesting version 4.0 and the core profile, I have been writing my shaders something like this:

FRAGMENT SHADER:

#version 400
precision highp float;
uniform sampler2D u_texture;
in vec2 texc;

out vec4 outColor;

void main(void)
{
  outColor = texture(u_texture, texc);
  if (outColor.a == 0.)
    discard;
}

VERTEX SHADER:

#version 400 
uniform mat4 mv;
uniform mat4 proj;
uniform vec3 anchor;
uniform float radius;
in vec2 offset;
in vec2 texCoord;
uniform ivec2 vpDims;
out vec2 texc;
void alignToPixelCenter(inout vec4 clipCoord)
{
  vec2 inc = abs(clipCoord.w) / vec2(vpDims);
  ivec2 pixels = ivec2(floor((clipCoord.xy + abs(clipCoord.ww) - inc)
                             / (2. * inc)));
  clipCoord.xy = -abs(clipCoord.ww) + (2. * vec2(pixels) + vec2(1., 1.)) * inc;
}

void main(void)
{
  vec4 eyeAnchor = mv * vec4(anchor, 1.0);
  eyeAnchor += vec4(0., 0., radius, 0.);
  vec4 clipAnchor = proj * eyeAnchor;
  alignToPixelCenter(clipAnchor);
  vec2 conv = (2. * abs(clipAnchor.w)) / vec2(vpDims);
  gl_Position = clipAnchor + vec4(offset.x * conv.x, offset.y * conv.y, 0., 0.);
  texc = texCoord;
}

Am I doing something wrong in the shaders? Or could there be errors on my CPU side?

Nothing stands out. I’m assuming that the shaders worked before, and the only changes are replacing varying with in/out and replacing gl_FragColor with outColor. Have you tried simplifying the shaders to see if it will render something, even if it isn’t quite correct?

BTW, precision qualifiers aren’t relevant to desktop OpenGL; they’re permitted but ignored.

Yes, actually I am working with more than 20 shaders, and nothing is rendered on the screen. I am not sure whether the problem is in the shaders or on the CPU side.

Also, I tried simplifying the shaders to see if anything would render, but nothing does. I really don’t understand what’s going on. Sorry :sweat_smile:

What GPU and GL driver are you using?

What I gather from your post is that switching the GL context to “OpenGL 4.0 Core Profile” is what caused your rendering problems. Switching back away from that “fixed” the rendering problems. And forcing “#version 400” in the GLSL shaders specifically didn’t make any difference at all. Is this correct?

If so, then it seems to suggest that something you’re doing on the CPU side is either invalid in the “OpenGL 4.0 Core Profile” or takes advantage of undefined behavior and needs to be reworked.

Have you tried changing your GL context to an “OpenGL 4.0 Compatibility Profile”? This would help you determine if you’re unknowingly taking advantage of some legacy behavior that the 4.0 Core Profile has removed. (… that is, assuming we’re not dealing with a GPU/GL driver limitation.)

Also, are you Checking for GL Errors? Either the Easy Way or the Hard Way? It could be that GL is trying to tell you what’s wrong.
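
If it helps, here’s a rough sketch of both approaches (the debug-callback route needs OpenGL 4.3+ or KHR_debug; installGLDebugOutput and checkGLErrors are just illustrative names):

#include <iostream>

// "Easy way": install a debug message callback once, right after context creation.
static void APIENTRY onGLDebugMessage(GLenum /*source*/, GLenum /*type*/, GLuint /*id*/,
                                      GLenum /*severity*/, GLsizei /*length*/,
                                      const GLchar* message, const void* /*user*/)
{
  std::cerr << "GL DEBUG: " << message << "\n";
}

void installGLDebugOutput()
{
  glEnable(GL_DEBUG_OUTPUT);
  glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);   // report errors on the offending call
  glDebugMessageCallback(onGLDebugMessage, nullptr);
}

// "Hard way": poll glGetError() around suspect calls.
void checkGLErrors(const char* where)
{
  for (GLenum err; (err = glGetError()) != GL_NO_ERROR; )
    std::cerr << "GL error 0x" << std::hex << err << " at " << where << "\n";
}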

Thank you so much @Dark_Photon. After some investigation, it’s certainly possible that some of the other code is still using non-core GL code. Really, thanks for helping me with it. :slight_smile:

As per your suggestions @Dark_Photon, I added some checks, and I am getting this error when I try switching the GL context to the “OpenGL 4.0 Core Profile”. Do you have any idea how to fix it?

Also, I found a bunch of deprecated code, including some line-width calls and some glBegin / glEnd usage.

Sure. The first decision you need to make is whether you absolutely need the OpenGL Core Profile, or whether the OpenGL Compatibility Profile is sufficient. If the latter, then you don’t lose any of the old GL features, can most likely run your app as-is, and can gradually convert your code to use newer GL methods as time permits and as you learn more.

If OTOH you need the OpenGL Core Profile, then…

First things first. glBegin() … glEnd() is what’s termed OpenGL immediate mode. That’s definitely not going to fly under the Core Profile. You need to rework this to use vertex arrays in VBOs (vertex buffer objects).

It would help to see the full statement involving glVertexAttribPointer() to be sure. But this could be a couple of things.

  • First, the OpenGL core profile (IIRC) got rid of the legacy vertex attributes, for instance GL_VERTEX_ARRAY, GL_COLOR_ARRAY, etc. So if your code is using those, they’ll need to be reworked to use the generic attributes. This goes for your shaders too.
  • Also the core profile nuked what’s termed “client arrays”. That is, providing CPU pointers containing the vertex attribute data to glVertexAttribPointer() directly. So if you’re using those, that’ll trigger GL errors in this call. Instead, the Core Profile makes you create VBOs, upload the vertex data to them, bind those VBOs, and then point glVertexAttribPointer() to offsets in these VBOs. This is more complex, and it’s not guaranteed to be faster.
  • Also IIRC, the core profile makes you use VAOs (vertex array objects). IIRC, if you try to use GL without one bound, then the core profile throws errors.

And these are just the few “core profile” related things I’m aware of that might be causing glVertexAttribPointer() to throw errors whereas before it did not.

Bottom-line: trying to force the OpenGL Core Profile on a GL application written for the OpenGL Compatibility Profile is not something to be taken lightly. You have to read up on how this works and make the appropriate changes. It’s also worth carefully considering what benefit you expect to get from it. The Core Profile is not some magical feature that is guaranteed to net you better performance. Depending on your app’s GL usage, it may even net you worse performance.
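
To make the above a bit more concrete, here’s a rough core-profile-style sketch of drawing a quad with a VAO, a VBO, and a generic attribute (the program handle and the "position" attribute name are just placeholders for whatever your shaders actually use):

// One-time setup, with a current GL context:
const GLfloat quadVerts[] = { -1.f, -1.f,   1.f, -1.f,   1.f, 1.f,   -1.f, 1.f };

GLuint vao = 0, vbo = 0;
glGenVertexArrays(1, &vao);          // the core profile requires a VAO to be bound
glBindVertexArray(vao);

glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(quadVerts), quadVerts, GL_STATIC_DRAW);

// Generic attribute instead of GL_VERTEX_ARRAY / glVertexPointer():
GLint posAttrib = glGetAttribLocation(program, "position");
glEnableVertexAttribArray(posAttrib);
glVertexAttribPointer(posAttrib, 2, GL_FLOAT, GL_FALSE, 0, nullptr);

// Per-frame draw, replacing the old glBegin()/glEnd() block:
glUseProgram(program);
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);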

Yes, I understand it can be an overwhelming task for me. I am doing my GSoC project and I need to implement tessellation shaders, and for that I need the OpenGL 4.0 core profile. You are correct that we could use the OpenGL 4.0 compatibility profile, but IIRC we can’t use 4.0 features on a Mac without the core profile.

And really, thanks for your guidance on this. It means a lot to me. Thank you so much; you have made my work much easier.

Ah! I see.

I was going to say that, in general, you don’t need the Core Profile for tessellation shaders… but you’re saying you do need it for some Apple/Mac-specific reason.

That’s a pain.

(If it were me, I’d at least consider pushing the Apple box off into the ditch and using something that supports open standards better. A PC with Linux or Windows would give you better options here.)

Sure thing! I hope you find a solution that works well for you.

IIRC: the Mac supports either 2.1 or 3+ core profile. It doesn’t support 3+ compatibility profile, so if you need to use features which aren’t in 2.1, you have to use the core profile.

Okay, I have one last question.

Since gl_FragColor is deprecated and can’t be used in the core profile, I am defining a variable, let’s say outColor, and using it in place of gl_FragColor, something like:

#version 400
precision highp float;    
in vec4 vertex;

out vec4 outColor;

uniform mat4 modelView;
uniform mat4 projection;

void main()
{
  outColor = vec4(0.0, 1.0, 0.0, 1.0);
  gl_Position = projection * modelView * vertex;
}

Do we need to link outColor on the CPU side (in C++ code) as well? If so, how should it be done? If someone specifically changed the fragment shader’s gl_FragColor to an out variable, do we need to make any adjustments on the CPU side too?

The assignment to outColor needs to be in the fragment shader, as is the case for gl_FragColor.

You can associate fragment shader outputs with colour attachments using a layout directive, e.g.

layout(location=0) out vec4 outColor;

Alternatively, you can do it from client code with glBindFragDataLocation; this must be called prior to linking.

But if you’re only using a single colour attachment (i.e. replacing gl_FragColor rather than gl_FragData), it isn’t necessary to explicitly specify the attachment as any output variable defaults to attachment zero.
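
E.g. a minimal client-side sketch, assuming program is your program object and "outColor" matches the name in the fragment shader:

glBindFragDataLocation(program, 0, "outColor");  // bind the output to colour attachment 0
glLinkProgram(program);                          // must happen after the binding call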

Okay, currently I am focusing on tessellation shaders, and after implementing them under the 4.0 compatibility profile, I will shift to the core profile.

as @Dark_Photon stated earlier

I am using Linux (Ubuntu), so I thought it would be easy to switch to a 4.0 compatibility profile and get things done. My mentor is currently working on the core-profile changes. However, when I request a 4.0 compatibility profile, I end up getting a core profile instead. I checked the logs to see which profile I’m using, and it shows the core profile. Because of this, I’m getting the same errors as with the core profile, and nothing is rendering on the screen. Is there a different way to switch to the compatibility profile, or is it a driver issue on my end?

Is this the driver issue you were talking about, or could it be something else?

Here’s the output I am getting; logging the profile gives me the core profile.

The graphics driver I am using is shown below (I don’t know if this is what I should be sharing):

[screenshots of GPU and driver information]

You posted 3 images. The first two are from a Windows system and indicate an NVIDIA chip. The third one is from a Linux system and indicates Intel integrated graphics. It does indicate support for the compatibility profile.

You are calling e.g. QOpenGLWidget::setFormat(defaultFormat) or similar, right? Simply constructing a QSurfaceFormat object won’t do anything by itself.

Do you mean setDefaultFormat instead of setFormat?

// Set up the default format for our GL contexts.
  QSurfaceFormat defaultFormat = QSurfaceFormat::defaultFormat();
  defaultFormat.setVersion(4,0);
  defaultFormat.setProfile(QSurfaceFormat::CompatibilityProfile);
  defaultFormat.setSamples(4);
#if defined(Q_OS_MAC) || defined(Q_OS_WIN)
  defaultFormat.setAlphaBufferSize(8);
#endif
  QSurfaceFormat::setDefaultFormat(defaultFormat);

  QStringList fileNames;
  bool disableSettings = false;
#ifdef QTTESTING
  QString testFile;
  bool testExit = true;
#endif
  QStringList args = QCoreApplication::arguments();
  for (QStringList::const_iterator it = args.constBegin() + 1;
       it != args.constEnd(); ++it) {
    if (*it == "--test-file" && it + 1 != args.constEnd()) {
#ifdef QTTESTING
      testFile = *(++it);

Sorry for the silly question… :slight_smile: That’s a bit of code. Can you help me understand what I need to change in order to get the OpenGL 4.0 CompatibilityProfile working?

Update: I got the solution. After adding this line, my software works:

 defaultFormat.setOptions(QSurfaceFormat::DeprecatedFunctions);

This allows us to use deprecated functions, am I correct?

Either works. setDefaultFormat affects everything, setFormat affects a specific context.

I would have expected the posted code to work, but I’m not really that familiar with Qt (the last time I used it was over a decade ago, Qt 4.x). You’re more likely to get reliable answers from a Qt forum.
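
One quick sanity check you can do is query the format you actually got, e.g. from inside QOpenGLWidget::initializeGL() once a context is current (a sketch using standard Qt calls; needs <QOpenGLContext> and <QDebug>):

QSurfaceFormat actual = QOpenGLContext::currentContext()->format();
qDebug() << "GL" << actual.majorVersion() << "." << actual.minorVersion()
         << "profile:" << int(actual.profile());  // 0 = NoProfile, 1 = Core, 2 = Compatibility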
