Problem with uniform blocks

Hi all.

I have a weird problem with uniform blocks:
This is my test app:


// Triangle_opengl_3_1
// A cross platform version of
// http://www.opengl.org/wiki/Tutorial:_OpenGL_3.1_The_First_Triangle_%28C%2B%2B/Win%29
// with some code from http://www.lighthouse3d.com/opengl/glsl/index.php?oglexample1
// and from the book OpenGL Shading Language 3rd Edition, p215-216
// Daniel Livingstone, October 2010

#include <GL/glew.h>
#define FREEGLUT_STATIC
#include <GL/freeglut.h>
#include <iostream>
#include <fstream>
#include <string>

using namespace std;


// loadFile - loads a text file into a newly allocated char* buffer
// allocates memory - so the caller needs to delete[] it after use
// size of file returned in fSize
const char* loadFile(const char *fname, GLint &fSize)
{
    // file read based on example in cplusplus.com tutorial
    ifstream file (fname, ios::in|ios::binary|ios::ate);
    if (!file.is_open())
    {
        cout << "Unable to open file " << fname << endl;
        exit(1);
    }
    size_t size = (size_t) file.tellg();
    fSize = (GLint) size;
    char *memblock = new char [size + 1];
    file.seekg (0, ios::beg);
    file.read (memblock, size);
    memblock[size] = '\0';  // null-terminate so the buffer is safe to use as a C string
    file.close();
    cout << "file " << fname << " loaded" << endl;
    return memblock;
}

// printShaderInfoLog
// From OpenGL Shading Language 3rd Edition, p215-216
// Display (hopefully) useful error messages if shader fails to compile
void printShaderInfoLog(GLuint shader)
{
    int infoLogLen = 0;
    int charsWritten = 0;
    GLchar *infoLog;
    
    glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &infoLogLen);
    
    // should additionally check for OpenGL errors here
    
    if (infoLogLen > 0)
    {
        infoLog = new GLchar[infoLogLen];
        // error check for fail to allocate memory omitted
        glGetShaderInfoLog(shader,infoLogLen, &charsWritten, infoLog);
        cout << "InfoLog:" << endl << infoLog;
        delete [] infoLog;
    }
}

void reshape(int w, int h)
{
    glViewport(0,0,(GLsizei)w,(GLsizei)h);
}

int main (int argc, char* argv[])
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA);
    glutInitWindowSize(600,600);
    glutCreateWindow("Triangle Test");
    GLenum err = glewInit();
    if (GLEW_OK != err)
    {
        /* Problem: glewInit failed, something is seriously wrong. */
        cout << "glewInit failed, aborting." << endl;
        exit (1);
    }
    cout << "Status: Using GLEW " << glewGetString(GLEW_VERSION) << endl;
    cout << "OpenGL version " << glGetString(GL_VERSION) << " supported" << endl;
    
    GLint l;
    const char *s = loadFile("sh", l);
    GLuint fragmentShaderId = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fragmentShaderId, 1, &s, &l);
    glCompileShader(fragmentShaderId);
    delete [] s;  // loadFile allocated this; glShaderSource has copied the source
    
    GLuint programId = glCreateProgram();
    glAttachShader(programId, fragmentShaderId);
    glLinkProgram(programId);

    printShaderInfoLog(fragmentShaderId);
    GLint linked;
    glGetProgramiv(programId, GL_LINK_STATUS, &linked);
    if (linked == GL_FALSE) {
        // if a link error occurred ...
        GLint logLength;
        glGetProgramiv(programId, GL_INFO_LOG_LENGTH, &logLength);
        if (logLength > 0) {
            GLsizei length;
            char *log = new char[logLength];
            glGetProgramInfoLog(programId, GLsizei(logLength), &length, log);
            cout<<log<<endl;
            delete[] log;
        }
        glDeleteProgram(programId);
        programId = 0;
        throw exception();
    } else {
        cout << "Linked\n";
    }
    printf("%d\n", glGetError());

    GLfloat pixels[4];

    glClear(GL_COLOR_BUFFER_BIT);
    
    glUseProgram(programId);
    glBegin(GL_QUADS);
    glVertex2i(-1, -1);
    glVertex2i(1, -1);
    glVertex2i(1, 1);
    glVertex2i(-1, 1);
    glEnd();
    
    glReadPixels(0, 0, 1, 1, GL_RGBA, GL_FLOAT, pixels);
    printf("\n %f %f %f %f\n", pixels[0], pixels[1], pixels[2], pixels[3]);
    
    return 0;
}

this is my fragment shader (written in the external file ‘sh’):


#version 330

uniform b { float u; };
layout(location=0) out vec4 color;
void main() { color = vec4(0.1, 0.45, 0.5, 0.0); }

and this is the output I get from the app:


Status: Using GLEW 1.7.0
OpenGL version 3.3.11631 Compatibility Profile Context supported
file sh loaded
InfoLog:
Fragment shader was successfully compiled to run on hardware.
Linked
0

 0.000000 0.000000 0.000000 0.000000

So the shader compiles and links fine, and glGetError() returns 0, but glReadPixels gives me an empty (0,0,0,0) pixel instead of the value set in the shader.
But if I comment out the “uniform b { float u; };” in the shader, it all works as expected and the pixel is (0.098039, 0.450980, 0.501961, 0.000000), or whatever value I put in the shader.
I’m really clueless; I hope you can help me.

I have a Radeon HD 4850, on Linux with Catalyst 12.4. This is the output of my glxinfo.
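
For reference, the test app never backs the uniform block with a buffer object; for a constant output color that shouldn't matter, but if the block's member were actually used, it would normally be fed roughly like this after linking (a minimal sketch; the binding point and variable names are illustrative):

GLuint blockIndex = glGetUniformBlockIndex(programId, "b");
if (blockIndex != GL_INVALID_INDEX) {
    glUniformBlockBinding(programId, blockIndex, 0);  // assign block "b" to binding point 0

    GLfloat u = 1.0f;                                 // value for the block's single member
    GLuint ubo;
    glGenBuffers(1, &ubo);
    glBindBuffer(GL_UNIFORM_BUFFER, ubo);
    glBufferData(GL_UNIFORM_BUFFER, sizeof(u), &u, GL_STATIC_DRAW);
    glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo);      // attach the buffer to binding point 0
}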

I just ran your example (unmodified) and got the following output:


Status: Using GLEW 1.7.0
OpenGL version 3.3.0 NVIDIA 302.17 supported
file sh loaded
InfoLog:
Linked
0

 0.098039 0.450980 0.498039 0.000000

Edit:
I ran it again on a laptop with an AMD card (Radeon HD5650),
with the following result:


Status: Using GLEW 1.7.0
OpenGL version 4.2.11631 Compatibility Profile Context supported
file sh loaded
InfoLog:
Fragment shader was successfully compiled to run on hardware.
Linked
0

 0.000000 0.000000 0.000000 0.000000


Edit2:
I modified your program (added the simplest pass-through vertex shader; see the sketch after the output below) and it worked on AMD:


Status: Using GLEW 1.7.0
OpenGL version 4.2.11631 Compatibility Profile Context supported
file sh loaded
file vertex.glsl.vs loaded
InfoLog:
Fragment shader was successfully compiled to run on hardware.
InfoLog:
Vertex shader was successfully compiled to run on hardware.
Linked
0

 0.098039 0.450980 0.501961 0.000000
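
Since the app draws with glVertex2i in a compatibility profile context, the pass-through vertex shader could be as minimal as the following (my guess at what vertex.glsl.vs contained; the actual file wasn't posted):

#version 330 compatibility
// Pass the fixed-function vertex position through unchanged.
void main() { gl_Position = gl_Vertex; }

It is loaded, compiled and attached to programId the same way as the fragment shader, before glLinkProgram.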


You must specify both vertex and fragment shaders in a core profile context (you gave only a fragment shader).
I don’t know if this is the case in the compatibility profile.

We do not know the version of the GL context requested by the program (GLEW should default to the highest available, but I’m not sure).

btw. why did you use a layout qualifier on the fragment shader output?

You’re right, it was that. Thanks a lot! :smiley:
It’s strange though that the vertex shader wasn’t necessary without the uniform block.

btw. why did you use a layout qualifier on the fragment shader output?

I took that shader from an already written piece of code, and it was there.

You must specify both vertex and fragment shaders in a core profile context (you gave only a fragment shader).

I read that and it perplexed me. As far as I read the spec, that's not correct. First of all, it is certainly incorrect that this depends on the kind of profile the context is created with. Second, the spec doesn't seem to say that a program object actually needs to have any shader objects attached to it, and it doesn't state that linkage would fail in such a case. What it does say is that if no valid object code for a stage exists, the program object will not be considered active for the stage it is missing a shader object for. It goes on to state that with a missing object for the VERTEX_SHADER stage (and similarly for the FRAGMENT_SHADER stage), behavior will be undefined with that program:

I know that vendors tend to simply put out black as the fragment color (definitely the case for AMD and NVIDIA, at least on Linux - probably on Windows too, since most of the driver code is common) if there is no fragment shader in a program object. However, you don't need a fragment shader to transform your vertices correctly. One use case where you don't need any fragment shader is when you simply want to generate depth values: you can use a program which only contains a vertex shader that correctly transforms your vertices. No fragment shading needed.
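
A minimal sketch of that depth-only use (illustrative names; assumes a vertex shader vertexShaderId that writes gl_Position, and a depth buffer attached to the current framebuffer):

GLuint depthProgram = glCreateProgram();
glAttachShader(depthProgram, vertexShaderId);        // vertex stage only - no fragment shader
glLinkProgram(depthProgram);

glUseProgram(depthProgram);
glEnable(GL_DEPTH_TEST);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); // color output is undefined/unwanted anyway
// ... issue draw calls: only depth values are produced ...
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);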

We do not know the version of the GL context requested by the program (GLEW should default to the highest available, but I'm not sure).

GLEW is in no way responsible for context creation. In fact, you can’t even use GLEW without a valid GL context in the first place.

A vertex shader is a must when using the core profile. A fragment shader is optional. So for shadow rendering and/or transform feedback, you need to have only a vertex shader.

A vertex shader is a must when using the core profile.

But where is the piece of the spec that clearly states something like

? And why should it be profile dependent? I assume that if the program isn't active for the vertex stage, then vertex processing is handled by fixed-function (or emulated fixed-function) when using a compatibility profile. But that doesn't change the tenor of the core spec. Just because there is a sort of fallback to compensate for inactive shader stages in the compatibility profile, it doesn't mean that program objects are handled differently with respect to linkage and validity. What it does say is that behavior is undefined - which leaves defining what happens to you guys (@aqnuep). Right?

If I missed something in the spec please give me a hint.

It is poorly specified, but it is in the Core Profile spec. See these parts:

(Programs)
If UseProgram is called with program set to 0, then the current rendering state refers to an invalid program object, and the results of vertex and fragment shader execution are undefined.

(Program Validation)
Undefined behavior results if the program object in use has no fragment shader unless transform feedback is enabled, in which case only a vertex shader is required.

(Appendix E.2.2 Removed Features)

  • … fixed-function vertex processing … A vertex shader must be defined in order to draw primitives.
  • Fixed-function fragment processing …

On the other hand, in Compatibility Profile, I’d expect your example to work.
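
For the transform feedback case mentioned in the second quote, a vertex-shader-only program is used roughly like this (a sketch with illustrative names; prog contains only a vertex shader and tfBuffer is a buffer object created elsewhere):

const char *varyings[] = { "gl_Position" };
glTransformFeedbackVaryings(prog, 1, varyings, GL_INTERLEAVED_ATTRIBS);
glLinkProgram(prog);                                  // varyings take effect at link time

glEnable(GL_RASTERIZER_DISCARD);                      // skip fragment processing entirely
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, tfBuffer);
glBeginTransformFeedback(GL_POINTS);
// ... draw calls: transformed vertices are captured into tfBuffer ...
glEndTransformFeedback();
glDisable(GL_RASTERIZER_DISCARD);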

(Programs) If UseProgram is called with program set to 0, then the current rendering state refers to an invalid program object, and the results of vertex and fragment shader execution are undefined.

But my concerns aren’t about setting the current program to 0. It’s about using a program object which has a non-zero handle and no shader objects attached, and whether that is permissible (in contrast to meaningful or wise) or not.

(Program Validation) Undefined behavior results if the program object in use has no fragment shader unless transform feedback is enabled, in which case only a vertex shader is required.

The thing is that the term “undefined behavior” doesn’t mean that it’s illegal. Does it imply that validation will fail and the program will not execute?

A vertex shader must be defined in order to draw primitives.

So it has been there and was changed from “absolutely necessary” to “undefined behavior”?

On the other hand, in Compatibility Profile, I’d expect your example to work.

Yeah, me too. Which is not surprising, since replacing vertex and fragment processing isn't inherently necessary there: fixed-function processing, or an emulation thereof, is guaranteed to be present to take over - at least that's what I strongly assume.

I’d think that something as crucial would be defined as precisely and as unambiguously as possible.

The thing is that the term “undefined behavior” doesn’t mean that it’s illegal. Does it imply that validation will fail and the program will not execute?

“Undefined behavior” means the behavior is undefined! Expect it to burn your house down etc. (“undefined behavior” is a superset of all imaginable and unimaginable behaviors). “Illegal” means it is not allowed, and usually such things also have a behavior defined in the spec if one tries to do the “illegal” thing anyway.

I’d think that something as crucial would be defined as precisely and as unambiguously as possible.

“Undefined behavior” is very precise and unambiguous imho :stuck_out_tongue: - the behavior is undefined, don’t do that.


Not to imply that the spec is flawless.

I guess I’ll just check it out on all hardware available to me and see how the actual implementations deal with it.

We do not know the version of the GL context requested by the program (GLEW should default to the highest available, but I'm not sure).

Sorry, I meant GLUT.

btw. Is there a way to request a specific context version (and profile) using GLUT?

Edit:
never mind, I just asked uncle G:
glutInitContextVersion(…);
glutInitContextFlags(…);
glutInitContextProfile(…);
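
With freeglut these must be called before glutCreateWindow; for the test app above it would look roughly like this (the version/profile values are just an example):

glutInit(&argc, argv);
glutInitContextVersion(3, 3);                  // request an OpenGL 3.3 context
glutInitContextProfile(GLUT_CORE_PROFILE);     // or GLUT_COMPATIBILITY_PROFILE
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA);
glutInitWindowSize(600, 600);
glutCreateWindow("Triangle Test");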
