SSBO works with Intel's iGPU but not my GTX 970

I’m trying to create a raytracer with an OpenGL compute shader, but the SSBO that’s supposed to receive the data doesn’t contain anything on my NVIDIA GPU, while it does contain data on my laptop with an Intel iGPU. I’m creating the buffer like this:

ssbo = glGenBuffer();
glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
glBufferData(GL_SHADER_STORAGE_BUFFER, 0, (IntPtr)null, GL_DYNAMIC_DRAW);
glBindBufferRange(GL_SHADER_STORAGE_BUFFER, 0, ssbo, 0, 0);

The data is uploaded later on with this:

glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
glBindBufferRange(GL_SHADER_STORAGE_BUFFER, 0, ssbo, totalSSBOSize, totalSSBOSize + vertexData.Length * sizeof(float));
glBufferSubData(GL_SHADER_STORAGE_BUFFER, totalSSBOSize, vertexData.Length * sizeof(float), vertexData);
totalSSBOSize += vertexData.Length;

The shader itself is called every frame with this code:

glUseProgram(compProg);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssbo);
glBindImageTexture(0, texture, 0, false, 0, GL_READ_WRITE, GL_RGBA32F);

glDispatchCompute((uint)Math.Ceiling((double)Width / 8), (uint)Math.Ceiling((double)Height / 8), 1);
glMemoryBarrier(GL_ALL_BARRIER_BITS);

The shader is a simple test for whether there are any values in the buffer: the screen turns green if there are, and red if there aren’t:

#version 460 core
layout (local_size_x = 8, local_size_y = 8, local_size_z = 1) in;

layout (rgba32f) uniform image2D img;

layout(std430, binding = 0) buffer Data
{
    float data[];
};

void main() 
{
    ivec2 texelCoord = ivec2(gl_GlobalInvocationID.xy);

    if (data.length() == 0)
        imageStore(img, texelCoord, vec4(1, 0, 0, 1));
    else
        imageStore(img, texelCoord, vec4(0, 1, 0, 1));
}

The screen is green with my UHD Graphics, but red with my GTX 970.

You can’t create storage for a buffer of 0 bytes in size. Well, actually you can, but it doesn’t help.

Also, you cannot bind 0 bytes of storage to a range.
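For illustration, a minimal sketch of allocating real storage up front, in the same C#-style bindings as the question (the capacity value and its name are assumptions, not something from your code):

// Hypothetical worst-case capacity, in bytes.
int maxSSBOSizeBytes = 16 * 1024 * 1024;

ssbo = glGenBuffer();
glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
// Reserve the full capacity once; sub-ranges are filled later with glBufferSubData.
glBufferData(GL_SHADER_STORAGE_BUFFER, maxSSBOSizeBytes, (IntPtr)null, GL_DYNAMIC_DRAW);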

Lastly, it’s unclear what it is that the glBindBufferRange call here is intended to accomplish.

Either totalSSBOSize is misnamed, or you just bound a range starting from past the end of the buffer. Also, the last parameter is not a byte offset; it’s the number of bytes, starting at the given offset, that is made available.
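To illustrate the offset/size semantics (a sketch with hypothetical variable names, not your code): exposing the first usedBytes of the buffer at binding index 0, or only a newly appended chunk, would look like this. Note that the offset also has to respect GL_SHADER_STORAGE_BUFFER_OFFSET_ALIGNMENT.

// Whole used region: byte offset 0, size usedBytes.
glBindBufferRange(GL_SHADER_STORAGE_BUFFER, 0, ssbo, 0, usedBytes);
// Only the newly appended chunk: both parameters are byte counts, not end positions.
glBindBufferRange(GL_SHADER_STORAGE_BUFFER, 0, ssbo, chunkOffsetBytes, chunkSizeBytes);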

Also, there’s no point in binding a range if you’re not going to use that range in a rendering operation. So calling this function serves no purpose in this instance.

Overall, I get the impression of a lot of copy-and-paste coding, without any real understanding of what these functions are intended to do.

I understand the confusion around totalSSBOSize, since the name isn’t entirely accurate: it’s meant to represent the total amount of the SSBO that’s already in use. This variable ensures that the vertices of a second model don’t overwrite the data of the first model that’s already in the buffer. The fixed version of glBindBufferRange should then be glBindBufferRange(GL_SHADER_STORAGE_BUFFER, 0, ssbo, 0, totalSSBOSizeUsed + vertexData.Length * sizeof(float));, which tells the buffer that its range runs from 0 to totalSSBOSizeUsed + vertexData.Length * sizeof(float) (as far as I’m aware).
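A sketch of that upload with the running counter kept in bytes might look like this (assuming the buffer already has enough storage allocated; note that in the code from the question the offset passed to glBufferSubData is a byte offset, while the counter is incremented by the element count):

glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
// Append this model's vertices after the data that is already in use.
glBufferSubData(GL_SHADER_STORAGE_BUFFER, totalSSBOSizeUsed, vertexData.Length * sizeof(float), vertexData);
// Keep the running offset in bytes so the next model doesn't overwrite this one.
totalSSBOSizeUsed += vertexData.Length * sizeof(float);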

I’ve also given the buffer enough space for all cases, but the issue of it not working on my NVIDIA GPU remains.

Not according to the code you posted. Again: you gave it zero bytes of storage.

I should’ve been clearer: I added more space after your comment on it; I just didn’t show it in my response.