Floating-point framebuffer + blending = undefined behavior?

Hello. I have a question.

But first, here is the context:
I wanted to render to multiple targets in my OpenGL application, where the second target is a floating-point renderbuffer (or texture) with internal format R32F. The problem was that rendering to that second floating-point attachment (with glDrawElements, for example) did not happen at all, while clearing it (with glClearBufferfv, for example) worked perfectly fine.
After some searching on forums, I found a thread describing a similar problem, and it turned out that having blending enabled was what interfered with drawing to the second attachment. I disabled blending and now everything seems to work fine.
But I still have a question. Before I disabled blending, drawing to the second attachment did not work at all on my desktop computer, as if it were blocked by a call to glColorMaski(1, ...) or by omitting the second attachment from glDrawBuffers. I know it was not working because I ran the application tens of times in RenderDoc and inspected everything I could (within the limits of my knowledge), and because every frame I read from that attachment with glReadPixels (which is what the application is supposed to do). On my laptop, however, glReadPixels returned apparently undefined data: sometimes the correct float value it was supposed to read, sometimes a random negative float, and sometimes the clear value, which is 0. On the desktop computer it always returned the clear value. So the exact same application, with the same code and GL_BLEND enabled, behaved differently on the two machines, which looks like undefined behaviour to me.
My question is: was this undefined behaviour, a bug in one of the OpenGL implementations, or something else? If there was no UB, why were there two different results?

Desktop computer: Ubuntu 22.04.1, OpenGL 4.3, NVIDIA GeForce GTX 1050, driver 515.65.01
Laptop: Xubuntu 22.04.1, OpenGL 4.6, Intel HD Graphics 4400 (HSW GT2), Mesa 22.0.5

I can show the mostly raw OpenGL commands I call, or any other details I know, if that helps. I hope I was clear enough.

Blending is supposed to work with floating-point colour buffers. It’s (signed or unsigned) integer colour buffers which don’t support blending.
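As a side note, if you only want blending on the first colour attachment, you can control it per draw buffer instead of globally. A minimal sketch (indexed glEnablei/glDisablei are core since OpenGL 3.0; per-buffer blend functions via glBlendFunci would need 4.0):

```c
// Blend only draw buffer 0; leave the R32F attachment at index 1 unblended.
glEnablei(GL_BLEND, 0);
glDisablei(GL_BLEND, 1);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
```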

You’ll need to show the commands involved in order for anyone to provide assistance.

This is how I create the framebuffer (tried to fill in some of the variables):

glGenFramebuffers(1, &framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);

// First attachment
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);

glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, width, height);

glFramebufferTexture2D(
    GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + 0, GL_TEXTURE_2D, texture, 0
);

glBindTexture(GL_TEXTURE_2D, 0);

// Second attachment
glGenRenderbuffers(1, &renderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, renderbuffer);
glRenderbufferStorage(GL_RENDERBUFFER, R32F, width, height);

glFramebufferRenderbuffer(
    GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + 1, GL_RENDERBUFFER, renderbuffer
);

glBindRenderbuffer(GL_RENDERBUFFER, 0);

GLenum attachments[2] = {
    GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1
};
glDrawBuffers(2, attachments);

This is how I read the renderbuffer (in a PBO):

glReadBuffer(GL_COLOR_ATTACHMENT0 + 1);
glReadPixels(x, y, 1, 1, GL_RED, GL_FLOAT, nullptr);
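
For context, the surrounding PBO plumbing looks roughly like this (a sketch; `pbo` stands for my pack buffer object, created elsewhere with glBufferData):

```c
// The PBO is bound to GL_PIXEL_PACK_BUFFER, so glReadPixels writes into it
// at offset 0 instead of into client memory.
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glReadBuffer(GL_COLOR_ATTACHMENT0 + 1);
glReadPixels(x, y, 1, 1, GL_RED, GL_FLOAT, nullptr);

// Later, after the transfer has had time to complete:
float value = 0.0f;
glGetBufferSubData(GL_PIXEL_PACK_BUFFER, 0, sizeof(value), &value);
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
```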

Blending:

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

Clearing:

glClearColor(0.1f, 0.1f, 0.1f, 1.0f);

// And every frame (clearing works):
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
// ...
float clear[4] = { 0.0f, 0.0f, 0.0f, 0.0f };
glClearBufferfv(GL_COLOR, 1, clear);

One of the shaders:

#version 430 core

layout(location = 0) in vec3 a_position;
layout(location = 1) in float a_entity_id;  // Input

out flat float v_entity_id;  // Varying

uniform mat4 u_model_matrix;

layout(binding = 0) uniform ProjectionView {
    mat4 u_projection_view_matrix;
};

void main() {
    v_entity_id = a_entity_id;
    gl_Position = u_projection_view_matrix * u_model_matrix * vec4(a_position, 1.0);
}

#version 430 core

in flat float v_entity_id;  // Varying

layout(location = 0) out vec4 fragment_color;
layout(location = 1) out float entity_id;  // Output

uniform vec4 u_color;

void main() {
    fragment_color = u_color;
    entity_id = v_entity_id;
}

In RenderDoc I can confirm that both the input attribute a_entity_id and the varying v_entity_id have the correct float value. Only entity_id was not actually being written, even when I set it to a constant value directly.
If you need more details, I’ll try to provide them.

The R32F should be GL_R32F, but I don’t see any issues other than that.

Have you checked for errors with glGetError or glDebugMessageCallback?
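
Besides the debug callback, a quick completeness check after the framebuffer setup can rule out attachment problems. A minimal sketch:

```c
// Check completeness while the framebuffer is still bound.
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE) {
    fprintf(stderr, "Framebuffer incomplete: 0x%04x\n", status);
}
```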

Oops, I typed GL_R32F wrong in the post.
I didn't check for errors with glGetError, because I already have glDebugMessageCallback set up, GL_DEBUG_OUTPUT enabled, and a debug OpenGL context.

I did disable GL_DEBUG_SEVERITY_NOTIFICATION messages, but I don't think that should be a problem.