Renders improperly in a compatibility context but works in a core context? Makes no sense.

So I’m trying to run a particle system entirely on the GPU using transform feedback.
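For context, the basic idea is to ping-pong particle state between two buffers with the rasterizer turned off during the update pass. A rough sketch of that loop (names like `vbo`, `vao`, `read`, `write`, and `particleCount` are mine, not the Cookbook's), assuming the update shader program is already bound:

```cpp
// Update pass: the vertex shader advances each particle's state, and
// transform feedback captures the outputs into the "write" buffer.
// GL_RASTERIZER_DISCARD skips fragment processing entirely.
glEnable(GL_RASTERIZER_DISCARD);
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, vbo[write]);
glBeginTransformFeedback(GL_POINTS);
glBindVertexArray(vao[read]);
glDrawArrays(GL_POINTS, 0, particleCount);
glEndTransformFeedback();
glDisable(GL_RASTERIZER_DISCARD);

// Swap roles so next frame reads what we just wrote.
std::swap(read, write);
```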

I spent a week trying to merge the example code into my project before finally isolating it into a minimal test case.

Here’s the comparison between the working tutorial/example code (linked below) and my recreation of it (the same code, just a different context):

On the left is the one that works (blending is off so I can see it, which is why it looks terrible). The one on the right acts like every point is getting the same texcoords, or something.

So the GLSL Cookbook’s example uses GLSL 330 core, with GLFW for windowing and glLoadGen for loading the function pointers.

My recreation uses SDL 2 and GLEW, with glewExperimental set to GL_TRUE.

In my test case, it renders like the version on the right when running with a compatibility profile on a 3.3 context. As soon as I switch it to a core profile of the same version, it suddenly works just fine?
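For reference, this is roughly how I’m creating the context (a sketch; the window title/size are placeholders, not my actual values). Toggling the single profile-mask attribute between COMPATIBILITY and CORE is the only difference between the broken and working runs:

```cpp
#include <GL/glew.h>
#include <SDL2/SDL.h>

int main() {
    SDL_Init(SDL_INIT_VIDEO);

    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 3);
    // Changing CORE to COMPATIBILITY here is what breaks the rendering.
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK,
                        SDL_GL_CONTEXT_PROFILE_CORE);

    SDL_Window *win = SDL_CreateWindow("particles",
        SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
        800, 600, SDL_WINDOW_OPENGL);
    SDL_GLContext ctx = SDL_GL_CreateContext(win);

    glewExperimental = GL_TRUE;  // so GLEW loads core-profile entry points
    glewInit();

    // ... rest of setup and render loop ...
    SDL_GL_DeleteContext(ctx);
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}
```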

This doesn’t make much sense to me…why would I absolutely have to use a core context for it to work properly?

When it’s in a core context, though, GL does throw a GL_INVALID_ENUM error from inside GLEW. But that’s the well-known extensions-string bug: glewInit() still calls glGetString(GL_EXTENSIONS), which core contexts reject in favor of glGetStringi (and apparently that still isn’t fixed, wtf).
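For anyone else hitting this: the workaround I’m using is just to drain the error queue right after glewInit(), so the spurious error doesn’t get blamed on my own GL calls later. A sketch:

```cpp
glewExperimental = GL_TRUE;
GLenum status = glewInit();
if (status != GLEW_OK) {
    // handle init failure
}

// In a core profile, glewInit() can leave a GL_INVALID_ENUM behind
// (its internal glGetString(GL_EXTENSIONS) call). Clear it out.
while (glGetError() != GL_NO_ERROR) { }
```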

Here’s my code:


I’m using NVIDIA 319.23 on Linux, if that makes any difference. I’m beginning to suspect it’s a driver bug. That’d be just my luck, especially being such a beginner, haha.

The code is taken from the smoke example in Chapter 9 of the GLSL Cookbook.

Holy. Crap. From the reference for GL_POINT_SPRITE: “If enabled, calculate texture coordinates for points based on texture environment and point parameter settings. Otherwise texture coordinates are constant across points.”
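In other words: in a compatibility profile, point-sprite texcoord generation is off by default, so gl_PointCoord is constant across each point and every particle samples the same texel. In a core profile, point sprites are always on and the GL_POINT_SPRITE enum doesn’t even exist (enabling it there is what gives GL_INVALID_ENUM). So, assuming a compatibility context, the fix is a one-liner at setup:

```cpp
// Compatibility profile only: without this, gl_PointCoord never varies,
// so every particle shows a single flat color.
glEnable(GL_POINT_SPRITE);

// Related toggle if you set gl_PointSize in the vertex shader:
glEnable(GL_PROGRAM_POINT_SIZE);
```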

So that was it. It works now :slight_smile: