Transpose PS1 semi-transparency system to OpenGL

Hi folks! First things first, I’m a hobbyist developer working with a small team whose goal is to port a PS1 game to PC. Despite being quite new to OpenGL, the journey went fine until we had to deal with semi-transparency… In a nutshell, I’m struggling to find a decent solution that is both accurate in terms of PS1 graphics and not a terrible mess regarding OpenGL standards. To be precise, we work in C++, currently using SDL2 and GLEW as the loading library.

The game we’re working on is a Zelda-like 2D game using 4bpp indexed textures: each byte of a texture holds two pixels, one per nibble, which get their colors from a 16-color palette. The palette colors are 16 bits each: each RGB channel is encoded on 5 bits, and the last bit is used as a semi-transparency flag (STP).
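For reference, here’s a rough sketch of how one of those texels can be decoded on the CPU side (assuming the usual PS1 layout, red in the low 5 bits and the STP flag in bit 15, with the low nibble being the first pixel; the helper names are just for illustration):

#include <cstdint>

struct RGBA8 { uint8_t r, g, b, a; };

// index = the pixel's position within the 4bpp texture data
RGBA8 DecodeTexel(const uint8_t* texels, const uint16_t* palette, int index)
{
    uint8_t  byte   = texels[index / 2];
    uint8_t  nibble = (index & 1) ? (byte >> 4) : (byte & 0x0F); // two pixels per byte
    uint16_t color  = palette[nibble];                           // 16-color CLUT lookup

    RGBA8 out;
    out.r = static_cast<uint8_t>(( color        & 0x1F) << 3);   // expand 5-bit channels to 8 bits
    out.g = static_cast<uint8_t>(((color >> 5)  & 0x1F) << 3);
    out.b = static_cast<uint8_t>(((color >> 10) & 0x1F) << 3);
    out.a = (color & 0x8000) ? 255 : 0;   // keep the STP flag in alpha; how it's used comes later
    return out;
}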

Like OpenGL, the PS1 draws with triangles: a rectangular area from a texture sheet is used to draw a quad onto the framebuffer. The thing is, the global semi-transparency system can be toggled on or off before drawing each sprite.

The PS1 can choose among 4 modes of blending:

0.5f * B + 0.5f * F // Mean
1.0f * B + 1.0f * F // Additive
1.0f * B - 1.0f * F // Subtractive
1.0f * B + 0.25f * F // WTF is this?

As you probably guessed, the main issue is that each new pixel to be drawn may or may not need to be blended with what’s already in the framebuffer, depending on the STP flag of its color and on whether the sprite’s blending is enabled.
Ideally, I’d like to be able to fine-tune these modes from inside my fragment shader. The problem is I get read-write artifacts whenever I try to sample the framebuffer’s texture for blending while also writing to it…
I can share parts of my actual code if needed.

Thanks in advance!

So the “semi-transparency” bit is on a per-fragment basis. Is the actual blending mode to be used specified per draw call or per triangle (or does the PlayStation 1 even have “draw calls” as a distinct thing)?

Each of those modes can be performed by OpenGL blending functionality. The difficult part is that OpenGL can’t directly turn that on or off on a per-fragment basis.

However, it can write an alpha value, which is used to do blending. So what you need to do is set up your blending mode such that there are two meaningful alpha values: one which makes blending irrelevant (ie: you just get the foreground color) and one which makes the PS1 blending mode work out.

Consider the additive blending mode. To allow the shader to turn it on/off with the alpha, you would implement that mode as source * 1 + destination * (1-alpha). The shader outputs an alpha of 1 to mean “no transparency” and an alpha of 0 to mean “transparency”.
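A minimal sketch of that setup (the exact shader outputs are of course up to you):

// result = src * 1 + dst * (1 - src.alpha)
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);

// In the fragment shader, something along these lines:
//   fragColor = vec4(texel.rgb, semiTransparent ? 0.0 : 1.0);
// alpha = 1.0 -> destination factor is 0, so you get just the source color
// alpha = 0.0 -> destination factor is 1, so you get full additive blending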

“Subtractive” is also a problem. See, OpenGL can do a reverse subtraction operation (destination minus source). But you need to be able to make it revert back to just the source color if you don’t want subtractive blending for that fragment. And doing that with just alpha would require outputting a -1 for the source alpha value.

Which you can do if you’re rendering to a floating point framebuffer (or signed fixed-point). However, if you write to a floating point framebuffer, you’ll need a final pass at the end to convert it to fixed-point for display to the screen.
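Allocating such a render target is just a matter of giving your FBO a float color attachment; a minimal sketch (sizes and names are placeholders):

// A half-float color buffer, so values outside [0, 1] survive blending.
GLuint colorTex, fbo;
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0, GL_RGBA, GL_HALF_FLOAT, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
// ... render to this FBO, then do the final pass onto the default framebuffer for display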

Even with that, you also need this alpha of -1 to cause the destination color to go away. Which… doesn’t work mathematically. Basically, you need two fragment shader outputs.

… and two fragment shader outputs are exactly what dual-source blending gives you, which is a feature of any hardware that’s still being supported. If you can require dual-source blending, all of these cases become very simple: the source and destination blend factors each get their own shader output, so the shader can control both directly.

Indeed, with dual source blending, you can have your main code supply the shader with uniforms that specify which alpha values to use for “no transparency” and “transparency”. If the fragment is transparent, it outputs one set of values to the alphas; if it’s not transparent, it uses a different set.
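A rough sketch of what that can look like, assuming GL 3.3+ (where dual-source blending is core); the uniform and output names are made up:

// Fixed blend state: result = out0 * 1 + dst * out1 (per channel)
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_ONE, GL_SRC1_COLOR);

const char* fragSrc = R"GLSL(
#version 330 core
layout(location = 0, index = 0) out vec4 outColor;      // the color that gets written
layout(location = 0, index = 1) out vec4 outDstFactor;  // per-channel destination factor

uniform sampler2D uTexture;
uniform vec2 uFactors;   // (source factor, destination factor) for the current mode,
                         // e.g. (0.5, 0.5), (1.0, 1.0) or (0.25, 1.0)
uniform bool uBlendOn;   // the sprite-level semi-transparency switch

in vec2 vUV;

void main()
{
    vec4 texel = texture(uTexture, vUV);         // alpha assumed to carry the STP flag
    bool blend = uBlendOn && texel.a > 0.5;
    float srcF = blend ? uFactors.x : 1.0;
    float dstF = blend ? uFactors.y : 0.0;
    outColor     = vec4(texel.rgb * srcF, 1.0);  // source factor pre-multiplied here
    outDstFactor = vec4(dstF);                   // picked up by GL_SRC1_COLOR
}
)GLSL";

Note the sketch only covers the additive-style modes; the source factor is folded into the first output, so the fixed blend state never has to change between fragments.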

But that subtractive one is a real problem, since it messes with how you have to render everything.

Thanks for your answer! Dual-source blending looked like a pretty good idea I didn’t know about, so I tried it out: it indeed works great, except for the subtractive mode. The annoying thing is that I can’t get the ‘full’ additive mode to work either. Actually, when I set the second color (used as the blending factor) to vec4(1.0, 1.0, 1.0, 1.0), it acts as if there were no blending at all… or did I get something wrong?
I could try rendering to a floating-point (or signed) framebuffer the way you described, since I’m already rendering with a final pass. But if I can’t get the second blending mode to work, it’s no use :confused:

I’ve been exploring another track, however: looking at the P.E.Op.S PS1 video plugin’s source code, I’ve seen that regular OpenGL blending functions are used. Here’s roughly what it looks like:

struct STPflags
{
    GLenum  srcFac;
    GLenum  dstFac;
    GLubyte alpha;
};

// Globals
bool _STPOn;
bool _blendOn;
STPflags _blendMode;

STPflags STPModes[4] =
{
    { GL_SRC_ALPHA,           GL_SRC_ALPHA,           127 }, // 0.5B + 0.5F (alpha = 0.5)
    { GL_ONE,                 GL_ONE,                 255 }, // B + F
    { GL_ZERO,                GL_ONE_MINUS_SRC_COLOR, 255 }, // B - F (via reverse subtract below)
    { GL_ONE_MINUS_SRC_ALPHA, GL_ONE,                 192 }  // B + 0.25F (alpha = 0.75)
};

void SetSTP(int mode)
{
    if (!_STPOn)
    {
        if (_blendOn)
        {
            glDisable(GL_BLEND);
            _blendOn = false;
        }

        _blendMode.alpha = 255;
        return;
    }

    _blendMode.alpha = STPModes[mode].alpha;

    if (!_blendOn)
    {
        glEnable(GL_BLEND);
        _blendOn = true;
    }

    if (STPModes[mode].srcFac != _blendMode.srcFac ||
        STPModes[mode].dstFac != _blendMode.dstFac)
    {
        if (glBlendEquation == NULL)
        {
            _blendMode.srcFac = STPModes[mode].srcFac;
            _blendMode.dstFac = STPModes[mode].dstFac;
            glBlendFunc(_blendMode.srcFac, _blendMode.dstFac);
        }
        else if (STPModes[mode].dstFac != GL_ONE_MINUS_SRC_COLOR)
        {
            if (_blendMode.dstFac == GL_ONE_MINUS_SRC_COLOR)
                glBlendEquation(GL_FUNC_ADD);

            _blendMode.srcFac = STPModes[mode].srcFac;
            _blendMode.dstFac = STPModes[mode].dstFac;
            glBlendFunc(_blendMode.srcFac, _blendMode.dstFac);
        }
        else
        {
            glBlendEquation(GL_FUNC_REVERSE_SUBTRACT);
            _blendMode.srcFac = STPModes[mode].srcFac;
            _blendMode.dstFac = STPModes[mode].dstFac;
            glBlendFunc(GL_ONE, GL_ONE);
        }
    }
}

So it may be possible that way. Except that:

  • P.E.Op.S isn’t specific to the game I’m working on, so it’s not guaranteed to be a fine-tuned solution
  • I’m not even sure P.E.Op.S would use that chunk of code in my case, as it seems to go through different functions depending on the game
  • It still doesn’t tell me how to handle the STP flag of each color