Hi folks! First things first, I'm a hobbyist developer working with a small team whose goal is to port a PS1 game to PC. Despite being quite new to OpenGL, the journey went fine until we had to deal with semi-transparency… In a nutshell, I'm struggling to find a decent solution that is both accurate in terms of PS1 graphics and not a terrible mess regarding OpenGL standards. Let me clarify that we work in C++, currently using SDL2 and GLEW as the loading library.
The game we're working on is a Zelda-like 2D game using 4bpp indexed textures: each byte from a texture holds two pixels, one per nibble, getting their colors from a 16-color palette. Also, the color format is 16 bits per color, which means each RGB channel is encoded using 5 bits, with the remaining bit used as a semi-transparency (STP) flag.
Like OpenGL, PS1 graphics work with triangles: a rectangular area from a texture sheet is used to draw a quad onto the framebuffer. The catch is that the semi-transparency system can be turned on or off before drawing each sprite.
The PS1 can choose among 4 modes of blending:
0.5f * B + 0.5f * F // Mean
1.0f * B + 1.0f * F // Additive
1.0f * B - 1.0f * F // Subtractive
1.0f * B + 0.25f * F // WTF is this?
As you probably guessed, the main issue is that each new pixel may or may not need to be blended with the one already in the framebuffer, depending on the STP flag of its color and on whether the sprite's blending is enabled.
Ideally I'd be able to fine-tune these modes from inside my fragment shader. The problem is that I get read-write artifacts whenever I sample the framebuffer's texture for the blending while also rendering to it (a feedback loop, which I understand is undefined behavior in OpenGL).
I can share parts of my actual code if needed.
Thanks in advance!