Finally, after moving through many languages/frameworks, I decided to implement my idea using only OpenGL.
My idea is a simple engine for 2.5D games (like the old Alone in the Dark, Resident Evil, etc…), with:
- 2D backgrounds
- 3D actors/objects
- static cameras
Some months ago I completed my own 2.5D engine (actually a framework, with scene scripting and more) using jMonkeyEngine. It works well, but I have always wanted to create a smaller, faster version of this engine in pure OpenGL.
What I do in my current engine (jMonkeyEngine) is:
- Create a Blender scene (high detail) and render it in separate layers.
- Create a very low-poly Blender scene used only for collision, cameras and floor sectors.
- Import the low-poly scene and the camera settings.
Here each layer has a fixed depth, and each actor receives its depth from the sector it stands in. Actors and
layers are then enqueued and rendered in the right order, with depth testing disabled.
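The ordering described above is essentially a painter's algorithm. A minimal sketch of the idea (class and field names are purely illustrative, not from my actual engine):

```python
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    depth: float          # fixed depth assigned to the background layer

@dataclass
class Sector:
    depth: float          # depth assigned to actors standing in this sector

@dataclass
class Actor:
    name: str
    sector: Sector

def draw_order(layers, actors):
    """Painter's algorithm: return layers and actors sorted far-to-near,
    to be drawn in that order with depth testing disabled."""
    items = [(layer.depth, layer) for layer in layers]
    items += [(actor.sector.depth, actor) for actor in actors]
    # Larger depth = farther from the camera (an assumed convention),
    # so it must be drawn first and get overdrawn by nearer items.
    items.sort(key=lambda pair: pair[0], reverse=True)
    return [obj for _, obj in items]
```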
I tried to recreate this prototype in OpenGL and it works; it was simpler than I thought.
Now… this method works well for 2.5D, and I think it is the old-school way to do it, but now I would like to implement it in a different (and more modern) way. My idea is:
- Render only one image from Blender (to use as the background), NOT separate layers.
- Render, from the same camera, a grayscale image to use as a depth mask.
- Keep using the low-poly scene for collisions and cameras (cameras I have already tested in OpenGL and they work; collisions I will think about later).
The question is… how to do it? The flow logic, I think, is simple (correct me if I'm wrong):
- Render a textured quad with the background
- Fill the z-buffer from the grayscale depth mask.
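For what it's worth, the two steps can be collapsed into one full-screen pass that writes both color and depth. Here is a hedged sketch of the shaders (GLSL embedded as Python strings; all uniform names are my own, and I am assuming the mask value can be written to gl_FragDepth directly, i.e. it already stores window-space depth in [0, 1]):

```python
# One full-screen pass that writes the background color AND fills the
# z-buffer from the depth mask. Uniform names (uBackground, uDepthMask)
# are placeholders; compile these with your usual shader loader.

FULLSCREEN_VS = """
#version 330 core
layout(location = 0) in vec2 aPos;   // clip-space quad corners, in [-1, 1]
out vec2 vUV;
void main() {
    vUV = aPos * 0.5 + 0.5;          // no MVP: the quad IS the screen
    gl_Position = vec4(aPos, 0.0, 1.0);
}
"""

FULLSCREEN_FS = """
#version 330 core
in vec2 vUV;
out vec4 fragColor;
uniform sampler2D uBackground;   // Blender color render
uniform sampler2D uDepthMask;    // Blender grayscale depth render
void main() {
    fragColor = texture(uBackground, vUV);
    // Writing gl_FragDepth replaces the fixed-function depth for this
    // fragment. This assumes the mask already encodes window-space
    // depth in [0, 1]; if it stores linear distance, remap it first.
    gl_FragDepth = texture(uDepthMask, vUV).r;
}
"""
```

Draw this pass first each frame (clear the depth buffer before it, or use glDepthFunc(GL_ALWAYS) with glDepthMask(GL_TRUE)), then render the 3D actors normally with depth testing enabled.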
My problem is with the second step. As far as I understand (I have searched for a month for a tutorial on this, with no results),
there are two ways to do it (again, correct me if I'm wrong):
- Draw a quad in front of the camera (so no MVP transform for it) and, in the vertex shader, displace its vertices according to the depth texture.
- Fill the z-buffer directly with depth values from the texture (in the fragment shader?).
I tried the first solution without success, but if possible I would prefer to go with the second one. The problem is…
how to do it? And how do I adjust the depth values according to the near/far planes? Is it possible to do this in a smart and flexible way?
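On the near/far question, here is one way I understand it: if the grayscale mask stores eye-space distance linearly (0 = near plane, 1 = far plane — an encoding I am assuming, roughly what a normalized Blender Z/mist pass gives), then the value to write into the z-buffer is the usual non-linear perspective depth. A sketch of the conversion:

```python
def mask_to_window_depth(mask_value, near, far):
    """Convert a grayscale mask sample (assumed linear in eye-space
    distance: 0 = near plane, 1 = far plane) into the non-linear
    window-space depth a perspective z-buffer expects.
    Mirrors what a standard perspective projection matrix produces,
    assuming the default glDepthRange(0, 1)."""
    d = near + mask_value * (far - near)              # eye-space distance
    ndc = ((far + near) / (far - near)
           - (2.0 * far * near) / ((far - near) * d)) # NDC z, in [-1, 1]
    return ndc * 0.5 + 0.5                            # window z, in [0, 1]
```

In practice this formula would live in the fragment shader (same expression in GLSL, with near/far passed as uniforms), so that gl_FragDepth is consistent with the projection matrix used to render the 3D actors.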
I'm still studying OpenGL, so I apologize in advance for any stupid questions.
I hope everything is clear.
Thank you very much