I’m interested in how I can optimally implement a particle system on OpenGL ES 2.0.
I’m using a texture atlas to encompass all the images and I was wondering which is better:
- Use a single texture bind for the atlas texture and render all particles as triangles using the atlas information to send the UVs to the shader?
- Create 'glTexSubImage2D’s for all atlas images and bind for each particle whilst rendering using GL_POINTS?
I’m unsure what the cost is to continuously bind using glTexSubImage2D knowing the parent texture isn’t changing. Can someone please help me to understand what is going on under the hood?
Also, I’m not 100% sure whether glTexSubImage2D happily deals with dimensions that are not powers of two. I can’t see anything about it in the ES 2.0 reference.
I don’t program mobiles, but an atlas is likely to be quicker if you can fit all your images into one texture, since you are making fewer OpenGL calls, and both the triangle and point vertices are likely to already be resident on the GPU, so there won’t be bus overhead for loading them. Using an atlas is a bit tricky if you want to do mipmapping.
bfg34hfgb, re your #1 choice: if you don’t lose anything in doing so, prefer solutions which result in fewer state changes (texture rebinds, texture subloads, etc.) – fewer state changes mean less overall CPU work and generally faster rendering.
So I would prefer texture arrays if you have access to them (simpler sampling and better filtering support), and texture atlases if not. That said, benchmark against separate textures and verify that you actually get some benefit to justify the extra effort of using texture arrays or atlases. If you’re only binding a couple of times, they may not be that relevant to your overall performance.
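To make the single-bind atlas approach concrete: with a uniform grid atlas you bind the atlas texture once and compute each particle’s UV sub-rectangle from its cell index on the CPU. A minimal sketch in C, assuming a regular cols × rows grid atlas (the function name and layout are illustrative, not from this thread):

```c
#include <assert.h>
#include <math.h>

/* Compute the UV sub-rectangle for cell `index` in a cols x rows atlas.
 * Cells are numbered row-major, starting at (u0, v0) = (0, 0). */
void atlas_uv(int index, int cols, int rows,
              float *u0, float *v0, float *u1, float *v1)
{
    int col = index % cols;
    int row = index / cols;
    *u0 = (float)col / (float)cols;
    *v0 = (float)row / (float)rows;
    *u1 = (float)(col + 1) / (float)cols;
    *v1 = (float)(row + 1) / (float)rows;
}
```

Each particle’s quad then carries these four values as per-vertex UVs (or a single offset, if the shader adds the fixed cell size itself), and the whole system renders with one glBindTexture and one draw call.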
Create 'glTexSubImage2D’s for all atlas images and bind for each particle…to continuously bind using glTexSubImage2D…
I’m not sure what you’re trying to say here, but of course glTexSubImage2D doesn’t “bind” textures at all.
If your particle textures aren’t changing, you upload them to the GPU once and never upload them again. You can upload with the initial glTexImage2D or later by modifying the contents of an existing texture with glTexSubImage2D. However, there’s not much point in the latter if you know what the content of your textures is at startup (that is, unless you’re waiting to upload the texture data at render time for some reason and trying not to break frame).
Hello - yes, I think I have terminology problems and some newbie misunderstandings of OpenGL. I do have good rendering experience with games consoles, but it’s been quite a while and I’m very rusty.
My goal was to get the GPU to do as much work as possible, but ES does seem to restrict what I initially had in mind. I would have liked to send only a list of centre points and have the shader iterate 6 times to build a quad from a single input vertex, but I’m not sure what the OpenGL equivalent is, or whether it exists in ES 2.0.
I mistakenly went down the path of glTexSubImage2D because I couldn’t think of any other way to render using GL_POINTS and customise generic UV offsets into my atlas in the shader. It seems like GL_TRIANGLES is the way to go, building the sprite quads on the CPU.
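For the record, the CPU-side expansion is straightforward: each particle centre becomes six vertices (two triangles sharing a diagonal). A rough sketch, assuming 2D positions and a square half-size (the names are mine, not from this thread):

```c
#include <assert.h>

/* Expand one particle centre (cx, cy) with half-size h into two
 * triangles (6 vertices = 12 floats) written at `out`.
 * Triangle 1: bottom-left, bottom-right, top-right.
 * Triangle 2: bottom-left, top-right, top-left. */
void expand_particle(float cx, float cy, float h, float *out)
{
    float x0 = cx - h, y0 = cy - h;
    float x1 = cx + h, y1 = cy + h;
    float quad[12] = {
        x0, y0,  x1, y0,  x1, y1,   /* first triangle  */
        x0, y0,  x1, y1,  x0, y1    /* second triangle */
    };
    for (int i = 0; i < 12; ++i)
        out[i] = quad[i];
}
```

Run this over the particle list each frame (interleaving the atlas UVs per vertex), upload the buffer, and draw everything with a single glDrawArrays(GL_TRIANGLES, …) against the bound atlas texture.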
However, you have answered my question and I’ll continue on my way… Much thanks.
have the shader iterate 6 times to build a quad from a single
The current generation of mobile devices does not support geometry or tessellation shader stages. I believe this will change in the near future, but of course only for newer devices.
Thanks for the clarification.