I’ve come across a lot of talk about using OpenGL for 2D acceleration (Java 2D engine, ToonBoom Animate), but can’t find anything that explains the particulars.
How exactly do you accelerate drawing filled 2D Bézier curves with OpenGL? Does this mean that programs tessellate paths into triangle strips for each drawing operation? Or does the CPU do the rasterization, with OpenGL used only to composite the resulting 2D images?
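To be clear about what I mean by tessellation, here's a rough sketch of the CPU-side version (my own illustration, not taken from any particular library): flatten a cubic Bézier into line segments by recursive de Casteljau subdivision, then triangulate the resulting polygon so it can be submitted as triangles. The fan triangulation at the end assumes a convex outline.

```python
def split_cubic(p0, p1, p2, p3, t=0.5):
    """Split a cubic Bezier at parameter t via de Casteljau subdivision."""
    lerp = lambda a, b: ((1 - t) * a[0] + t * b[0], (1 - t) * a[1] + t * b[1])
    p01, p12, p23 = lerp(p0, p1), lerp(p1, p2), lerp(p2, p3)
    p012, p123 = lerp(p01, p12), lerp(p12, p23)
    mid = lerp(p012, p123)
    return (p0, p01, p012, mid), (mid, p123, p23, p3)

def is_flat(p0, p1, p2, p3, tol=0.25):
    """Are the control points close enough to the chord p0-p3 to stop?"""
    dx, dy = p3[0] - p0[0], p3[1] - p0[1]
    d1 = abs((p1[0] - p0[0]) * dy - (p1[1] - p0[1]) * dx)
    d2 = abs((p2[0] - p0[0]) * dy - (p2[1] - p0[1]) * dx)
    return (d1 + d2) ** 2 <= tol * (dx * dx + dy * dy)

def flatten(p0, p1, p2, p3, tol=0.25):
    """Return a polyline approximating the curve, subdividing until flat."""
    if is_flat(p0, p1, p2, p3, tol):
        return [p0, p3]
    left_half, right_half = split_cubic(p0, p1, p2, p3)
    left = flatten(*left_half, tol)
    return left[:-1] + flatten(*right_half, tol)

def fan_triangles(poly):
    """Naive fan triangulation -- only valid for a convex outline."""
    return [(poly[0], poly[i], poly[i + 1]) for i in range(1, len(poly) - 1)]
```

Real path renderers would need a proper constrained triangulation (something like the GLU tessellator) for concave paths with holes, but this captures the basic flatten-then-triangulate idea I'm asking about.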
I can’t think of any way to use OpenGL in a regular 2D pipeline that doesn’t involve swapping huge amounts of data to and from video memory every frame (which I presume would negate the GPU advantage). Does anyone know how this is supposed to work?
Most modern operating systems already use a 3D API to composite their default desktop, and Flash has been doing hardware-accelerated 2D vector graphics for a long time, so from one perspective you can treat this as an already solved problem that you don’t need to worry about.
For your specific example, I would expect a geometry shader to be the optimal modern approach: you submit only the curve’s control points, and the shader expands each segment into triangles on the GPU. That keeps the per-frame data submission low and moves the curve evaluation into hardware, so it’s not as far-fetched as it might sound.
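Concretely, one well-known trick for the filled-curve case (Loop and Blinn’s technique, described in GPU Gems 3, “Rendering Vector Art on the GPU”) avoids tessellating the curve at all: each quadratic Bézier segment is drawn as a single triangle whose three vertices carry the canonical coordinates (0,0), (0.5,0), (1,1), and the fragment shader keeps a pixel exactly when the interpolated (u,v) satisfies u² − v ≤ 0. Here’s a CPU-side Python sketch of that per-fragment test, mimicking what the rasterizer and shader would do:

```python
def interpolate_uv(bary):
    """Interpolate the canonical (u, v) coordinates across the triangle
    from barycentric weights, the way the rasterizer would."""
    corners = [(0.0, 0.0), (0.5, 0.0), (1.0, 1.0)]
    u = sum(w * c[0] for w, c in zip(bary, corners))
    v = sum(w * c[1] for w, c in zip(bary, corners))
    return u, v

def inside_quadratic(u, v):
    """The fragment-shader test: keep the pixel iff u^2 - v <= 0.
    Points exactly on the curve satisfy u^2 == v (e.g. at parameter t,
    u interpolates to t and v to t^2)."""
    return u * u - v <= 0.0
```

The appeal is that the GPU evaluates the curve per pixel with one multiply and a compare, so the curve stays resolution-independent with no re-tessellation when you zoom; cubics use a similar but more involved implicit form.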
All right, but how would it actually work? I’m playing around with some ideas for mixing 2D and 3D for a project, and it would be interesting to know exactly how much of this I can offload to the GPU.
Writing one shader that implements one 2D effect is pretty easy. Writing a tile-based pipeline that starts with Bézier paths and can apply layers of transforms and filters to them (like an SVG scene graph) is not. Knowing the tips and tricks the professionals use to put 2D onscreen would help me design my own pipelines.