My use case is fairly straightforward. I need to create a collection of images, where each image, and the data that goes into creating it, is entirely independent of the others. I am using SkiaSharp to interface with EGL and OpenGL.
I make sure to create the EGL context in an independent worker thread, and from that thread I create a pbuffer that I use to generate bitmaps. It all works fine, but when I monitor the throughput I can see that the GPU is at most 40% utilized.
Ideally I’d like to parallelize the generation of the images so that the throughput is optimized and the GPU utilization is increased.
In each of two threads I do the following:
- Generate an EGLDisplay (eglGetDisplay, eglInitialize)
- Choose a configuration (eglChooseConfig)
- Create an off-screen rendering surface (eglCreatePbufferSurface)
- Create a rendering context (eglCreateContext)
- Make the EGLContext current (eglMakeCurrent)
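In code, the per-thread setup looks roughly like this. This is a C++ sketch of the steps above, not my exact code: the attribute lists (surface size, GLES2, RGBA8888) are illustrative and error handling is omitted.

```cpp
#include <EGL/egl.h>

// Per-thread EGL setup: display, config, pbuffer surface, context.
// Called once from each worker thread; error checks trimmed for brevity.
EGLContext CreateThreadContext(EGLDisplay* outDisplay, EGLSurface* outSurface) {
    EGLDisplay display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    eglInitialize(display, nullptr, nullptr);

    const EGLint configAttribs[] = {
        EGL_SURFACE_TYPE,    EGL_PBUFFER_BIT,
        EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
        EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8,
        EGL_BLUE_SIZE, 8, EGL_ALPHA_SIZE, 8,
        EGL_NONE
    };
    EGLConfig config;
    EGLint numConfigs = 0;
    eglChooseConfig(display, configAttribs, &config, 1, &numConfigs);

    const EGLint pbufferAttribs[] = { EGL_WIDTH, 1024, EGL_HEIGHT, 1024, EGL_NONE };
    *outSurface = eglCreatePbufferSurface(display, config, pbufferAttribs);

    const EGLint contextAttribs[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
    EGLContext context =
        eglCreateContext(display, config, EGL_NO_CONTEXT, contextAttribs);

    // Bind the context to the calling thread; a context can be current
    // on only one thread at a time.
    eglMakeCurrent(display, *outSurface, *outSurface, context);

    *outDisplay = display;
    return context;
}
```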
I then create a GRContext on that same thread so that SkiaSharp can use the EGLContext that is current there.
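For reference, the GRContext creation amounts to something like the following. In C#/SkiaSharp I call the equivalents of these (GRGlInterface.Create / GRContext.CreateGl); this sketch uses Skia's native C++ API to show what happens underneath:

```cpp
#include "include/gpu/GrDirectContext.h"
#include "include/gpu/gl/GrGLInterface.h"

// Must run on the worker thread, after eglMakeCurrent, because
// GrGLMakeNativeInterface resolves GL function pointers from the
// context that is current on the calling thread.
sk_sp<GrDirectContext> MakeSkiaContext() {
    sk_sp<const GrGLInterface> glInterface = GrGLMakeNativeInterface();
    return GrDirectContext::MakeGL(glInterface);
}
```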
When I run the app using two threads, each generates a valid context and surface, but as soon as both threads are issuing OpenGL commands I get memory access violations.
Am I correct in understanding that, as long as each thread creates its own context and surface, makes that context current on itself, and shares no state with the other thread, the OpenGL calls from the two threads will not affect each other?
I’ve hit a bit of a dead end with this. Any insights would be greatly appreciated.