I am testing one approach on various machines, and on some of them it causes weird behaviour: the app gets stuck, the image blinks, and sometimes the GPU hangs.
It’s a very simple app consisting of 2 threads, each with its own OpenGL context (the contexts share resources).
In the main thread I simply render one textured quad on screen.
In the 2nd thread I upload textures in a loop (it’s actually a pool of N textures that I update in a ring buffer fashion).
What I am trying to test is the CPU-GPU memory throughput and see if this kind of way of streaming images to the GPU makes sense.
On some machines this works well. But why might this be an issue on other GPUs?
It looks like something is overflowing, or a GPU queue is getting full and can’t process the upload requests. But shouldn’t glTexSubImage2D block until the texture is ready on the GPU?
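One thing I could try, in case the driver only queues the uploads and returns immediately, is throttling each slot with a fence so a texture is never overwritten before the GPU has consumed the previous upload. This is only a sketch under my own assumptions (GL 3.2+ context current on the uploader thread, loaded via something like GLAD; `POOL_SIZE`, `fences`, and `upload_slot` are names I made up), not a confirmed fix:

```c
#include <glad/glad.h>  /* or any loader that provides GL 3.2+ entry points */

#define POOL_SIZE 4  /* hypothetical pool size */

static GLsync fences[POOL_SIZE];  /* one fence per ring slot */

void upload_slot(int slot, GLuint tex, int w, int h, const void *pixels) {
    /* Before reusing a slot, wait until its previous upload has completed
       on the GPU, so the queue can never grow without bound. */
    if (fences[slot]) {
        glClientWaitSync(fences[slot], GL_SYNC_FLUSH_COMMANDS_BIT,
                         (GLuint64)1000000000);  /* 1 s timeout */
        glDeleteSync(fences[slot]);
    }

    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    /* Fence placed right after the upload tells us when it has finished. */
    fences[slot] = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
    glFlush();  /* ensure the commands are actually submitted to the GPU */
}
```

If the hangs go away with this in place, that would at least confirm the "queue getting full" theory on the problematic drivers.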
Actually, the worst behaviour I have seen was on an M1 Mac, but it’s not that important, because I will have a Metal port, so I guess it’s just a problem with the OpenGL emulation layer on M1.
But I also had some problems on a mobile AMD GPU in a small-form-factor NUC PC - I will check the exact model and driver version.
I don’t use the latest OpenGL version, but I will try it to see if it helps.
The thing is, I don’t even use the textures uploaded in the other thread. I just upload them, nothing more. The textured quad in the main thread uses a single texture created in that same thread.