Studying OpenGL optimization by exploiting GPU cores or threads

I’m studying GPU algorithm (rendering) optimization. For my study, I need to control GPU cores, or get access to individual GPU cores, from my GPU program. I use OpenGL for GPU programming, so I want to learn how to manually exploit the GPU’s multiple cores in my code.

Did you have a question? You didn’t really ask one. :wink:

If you had, it might have been:

  • How do I manually target work for individual cores of the GPU using OpenGL?

If so, take a skim through these tutorials:

In particular, note where these reference gl_WorkGroupID and gl_LocalInvocationID (as well as gl_GlobalInvocationID and gl_LocalInvocationIndex). These built-ins are the means by which you direct specific work to specific invocations (threads) within a thread block / workgroup in a GPU compute task; there’s a small sketch below.
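To make that concrete, here’s a minimal compute shader sketch. The buffer name `data`, the uniform `scale`, and the workgroup size of 64 are all illustrative choices, not anything from your post. Each invocation uses gl_GlobalInvocationID to pick the one buffer element it works on; which core actually executes each invocation is entirely up to the driver and hardware scheduler:

```glsl
#version 430

// One workgroup = 64 invocations along x (an arbitrary illustrative choice).
layout(local_size_x = 64) in;

// A runtime-sized SSBO holding the data to process ("data" is a made-up name).
layout(std430, binding = 0) buffer Data {
    float data[];
};

uniform float scale;

void main() {
    // Unique ID of this invocation across the whole dispatch. Per the GLSL spec:
    //   gl_GlobalInvocationID = gl_WorkGroupID * gl_WorkGroupSize + gl_LocalInvocationID
    uint i = gl_GlobalInvocationID.x;

    // The dispatch may be rounded up past the buffer length, so bounds-check.
    if (i < uint(data.length()))
        data[i] *= scale;
}
```

On the host side you’d launch this for N elements with something like glDispatchCompute((N + 63) / 64, 1, 1) after binding the program and buffer. Notice what you get to choose: how the work is *indexed*, not where it runs.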

This is about as close as you get to what I think you’re asking for. You don’t get addressability to specific cores within specific GPU compute units. But if you know much about how GPUs schedule work (lots of warps/wavefronts running in parallel on shared compute units, preempted on-the-fly as needed to hide memory access latency), then you might appreciate that you probably don’t really want this anyway.