I’m studying GPU algorithm (rendering) optimization. For my study, I need to control, or at least get access to, individual GPU cores from my GPU program. I use OpenGL for GPU programming, so I want to learn how to manually exploit the GPU’s multiple cores while programming it.
Did you have a question? You didn’t really ask one.
If you had, it might have been:
- How do I manually target work for individual cores of the GPU using OpenGL?
If so, take a skim through these tutorials:
- How to Use and Teach OpenGL Compute Shaders (Bailey)
- OpenGL 4.3 and Beyond (NVidia, Kilgard)
- It’s More Fun to Compute: An Introduction to Compute Shaders (Gerdelan)
- OpenGL Shading Language, ver. 4.60 (<-- The GLSL Spec)
In particular, note where these reference gl_LocalInvocationID (as well as gl_LocalInvocationIndex), as these provide the means for you to direct specific work to specific threads within a thread block / workgroup in your GPU compute task.
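To make the relationship between those two built-ins concrete, here’s a small Python sketch (not OpenGL itself) that reproduces how the GLSL spec derives gl_LocalInvocationIndex from gl_LocalInvocationID for a given local workgroup size; the workgroup size (4, 2, 1) is just an illustrative choice:

```python
# Per the GLSL spec, for a local workgroup of size (sx, sy, sz):
# gl_LocalInvocationIndex =
#     gl_LocalInvocationID.z * sx * sy +
#     gl_LocalInvocationID.y * sx +
#     gl_LocalInvocationID.x

def local_invocation_index(local_id, workgroup_size):
    """Flatten a 3D local invocation ID into the 1D index GLSL exposes."""
    x, y, z = local_id
    sx, sy, _sz = workgroup_size
    return z * sx * sy + y * sx + x

# Example: a shader declared with layout(local_size_x = 4, local_size_y = 2).
size = (4, 2, 1)
indices = [local_invocation_index((x, y, 0), size)
           for y in range(size[1]) for x in range(size[0])]
print(indices)  # one unique index per invocation in the workgroup: 0..7
```

Branching on these values inside the shader (e.g. `if (gl_LocalInvocationIndex == 0) { ... }`) is how you assign particular work to particular invocations within a workgroup.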
This is about as close as you get to what I think you’re asking for. You don’t get addressability to specific cores within specific GPU compute units. But if you know much about how GPUs schedule work (lots of warps/wavefronts running in parallel on shared compute units, swapped in and out on-the-fly to hide memory access latency), then you might appreciate that you probably don’t really want this anyway.