I’m looking to adapt a CPU-based laser/plasma physics ray-tracing capability to use the RT cores found on the latest discrete GPUs. The current code runs on HPC clusters on 10–10k CPU cores across a high-speed InfiniBand network.
Laser light propagation uses ray tracing that is slightly different from the standard computer-graphics implementation. Rays are propagated through a tetrahedral or hexahedral 3D grid and can have parabolic trajectories (instead of straight lines). Ray–face (triangle or quadrilateral) intersections are required as each ray moves through the grid. The domain typically does not contain objects to reflect off, per se, but rather a gradient of plasma that curves each ray (which can deposit energy into the plasma as it goes).
How well (if at all) would Vulkan fit into this compute pipeline, since it doesn’t follow the typical game/cinematic use case? So far, the only way I’ve found to specifically target the GPU’s RT cores is to use a library like Vulkan or OptiX. I would prefer a cross-platform solution.
Hmm, GPU ray tracing is biased towards the requirements of typical computer-graphics applications. As such, my understanding is that the hardware primarily accelerates ray–triangle intersections and bounding volume hierarchy (BVH) traversal.
Without knowing anything about your domain, one (perhaps naive) approach could be to give the faces of your tetrahedral grid to the hardware to intersect rays with. At every surface intersection you would compute a new ray direction based on the local plasma properties, so each of your rays would only travel a very short distance per traced segment. Also, the hardware thrives on exploiting the coherence of (primary) rays, where rays launched close to each other tend to take similar paths, and I don’t know how true that is for your application.
Vulkan is not a library; it is a specification, implemented by graphics hardware vendors as part of their drivers, that provides a portable API. OptiX is a proprietary NVIDIA library that provides a somewhat higher-level API specifically for ray tracing; it is only available on NVIDIA GPUs.
It seems that some portion of the BVH traversal or intersection acceleration may help, but I would have to dig into some of the existing tutorials to see how well it would work in this domain. Unfortunately, rays can diverge considerably in some cases, too. I recently saw how OptiX can help accelerate some other compute-oriented workloads.
I’ll continue to dig around, but thanks for your response!
My first impression is that hardware-accelerated ray tracing won’t help a lot for your use case. Hardware-accelerated ray tracing, generally speaking, accelerates exactly two types of computations, which are:
Bounding volume hierarchy traversal (limited to AABBs and straight rays), and
Ray-triangle intersections (limited to straight rays, too).
So, if you say that you are dealing with parabolic trajectories, I don’t really think that hardware-accelerated ray tracing will help you a lot. Not even AABB–AABB intersections are supported (which I really hope will be added at some point in the future). Only AABB–straight-ray intersections are supported currently.
Any workload other than these two is not hardware-accelerated at the moment.
Not sure exactly what you mean by that, but if the engine/framework/tool you are using uses Vulkan internally to interface with the GPU, then it fits seamlessly into a “compute pipeline”, assuming that this means something computed on the GPU with compute shaders. The hardware-accelerated features mentioned above can be used even in compute shaders through VK_KHR_ray_query.
In terms of graphics APIs, it doesn’t get more cross-platform than Vulkan.