IDL GLSL/CUDA/OpenCL Image Scale/Rotate/Translate?

I’m interested in writing an extension for IDL that will let me feed 3D data cubes of 4096x4096xN (int16, int32, float32, float64) to a GPU, apply rotate/translate/scale transformations, and then save the cube back out in the same data type: 16-bit int, 32-bit int, 32-bit float, or 64-bit float.

Questions:
Worth doing? I.e., will the overhead of copying the data over to the GPU, applying the transform, and reading it back out negate any gains?

What toolkit should I use? Cuda? OpenCL? GLSL?

What stumbling blocks will I run into (linking into IDL)? Library linkage, symbol, and dependency problems jump to mind…

Will GLSL munge my data so that I can’t get the same data values out?
Or, how do I properly get back the same data type and values?

I forgot to mention that I’m interested in preserving data integrity as much as possible, which is why I’m worried about using GLSL/textures.

I think you could implement such a library with any of these technologies, and as far as I understand your needs, you shouldn’t worry about any data integrity issue with GLSL either. However, I would consider the following:

  1. CUDA - It is proprietary NVIDIA technology. It does in fact have more features than the other two, but I would not consider it if you want your library to work on any hardware that is not made by NVIDIA.

  2. OpenCL - I would recommend this, as it is a general-purpose compute API that works across most platforms and vendors (see the kernel sketch after this list).

  3. GLSL - While it would be possible to use OpenGL for this, considering that OpenGL is a rendering API with many features you most probably won’t need for your library, I would most probably not go with it.
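
To make the OpenCL option concrete, here is a minimal sketch of what the per-pixel work could look like. Everything in it (the kernel name, the flattened 2x3 inverse transform passed as scalars, the bilinear sampling of a single-channel float image) is illustrative only, not a prescription:

/* Each work-item computes one output pixel by inverse-mapping it
   through the affine transform and bilinearly sampling the source. */
__kernel void affine_transform(__global const float *src,
                               __global float *dst,
                               const int width,
                               const int height,
                               const float m00, const float m01, const float m02,
                               const float m10, const float m11, const float m12)
{
    int x = get_global_id(0);
    int y = get_global_id(1);
    if (x >= width || y >= height)
        return;

    /* inverse mapping: where in the source does this output pixel come from? */
    float sx = m00 * x + m01 * y + m02;
    float sy = m10 * x + m11 * y + m12;

    int x0 = (int)floor(sx), y0 = (int)floor(sy);
    float fx = sx - x0, fy = sy - y0;

    float v = 0.0f; /* pixels mapped from outside the source stay 0 */
    if (x0 >= 0 && y0 >= 0 && x0 + 1 < width && y0 + 1 < height) {
        float v00 = src[y0 * width + x0];
        float v01 = src[y0 * width + x0 + 1];
        float v10 = src[(y0 + 1) * width + x0];
        float v11 = src[(y0 + 1) * width + x0 + 1];
        v = mix(mix(v00, v01, fx), mix(v10, v11, fx), fy);
    }
    dst[y * width + x] = v;
}

Note the inverse mapping: looping over destination pixels and sampling the source avoids the holes that forward mapping would leave in the output. For the integer types you would either convert to float on upload or compile a kernel variant per type.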

About the overhead… Well, it is very difficult to predict whether the added horsepower of the GPU will outweigh the cost of copying the data back and forth. It depends on the computational cost of the transformations you would like to perform. If your transformations are arithmetically heavy, then moving the calculations to the GPU will most probably perform better, but you need to benchmark to know for sure.
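
If you go the OpenCL route, the benchmarking itself is cheap to set up with the built-in profiling events. A sketch, assuming a command queue created with CL_QUEUE_PROFILING_ENABLE and an already-built kernel:

cl_event ev;
cl_ulong t_start, t_end;
size_t global[2] = { 4096, 4096 };

/* enqueue one 4096x4096 launch and wait for it to finish */
clEnqueueNDRangeKernel(queue, kernel, 2, NULL, global, NULL, 0, NULL, &ev);
clWaitForEvents(1, &ev);

/* device timestamps are in nanoseconds */
clGetEventProfilingInfo(ev, CL_PROFILING_COMMAND_START, sizeof(t_start), &t_start, NULL);
clGetEventProfilingInfo(ev, CL_PROFILING_COMMAND_END, sizeof(t_end), &t_end, NULL);
printf("kernel time: %.3f ms\n", (t_end - t_start) * 1e-6);

Time the clEnqueueWriteBuffer/clEnqueueReadBuffer calls the same way, since the transfers are exactly the overhead you are worried about.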

OpenCL is a good solution here too, as OpenCL can also execute your calculations on the CPU. So you write your code once and then test whether using the GPU is worth it. If, for some reason, it is not, you can still run the OpenCL implementation of your transformations on the CPU, so you lose nothing.
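
Switching between the two is only a matter of which device type you request. A hypothetical helper:

#include <stdio.h>
#include <CL/cl.h>

/* try the GPU first, fall back to the CPU implementation */
cl_device_id pick_device(void)
{
    cl_platform_id platform;
    cl_device_id device;
    char name[256];

    clGetPlatformIDs(1, &platform, NULL);
    if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL)
            != CL_SUCCESS)
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &device, NULL);

    clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, NULL);
    printf("running on: %s\n", name);
    return device;
}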

Well, I had been thinking of doing it with OpenGL, rendering to a texture target and then reading that back, since the transform and render hardware is there and fast…

Anybody have recommendations for simple, sample OpenGL code for rendering a transformed (scaled & rotated) polygon/texture to a texture target and reading it back at full data resolution, say for 32-bit floats?

A lot of these data sets (FITS images) are 32-bit floats and 16- and 32-bit ints, with occasional 64-bit floats and 8-bit ints, used in scientific computing and analysis.

If I define a 2D float texture via:
glGenTextures(1, &textureId);
glBindTexture(GL_TEXTURE_2D, textureId);

/* note: GL_INTENSITY is an internal format only; the client-side format
   parameter must be GL_LUMINANCE (or GL_RED), not GL_INTENSITY */
glTexImage2D(GL_TEXTURE_2D, 0, GL_INTENSITY32F_ARB, 4096, 4096, 0, GL_LUMINANCE, GL_FLOAT, texData);

will it preserve all 32 bits? I.e., will the rotated/scaled/transformed “pixels” read back out as the same values (assuming a one-to-one mapping) without loss of precision?
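
From what I’ve read so far, the usual pattern for the read-back half is to attach a float texture to a framebuffer object, render into it, and glReadPixels with GL_FLOAT. My current (untested) sketch, assuming a current GL context plus the EXT_framebuffer_object and ARB_texture_float extensions, and GL_RGBA32F_ARB as the render target since single-channel float formats are often not color-renderable:

GLuint fbo, target;

glGenTextures(1, &target);
glBindTexture(GL_TEXTURE_2D, target);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F_ARB, 4096, 4096, 0,
             GL_RGBA, GL_FLOAT, NULL);

glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, target, 0);
if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) != GL_FRAMEBUFFER_COMPLETE_EXT) {
    /* format not renderable on this driver; need a fallback here */
}

glViewport(0, 0, 4096, 4096);
/* ... glRotatef/glScalef/glTranslatef, then draw the textured quad
   holding the source image ... */

/* read the result back at full 32-bit float precision */
float *result = (float *)malloc(4096 * 4096 * sizeof(float));
glReadPixels(0, 0, 4096, 4096, GL_RED, GL_FLOAT, result);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

As far as I can tell, the float texture formats themselves store genuine IEEE 32-bit floats, so values should survive upload and read-back unchanged; what does change values is the resampling. Keeping GL_NEAREST on the source texture gives back bit-exact texels wherever the mapping is one-to-one, while GL_LINEAR blends neighbours during any rotation.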

Most often the transformations will be applied to one image at a time, but occasionally to whole cubes…

Also, I have been doing some digging for sample OpenCL code and it isn’t very abundant…
