Does GLM support Arm Neon for matrix multiplications?

Greetings,

I’m trying to compile a GLM-based vector transform function that converts vertices to screen space. The compiler, an AArch64 g++, does inline the matrix code, but it will not use NEON instructions. I’ve tried everything from -O3 to the GLM_FORCE_NEON define.

Any ideas?

Regards

// Transforms the member WorldPosition into the member ScreenPosition.
void
transform( ViewData const & viewData )
{
   // World -> clip space, then perspective divide.
   glm::dvec4 clipPosition = viewData.viewProjMat * glm::dvec4( WorldPosition, 1.0 );
   if ( clipPosition.w > 0.0 ) clipPosition /= clipPosition.w;

   // NDC [-1, 1] -> screen [0, scale]; z kept as depth.
   ScreenPosition.x = float32_t( ( clipPosition.x * 0.5 + 0.5 ) * viewData.scale.x );
   ScreenPosition.y = float32_t( ( clipPosition.y * 0.5 + 0.5 ) * viewData.scale.y );
   ScreenPosition.z = float32_t( clipPosition.z );
}
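
For what it’s worth, this is roughly how I’ve been trying to enable it (a minimal sketch; I’m assuming GLM_FORCE_NEON has to be visible before the first GLM include, and I’m not certain whether GLM_FORCE_DEFAULT_ALIGNED_GENTYPES is also needed for the SIMD paths to kick in):

// Sketch: the define has to precede any GLM header.
#define GLM_FORCE_NEON
// #define GLM_FORCE_DEFAULT_ALIGNED_GENTYPES   // possibly also needed; unverified
#include <glm/glm.hpp>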

GLM does have some NEON support; see glm/simd/neon.h and the various *_simd.inl files.

The first thing I’d try is compiling the file in question with “-E -dM” added, to see how the various macros end up defined, and then with just “-E” to see what’s actually being fed to the compiler.
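
If it helps, a small check in the source can also confirm which path GLM selected. This is just a sketch, assuming the GLM_ARCH / GLM_ARCH_NEON_BIT macros that recent GLM releases define in detail/setup.hpp:

#define GLM_FORCE_NEON               // the define under test
#include <glm/glm.hpp>

// Assumption: GLM_ARCH is a bitmask and GLM_ARCH_NEON_BIT marks the NEON path.
#if defined(GLM_ARCH) && defined(GLM_ARCH_NEON_BIT) && (GLM_ARCH & GLM_ARCH_NEON_BIT)
#pragma message("GLM: NEON code paths enabled")
#else
#pragma message("GLM: pure C++ code paths")
#endif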

I tried several flags, and the compiler output does include some NEON intrinsics, but not for the 4x4 matrix-vector multiplication.
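
One thing I still need to rule out (an assumption on my part, not something I’ve confirmed): the intrinsics in glm/simd/neon.h appear to operate on float32x4_t, i.e. single-precision lanes, while my transform uses dvec4/dmat4. A single-precision variant of the same transform should make it easy to compare what the compiler emits:

// Sketch only: the same transform with glm::mat4 / glm::vec4 (float) instead of
// the double-precision types used above, for comparing the generated code.
#define GLM_FORCE_NEON
#include <glm/glm.hpp>

glm::vec3 transformF( glm::mat4 const & viewProjMat,
                      glm::vec3 const & worldPosition,
                      glm::vec2 const & scale )
{
    // Clip space, then perspective divide.
    glm::vec4 clipPosition = viewProjMat * glm::vec4( worldPosition, 1.0f );
    if ( clipPosition.w > 0.0f ) clipPosition /= clipPosition.w;

    // NDC [-1, 1] -> screen [0, scale]; z kept as depth.
    return glm::vec3( ( clipPosition.x * 0.5f + 0.5f ) * scale.x,
                      ( clipPosition.y * 0.5f + 0.5f ) * scale.y,
                      clipPosition.z );
}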

Is there any way to contact the GLM authors on GitHub?

I assume christophe lunarg is Christophe Riccio, right? Does he have an account here?

Used to be imported_Groovounet and Groovounet IIRC.

However, his last post under imported_Groovounet was 3 years ago, so he may or may not be monitoring site mentions here (e.g. via e-mail relay). You can try.


Thank you.
@imported_Groovounet Can you help me with that? Regards.
