I'm trying to use the attitude given by CMHeadphoneMotionManager to drive a camera inside an SCNView (SceneKit). If I'm not mistaken, the two use different reference systems, so directly initialising a float4x4 matrix like the one below would not work without some permutation or change (the axes do not match between CoreMotion and SceneKit).
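For reference, this is the kind of direct initialisation I mean — a sketch that simply lifts the 3×3 rotation matrix from the attitude into a simd_float4x4, column by column (the helper name and the use of deviceMotion.attitude.rotationMatrix are just my illustration):

```swift
import CoreMotion
import simd

// Naive attempt: copy the 3x3 CMRotationMatrix straight into a 4x4 transform.
// Taking m12 to be row 1, column 2, each simd column below is built from one
// column of the CoreMotion matrix.
func float4x4(from m: CMRotationMatrix) -> simd_float4x4 {
    simd_float4x4(
        simd_float4(Float(m.m11), Float(m.m21), Float(m.m31), 0),
        simd_float4(Float(m.m12), Float(m.m22), Float(m.m32), 0),
        simd_float4(Float(m.m13), Float(m.m23), Float(m.m33), 0),
        simd_float4(0, 0, 0, 1)
    )
}
```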
To add some context (and I might well be wrong, because I couldn't find the exact reference system documented and had to run some tests to find out): the coordinate system used as reference by the attitude given by your AirPods (or other motion-enabled headphones) appears to work like this, with positive Y pointing forward out of your nose, positive Z pointing up against gravity through your head, and positive X pointing to your right (with the heading fixed at a random direction picked when you start capturing motion):
However, the reference coordinate system for SceneKit has positive Y pointing upwards and positive Z pointing backwards (assuming you are the camera, which looks towards negative Z). The X axis seems to be the same:
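Putting the two descriptions together, my understanding of how a single direction vector maps between the frames is roughly this (my own reading from the tests above, not from any documentation); what I can't work out is how to do the same for the whole rotation matrix:

```swift
import simd

// My assumption from the sketches: X is shared, CoreMotion's "up" (Z) becomes
// SceneKit's Y, and CoreMotion's "forward" (Y) becomes SceneKit's -Z.
func sceneKitVector(fromCoreMotion v: simd_float3) -> simd_float3 {
    simd_float3(v.x, v.z, -v.y)
}
```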
My linear algebra knowledge at this level is rather limited, and even though I've been trying for a few days, I don't know how to convert the rotation matrix given by HeadphoneMotion so it can be used as the transform of a SceneKit camera. That is the question. (Ideally paired with the concepts behind the required permutation of columns, so I can learn how it's done.)
Also, I would like to avoid using eulerAngles or quaternions at this point.

After digging a bit more into matrices, I came up with a solution. I'm not sure if it's the right way, and not sure of the reasoning behind it (if someone knows and wants to elaborate, please feel free).
Basically, assuming the schemes I drew of both coordinate systems are correct (especially the CoreMotion one), we can calculate a rotation matrix T that separates the two systems; that is, a matrix that turns one system into the other. By looking at those schemes, we know the SceneKit coordinate system is 90º apart (about the X axis) from the CoreMotion coordinate system, so the rotation matrix T would be as follows (in Swift, column by column):
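(The exact matrix from my original post isn't shown above; the following is a sketch of what it could look like, under the assumption that T re-expresses a vector given in SceneKit coordinates in CoreMotion coordinates, i.e. a 90º rotation about the shared X axis:)

```swift
import simd

// Columns, left to right, are the images of the X, Y, Z and W basis vectors.
let T = simd_float4x4(
    simd_float4(1,  0, 0, 0),   // X stays X
    simd_float4(0,  0, 1, 0),   // SceneKit +Y (up) -> CoreMotion +Z (up)
    simd_float4(0, -1, 0, 0),   // SceneKit +Z (backward) -> CoreMotion -Y (backward)
    simd_float4(0,  0, 0, 1)    // homogeneous coordinate untouched
)
```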
Knowing this, we can obtain R', which is just the initial rotation matrix R but expressed in the target coordinate system, by doing this (and this is the key): R' = T⁻¹ * R * T. In Swift, that would be:
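Again a sketch rather than my exact code; rotation here stands for the simd_float4x4 built from the attitude's rotation matrix (as in the snippet at the top of the question):

```swift
// Conjugate the CoreMotion rotation by the change-of-basis matrix so the same
// rotation is expressed in SceneKit's coordinate system.
let newRotation = T.inverse * rotation * T
```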
Just like that, newRotation can be used as the transform of the SceneKit camera, and it works.
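For completeness, a usage sketch, where cameraNode is assumed to be the SCNNode that holds the camera:

```swift
import SceneKit

// Apply the converted attitude to the camera node on each motion update.
cameraNode.simdTransform = newRotation
```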