How can I track the head position (relative to some absolute initial anchor position in space), or the head movement (like mouse movement), of a user wearing the Vision Pro with RealityKit?
My goal is twofold: to have other entities in the scene mimic the head movement, and to stream that data to physical hardware so it can mimic the movement as well (e.g. a camera). So I need to track the position and the angle the head is looking at, effectively getting the IMU-style pose data from the headset.
In visionOS 1.1, RealityKit's `AnchorEntity(.head)` can only help you attach models (with a desired offset) to the device's position in world space; the transform matrix of `AnchorEntity(.head)` is hidden by the framework's design. To expose Vision Pro's world transform, you need ARKit's `DeviceAnchor`. Here's the code: