Working on user position tracking in visionOS within an Immersive Space. Any insights or tips on how to approach this? The docs seem elusive at the moment. I searched and found queryPose, but Xcode throws an error.
import SwiftUI
import RealityKit
import RealityKitContent
import ARKit

struct ImmersiveView: View {
    private let attachmentID = "viewID"

    var body: some View {
        RealityView { content, attachments in
            if let fixedScene = try? await Entity(named: "ImmersiveScene",
                                                  in: realityKitContentBundle) {
                // Created but never run, so they provide no tracking data here.
                let wtp = WorldTrackingProvider()
                let session = ARKitSession()

                // Anchor the scene to the user's head with continuous tracking.
                let anchor = AnchorEntity(.head)
                anchor.anchoring.trackingMode = .continuous
                anchor.name = "Attachments"
                fixedScene.setParent(anchor)
                content.add(anchor)

                if let sceneAttachment = attachments.entity(for: attachmentID) {
                    fixedScene.addChild(sceneAttachment)
                }

                // Image-based lighting for the scene.
                guard let env = try? await EnvironmentResource(named: "Directional")
                else { return }

                let iblComponent = ImageBasedLightComponent(source: .single(env),
                                                            intensityExponent: 10)
                fixedScene.components[ImageBasedLightComponent.self] = iblComponent
                fixedScene.components.set(
                    ImageBasedLightReceiverComponent(imageBasedLight: fixedScene))

                // Offset the scene relative to the head anchor.
                fixedScene.transform.translation = [0.25, 0.35, -1.0]
            }
        } attachments: {
            Attachment(id: attachmentID) {
                // SwiftUI content for the attachment goes here.
            }
        }
    }
}
visionOS Camera Transform

Since the transform matrix of AnchorEntity(.head) is currently hidden in visionOS, use the DeviceAnchor object from the ARKit framework. For that, run an ARKitSession object with a WorldTrackingProvider, query a DeviceAnchor, then read its originFromAnchorTransform instance property to get the 4x4 transform matrix from the device to the origin coordinate system. Now you are able to register the values of the camera anchor's transform matrix and transfer them to any entity in your scene using Timer updates (here 10 times per second).
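
Here is a minimal sketch of that approach. Only ARKitSession, WorldTrackingProvider, queryDeviceAnchor(atTimestamp:), DeviceAnchor, and originFromAnchorTransform come from the frameworks (note there is no queryPose, which is likely the error from the question); the view name, the sphere entity, and the 0.1-second interval are illustrative assumptions.

import SwiftUI
import RealityKit
import ARKit
import QuartzCore
import simd

struct TrackingView: View {
    private let session = ARKitSession()
    private let worldTracking = WorldTrackingProvider()

    // Hypothetical entity that will follow the camera transform.
    @State private var tracked = ModelEntity(mesh: .generateSphere(radius: 0.05))

    // Fires 10 times per second, matching the update rate described above.
    private let timer = Timer.publish(every: 0.1,
                                      on: .main,
                                      in: .common).autoconnect()

    var body: some View {
        RealityView { content in
            content.add(tracked)
        }
        .task {
            // The provider must be running before it can be queried.
            try? await session.run([worldTracking])
        }
        .onReceive(timer) { _ in
            // Query the device (head) anchor at the current media time.
            guard let deviceAnchor = worldTracking.queryDeviceAnchor(
                atTimestamp: CACurrentMediaTime())
            else { return }

            // 4x4 transform from device space to the origin coordinate system.
            let originFromDevice = deviceAnchor.originFromAnchorTransform

            // Place the entity 1 m in front of the device along its
            // forward (-Z) axis so it stays in view.
            var deviceFromEntity = matrix_identity_float4x4
            deviceFromEntity.columns.3.z = -1.0
            tracked.transform = Transform(matrix: originFromDevice * deviceFromEntity)
        }
    }
}

The view must be presented inside an ImmersiveSpace; queryDeviceAnchor(atTimestamp:) returns nil until the provider is running, so the guard keeps the update loop safe.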