I have a 3D interactive globe built with SceneKit where countries are represented with dots. The function below takes a position and animates the camera to it.
If the user does not interact with the globe, then I am able to continuously call the function and animate the camera to the new position.
However, if the user performs any gesture on the scene, then the camera animation doesn't work.
A solution found in a different SO thread (linked below) used the line sceneView.pointOfView = cameraNode at the beginning of the function.
This did solve the issue of the camera not animating after a gesture.
However, this line causes the globe to reset to its original position before animating. I have been trying to figure out a way to bypass this scene reset, but have had no luck.
I assume performing a gesture on the globe creates a new point of view for the scene and overrides the camera's point of view. Therefore, setting the scene's point of view back to the camera before the animation resolves the issue.
import Foundation
import SceneKit
import CoreImage
import SwiftUI
import MapKit
public typealias GenericController = UIViewController

public class GlobeViewController: GenericController {
    var nodePos: CGPoint? = nil
    public var earthNode: SCNNode!
    private var sceneView: SCNView!
    private var cameraNode: SCNNode!
    private var earthRadius: Double // stored property assigned by both initializers below
    private var dotCount = 50000

    public init(earthRadius: Double) {
        self.earthRadius = earthRadius
        super.init(nibName: nil, bundle: nil)
    }

    public init(earthRadius: Double, dotCount: Int) {
        self.earthRadius = earthRadius
        self.dotCount = dotCount
        super.init(nibName: nil, bundle: nil)
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
    func centerCameraOnDot(dotPosition: SCNVector3) {
        sceneView.pointOfView = cameraNode // HERE RESETS

        let fixedDistance: Float = 5.0
        let newCameraPosition = dotPosition.normalized().scaled(to: fixedDistance)
        let moveAction = SCNAction.move(to: newCameraPosition, duration: 1.5)

        let constraint = SCNLookAtConstraint(target: earthNode)
        constraint.isGimbalLockEnabled = true

        sceneView.gestureRecognizers?.forEach { $0.isEnabled = false }

        SCNTransaction.begin()
        SCNTransaction.animationDuration = 1.5
        self.cameraNode.constraints = [constraint]
        self.cameraNode.runAction(moveAction) {
            DispatchQueue.main.async {
                self.sceneView.gestureRecognizers?.forEach { $0.isEnabled = true }
            }
        }
        SCNTransaction.commit()
    }
    public override func viewDidLoad() {
        super.viewDidLoad()
        setupScene()
        setupParticles()
        setupCamera()
        setupGlobe()
        setupDotGeometry()
    }

    private func setupScene() {
        let scene = SCNScene()
        sceneView = SCNView(frame: view.frame)
        sceneView.scene = scene
        sceneView.showsStatistics = true
        sceneView.backgroundColor = .clear
        sceneView.allowsCameraControl = true
        sceneView.isUserInteractionEnabled = true
        self.view.addSubview(sceneView)
    }

    private func setupParticles() {
        guard let stars = SCNParticleSystem(named: "StarsParticles.scnp", inDirectory: nil) else { return }
        stars.isLightingEnabled = false
        if sceneView != nil {
            sceneView.scene?.rootNode.addParticleSystem(stars)
        }
    }

    private func setupCamera() {
        self.cameraNode = SCNNode()
        cameraNode.camera = SCNCamera()
        cameraNode.position = SCNVector3(x: 0, y: 0, z: 5)
        sceneView.scene?.rootNode.addChildNode(cameraNode)
    }

    private func setupGlobe() {
        self.earthNode = EarthNode(radius: earthRadius, earthColor: earthColor, earthGlow: glowColor, earthReflection: reflectionColor)
        sceneView.scene?.rootNode.addChildNode(earthNode)
    }
    private func setupDotGeometry() {
        let textureMap = generateTextureMap(dots: dotCount, sphereRadius: CGFloat(earthRadius))
        let newYork = CLLocationCoordinate2D(latitude: 44.0682, longitude: -121.3153)
        let newYorkDot = closestDotPosition(to: newYork, in: textureMap)

        let dotColor = GenericColor(white: 1, alpha: 1)
        let oceanColor = GenericColor(cgColor: UIColor.systemRed.cgColor)
        let highlightColor = GenericColor(cgColor: UIColor.systemRed.cgColor)

        // threshold to determine if the pixel in earth-dark.jpg represents terrain
        // (0.03 represents rgb(7.65, 7.65, 7.65), which is almost black)
        let threshold: CGFloat = 0.03

        let dotGeometry = SCNSphere(radius: dotRadius)
        dotGeometry.firstMaterial?.diffuse.contents = dotColor
        dotGeometry.firstMaterial?.lightingModel = SCNMaterial.LightingModel.constant

        let highlightGeometry = SCNSphere(radius: dotRadius)
        highlightGeometry.firstMaterial?.diffuse.contents = highlightColor
        highlightGeometry.firstMaterial?.lightingModel = SCNMaterial.LightingModel.constant

        let oceanGeometry = SCNSphere(radius: dotRadius)
        oceanGeometry.firstMaterial?.diffuse.contents = oceanColor
        oceanGeometry.firstMaterial?.lightingModel = SCNMaterial.LightingModel.constant

        var positions = [SCNVector3]()
        var dotNodes = [SCNNode]()
        var highlightedNode: SCNNode? = nil

        // 0..<count avoids the crash that 0...count-1 causes on an empty map
        for i in 0..<textureMap.count {
            let u = textureMap[i].x
            let v = textureMap[i].y
            let pixelColor = self.getPixelColor(x: Int(u), y: Int(v))
            let isHighlight = u == newYorkDot.x && v == newYorkDot.y

            if isHighlight {
                let dotNode = SCNNode(geometry: highlightGeometry)
                dotNode.name = "NewYorkDot"
                dotNode.position = textureMap[i].position
                positions.append(dotNode.position)
                dotNodes.append(dotNode)
                print("myloc \(textureMap[i].position)")
                highlightedNode = dotNode
            } else if pixelColor.red < threshold && pixelColor.green < threshold && pixelColor.blue < threshold {
                let dotNode = SCNNode(geometry: dotGeometry)
                dotNode.name = "Other"
                dotNode.position = textureMap[i].position
                positions.append(dotNode.position)
                dotNodes.append(dotNode)
            }
        }

        DispatchQueue.main.async {
            let source = SCNGeometrySource(vertices: positions)
            let element = SCNGeometryElement(indices: [Int32](), primitiveType: .point)
            let pointCloud = SCNGeometry(sources: [source], elements: [element])
            let pointCloudNode = SCNNode(geometry: pointCloud)
            for dotNode in dotNodes {
                pointCloudNode.addChildNode(dotNode)
            }
            self.sceneView.scene?.rootNode.addChildNode(pointCloudNode)

            // performing gestures before this causes the bug
            DispatchQueue.main.asyncAfter(deadline: .now() + 5) {
                if let highlightedNode = highlightedNode {
                    self.centerCameraOnDot(dotPosition: highlightedNode.position)
                }
            }
        }
    }
}
When sceneView.pointOfView is set, the camera's position and orientation change immediately to the pointOfView node's transformation, which causes the observed reset.

Try to preserve the current camera transformation: before setting sceneView.pointOfView = cameraNode, store the current camera transformation, including its position, rotation, and any other properties relevant to your scene setup. Then, after setting the point of view, reapply the stored transformation to the camera. That should negate the resetting effect and maintain the continuity of the scene as seen by the user.
Your centerCameraOnDot function would then capture the current transform before reassigning pointOfView and reapply it afterward. See if that helps transition the camera to the new point of view without resetting the globe's position.
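A sketch of that version, assuming cameraNode is a direct child of the root node (as in setupCamera) and that normalized() and scaled(to:) are the same SCNVector3 helpers used in the question:

```swift
func centerCameraOnDot(dotPosition: SCNVector3) {
    // Capture the transform of whatever node is currently driving the view
    // (allowsCameraControl swaps in its own node after a gesture).
    let previousTransform = sceneView.pointOfView?.worldTransform

    sceneView.pointOfView = cameraNode

    // Reapply the stored transform so the globe does not visually reset.
    // cameraNode sits directly under the root node, so world == local here.
    if let transform = previousTransform {
        cameraNode.transform = transform
    }

    let fixedDistance: Float = 5.0
    let newCameraPosition = dotPosition.normalized().scaled(to: fixedDistance)
    let moveAction = SCNAction.move(to: newCameraPosition, duration: 1.5)

    let constraint = SCNLookAtConstraint(target: earthNode)
    constraint.isGimbalLockEnabled = true
    cameraNode.constraints = [constraint]

    sceneView.gestureRecognizers?.forEach { $0.isEnabled = false }
    cameraNode.runAction(moveAction) {
        DispatchQueue.main.async {
            self.sceneView.gestureRecognizers?.forEach { $0.isEnabled = true }
        }
    }
}
```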
Alternative approach: updating the camera node without altering pointOfView

Instead of directly manipulating the pointOfView property of sceneView, you can update the cameraNode's position and orientation based on user interactions. That approach involves intercepting user gestures and manually applying their transformations to the cameraNode. An outline of how you can implement this:

1. Add custom gesture recognizers to the sceneView, or utilize SceneKit's default gesture handling, to detect user interactions.
2. When a user interaction is detected, calculate the necessary transformations and apply them to the cameraNode. That keeps the cameraNode in sync with the user's perspective.
3. When moving the camera to a new position, animate the cameraNode's position and orientation directly, instead of reassigning sceneView.pointOfView.
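A minimal sketch of the gesture side of this, assuming allowsCameraControl is turned off so cameraNode remains the scene's only point of view (handlePan and the 0.005 sensitivity are illustrative, not from the question):

```swift
// Illustrative pan handler: converts a drag into a small orbit of the
// camera node around the globe's centre at the origin.
@objc func handlePan(_ gesture: UIPanGestureRecognizer) {
    let translation = gesture.translation(in: sceneView)
    let yaw = Float(translation.x) * 0.005   // horizontal drag -> rotate about y
    let pitch = Float(translation.y) * 0.005 // vertical drag -> rotate about x

    let orbit = SCNMatrix4Mult(SCNMatrix4MakeRotation(pitch, 1, 0, 0),
                               SCNMatrix4MakeRotation(yaw, 0, 1, 0))
    // Post-multiplying applies the orbit in world space, around the origin
    // where the globe sits.
    cameraNode.transform = SCNMatrix4Mult(cameraNode.transform, orbit)
    gesture.setTranslation(.zero, in: sceneView)
}
```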
That approach requires more manual handling of camera transformations but provides greater control over the camera's behavior in response to user interactions. It also avoids the issue of resetting the globe's position when changing the pointOfView.

Try also to add logs that track the camera's position and orientation before and after user interactions and when animating to a new position; that can help identify unexpected changes. And use SceneKit's debugging tools, such as showsStatistics or the debugOptions on the SCNView, to better understand the scene's state.
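For example, a small helper along these lines (logCameraState is an illustrative name, not an existing API):

```swift
// Prints which node is the current point of view and where it is.
func logCameraState(_ label: String) {
    guard let pov = sceneView.pointOfView else {
        print("\(label): no point of view")
        return
    }
    print("\(label): node=\(pov.name ?? "unnamed") position=\(pov.position) orientation=\(pov.orientation)")
}

// Optional SceneKit debug overlays:
// sceneView.debugOptions = [.showCameras, .showBoundingBoxes]
```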
Test each part of your gesture handling and camera animation code separately to isolate the cause of the issue.
Given the complexity of manually handling gestures, and the requirement not to alter the camera's transform property directly, you would need to consider other approaches that work within those constraints.
Since manually handling gestures is complex, one approach is to leverage SceneKit's default camera controls, i.e. the SCNView's allowsCameraControl property, to handle user interactions automatically. If it is already in use, you can look into ways to extend or customize its behavior to fit your needs.

Or: SceneKit offers various camera constraints that can be used to control camera behavior. For example, you could use SCNLookAtConstraint to keep the camera focused on a specific node (like the globe) while still allowing the user to orbit around it. That might help in maintaining consistent camera behavior after user interactions.

Or: if the issue is primarily the camera's state being overridden by user interactions, consider saving the camera's state before the user interacts and restoring it when needed. That involves storing the camera's position, orientation, and other relevant properties, then reapplying them before starting your animation.
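The constraint variant is short; a sketch, assuming the earthNode and cameraNode from the question:

```swift
// Keep the camera aimed at the globe however its position changes,
// whether by a gesture or by an animation.
let lookAt = SCNLookAtConstraint(target: earthNode)
lookAt.isGimbalLockEnabled = true // keep the horizon level while orbiting
cameraNode.constraints = [lookAt]
```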
Or: using SCNTransaction and animation blocks can provide more control over the camera animations. You can begin an SCNTransaction, set its completion block to re-enable user interaction, and perform the camera animation within the transaction. That might help in smoothly transitioning the camera without abrupt changes.

Note: there might be a timing issue where the camera animation is triggered before the scene has fully processed the user's last interaction. Introducing a slight delay before starting the camera animation can sometimes resolve such timing-related issues.
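The SCNTransaction variant described above might look like this, with newCameraPosition computed as in centerCameraOnDot:

```swift
SCNTransaction.begin()
SCNTransaction.animationDuration = 1.5
SCNTransaction.completionBlock = { [weak self] in
    // Re-enable gestures only after the implicit animation finishes.
    self?.sceneView.gestureRecognizers?.forEach { $0.isEnabled = true }
}
// Setting animatable properties inside the transaction animates them
// implicitly; no SCNAction is needed.
cameraNode.position = newCameraPosition
SCNTransaction.commit()
```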
If there is a possibility that the gesture recognizers are interfering with the camera animation, investigating their states right before the animation begins could provide insights. It is possible that a gesture recognizer is still active or in an unexpected state, which could affect the camera behavior.
But, as I mentioned before, adding extensive logging around the camera control and animation code can help identify any unexpected behaviors or states. Logging the camera's position, orientation, and the state of relevant properties before and after user interactions and animations can offer clues.