I'm following the Geospatial example from Google and created the 3D model mesh at the given coordinate successfully. Now I have implemented a hit test and get results, but I don't know how to detect whether it hit the mesh or not.
I have tried code like this, without success:
if (motionEvent != null) {
    // val hitResultList = frame.hitTestInstantPlacement(motionEvent!!.x, motionEvent!!.y, 2.0f)
    val hitResultList = frame.hitTest(motionEvent)
    val debug = hitResultList.toString()
    if (hitResultList.size > 0) {
        Log.d(TAG, debug)
        for (hit in hitResultList) {
            val trackable = hit.trackable
            Log.d(TAG, trackable.toString())
            if (hit.equals(virtualObjectMesh)) {
                Log.d(TAG, "tapped!")
            }
            if (hit.equals(earthAnchor)) {
                Log.d(TAG, "tapped!")
            }
        }
    }
    motionEvent = null
}
From what I can see in the documentation ("ARCore / Perform hit-tests in your Android app"), to detect if a HitResult hits a given Mesh object in ARCore or similar AR platforms, you would need to compare the Trackable obtained from the hit result with the instance of the Mesh or the object that you are interested in. However, directly comparing HitResult objects with your Mesh using equals is not the correct approach, as HitResult and your Mesh object are different types of objects. See "Comparing Objects in Java" by François Dupire, reviewed by Greg Martin.
Instead, you should check the type of the Trackable associated with each HitResult to see if it is of the type that you expect (for example, a Plane or a Point), and then, if your Mesh object is attached to an Anchor or has some identifiable characteristics, use those to determine if the hit test has intersected with your Mesh. Here, yourMeshAnchor should be the Anchor you used when placing your Mesh in the AR scene.
So, the exact method to identify if a hit test intersects your specific Mesh depends on how your Mesh is integrated into the AR scene and on the capabilities of the AR platform you are using. If your Mesh is a custom object not directly supported as a Trackable type in ARCore, you may need to implement additional logic to map between Trackable objects and your custom objects.
In ARCore, when you perform a hit test, the API returns HitResult objects that reference Trackable types like Plane and Point. These represent real-world surfaces and points that ARCore has detected. Anchors, however, are not directly returned as part of hit test results; instead, they are used to create stable positions in the world for placing objects.
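For example, if your mesh was placed with an anchor (yourMeshAnchor above), one approach is to compare each hit's pose against the anchor's pose and accept hits within some distance. A minimal Kotlin sketch of that distance check, with the pose coordinates pulled out as plain floats (in a real app they would come from hit.hitPose.tx()/ty()/tz() and yourMeshAnchor.pose.tx()/ty()/tz(); the threshold is an assumption you tune to your model's size):

```kotlin
import kotlin.math.sqrt

// Hypothetical helper: decide whether a hit pose landed close enough to the
// mesh anchor's pose to count as a tap on the mesh. Coordinates are in
// meters, in ARCore world space.
fun isHitNearAnchor(
    hitX: Float, hitY: Float, hitZ: Float,
    anchorX: Float, anchorY: Float, anchorZ: Float,
    thresholdMeters: Float = 0.1f  // assumed tolerance; tune per model
): Boolean {
    val dx = hitX - anchorX
    val dy = hitY - anchorY
    val dz = hitZ - anchorZ
    // Euclidean distance between hit point and anchor position.
    return sqrt(dx * dx + dy * dy + dz * dz) <= thresholdMeters
}
```

You would call this inside the loop over hitResultList, instead of hit.equals(...), which can never be true for unrelated types.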
When you have a custom OBJ model attached to a terrain anchor, the hit test will not directly indicate a hit on the "Anchor" itself but rather on the trackable surface (like a Plane) upon which the anchor and model are placed.
Considering your scenario with ARCore Geospatial, it is important to understand that Geospatial anchors position content at real-world coordinates, but detecting interactions with the content (like a custom model) attached to these anchors requires mapping hit test results to the anchored content.
In other words: the challenge is determining whether a hit test, initiated by a user's touch input, intersects with a custom object (e.g., a 3D model) that does not directly correspond to the Trackable types (Plane, Point) that ARCore's hit tests usually detect. The complete logic involves two main steps:
- Perform a hit test from the MotionEvent. Use the MotionEvent to perform a hit test against the AR scene, which returns a list of HitResult objects. These objects represent intersections with Trackable surfaces in the AR world.
- Check each hit against the model's bounds. Here, getModelBounds() is a hypothetical function that you would implement based on the specific dimensions and shape of your 3D model, and isHitWithinModelBounds(hitPose, modelAnchorPose, modelBounds) checks whether a given hit test result falls within the defined bounds of your model, considering the model's anchor pose as the reference point. The loop through hitResultList evaluates each hit test result to determine if it interacts with your model.
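Those two hypothetical helpers might be sketched like this in Kotlin. The bounds values and coordinate parameters are assumptions: in a real app the coordinates would come from hit.hitPose and the model anchor's pose, and the bounds from your mesh's actual vertex extents:

```kotlin
import kotlin.math.abs

// Hypothetical axis-aligned bounding box for the model, expressed in meters
// relative to the model's anchor.
data class ModelBounds(
    val halfWidth: Float,
    val halfHeight: Float,
    val halfDepth: Float
)

// Stand-in for getModelBounds(): in a real app you would compute these
// half-extents from the loaded mesh's geometry.
fun getModelBounds(): ModelBounds = ModelBounds(0.5f, 0.5f, 0.5f)

// Checks whether a hit position falls inside the model's box, using the
// anchor position as the reference point. With ARCore, the coordinates
// would be hit.hitPose.tx()/ty()/tz() and modelAnchor.pose.tx()/ty()/tz().
fun isHitWithinModelBounds(
    hitX: Float, hitY: Float, hitZ: Float,
    anchorX: Float, anchorY: Float, anchorZ: Float,
    bounds: ModelBounds
): Boolean {
    return abs(hitX - anchorX) <= bounds.halfWidth &&
        abs(hitY - anchorY) <= bounds.halfHeight &&
        abs(hitZ - anchorZ) <= bounds.halfDepth
}
```

Note this box check ignores the anchor's rotation; for a rotated model you would first transform the hit point into the anchor's local frame.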
The Earth class behaves differently from other Trackable types. As noted, the Earth class, which represents the planet in ARCore's Geospatial API, does not support hit testing directly. That means you cannot hit test against the Earth itself to determine if a user's interaction (e.g., a tap) intersects with a Geospatial anchor or the associated virtual content. That limitation necessitates a more manual approach to determining interactions with virtual objects placed via Geospatial anchors, relying on spatial mathematics rather than ARCore's built-in hit testing mechanisms.
Since Earth is a global trackable that represents the entire globe and supports placing anchors with latitude, longitude, and altitude, it does not support direct hit testing through Frame.hitTest(MotionEvent). That is logical, as hit testing is generally used to find intersections with detected planes, points, or other local features in the user's immediate environment. Geospatial anchors are tied to specific real-world coordinates but are not directly interactable through the standard hit test mechanism; instead, they are positioned relative to the Earth trackable.
Given this limitation, if you want to detect interactions with virtual objects placed at Geospatial anchor locations, you must implement a custom solution that involves:
- Determining the screen position of the Geospatial anchor: convert the anchor's 3D world position to a 2D screen position. That can be done using ARCore's Camera methods to project the anchor's world-space position to screen coordinates.
- Handling user input: when the user interacts with the screen (e.g., tapping), compare the 2D screen coordinates of the tap with the 2D screen coordinates of your Geospatial anchors' virtual objects.
- Implementing spatial mathematics: if the distance between the tap coordinates and the virtual object's screen coordinates is within a certain threshold, you can consider the tap as interacting with that virtual object.
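The projection in the first step can be done with plain matrix math. A sketch, assuming viewProj is the product of the camera's projection and view matrices (which ARCore exposes via Camera.getProjectionMatrix and Camera.getViewMatrix) in the usual OpenGL column-major layout, and (wx, wy, wz) is the anchor pose's translation:

```kotlin
// Projects a world-space position to screen (pixel) coordinates, or returns
// null when the point is behind the camera. viewProj is a 16-element
// column-major matrix: projectionMatrix * viewMatrix.
fun worldToScreen(
    viewProj: FloatArray,
    wx: Float, wy: Float, wz: Float,
    screenWidth: Float, screenHeight: Float
): Pair<Float, Float>? {
    // Multiply viewProj by the homogeneous point (wx, wy, wz, 1).
    val cx = viewProj[0] * wx + viewProj[4] * wy + viewProj[8] * wz + viewProj[12]
    val cy = viewProj[1] * wx + viewProj[5] * wy + viewProj[9] * wz + viewProj[13]
    val cw = viewProj[3] * wx + viewProj[7] * wy + viewProj[11] * wz + viewProj[15]
    if (cw <= 0f) return null  // behind the camera, no valid screen position

    // Perspective divide to normalized device coordinates (-1..1),
    // then map to pixels; screen origin is top-left, so Y is flipped.
    val sx = (cx / cw + 1f) / 2f * screenWidth
    val sy = (1f - cy / cw) / 2f * screenHeight
    return Pair(sx, sy)
}
```

With an identity viewProj, a point at the world origin projects to the center of the screen, which is a quick sanity check for the math.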
As a simplified example:
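A hedged Kotlin sketch of such a check; isTapOnVirtualObject and the screen-coordinate parameters are hypothetical names, and the object's screen position would come from projecting its anchor's world position:

```kotlin
import kotlin.math.hypot

// Assumed tolerance in pixels: how far a tap may land from the projected
// object position and still count as a hit.
const val YOUR_THRESHOLD = 50f

// Compares the 2D tap position against the virtual object's projected
// screen position using plain Euclidean distance.
fun isTapOnVirtualObject(
    tapX: Float, tapY: Float,
    objScreenX: Float, objScreenY: Float
): Boolean {
    return hypot(tapX - objScreenX, tapY - objScreenY) <= YOUR_THRESHOLD
}
```

You would run this for each Geospatial anchor's projected position when a tap arrives, treating the nearest match within the threshold as the tapped object.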
YOUR_THRESHOLD is a tolerance value you define based on how precise you want the tap detection to be relative to the virtual object's screen position. That is a workaround for the limitation noted above: it leverages the conversion of world-space positions to screen space and then uses basic distance calculations to infer interactions.