Problem situation: creating AR visualizations always at the same place (on a table) in a comfortable way. We don't want the customer to place the objects themselves, as in countless ARCore/ARKit examples.
I'm wondering if there is a way to implement these steps:
- Detect a marker on the table
- Use the position of the marker as the initial position of the AR visualization and continue with SLAM tracking
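Conceptually, what I have in mind looks roughly like this (just a sketch in ARKit/SceneKit terms; markerTransform would have to come from whatever marker detection is used, which is exactly the missing piece):

import ARKit

// Once the marker pose is known, drop an ARAnchor there and let ARKit's
// world tracking (SLAM) keep the visualization stable from then on.
func placeVisualization(at markerTransform: simd_float4x4, in sceneView: ARSCNView) {
    let anchor = ARAnchor(transform: markerTransform)
    sceneView.session.add(anchor: anchor)
    // renderer(_:didAdd:for:) can then attach the visualization node to this anchor.
}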
I know there is something like a Marker Detection API included in the latest build of the Tango SDK, but that technology is limited to a small number of devices (two, to be exact...).
Best regards, and thanks in advance for any ideas.
I am also interested in that topic. I think the true power of AR can only be unleashed when paired with environment understanding.
I think you have two options:
- Wait for the new Vuforia 7 to be released; supposedly it is going to support visual markers on top of ARCore and ARKit.
- Use CoreML / the Vision framework. In theory it is possible, but I haven't seen many examples, and I think it might be a bit difficult to start with (e.g. building and calibrating a model). A rough sketch of this route follows below the video link.
However, Apple have got it sorted:
https://youtu.be/E2fd8igVQcU?t=2m58s
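A very rough sketch of what the Vision route could look like (my own assumption about how to wire it up, not tested; the coordinate conversion in particular is simplified and assumes landscape-right orientation):

import ARKit
import SceneKit
import Vision
import UIKit

// Use Vision to find a rectangular marker in the current camera frame, then
// hit-test that point to get a world position and anchor the node there.
// From then on, normal ARKit world tracking (SLAM) keeps it in place.
func detectMarkerAndPlace(node: SCNNode, in sceneView: ARSCNView) {
    guard let frame = sceneView.session.currentFrame else { return }

    let request = VNDetectRectanglesRequest { request, _ in
        guard let marker = (request.results as? [VNRectangleObservation])?.first else { return }

        DispatchQueue.main.async {
            // Centre of the detected rectangle, flipped from Vision's bottom-left
            // origin into normalized image coordinates.
            let imagePoint = CGPoint(x: marker.boundingBox.midX, y: 1 - marker.boundingBox.midY)

            // Map normalized image coordinates into view coordinates.
            let viewportSize = sceneView.bounds.size
            let displayTransform = frame.displayTransform(for: .landscapeRight, viewportSize: viewportSize)
            let normalized = imagePoint.applying(displayTransform)
            let viewPoint = CGPoint(x: normalized.x * viewportSize.width,
                                    y: normalized.y * viewportSize.height)

            // Hit-test against feature points and place the node at the result.
            if let hit = sceneView.hitTest(viewPoint, types: .featurePoint).first {
                node.simdTransform = hit.worldTransform
                sceneView.scene.rootNode.addChildNode(node)
            }
        }
    }

    let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage, options: [:])
    try? handler.perform([request])
}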
If you are using Google Tango, you can implement this using the built-in Area Description File (ADF) system.
The system shows a holding screen and you are told to "walk around". Within a few seconds, the device can relocalise to an area it has previously been in (or pull the information from a server, etc.).
Google's VPS (Visual Positioning Service) is a similar idea (still in closed beta) that should come to ARCore. As far as I understand, it will allow you to localise against a specific location using the camera feed, based on a global shared map of all scanned locations. I think that, when released, it will try to fill the gap of an AR-Cloud-type system, which would solve these problems for regular developers.
See https://developers.google.com/tango/overview/concepts#visual_positioning_service_overview
The general problem of relocalising to a space using only prior knowledge of the space and the camera feed is solved in academia and in other AR offerings (HoloLens etc.); markers/tags aren't required.
I'm unsure, however, which other commercial systems provide this feature.
This is what I've got so far for ARKit.
// Assumes first, second and third are SCNVector3? properties and yourNode is
// the SCNNode to place; all are defined elsewhere on this class.
@objc func tap(_ sender: UITapGestureRecognizer) {
    let touchLocation = sender.location(in: sceneView)
    let hitTestResult = sceneView.hitTest(touchLocation, types: .featurePoint)
    if let hitResult = hitTestResult.first {
        if first == nil {
            // First tap: point A
            first = SCNVector3Make(hitResult.worldTransform.columns.3.x,
                                   hitResult.worldTransform.columns.3.y,
                                   hitResult.worldTransform.columns.3.z)
        } else if second == nil {
            // Second tap: point B
            second = SCNVector3Make(hitResult.worldTransform.columns.3.x,
                                    hitResult.worldTransform.columns.3.y,
                                    hitResult.worldTransform.columns.3.z)
        } else {
            // Third tap: point C, then place and orient the node
            third = SCNVector3Make(hitResult.worldTransform.columns.3.x,
                                   hitResult.worldTransform.columns.3.y,
                                   hitResult.worldTransform.columns.3.z)
            let x2 = first!.x
            let z2 = -first!.z
            let x1 = second!.x
            let z1 = -second!.z
            let z3 = -third!.z

            // Slope of line AB in the x-z plane gives the required y-rotation.
            let m = (z1 - z2) / (x1 - x2)
            var a = atan(m)
            if x1 < 0 && z1 < 0 {
                a = a + (Float.pi * 2)
            } else if x1 > 0 && z1 < 0 {
                a = a - (Float.pi * 2)
            }

            sceneView.scene.rootNode.addChildNode(yourNode)
            let rotate = SCNAction.rotateBy(x: 0, y: CGFloat(a), z: 0, duration: 0.1)
            yourNode.runAction(rotate)
            yourNode.position = first!

            // Use C to decide whether the node needs to be flipped by 180°.
            if z3 - z1 < 0 {
                let rotate = SCNAction.rotateBy(x: 0, y: CGFloat.pi, z: 0, duration: 0.1)
                yourNode.runAction(rotate)
            }
        }
    }
}
The theory is:
Make three dots A, B, C such that AB is perpendicular to AC. Tap the dots in the order A-B-C.
Find the angle that AB makes in the x-z plane of the ARSCNView's world space; this gives the required y-rotation for the node (see the atan2 sketch at the end).
Any one of the points can be used as the position at which to place the node.
From C, determine whether the node needs to be flipped (rotated a further 180°).
I am still working on some edge cases that need to be handled.
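One possible simplification for the angle step (just a sketch on my side, not yet wired into the code above) is to use atan2, which handles all quadrants directly instead of the manual ±2π adjustments:

import Foundation
import SceneKit

// Yaw of the line A->B in the x-z plane; z is negated to match the convention above.
func yaw(from a: SCNVector3, to b: SCNVector3) -> Float {
    return atan2(-(b.z - a.z), b.x - a.x)
}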