Are there any limitations in Vuforia compared to ARCore and ARKit?

Posted on 2019-02-03 15:38

Question:

I am a beginner in the field of augmented reality, working on applications that create plans of buildings (floor plan, room plan, etc., with accurate measurements) using a smartphone. So I am researching the best AR SDK for this. There are not many articles pitting Vuforia against ARCore and ARKit.

Please suggest the best SDK to use, along with the pros and cons of each.

Answer 1:

Updated: January 29, 2019.

At the moment ARCore runs on Android and iOS, the younger ARKit runs on iOS only (iPhone 6s and higher), and good old Vuforia runs on both iOS and Android as well. Vuforia uses ARCore/ARKit technology when the hardware it's running on supports it; otherwise it falls back on its own AR engine (a software solution with no hardware dependencies).

However, there's a problem when developing for OEM smartphones running Android: all these devices need calibration in order to deliver the same AR experience.

ARCore

The current version of the mature ARCore is 1.6. Recent major updates brought significant features such as the open-source Sceneform, Point Cloud IDs, multiplayer support, and improved documentation. As we all know, ARCore lets a mobile device understand and track its position and orientation in six degrees of freedom (6-DoF) relative to the world using Concurrent Odometry and Mapping (COM), and it detects the size and location of three types of surfaces: horizontal, vertical, and angled surfaces such as the ground, tables, benches, and walls, using visual data from the camera combined with the gyroscope and accelerometer (ARKit and Vuforia operate the same way). ARCore's understanding of the real world lets you place 3D objects and 2D annotations in a way that integrates with the real world. For instance, you can place a virtual cup on the corner of your real-world coffee table using the Anchor class (in Android apps) or the GARAnchor class (in iOS apps).
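For illustration, here's a minimal sketch of that hit-test-and-anchor flow using the plain ARCore API (no Sceneform); the Session is assumed to be already configured and resumed, and the tap coordinates would come from your own touch handler:

import com.google.ar.core.Anchor
import com.google.ar.core.Frame
import com.google.ar.core.Plane
import com.google.ar.core.Session
import com.google.ar.core.TrackingState

// Minimal sketch: anchor content to the first tracked plane hit by a screen tap.
// `session` is assumed to be a configured, resumed ARCore Session.
fun anchorAtTap(session: Session, tapX: Float, tapY: Float): Anchor? {
    val frame: Frame = session.update()          // obtain the latest camera frame
    for (hit in frame.hitTest(tapX, tapY)) {     // ray-cast through the tap point
        val trackable = hit.trackable
        // Accept only hits that land inside a plane that is currently tracked
        if (trackable is Plane &&
            trackable.trackingState == TrackingState.TRACKING &&
            trackable.isPoseInPolygon(hit.hitPose)) {
            return hit.createAnchor()            // ARCore keeps this pose updated over time
        }
    }
    return null
}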

There are three fundamental concepts in ARCore: Motion Tracking, Environmental Understanding, and Light Estimation. As I said earlier, when you move your phone through a real scene, ARCore uses the COM method to understand where the phone is relative to the world around it. ARCore detects so-called feature points in the captured RGB image sequence and uses them to compute the phone's change in location. This visual information is combined with measurements from the Inertial Measurement Unit (IMU) to estimate the position and orientation of the camera over time. ARCore then looks for clusters of feature points that appear to lie on horizontal or vertical surfaces and makes these surfaces available to your app as planes. You can use all this data to place virtual objects into your scene. Textured 3D geometry is rendered by ARCore's companion, Sceneform (which uses Filament, a real-time physically based rendering engine). ARCore can also detect information about the lighting of its environment and provide you with the average intensity and color correction of a given camera image. This lets you light your virtual objects under the same conditions as the environment around them, increasing the sense of realism.
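And here's a rough sketch of reading that per-frame light estimate with the plain ARCore API; the Frame is assumed to come from session.update():

import com.google.ar.core.Frame
import com.google.ar.core.LightEstimate

// Sketch: read ARCore's light estimate for the current frame and use it
// to tune your own renderer. `frame` comes from session.update().
fun readLightEstimate(frame: Frame) {
    val estimate: LightEstimate = frame.lightEstimate
    if (estimate.state == LightEstimate.State.VALID) {
        val intensity = estimate.pixelIntensity      // average pixel intensity, 0.0..1.0
        val rgba = FloatArray(4)
        estimate.getColorCorrection(rgba, 0)         // per-channel color correction values
        // Feed `intensity` and `rgba` into your shading to match real-world lighting
    }
}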

ARCore's lineage is much older than ARKit's. Do you remember Tango? ARCore is essentially Tango without the depth camera. Apple's wise acquisitions of FlyBy and Metaio helped it catch up, which is arguably very good for the AR industry. ARCore requires Android 7.0 Nougat or later, supports OpenGL ES 3.1 acceleration, and integrates with Unity, Unreal, and web applications. At the moment the most powerful chipsets for AR experiences on the Android platform are the Kirin 980 and Snapdragon 855. ARCore price: free.

Here's what an ARCore project written in Kotlin looks like:

// MainActivity.kt

import android.graphics.Point
import android.net.Uri
import android.os.Bundle
import android.view.Menu
import android.view.MenuItem
import android.view.View
import androidx.appcompat.app.AlertDialog
import androidx.appcompat.app.AppCompatActivity
import com.google.ar.core.Anchor
import com.google.ar.core.Plane
import com.google.ar.sceneform.AnchorNode
import com.google.ar.sceneform.rendering.ModelRenderable
import com.google.ar.sceneform.ux.ArFragment
import com.google.ar.sceneform.ux.TransformableNode
import kotlinx.android.synthetic.main.activity_main.*   // synthetic access to `toolbar` and `but`

class MainActivity : AppCompatActivity() {

    lateinit var fragment: ArFragment

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)
        setSupportActionBar(toolbar)
        fragment = supportFragmentManager.findFragmentById(R.id.sceneform_fragment) as ArFragment

        but.setOnClickListener {
            // Sceneform loads the compiled .sfb binary at runtime (.sfa is only the source asset)
            addObject(Uri.parse("Helicopter.sfb"))
        }
    }

    // Ray-cast from the screen center and anchor the model to the first tracked plane hit
    private fun addObject(model: Uri) {
        val frame = fragment.arSceneView.arFrame
        val point = getScreenCenter()
        if (frame != null) {
            val hits = frame.hitTest(point.x.toFloat(), point.y.toFloat())
            for (hit in hits) {
                val trackable = hit.trackable
                if (trackable is Plane && trackable.isPoseInPolygon(hit.hitPose)) {
                    placeObject(fragment, hit.createAnchor(), model)
                    break
                }
            }
        }
    }

    // Build the renderable asynchronously, then attach it to the scene graph
    private fun placeObject(fragment: ArFragment, anchor: Anchor, model: Uri) {
        ModelRenderable.builder()
            .setSource(fragment.context, model)
            .build()
            .thenAccept { addNodeToScene(fragment, anchor, it) }
            .exceptionally {
                AlertDialog.Builder(this)
                    .setMessage(it.message)
                    .setTitle("Hey, I'm error!")
                    .create()
                    .show()
                return@exceptionally null
            }
    }

    private fun addNodeToScene(fragment: ArFragment, anchor: Anchor, renderable: ModelRenderable) {
        val anchorNode = AnchorNode(anchor)
        // RotatingNode is a custom Node subclass (defined elsewhere) that spins its children
        val rotatingNode = RotatingNode()
        // The renderable goes on the TransformableNode so drag/pinch/twist gestures affect it
        val transformableNode = TransformableNode(fragment.transformationSystem)
        transformableNode.renderable = renderable
        rotatingNode.addChild(transformableNode)
        rotatingNode.setParent(anchorNode)
        fragment.arSceneView.scene.addChild(anchorNode)
        transformableNode.select()
    }

    override fun onCreateOptionsMenu(menu: Menu): Boolean {
        menuInflater.inflate(R.menu.menu_main, menu)
        return true
    }

    override fun onOptionsItemSelected(item: MenuItem): Boolean {
        return when (item.itemId) {
            R.id.action_settings -> true
            else -> super.onOptionsItemSelected(item)
        }
    }

    private fun getScreenCenter(): Point {
        val view = findViewById<View>(android.R.id.content)
        return Point(view.width / 2, view.height / 2)
    }
}

ARKit

ARKit has many useful features for accurate measurement. For example, its latest release, 2.0, adds ARWorldMap functionality for persistent AR experiences. It allows you to save a world map when your app becomes inactive, then restore it the next time the app launches in the same physical environment. You can reuse anchors (the ARAnchor parent class) from the restored world map to place the same virtual content at the same positions as in the saved session. Measurement tools in ARKit are very accurate. ARKit offers 3D tracking, 2D tracking, vertical/horizontal plane detection, image detection, QR-code detection (via the Vision framework), 3D object detection, and more. Using iBeacons with ARKit, an iBeacon-aware application can know what room it is in and show the correct 3D/2D content for that room.
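As a rough sketch of that save/restore flow (error handling trimmed; worldMapURL is a hypothetical file location of your choosing):

import ARKit

// Sketch: persist the current ARWorldMap and restore it on the next launch.
// `worldMapURL` is a hypothetical file location; pick your own.
let worldMapURL = FileManager.default.urls(for: .documentDirectory,
                                           in: .userDomainMask)[0]
                                     .appendingPathComponent("map.arworldmap")

func saveWorldMap(from session: ARSession) {
    session.getCurrentWorldMap { worldMap, error in
        guard let map = worldMap else { return }    // mapping must be sufficiently complete
        if let data = try? NSKeyedArchiver.archivedData(withRootObject: map,
                                                        requiringSecureCoding: true) {
            try? data.write(to: worldMapURL)
        }
    }
}

func restoreWorldMap(into session: ARSession) {
    do {
        let data = try Data(contentsOf: worldMapURL)
        guard let map = try NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self,
                                                               from: data) else { return }
        let configuration = ARWorldTrackingConfiguration()
        configuration.initialWorldMap = map         // relocalize the saved anchors
        session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
    } catch {
        print("Could not restore world map: \(error)")
    }
}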

ARKit uses Visual Inertial Odometry (VIO) to accurately track the world around it. The VIO method is very similar to the Concurrent Odometry and Mapping used in ARCore. ARKit also rests on three fundamental concepts: World Tracking, Scene Understanding (which includes the Plane Detection, Hit-Testing, and Light Estimation stages), and Rendering with the help of ARKit's companion, the SceneKit framework. VIO fuses RGB camera data at 60 fps with Core Motion (IMU) data at 1000 fps, and SceneKit renders its 3D geometry at 60/120 fps. It should be noted that, due to the very high energy impact (there's an enormous burden on the CPU and GPU), your iPhone's battery will drain pretty quickly.
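For the Plane Detection stage, here's a minimal sketch of the ARSCNViewDelegate callback ARKit fires when it recognizes a new plane; the visualization is kept to a plain translucent overlay:

import ARKit
import SceneKit

// Sketch: ARSCNViewDelegate callback invoked when ARKit detects a new plane.
// Draws a translucent rectangle over the detected surface.
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor else { return }

    // Plane extent is reported in meters, centered on the anchor
    let plane = SCNPlane(width: CGFloat(planeAnchor.extent.x),
                         height: CGFloat(planeAnchor.extent.z))
    plane.firstMaterial?.diffuse.contents = UIColor.cyan.withAlphaComponent(0.3)

    let planeNode = SCNNode(geometry: plane)
    planeNode.position = SCNVector3(planeAnchor.center.x, 0, planeAnchor.center.z)
    planeNode.eulerAngles.x = -.pi / 2              // SCNPlane is vertical by default
    node.addChildNode(planeNode)
}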

To build an ARKit 2.0 compatible iOS application, you need Xcode 10 on a Mac and a mobile device running iOS 12. ARKit supports Metal 2.0 and OpenGL GPU acceleration. ARKit has multiuser support (up to 6 people), the brand-new USDZ file format from Pixar (good for sophisticated 3D scenes), and, of course, physically based rendering, a.k.a. PBR. ARKit integrates with Unity and Unreal. At the moment the most powerful chipset for AR experiences on the iOS platform is the A12 Bionic. ARKit price: free.
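Since iOS 12, SceneKit can open USDZ files directly, so a quick sketch of loading a bundled USDZ model might look like this (Helicopter.usdz is a hypothetical asset name):

import SceneKit

// Sketch: load a USDZ model bundled with the app and attach it to the scene.
// "Helicopter.usdz" is a hypothetical asset name.
func loadUSDZModel(into sceneView: SCNView) {
    guard let url = Bundle.main.url(forResource: "Helicopter", withExtension: "usdz"),
          let modelScene = try? SCNScene(url: url, options: nil) else { return }
    for child in modelScene.rootNode.childNodes {
        sceneView.scene?.rootNode.addChildNode(child)
    }
}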

Here's what an ARKit project written in Swift looks like:

// ViewController.swift

import UIKit
import SceneKit
import ARKit

class ViewController: UIViewController, ARSCNViewDelegate {

    @IBOutlet var sceneView: ARSCNView!
    let spotLight = SCNNode()
    let ambientLight = SCNNode()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.delegate = self
        sceneView.showsStatistics = true

        // Load the 3D content and make it the view's scene
        let scene = SCNScene(named: "art.scnassets/Helicopter.scn")!
        sceneView.scene = scene

        // Set up a spot light and an ambient light for the virtual content
        spotLight.light = SCNLight()
        spotLight.light!.type = .spot
        spotLight.light!.intensity = 55
        ambientLight.light = SCNLight()
        ambientLight.light!.type = .ambient
        ambientLight.light!.intensity = 40

        sceneView.scene.rootNode.addChildNode(spotLight)
        sceneView.scene.rootNode.addChildNode(ambientLight)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // Configure world tracking with plane detection and light estimation
        let configuration = ARWorldTrackingConfiguration()
        configuration.isAutoFocusEnabled = false
        configuration.planeDetection = [.horizontal, .vertical]
        configuration.environmentTexturing = .automatic
        configuration.isLightEstimationEnabled = true
        sceneView.session.run(configuration)
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        sceneView.session.pause()
    }

    // Provide an empty node for each new anchor; content can be attached to it later
    func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
        return SCNNode()
    }

    // Adjust the virtual lighting every frame based on ARKit's light estimate
    func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
        guard let lightEstimate = sceneView.session.currentFrame?.lightEstimate else {
            return
        }
        // ambientIntensity is about 1000 lumens in a well-lit scene;
        // scale it down to this app's modest light intensities
        spotLight.light!.intensity = lightEstimate.ambientIntensity / 1000
    }
}

Vuforia

PTC Vuforia Engine 7.5 boasts approximately the same capabilities you can find in the latest versions of ARKit and ARCore, plus some of its own, such as External Camera support. Vuforia has a standalone version and a version baked directly into Unity. It offers the following functionality:

- Model Targets (recognize objects by shape using pre-existing 3D models)
- Image Targets (the easiest way to put AR content on flat objects)
- Multi Targets (for objects with flat surfaces and multiple sides)
- Cylinder Targets (for placing AR content on objects with cylindrical shapes)
- Ground Plane (for placing content on horizontal surfaces, as sketched below)
- VuMarks (identify and add content to a series of objects)
- Object Targets (for scanning an object)
- Fusion and External Camera

Vuforia Fusion is a capability designed to solve the problem of fragmentation in AR-enabling technologies such as cameras, sensors, chipsets, and software frameworks like ARKit. With Vuforia Fusion, your application automatically provides the best experience possible with no extra work required on your end. External Camera provides a new perspective on what's possible with augmented reality. It allows Vuforia Engine to access external video sources beyond the camera equipped in phones and tablets. Using an independent camera, developers can create an AR experience that offers a first-person view from toys, robots, or industrial tools.
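For the Ground Plane feature, here's a rough sketch of placement in Unity, assuming the PlaneFinderBehaviour and ContentPositioningBehaviour components from Vuforia's Unity integration; exact signatures may differ between Engine versions:

using UnityEngine;
using Vuforia;

// Rough sketch of Ground Plane placement in Unity, assuming Vuforia's
// PlaneFinderBehaviour / ContentPositioningBehaviour components.
// Signatures may differ slightly between Vuforia Engine versions.
public class GroundPlanePlacer : MonoBehaviour {
    public PlaneFinderBehaviour planeFinder;              // performs hit tests against the ground
    public ContentPositioningBehaviour contentPositioner; // anchors content at a hit result

    void Start() {
        // Hook into the hit-test event raised when the user taps a detected surface
        planeFinder.OnInteractiveHitTest.AddListener(OnHitTest);
    }

    private void OnHitTest(HitTestResult result) {
        // Create an anchor at the hit pose and move the associated content there
        contentPositioner.PositionContentAtPlaneAnchor(result);
    }
}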

Vuforia supports Metal acceleration for iOS devices. You can also use the Vuforia Samples for your projects. For example, the Vuforia Core Samples library includes various scenes using Vuforia features, including a pre-configured Object Recognition scene that you can use as a reference and starting point for an Object Recognition application.

Vuforia SDK pricing (there are four options): Free (you need to register for a free Development License Key); $499 per Classic license (for apps built for companies with revenue under $10 million/year); $99/month per Cloud license; and a Pro license from PTC with individual pricing (with no revenue restriction).

There are no significant limitations to developing with Vuforia compared to ARCore and ARKit.

For more detailed info, read the article Why is ARKit better than the alternatives?

For me, ARKit has greater measurement accuracy without any need to calibrate a scene. Vuforia's measurement accuracy depends on which platform you're developing for. Note that the Vuforia Chalk application uses Apple's ARKit.
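Since the question is about measuring rooms, here's a minimal sketch (a hypothetical helper, no UI) showing how two screen taps can be turned into a real-world distance with ARKit's hit-testing:

import ARKit
import simd

// Sketch: convert two screen taps into a distance in meters.
// Each tap is hit-tested against detected geometry; the world positions
// of the two hits are then compared. ARKit units are meters.
func worldPosition(of point: CGPoint, in sceneView: ARSCNView) -> simd_float3? {
    guard let hit = sceneView.hitTest(point, types: [.existingPlaneUsingExtent,
                                                     .featurePoint]).first else { return nil }
    let t = hit.worldTransform.columns.3            // translation column of the hit pose
    return simd_float3(t.x, t.y, t.z)
}

func distanceInMeters(from a: CGPoint, to b: CGPoint, in sceneView: ARSCNView) -> Float? {
    guard let p1 = worldPosition(of: a, in: sceneView),
          let p2 = worldPosition(of: b, in: sceneView) else { return nil }
    return simd_distance(p1, p2)
}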

Experiment with free AR apps from the App Store and Google Play.

Here's what a Vuforia project in Unity, written in C#, looks like (this example augments a Vuforia Image Target with a custom 3D model):

using UnityEngine;
using Vuforia;

// Instantiates a 3D model prefab when the attached Image Target is detected
public class MyPrefabInstantiator : MonoBehaviour, ITrackableEventHandler {
    private TrackableBehaviour mTrackableBehaviour;
    public Transform myModelPrefab;

    void Start() {
        // Subscribe to tracking-state changes of the target this script is attached to
        mTrackableBehaviour = GetComponent<TrackableBehaviour>();
        if (mTrackableBehaviour) {
            mTrackableBehaviour.RegisterTrackableEventHandler(this);
        }
    }

    public void OnTrackableStateChanged(TrackableBehaviour.Status previousStatus,
                                        TrackableBehaviour.Status newStatus) {
        if (newStatus == TrackableBehaviour.Status.DETECTED ||
            newStatus == TrackableBehaviour.Status.TRACKED ||
            newStatus == TrackableBehaviour.Status.EXTENDED_TRACKED) {
                OnTrackingFound();
        }
    }

    private void OnTrackingFound() {
        if (myModelPrefab != null) {
            // Parent the instantiated model to the target so it follows the tracked image
            Transform myModelTrf = Instantiate(myModelPrefab);
            myModelTrf.parent = mTrackableBehaviour.transform;
            myModelTrf.localPosition = new Vector3(0f, 0f, 0f);
            myModelTrf.localRotation = Quaternion.identity;
            myModelTrf.localScale = new Vector3(0.0005f, 0.0005f, 0.0005f);
            myModelTrf.gameObject.SetActive(true);   // `gameObject.active` is deprecated
        }
    }
}