ARKit 2.0 – Scanning 3D Object and generating 3D Mesh

Posted 2019-04-02 00:19

Question:

iOS 12 now allows us to create an ARReferenceObject and, using it, reliably recognize the position and orientation of a real-world object. We can also save the finished .arobject file.

But:

An ARReferenceObject contains only the spatial feature information needed for ARKit to recognize the real-world object, and is not a displayable 3D reconstruction of that object.

// ARSession instance method (called as sceneView.session.createReferenceObject)
// for extracting a scanned object from the session's accumulated world data:
func createReferenceObject(transform: simd_float4x4,
                           center: simd_float3,
                           extent: simd_float3,
                           completionHandler: @escaping (ARReferenceObject?, Error?) -> Void)

// ARReferenceObject instance method for writing the scan to an .arobject file:
func export(to url: URL, previewImage: UIImage?) throws
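
For context, a minimal sketch of how the two calls fit together; the bounding-volume values and the destination URL are illustrative placeholders, not values from the original question:

import ARKit

// Sketch: extract the scanned object and save it as an .arobject file.
func saveScannedObject(from sceneView: ARSCNView) {
    sceneView.session.createReferenceObject(
        transform: matrix_identity_float4x4,     // placeholder local origin
        center: SIMD3<Float>(0, 0, 0),           // placeholder bounding-box center
        extent: SIMD3<Float>(0.5, 0.5, 0.5)      // placeholder bounding-box size, meters
    ) { referenceObject, error in
        guard let referenceObject = referenceObject else {
            print("Extraction failed: \(error?.localizedDescription ?? "unknown error")")
            return
        }
        // Placeholder destination; a real app would pick a Documents URL.
        let url = FileManager.default.temporaryDirectory.appendingPathComponent("scan.arobject")
        do {
            try referenceObject.export(to: url, previewImage: nil)
            print("Saved reference object to \(url.path)")
        } catch {
            print("Export failed: \(error)")
        }
    }
}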

Is there a method that allows us to reconstruct digital 3D geometry (low-poly or high-poly) from an .arobject file, using Poisson Surface Reconstruction or photogrammetry?

Answer 1:

You answered your own question with a quote from Apple's documentation:

An ARReferenceObject contains only the spatial feature information needed for ARKit to recognize the real-world object, and is not a displayable 3D reconstruction of that object.

If you run Apple's Scanning and Detecting 3D Objects sample code, you can see for yourself the visualizations it creates of the reference object during scanning and after a test recognition: it's just a sparse 3D point cloud. There's certainly no photogrammetry in what Apple's API provides you, and there'd be little to go on for recovering realistic structure in a mesh.
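
You can verify that sparseness yourself by loading a saved reference object and inspecting its raw feature points; a small sketch, assuming a placeholder file URL:

import ARKit

// Sketch: load a saved .arobject and report how sparse its data is.
// The URL is a placeholder; point it at your own exported file.
func inspectReferenceObject(at url: URL) throws {
    let referenceObject = try ARReferenceObject(archiveURL: url)
    let points = referenceObject.rawFeaturePoints.points  // [SIMD3<Float>], world-scale meters
    print("Feature points: \(points.count)")
    print("Extent (meters): \(referenceObject.extent)")
    // Far too few points, with no connectivity or color,
    // to recover a displayable surface mesh from.
}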

That's not to say such efforts are impossible: there have been third parties demoing photogrammetry experiments built on top of ARKit. But (a) those aren't using ARKit 2 object scanning, just the raw pixel buffer and feature points from ARFrame, and (b) the level of extrapolation in those demos would require non-trivial original R&D, as it goes far beyond the kind of information ARKit itself supplies.
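
For reference, the raw per-frame inputs such experiments start from look roughly like this; a sketch of an ARSessionDelegate, with the actual reconstruction step left as a stub since ARKit provides no API for it:

import ARKit

// Sketch: collect the per-frame data a custom photogrammetry pipeline
// would consume. The reconstruction itself is outside ARKit's scope.
class FrameCollector: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        let pixelBuffer: CVPixelBuffer = frame.capturedImage      // camera image (YCbCr)
        let intrinsics = frame.camera.intrinsics                  // 3x3 camera intrinsics
        let pose = frame.camera.transform                         // camera pose in world space
        let featurePoints = frame.rawFeaturePoints?.points ?? []  // sparse tracked points
        // A real pipeline would persist these and run multi-view stereo or
        // Poisson surface reconstruction offline; ARKit offers no such step.
        _ = (pixelBuffer, intrinsics, pose, featurePoints)
    }
}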