How to use an environment map in ARKit?

Published 2019-04-01 06:20

Question:

ARKit 2.0 added a new class named AREnvironmentProbeAnchor. Reading its documentation, it seems that ARKit can automatically collect an environment texture (a cubemap?). I believe we can now create virtual objects that reflect the real environment.

But I am still not clear on how this works, particularly on how the environment texture is generated. Does anyone have simple sample code demonstrating this cool feature?

Answer 1:

It's pretty simple to implement environmentTexturing in your AR project.

  1. Set the environmentTexturing property on your tracking configuration to .automatic. ARKit uses the camera's video feed to build a texture map automatically; the more you move the camera around, the more accurate the map becomes, and machine learning fills in the blanks.

    configuration.environmentTexturing = .automatic
    
  2. Environment texturing requires physically based materials to work. Create a simple shiny sphere to test out the reflections:

    // Physically based material: fully metallic with zero roughness, i.e. a mirror-like chrome ball.
    let sphere = SCNSphere(radius: 0.1)
    sphere.firstMaterial?.diffuse.contents = UIColor.white
    sphere.firstMaterial?.lightingModel = .physicallyBased
    sphere.firstMaterial?.metalness.contents = 1.0
    sphere.firstMaterial?.roughness.contents = 0.0
    let sphereNode = SCNNode(geometry: sphere)
    sceneView.scene.rootNode.addChildNode(sphereNode)
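
For completeness, here is a minimal sketch of running the session once both steps are in place, assuming the usual sceneView outlet of an ARSCNView-based project (as used above):

    // Create and run the world-tracking session with automatic
    // environment texturing (requires iOS 12 or later).
    let configuration = ARWorldTrackingConfiguration()
    if #available(iOS 12.0, *) {
        configuration.environmentTexturing = .automatic
    }
    sceneView.session.run(configuration)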
    


Answer 2:

AREnvironmentProbeAnchor (available in iOS 12 and later) is an anchor for image-based lighting. The shader of a 3D model in your scene can reflect light from its surroundings (depending, of course, on the type of surface that shader describes, so let's say reflective chrome). The principle is simple: six square images from six photo cameras feed the environment-reflectivity channel of a shading material. The six cameras (a rig) point along +x/-x, +y/-y and +z/-z, and ARKit does this work for you absolutely for free. Picture the rig's six directions:

Adjacent cameras' zFar planes look like a Cube, don't they?

And the six cameras' frustums together make up the volume of this cube.

Texture patches become available for the spots your camera has actually scanned, or advanced machine-learning algorithms (more robust and easier on the user) can fill the cube with complete 360-degree textures.
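
To make the cube-map idea concrete: outside of ARKit, SceneKit lets you supply such a rig's six square images yourself as a lighting environment. A minimal sketch, with hypothetical file names for the +x/-x, +y/-y, +z/-z faces:

// Manually feeding a six-image cube map to SceneKit's image-based lighting.
// The file names are placeholders; the required order is +x, -x, +y, -y, +z, -z.
sceneView.scene.lightingEnvironment.contents = ["px.png", "nx.png",
                                                "py.png", "ny.png",
                                                "pz.png", "nz.png"]
// intensity scales the image-based lighting contribution (1.0 is the default).
sceneView.scene.lightingEnvironment.intensity = 1.0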

AREnvironmentProbeAnchor positions this photo rig at a specific point in the scene. You need to enable texture map generation for the AR session, and there are two options for this:

ARWorldTrackingConfiguration.EnvironmentTexturing.manual 

With manual environment texturing, you identify points in the scene for which you want light probe texture maps by creating AREnvironmentProbeAnchor objects and adding them to the session.
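
A minimal sketch of the manual route, assuming the same sceneView as below; the transform and the 2-meter extent are arbitrary placeholder values:

// Manual mode (iOS 12+): you decide where the light probes go.
let configuration = ARWorldTrackingConfiguration()
configuration.environmentTexturing = .manual
sceneView.session.run(configuration)

// Place one probe at the world origin, covering a 2 m cube around it.
let probeAnchor = AREnvironmentProbeAnchor(transform: matrix_identity_float4x4,
                                           extent: simd_float3(2, 2, 2))
sceneView.session.add(anchor: probeAnchor)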

ARWorldTrackingConfiguration.EnvironmentTexturing.automatic 

With automatic environment texturing, ARKit automatically creates, positions, and adds AREnvironmentProbeAnchor objects to the session.

In both cases, ARKit automatically generates environment textures as the session collects camera imagery. Use a delegate method such as session(_:didUpdate:) to find out when a texture is available, and access it from the anchor's environmentTexture property.
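
For example (a sketch, assuming the view controller is set as the session's delegate), you can filter the updated anchors for probes and check whether their texture has arrived yet:

func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
    for case let probe as AREnvironmentProbeAnchor in anchors {
        // environmentTexture stays nil until ARKit has gathered enough imagery.
        print("Probe updated, texture:", probe.environmentTexture as Any)
    }
}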

If you display AR content using ARSCNView and the automaticallyUpdatesLighting option, SceneKit automatically retrieves AREnvironmentProbeAnchor texture maps and uses them to light the scene.

Here's how your code in ViewController.swift might look:

// Let SceneKit automatically pick up AREnvironmentProbeAnchor textures.
sceneView.automaticallyUpdatesLighting = true
sceneView.session.delegate = self   // requires conforming to ARSessionDelegate

// A torus to show off the reflections.
let torusNode = SCNNode(geometry: SCNTorus(ringRadius: 2, pipeRadius: 1.5))
sceneView.scene.rootNode.addChildNode(torusNode)

// Physically based material: fully metallic, zero roughness.
let reflectiveMaterial = SCNMaterial()
reflectiveMaterial.lightingModel = .physicallyBased
reflectiveMaterial.metalness.contents = 1.0
reflectiveMaterial.roughness.contents = 0.0
reflectiveMaterial.diffuse.contents = UIImage(named: "brushedMetal.png")
torusNode.geometry?.firstMaterial = reflectiveMaterial

let configuration = ARWorldTrackingConfiguration()
if #available(iOS 12.0, *) {
    configuration.environmentTexturing = .automatic   // WHERE THE MAGIC HAPPENS
}
sceneView.session.run(configuration)

func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
    print(anchors)
}

That's it. Hope this helps.

P.S. Any camera in a 3D scene can act not only as a photo/video camera but also as a projector (for projecting textures onto objects).