Mapping image onto 3D face mesh

Posted 2020-02-20 09:23

Question:

I am using the iPhone X and ARKit's face tracking to capture the user's face. The goal is to texture the face mesh with the user's image.

I'm only looking at a single frame (an ARFrame) from the AR session. From ARFaceGeometry, I have a set of vertices that describe the face. I make a JPEG representation of the current frame's capturedImage.
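For reference, a minimal sketch of one way to produce that JPEG from the frame's capturedImage pixel buffer, using standard Core Image calls (the helper name is illustrative, not from the original post):

import ARKit
import CoreImage

// Illustrative helper: encode an ARFrame's captured camera image as JPEG data.
// Note that capturedImage is in the camera's native (landscape) orientation.
func jpegData(from frame: ARFrame) -> Data? {
    let image = CIImage(cvPixelBuffer: frame.capturedImage)
    let context = CIContext()
    guard let colorSpace = CGColorSpace(name: CGColorSpace.sRGB) else { return nil }
    return context.jpegRepresentation(of: image, colorSpace: colorSpace)
}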

I then want to find the texture coordinates that map the created JPEG onto the mesh vertices. I want to:

1. map the vertices from model space to world space;
2. project the vertices from world space into 2D image space using the camera;
3. divide by the image dimensions to get normalized texture coordinates.

let geometry: ARFaceGeometry = contentUpdater.faceGeometry!
guard let theCamera = session.currentFrame?.camera else { return }

let theFaceAnchor: SCNNode = contentUpdater.faceNode
let anchorTransform = theFaceAnchor.simdWorldTransform

for vertex in geometry.vertices {
    // Step 1: Model space to world space, using the anchor's transform
    let worldSpace = anchorTransform * simd_float4(vertex, 1.0)

    // Step 2: World space to 2D image space, via the camera's projection
    let world3 = simd_float3(worldSpace.x, worldSpace.y, worldSpace.z)
    let projectedPt = theCamera.projectPoint(world3,
                                             orientation: .landscapeRight,
                                             viewportSize: theCamera.imageResolution)

    // Step 3: Divide by image width/height to get normalized texture coordinates
    let vtx = projectedPt.x / theCamera.imageResolution.width
    let vty = projectedPt.y / theCamera.imageResolution.height
    textureVs += "vt \(vtx) \(vty)\n"
}

This isn't working; instead I get a very funky-looking face! Where am I going wrong?

Answer 1:

Texturing the face mesh with the user's image is now covered in Apple's face-based AR sample code (see the section Map Camera Video onto 3D Face Geometry).

You can map the camera video onto the 3D face geometry with the following geometry shader modifier:

// Transform the vertex to the camera coordinate system.
float4 vertexCamera = scn_node.modelViewTransform * _geometry.position;

// Camera projection and perspective divide to get normalized device coordinates.
float4 vertexClipSpace = scn_frame.projectionTransform * vertexCamera;
vertexClipSpace /= vertexClipSpace.w;

// XY in normalized device coordinates is [-1,1] x [-1,1], so adjust to UV texture coordinates: [0,1] x [0,1].
// Image coordinates are Y-flipped (upper-left origin).
float4 vertexImageSpace = float4(vertexClipSpace.xy * 0.5 + 0.5, 0.0, 1.0);
vertexImageSpace.y = 1.0 - vertexImageSpace.y;

// Apply ARKit's display transform (device orientation * front-facing camera flip).
float4 transformedVertex = displayTransform * vertexImageSpace;

// Output as texture coordinates for use in later rendering stages.
_geometry.texcoords[0] = transformedVertex.xy;
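For completeness, here is a rough sketch of how such a modifier might be wired up in Swift, loosely following Apple's sample: the geometry is an ARSCNFaceGeometry whose diffuse contents are the live camera image, and the displayTransform uniform is supplied via key-value coding. The function name, the .portrait orientation, and the SCNMatrix4 conversion helper are assumptions; Apple's sample passes the inverse of the display transform so the shader can map from view coordinates back to image coordinates.

import ARKit
import SceneKit

// Helper (not an SDK API): convert a CGAffineTransform into an SCNMatrix4
// so it can be handed to the shader modifier via key-value coding.
extension SCNMatrix4 {
    init(_ t: CGAffineTransform) {
        self = SCNMatrix4Identity
        m11 = Float(t.a);  m12 = Float(t.b)
        m21 = Float(t.c);  m22 = Float(t.d)
        m41 = Float(t.tx); m42 = Float(t.ty)
    }
}

func makeVideoTexturedFace(sceneView: ARSCNView,
                           frame: ARFrame,
                           modifier: String) -> ARSCNFaceGeometry? {
    guard let device = sceneView.device,
          let faceGeometry = ARSCNFaceGeometry(device: device, fillMesh: true) else { return nil }

    // Use the live camera image (already set as the scene background) as the texture.
    let material = faceGeometry.firstMaterial!
    material.diffuse.contents = sceneView.scene.background.contents
    material.lightingModel = .constant

    // Attach the geometry shader modifier shown above.
    faceGeometry.shaderModifiers = [.geometry: modifier]

    // Supply the displayTransform uniform. The inverse maps normalized view
    // coordinates back to normalized image coordinates for texture sampling.
    let affine = frame.displayTransform(for: .portrait, viewportSize: sceneView.bounds.size)
    faceGeometry.setValue(SCNMatrix4Invert(SCNMatrix4(affine)), forKey: "displayTransform")

    return faceGeometry
}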


Answer 2:

The origin is different: the projected point is measured from the image's upper-left corner, while texture coordinates are measured from the lower left, so the Y coordinate must be flipped.

Apply the following change to your code:

//let vty = projectedPt.y / theCamera.imageResolution.height
let vty = (theCamera.imageResolution.height - projectedPt.y) / theCamera.imageResolution.height

With this change, you get a normal-looking face.



Answer 3:

For proper UV mapping you need to use the ARSCNFaceGeometry class instead of the ARFaceGeometry class you're using in your code.

ARSCNFaceGeometry is SceneKit's representation of face topology for use with the face information provided by an ARSession. It's intended for quick visualization of face geometry with SceneKit's rendering engine.

The ARSCNFaceGeometry class is a subclass of SCNGeometry that wraps the mesh data provided by the ARFaceGeometry class. You can use it to quickly and easily visualize the face topology and facial expressions that ARKit provides in a SceneKit view.

Note, however, that ARSCNFaceGeometry is available only in SceneKit views or renderers that use Metal; it is not supported for OpenGL-based SceneKit rendering.
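A minimal sketch of how ARSCNFaceGeometry might be used from an ARSCNViewDelegate; the delegate methods are standard ARKit/SceneKit API, while the class name and the "faceTexture" asset are assumptions. Because ARSCNFaceGeometry ships with its own UV mapping, a texture applied to the diffuse slot lines up with the face without any manual projection:

import ARKit
import SceneKit

class FaceMeshDelegate: NSObject, ARSCNViewDelegate {

    // Create the SceneKit face geometry when a face anchor appears.
    func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
        guard anchor is ARFaceAnchor,
              let device = (renderer as? ARSCNView)?.device,
              let faceGeometry = ARSCNFaceGeometry(device: device) else { return nil }
        // "faceTexture" is a hypothetical asset name.
        faceGeometry.firstMaterial?.diffuse.contents = UIImage(named: "faceTexture")
        return SCNNode(geometry: faceGeometry)
    }

    // Keep the mesh in sync with the user's facial expression.
    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let faceAnchor = anchor as? ARFaceAnchor,
              let faceGeometry = node.geometry as? ARSCNFaceGeometry else { return }
        faceGeometry.update(from: faceAnchor.geometry)
    }
}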