I want to develop live face filters like the MSQRD/Snapchat live filters, but I am not able to figure out how I should proceed. Should I use an Augmented Reality framework and detect the face, or use Core Image to detect the face and process accordingly? Please let me know if anyone has an idea of how to implement this.
I am testing with Unity + OpenCV for Unity. Next I will try how ofxFaceTracker handles the gesture tracking. The filters can be done using the GLES shaders available in Unity, and there are also lots of plugins in the Asset Store that help with the real-time rendering you need.
I would recommend going with Core Image and CIDetector: https://developer.apple.com/library/ios/documentation/GraphicsImaging/Conceptual/CoreImaging/ci_detect_faces/ci_detect_faces.html. It has been available since iOS 5 and it has great documentation.

Creating a face detector example; here is what the code does (a Swift sketch follows the list):
1. Creates a context; in this example, a context for iOS. You can use any of the context-creation functions described in Processing Images. You also have the option of supplying nil instead of a context when you create the detector.
2. Creates an options dictionary to specify accuracy for the detector. You can specify low or high accuracy. Low accuracy (CIDetectorAccuracyLow) is fast; high accuracy, shown in this example, is thorough but slower.
3. Creates a detector for faces. The only type of detector you can create is one for human faces.
4. Sets up an options dictionary for finding faces. It's important to let Core Image know the image orientation so the detector knows where it can find upright faces. Most of the time you'll read the image orientation from the image itself, and then provide that value to the options dictionary.
5. Uses the detector to find features in an image. The image you provide must be a CIImage object. Core Image returns an array of CIFeature objects, each of which represents a face in the image.
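Putting those five steps together, a minimal sketch in Swift could look like the one below. The `detectFaces(in:)` helper name and the `UIImage` input are just illustrative assumptions, not part of the linked documentation:

```swift
import CoreImage
import ImageIO
import UIKit

// Sketch only: detect faces in a single image (e.g. a captured camera frame).
func detectFaces(in uiImage: UIImage) -> [CIFaceFeature] {
    guard let ciImage = CIImage(image: uiImage) else { return [] }

    // 1. Create a context (you may also pass nil to the detector instead).
    let context = CIContext()

    // 2. Accuracy options: high accuracy is thorough but slower than CIDetectorAccuracyLow.
    let detectorOptions = [CIDetectorAccuracy: CIDetectorAccuracyHigh]

    // 3. Create a detector for faces (the only detector type you can create here).
    guard let detector = CIDetector(ofType: CIDetectorTypeFace,
                                    context: context,
                                    options: detectorOptions) else { return [] }

    // 4. Tell Core Image the image orientation so it can find upright faces.
    var featureOptions: [String: Any] = [:]
    if let orientation = ciImage.properties[kCGImagePropertyOrientation as String] {
        featureOptions[CIDetectorImageOrientation] = orientation
    }

    // 5. Run the detector; each CIFaceFeature represents one detected face.
    let features = detector.features(in: ciImage, options: featureOptions)
    return features.compactMap { $0 as? CIFaceFeature }
}
```

Each CIFaceFeature exposes its bounds plus the eye and mouth positions, which is what you would anchor a live filter overlay to.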
Here are some open projects that could help you get started with Core Image or other technologies such as GPUImage or OpenCV:
1. https://github.com/aaronabentheuer/AAFaceDetection (CIDetector, Swift)
2. https://github.com/BradLarson/GPUImage (Objective-C)
3. https://github.com/jeroentrappers/FaceDetectionPOC (Objective-C; it has code deprecated as of iOS 9)
4. https://github.com/kairosinc/Kairos-SDK-iOS (Objective-C)
5. https://github.com/macmade/FaceDetect (OpenCV)
I am developing the same kind of app. I used the ofxFaceTracker library from openFrameworks for this. It provides a mesh that contains the eyes, mouth, face border, and nose positions and points (vertices). You can use this.