I'm looking through Apple's Vision API documentation and I see a couple of classes that relate to text detection in UIImages:

1) class VNDetectTextRectanglesRequest
It looks like it can detect characters, but I don't see a means to do anything with them. Once you've got characters detected, how would you go about turning them into something that can be interpreted by NSLinguisticTagger?
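For context, here is a minimal sketch of what the request actually returns (the function name detectTextRectangles is my own). Note that VNDetectTextRectanglesRequest only reports where text is, as VNTextObservation bounding boxes plus optional per-character boxes; it does not recognize which characters they are:

```swift
import UIKit
import Vision

// Sketch: run text-rectangle detection on a UIImage and walk the results.
// The request yields VNTextObservation values; each has a word-level
// boundingBox and, when reportCharacterBoxes is true, per-character
// VNRectangleObservations in characterBoxes. No character recognition
// happens here -- only localization.
func detectTextRectangles(in image: UIImage) {
    guard let cgImage = image.cgImage else { return }

    let request = VNDetectTextRectanglesRequest { request, error in
        guard let observations = request.results as? [VNTextObservation] else { return }
        for word in observations {
            print("word box:", word.boundingBox)          // normalized (0...1) coordinates
            for char in word.characterBoxes ?? [] {
                print("  char box:", char.boundingBox)
            }
        }
    }
    request.reportCharacterBoxes = true

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```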
Here's a post that gives a brief overview of Vision.
Thank you for reading.
This is how to do it ...
You'll find the complete project here; the trained model is included!
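In broad strokes, and assuming you've already cropped each characterBox and classified the crops into a plain String (for example with a trained Core ML model like the one in the linked project), the NSLinguisticTagger part is straightforward. A sketch of that last step (tagWords is my own helper name):

```swift
import Foundation

// Sketch: once the per-character crops have been classified into a String
// by your OCR model, NSLinguisticTagger treats it like any other text.
// Here we tag each word with its lexical class (noun, verb, ...).
func tagWords(in recognizedText: String) {
    let tagger = NSLinguisticTagger(tagSchemes: [.lexicalClass], options: 0)
    tagger.string = recognizedText

    let range = NSRange(location: 0, length: recognizedText.utf16.count)
    let options: NSLinguisticTagger.Options = [.omitWhitespace, .omitPunctuation]

    tagger.enumerateTags(in: range, unit: .word, scheme: .lexicalClass, options: options) { tag, tokenRange, _ in
        let token = (recognizedText as NSString).substring(with: tokenRange)
        print(token, "->", tag?.rawValue ?? "?")
    }
}

// Usage:
// tagWords(in: "The quick brown fox jumps over the lazy dog")
```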