I'm testing Apple's Vision alignment API and have questions about VNHomographicImageRegistrationRequest. Has anyone gotten it to work? I can read the warpTransform out of it, but I have yet to see a matrix that makes sense; that is, I can't use it to warp one image back onto the source image. I'm using OpenCV's warpPerspective to handle the warping.
I'm calling this to get the transform:
class func homography(_ cgImage0: CGImage, _ cgImage1: CGImage, _ orientation: CGImagePropertyOrientation, completion: (matrix_float3x3?) -> ())
{
    let registrationSequenceReqHandler = VNSequenceRequestHandler()
    let requestHomography = VNHomographicImageRegistrationRequest(targetedCGImage: cgImage1, orientation: orientation)
    let requestTranslation = VNTranslationalImageRegistrationRequest(targetedCGImage: cgImage1, orientation: orientation)
    do
    {
        // cgImage0 is the reference image; cgImage1 is aligned against it.
        try registrationSequenceReqHandler.perform([requestHomography, requestTranslation], on: cgImage0)
        if let resultH = requestHomography.results?.first as? VNImageHomographicAlignmentObservation
        {
            completion(resultH.warpTransform)
        }
        else
        {
            // No observation came back; still report the failure.
            completion(nil)
        }
        if let resultT = requestTranslation.results?.first as? VNImageTranslationAlignmentObservation
        {
            print("translation: \(resultT.alignmentTransform.tx) : \(resultT.alignmentTransform.ty)")
        }
    }
    catch
    {
        completion(nil)
        print("registration request failed: \(error)")
    }
}
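For reference, the way I'm sanity-checking any candidate homography is to push a few known points through it by hand and see where they land. A minimal sketch in plain Swift, independent of Vision and OpenCV, assuming a row-major 3×3 matrix (the `apply` helper and the example matrix are mine, not from either API):

```swift
// Apply a row-major 3x3 homography h to a 2D point (x, y),
// including the perspective divide by w.
func apply(_ h: [[Double]], _ x: Double, _ y: Double) -> (Double, Double) {
    let xp = h[0][0] * x + h[0][1] * y + h[0][2]
    let yp = h[1][0] * x + h[1][1] * y + h[1][2]
    let w  = h[2][0] * x + h[2][1] * y + h[2][2]
    return (xp / w, yp / w)
}

// A pure translation by (5, 3) expressed as a homography:
let translate: [[Double]] = [[1, 0, 5],
                             [0, 1, 3],
                             [0, 0, 1]]
print(apply(translate, 2.0, 2.0)) // (7.0, 5.0)
```

Feeding the source image's corners through warpTransform the same way is a quick way to see whether the matrix is even plausible before handing it to warpPerspective.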
This runs and produces a homography matrix, but its values are drastically different from what I get with SIFT + OpenCV findHomography (https://docs.opencv.org/3.0-beta/doc/tutorials/features2d/feature_homography/feature_homography.html)
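One thing I'm trying to rule out when comparing the two (my assumption, not something I've seen documented for this request): matrix_float3x3 is a simd type and stores its elements as three columns, while the 3×3 Mat that warpPerspective expects is row-major, so copying the elements straight across would effectively transpose the homography. A sketch of the reordering, with the simd matrix modeled as three plain column arrays so it runs anywhere (in real code these would be warpTransform.columns.0/.1/.2):

```swift
// Column-major storage: columns[j] is column j of the matrix.
let columns: [[Float]] = [
    [1, 0, 0],   // column 0
    [0, 1, 0],   // column 1
    [20, 10, 1], // column 2 (a translation by (20, 10) in this convention)
]

// Flatten into the row-major order cv::Mat expects:
// element (row i, col j) lives at columns[j][i].
var rowMajor = [Float]()
for i in 0..<3 {
    for j in 0..<3 {
        rowMajor.append(columns[j][i])
    }
}
print(rowMajor) // [1.0, 0.0, 20.0, 0.0, 1.0, 10.0, 0.0, 0.0, 1.0]
```

If anyone knows whether Vision's warpTransform also differs from OpenCV in coordinate convention (e.g. origin or y-axis direction), that would help too.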
Regardless of which image pairs I feed it, I'm unable to get a reasonable homography out of the Apple Vision request.
Thanks in advance,