In the above image, we can see points which are drawn on the image by an OpenCV algorithm.
I want to draw a UIView on those points, so that the user can crop the image.
I am not getting how I will access those points so that I can add the UIViews.
I tried to read the cv::Point values, but they differ from (are larger than) the view's coordinate width and height.
static cv::Mat drawSquares( cv::Mat& image, const std::vector<std::vector<cv::Point>>& squares )
{
    for( size_t i = 0; i < squares.size(); i++ )
    {
        // First vertex of this square and the number of vertices
        const cv::Point* p = &squares[i][0];
        int n = (int)squares[i].size();
        NSLog(@"Squares %d %d %d", n, p->x, p->y);

        // Draw the closed polygon in green; coordinates are in image pixels
        polylines(image, &p, &n, 1, true, cv::Scalar(0,255,0), 3, cv::LINE_AA);
    }
    return image;
}
In the above code, the drawSquares method draws the squares. I have logged the points' x and y coordinates with NSLog, but these values are not w.r.t. the device coordinate system.
Can someone help me with how this can be achieved, or suggest an alternative to my requirement?
Thanks
Actually, due to the image size, the coordinates are mapped in a different way.
For example, if the image size is within the boundary of the screen, there is no issue: you can directly use the cvPoint as a CGPoint.
But if the image size is 3000*2464, which is approximately the size of a camera-captured image, then you have to apply a formula.
Below is the approach I found on the internet; it helped me extract a CGPoint from a cvPoint when the image size is larger than our screen dimensions.
Get the scale factor of the image. Then, supposing _pointA is the cvPoint variable you have, you can extract the CGPoint using the formula below.
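The code for this formula was not preserved here, so the following is a minimal Swift sketch of the scale-factor mapping described above, under the assumption that the image is displayed full-screen. The names image and _pointA follow the answer's wording; everything else is hypothetical.

    import UIKit

    // Minimal sketch, assuming the image fills the screen; `image` and
    // `_pointA` follow the answer's wording, the rest is hypothetical.
    func cgPoint(fromCvPoint _pointA: CGPoint, in image: UIImage) -> CGPoint {
        // Scale factor of the image: image pixels per screen point
        let scaleFactorX = image.size.width / UIScreen.main.bounds.width
        let scaleFactorY = image.size.height / UIScreen.main.bounds.height

        // Divide the cv::Point coordinates by the scale factor to get
        // a CGPoint in the screen's coordinate system
        return CGPoint(x: _pointA.x / scaleFactorX, y: _pointA.y / scaleFactorY)
    }

A small UIView handle can then be placed at the returned point, which covers the cropping requirement from the question.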
This is in Swift 3. In the Swift class that you're returning the cv::Points to:

1. Get the x and y dimensions of the image you're recording from your camera AVCaptureSession.
2. Get the x and y dimensions of the UIView you're using to visualize the image.
3. Divide the UIView's x and y dimensions by the capture session's image dimensions in the X and Y; these are your scale factors.
4. Multiply the x and y coordinates of each point by the scaled x and y dimensions.

A sketch of these steps follows.
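The code snippets that originally accompanied these steps were lost, so here is a minimal Swift sketch of the same procedure; imageSize (the capture session's output dimensions), previewView, and cvPoint are assumed names, not from the original answer.

    import UIKit

    // Minimal sketch of the four steps above; `imageSize`, `previewView`,
    // and `cvPoint` are assumed names.
    func viewPoint(for cvPoint: CGPoint, imageSize: CGSize, previewView: UIView) -> CGPoint {
        // Steps 1-3: divide the view's dimensions by the capture image's dimensions
        let scaledX = previewView.bounds.width / imageSize.width
        let scaledY = previewView.bounds.height / imageSize.height

        // Step 4: multiply the point's coordinates by the scaled dimensions
        return CGPoint(x: cvPoint.x * scaledX, y: cvPoint.y * scaledY)
    }

Each returned point can then position a UIView marker over the preview.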