I've implemented a CIDetector in my app to detect rectangles in an image, but now how can I use the returned CGPoints to crop the image so that I can display it back?
For the perspective I've tried applying the CIPerspectiveCorrection filter, but couldn't get it to work.
I've searched around and found some clues, but couldn't find a solution in Swift.
How do I use the data provided by the CIDetector (the detected rectangle) to fix the perspective and crop my image?
For anyone who might not be familiar with what a CIDetectorTypeRectangle detector returns: it returns four CGPoints, named bottomLeft, bottomRight, topLeft, and topRight.
Here's what worked:
Wherever you detect your rectangle:
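A minimal sketch of the approach, assuming you start from a `CIImage` (the helper names `detectRectangle` and `correctPerspective` are my own, not part of any API):

```swift
import CoreImage

// Detect the first rectangle in the image.
// High accuracy is slower but more reliable for document-style shots.
func detectRectangle(in image: CIImage) -> CIRectangleFeature? {
    let detector = CIDetector(ofType: CIDetectorTypeRectangle,
                              context: nil,
                              options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
    return detector?.features(in: image).first as? CIRectangleFeature
}

// Use the detected corners to fix the perspective and crop in one step:
// CIPerspectiveCorrection takes the four corners as CIVectors and
// outputs the de-skewed rectangle, already cropped to its bounds.
func correctPerspective(of image: CIImage, using feature: CIRectangleFeature) -> CIImage {
    return image.applyingFilter("CIPerspectiveCorrection", parameters: [
        "inputTopLeft": CIVector(cgPoint: feature.topLeft),
        "inputTopRight": CIVector(cgPoint: feature.topRight),
        "inputBottomLeft": CIVector(cgPoint: feature.bottomLeft),
        "inputBottomRight": CIVector(cgPoint: feature.bottomRight)
    ])
}
```

Note that Core Image uses a bottom-left origin, so the corner points from CIDetector can be passed straight to the filter; you only need to convert coordinates if you draw an overlay in UIKit's top-left coordinate space.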