I've been bashing my face into this one for literally days now, and even though I constantly feel that I am right on the edge of revelation, I simply cannot achieve my goal.
Back in the conceptual phase of my design, I assumed it would be a trivial matter to grab an image from the iPhone's camera or library, scale it down to a specified height using the equivalent of UIImageView's Aspect Fill mode (entirely in code), and then crop off anything that did not fit within a passed CGRect.
Getting the original image from the camera or library was trivial. I am shocked at how difficult the other two steps have proved to be.
The attached image shows what I am trying to achieve. Would someone please be kind enough to hold my hand? Every code example I have found so far seems to squash the image, flip it upside down, look like crap, draw out of bounds, or otherwise just not work correctly.
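The two hard steps boil down to one piece of geometry: find the rect the image must be drawn into so that it covers the target rect (Aspect Fill), and let everything outside the target get cropped. A minimal sketch of that calculation, using a hypothetical helper name (`aspectFillRect` is not from any of the answers below):

```swift
import Foundation

// Hypothetical helper: returns the rect an image of `imageSize` should be drawn
// into so that it aspect-fills `target` — scaled up just enough to cover the
// target on both axes, centered, with the overflow hanging outside the target.
func aspectFillRect(for imageSize: CGSize, in target: CGRect) -> CGRect {
    // Aspect Fill uses the *larger* of the two scale factors (Aspect Fit uses the smaller).
    let scale = max(target.width / imageSize.width, target.height / imageSize.height)
    let size = CGSize(width: imageSize.width * scale, height: imageSize.height * scale)
    // Center the scaled image on the target; the overflow is what gets cropped.
    let origin = CGPoint(x: target.midX - size.width / 2,
                         y: target.midY - size.height / 2)
    return CGRect(origin: origin, size: size)
}
```

Drawing the image into a graphics context the size of `target`, using this rect, performs the crop for free: anything outside the context bounds is simply never rendered.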
I propose this one. Isn't she a beauty? ;)
Then you can render your view into an image (a screenshot) like this:
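The snippet for that answer isn't shown here, but a minimal sketch of the usual view-snapshot approach (assuming iOS 10+ for `UIGraphicsImageRenderer`) looks like this:

```swift
import UIKit

// Render any UIView into a UIImage. UIGraphicsImageRenderer sets up and tears
// down the bitmap context for you, at the screen's scale by default.
extension UIView {
    func snapshotImage() -> UIImage {
        let renderer = UIGraphicsImageRenderer(bounds: bounds)
        return renderer.image { context in
            // Draw the view's layer tree into the renderer's context.
            layer.render(in: context.cgContext)
        }
    }
}
```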
I modified Brad Larson's code. It will aspect-fill the image in the given rect.
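The modified code itself isn't reproduced here; a sketch of the same aspect-fill-then-crop idea in modern Swift (a hypothetical `aspectFilled(to:)` extension, not Brad Larson's actual method) would be:

```swift
import UIKit

extension UIImage {
    // Scale the image up until it covers `targetSize`, center it, and let the
    // renderer's bounds clip the overflow — i.e. Aspect Fill plus crop.
    func aspectFilled(to targetSize: CGSize) -> UIImage {
        let scale = max(targetSize.width / size.width,
                        targetSize.height / size.height)
        let scaledSize = CGSize(width: size.width * scale,
                                height: size.height * scale)
        let origin = CGPoint(x: (targetSize.width - scaledSize.width) / 2,
                             y: (targetSize.height - scaledSize.height) / 2)
        let renderer = UIGraphicsImageRenderer(size: targetSize)
        return renderer.image { _ in
            // draw(in:) honors the image's orientation, which avoids the
            // upside-down results the question complains about.
            draw(in: CGRect(origin: origin, size: scaledSize))
        }
    }
}
```

Because `UIImage.draw(in:)` respects `imageOrientation`, this route sidesteps the flipped-image problem that plagues approaches drawing the raw `CGImage` directly.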
Here is a Swift 3 version of Sam Wirch's guide to Swift, posted by William T.
Here you go. This one is perfect ;-)
EDIT: see below comment - "Does not work with certain images, fails with: CGContextSetInterpolationQuality: invalid context 0x0 error"
An older post contains code for a method to resize your UIImage. The relevant portion is as follows:
As far as cropping goes, I believe that if you alter the method to use a different size for the scaling than for the context, your resulting image should be clipped to the bounds of the context.
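A sketch of that technique, assuming the older `UIGraphicsBeginImageContext`-style API the linked post uses: the context is created at the final (cropped) size while the image is drawn at the larger scaled size, so everything outside the context bounds is clipped away. The function name and parameters here are illustrative, not the linked post's actual code:

```swift
import UIKit

// Draw `image` at `drawSize` into a context of the smaller `contextSize`;
// the context's bounds act as the crop rect.
func resizeAndCrop(_ image: UIImage,
                   drawSize: CGSize,
                   contextSize: CGSize) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(contextSize, false, image.scale)
    defer { UIGraphicsEndImageContext() }
    // Center the oversized draw rect in the smaller context so the crop
    // takes equal amounts off opposite edges.
    let origin = CGPoint(x: (contextSize.width - drawSize.width) / 2,
                         y: (contextSize.height - drawSize.height) / 2)
    image.draw(in: CGRect(origin: origin, size: drawSize))
    return UIGraphicsGetImageFromCurrentImageContext()
}
```

Note that `UIGraphicsGetImageFromCurrentImageContext` returns `nil` when there is no current bitmap context — which is the likely source of the "invalid context 0x0" error quoted above when such code runs off the main path or with a zero-sized rect.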