How can I fix a Core Image CILanczosScaleTransform border artifact?

2020-04-21 08:17发布

Question:

I want to implement an image downscaling algorithm for iOS. After reading that Core Image's CILanczosScaleTransform was a great fit for it, I implemented it the following way:

public func resizeImage(_ image: UIImage, targetWidth: CGFloat) -> UIImage? {
    assert(targetWidth > 0.0)

    let scale = Double(targetWidth) / Double(image.size.width)

    guard let ciImage = CIImage(image: image) else {
        fatalError("Couldn't create CIImage from image in input")
    }

    guard let filter = CIFilter(name: "CILanczosScaleTransform") else {
        fatalError("The filter CILanczosScaleTransform is unavailable on this device.")
    }

    filter.setValue(ciImage, forKey: kCIInputImageKey)
    filter.setValue(scale, forKey: kCIInputScaleKey)

    guard let result = filter.outputImage else {
        fatalError("No output on filter.")
    }

    // A CIContext is needed to render the CIImage. Creating one is expensive,
    // so in production consider creating it once and reusing it.
    let context = CIContext()

    guard let cgImage = context.createCGImage(result, from: result.extent) else {
        fatalError("Couldn't create CG Image")
    }

    return UIImage(cgImage: cgImage)
}

It works well, but I get a classic border artifact, probably caused by the pixel-neighborhood sampling the algorithm uses. I couldn't find anything about this in Apple's documentation. Is there a smarter solution than rendering a bigger image and then cropping away the border?

Answer 1:

You can use imageByClampingToExtent.

Calling this method ... creates an image of infinite extent by repeating pixel colors from the edges of the original image.

You could use it like this:

...
guard let ciImage = CIImage(image: image)?.clampedToExtent() else {
    fatalError("Couldn't create CIImage from image in input")
}

See more information here: Apple's documentation for clampedToExtent.
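One caveat: after clampedToExtent() the image's extent is infinite, so rendering result.extent as in the original function would no longer work; you have to crop the rendered output back to the scaled size yourself. A minimal sketch putting the pieces together (the helper name resizeImageWithoutBorderArtifact and the per-call CIContext are illustrative assumptions, not part of the original code) might look like this:

```swift
import UIKit
import CoreImage

// Illustrative helper: clamp the source, scale with Lanczos, then crop
// back to the expected target rect, since the clamped image has an
// infinite extent.
func resizeImageWithoutBorderArtifact(_ image: UIImage, targetWidth: CGFloat) -> UIImage? {
    assert(targetWidth > 0.0)

    guard let ciImage = CIImage(image: image) else { return nil }
    let scale = targetWidth / ciImage.extent.width

    // Clamp first so Lanczos samples repeated edge colors
    // instead of transparent pixels beyond the border.
    let clamped = ciImage.clampedToExtent()

    guard let filter = CIFilter(name: "CILanczosScaleTransform") else { return nil }
    filter.setValue(clamped, forKey: kCIInputImageKey)
    filter.setValue(scale, forKey: kCIInputScaleKey)

    guard let result = filter.outputImage else { return nil }

    // Crop to the scaled size of the original (unclamped) image,
    // because result.extent is now infinite.
    let targetRect = CGRect(x: 0, y: 0,
                            width: ciImage.extent.width * scale,
                            height: ciImage.extent.height * scale)

    let context = CIContext() // ideally created once and reused
    guard let cgImage = context.createCGImage(result, from: targetRect) else { return nil }
    return UIImage(cgImage: cgImage)
}
```

This keeps the border pixels clean without having to render an oversized image and manually trim it afterwards.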