
Apply Black and White Filter to UIImage

Posted 2020-05-19 06:26

Question:

I need to apply a black-and-white filter to a UIImage. I have a view that shows a photo taken by the user, but I have no idea how to transform the colors of the image.

- (void)viewDidLoad {
    [super viewDidLoad];
    self.navigationItem.title = NSLocalizedString(@"#Paint!", nil);
    imageView.image = image;
}

How can I do that?

Answer 1:

Objective-C

- (UIImage *)convertImageToGrayScale:(UIImage *)image {

    // Create image rectangle with current image width/height
    CGRect imageRect = CGRectMake(0, 0, image.size.width, image.size.height);

    // Grayscale color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();

    // Create bitmap content with current image size and grayscale colorspace
    CGContextRef context = CGBitmapContextCreate(nil, image.size.width, image.size.height, 8, 0, colorSpace, kCGImageAlphaNone);

    // Draw image into current context, with specified rectangle
    // using previously defined context (with grayscale colorspace)
    CGContextDrawImage(context, imageRect, [image CGImage]);

    // Create bitmap image info from pixel data in current context
    CGImageRef imageRef = CGBitmapContextCreateImage(context);

    // Create a new UIImage object
    UIImage *newImage = [UIImage imageWithCGImage:imageRef];

    // Release colorspace, context and bitmap information
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);
    CFRelease(imageRef);

    // Return the new grayscale image
    return newImage; 
}

Swift

func convertToGrayScale(image: UIImage) -> UIImage {

    // Create image rectangle with current image width/height
    let imageRect: CGRect = CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height)

    // Grayscale color space
    let colorSpace = CGColorSpaceCreateDeviceGray()
    let width = image.size.width
    let height = image.size.height
    let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue)

    // Create bitmap context with current image size and grayscale colorspace
    let context = CGContext(data: nil, width: Int(width), height: Int(height), bitsPerComponent: 8, bytesPerRow: 0, space: colorSpace, bitmapInfo: bitmapInfo.rawValue)

    // Draw image into the grayscale context, within the specified rectangle
    context?.draw(image.cgImage!, in: imageRect)

    // Create a bitmap image from the pixel data in the current context
    let imageRef = context!.makeImage()

    // Create a new UIImage object
    let newImage = UIImage(cgImage: imageRef!)

    return newImage
}
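
For context, a hypothetical usage sketch in Swift, mirroring the question's viewDidLoad (image and imageView are assumed to be the asker's existing properties):

override func viewDidLoad() {
    super.viewDidLoad()
    navigationItem.title = NSLocalizedString("#Paint!", comment: "")

    // image and imageView are assumed to exist, as in the question
    imageView.image = convertToGrayScale(image: image)
}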


Answer 2:

Judging by the ciimage tag, perhaps the OP was thinking (correctly) that Core Image would provide a quick and easy way to do this?

Here's that, both in ObjC:

- (UIImage *)grayscaleImage:(UIImage *)image {
    CIImage *ciImage = [[CIImage alloc] initWithImage:image];
    CIImage *grayscale = [ciImage imageByApplyingFilter:@"CIColorControls"
        withInputParameters: @{kCIInputSaturationKey : @0.0}];
    return [UIImage imageWithCIImage:grayscale];
}

and Swift:

func grayscaleImage(image: UIImage) -> UIImage {
    let ciImage = CIImage(image: image)!
    let grayscale = ciImage.imageByApplyingFilter("CIColorControls",
        withInputParameters: [ kCIInputSaturationKey: 0.0 ])
    return UIImage(CIImage: grayscale)
}

CIColorControls is just one of several built-in Core Image filters that can convert an image to grayscale. CIPhotoEffectMono, CIPhotoEffectNoir, and CIPhotoEffectTonal are different tone-mapping presets (each takes no parameters), and you can do your own tone mapping with filters like CIColorMap.

Unlike alternatives that involve creating and drawing into one's own CGBitmapContext, these preserve the size/scale and alpha of the original image without extra work.
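
Since those presets take no input parameters, swapping one in is a one-line change. Here is a minimal sketch in current Swift syntax (illustrative only; the helper name noirImage is not from the original answer):

import UIKit
import CoreImage

// Hypothetical helper: same pattern as above, but using the parameterless CIPhotoEffectNoir preset.
func noirImage(image: UIImage) -> UIImage? {
    guard let ciImage = CIImage(image: image) else { return nil }
    let filtered = ciImage.applyingFilter("CIPhotoEffectNoir", parameters: [:])
    return UIImage(ciImage: filtered)
}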



Answer 3:

While PiratM's solution works, you lose the alpha channel. To preserve it, you need a few extra steps.

+(UIImage *)convertImageToGrayScale:(UIImage *)image {
    // Create image rectangle with current image width/height
    CGRect imageRect = CGRectMake(0, 0, image.size.width, image.size.height);

    // Grayscale color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();

    // Create bitmap content with current image size and grayscale colorspace
    CGContextRef context = CGBitmapContextCreate(nil, image.size.width, image.size.height, 8, 0, colorSpace, kCGImageAlphaNone);

    // Draw image into current context, with specified rectangle
    // using previously defined context (with grayscale colorspace)
    CGContextDrawImage(context, imageRect, [image CGImage]);

    // Create bitmap image info from pixel data in current context
    CGImageRef imageRef = CGBitmapContextCreateImage(context);

    // Release colorspace and the grayscale context
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);

    // Create an alpha-only context and draw the original image into it to capture its alpha channel
    context = CGBitmapContextCreate(nil, image.size.width, image.size.height, 8, 0, nil, kCGImageAlphaOnly);
    CGContextDrawImage(context, imageRect, [image CGImage]);
    CGImageRef mask = CGBitmapContextCreateImage(context);
    CGContextRelease(context);

    // Mask the grayscale image with the original alpha channel and create a new UIImage object
    CGImageRef maskedImageRef = CGImageCreateWithMask(imageRef, mask);
    UIImage *newImage = [UIImage imageWithCGImage:maskedImageRef];
    CGImageRelease(imageRef);
    CGImageRelease(mask);
    CGImageRelease(maskedImageRef);

    // Return the new grayscale image
    return newImage;
}


Answer 4:

@rickster's version looks good and handles the alpha channel, but a UIImageView without an .AspectFit or .Fill contentMode can't display the resulting CIImage-backed UIImage. Therefore the UIImage has to be created from a CGImage. This version, implemented as a Swift UIImage extension, keeps the current scale and offers some optional input parameters:

import CoreImage

extension UIImage
{
    /// Applies grayscale with CIColorControls by settings saturation to 0.0.
    /// - Parameter brightness: Default is 0.0.
    /// - Parameter contrast: Default is 1.0.
    /// - Returns: The grayscale image of self if available.
    func grayscaleImage(brightness: Double = 0.0, contrast: Double = 1.0) -> UIImage?
    {
        if let ciImage = CoreImage.CIImage(image: self, options: nil)
        {
            let paramsColor: [String : AnyObject] = [ kCIInputBrightnessKey: NSNumber(double: brightness),
                                                      kCIInputContrastKey:   NSNumber(double: contrast),
                                                      kCIInputSaturationKey: NSNumber(double: 0.0) ]
            let grayscale = ciImage.imageByApplyingFilter("CIColorControls", withInputParameters: paramsColor)

            let processedCGImage = CIContext().createCGImage(grayscale, fromRect: grayscale.extent)
            return UIImage(CGImage: processedCGImage, scale: self.scale, orientation: self.imageOrientation)
        }
        return nil
    }
}

The longer but faster way is a modified version of @ChrisStillwell's answer, implemented as a UIImage extension in Swift that preserves the alpha channel and the current scale:

extension UIImage
{
    /// Create a grayscale image with alpha channel. Is 5 times faster than grayscaleImage().
    /// - Returns: The grayscale image of self if available.
    func convertToGrayScale() -> UIImage?
    {
        // Create image rectangle with current image width/height * scale
        let pixelSize = CGSize(width: self.size.width * self.scale, height: self.size.height * self.scale)
        let imageRect = CGRect(origin: CGPointZero, size: pixelSize)
        // Grayscale color space
        if let colorSpace: CGColorSpaceRef = CGColorSpaceCreateDeviceGray()
        {
            // Create bitmap content with current image size and grayscale colorspace
            let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.None.rawValue)
            if let context: CGContextRef = CGBitmapContextCreate(nil, Int(pixelSize.width), Int(pixelSize.height), 8, 0, colorSpace, bitmapInfo.rawValue)
            {
                // Draw image into current context, with specified rectangle
                // using previously defined context (with grayscale colorspace)
                CGContextDrawImage(context, imageRect, self.CGImage)
                // Create bitmap image info from pixel data in current context
                if let imageRef: CGImageRef = CGBitmapContextCreateImage(context)
                {
                    let bitmapInfoAlphaOnly = CGBitmapInfo(rawValue: CGImageAlphaInfo.Only.rawValue)
                    if let contextAlpha = CGBitmapContextCreate(nil, Int(pixelSize.width), Int(pixelSize.height), 8, 0, nil, bitmapInfoAlphaOnly.rawValue)
                    {
                        CGContextDrawImage(contextAlpha, imageRect, self.CGImage)
                        if let mask: CGImageRef = CGBitmapContextCreateImage(contextAlpha)
                        {
                            // Create a new UIImage object
                            if let newCGImage = CGImageCreateWithMask(imageRef, mask)
                            {
                                // Return the new grayscale image
                                return UIImage(CGImage: newCGImage, scale: self.scale, orientation: self.imageOrientation)
                            }
                        }
                    }
                }
            }

        }
        // A required variable was unexpectedly nil
        return nil
    }
}


Answer 5:

In Swift 5, using Core Image to do the filtering (thanks @rickster):

extension UIImage {
    var grayscaled: UIImage? {
        let ciImage = CIImage(image: self)
        let grayscale = ciImage?.applyingFilter("CIColorControls",
                                                parameters: [kCIInputSaturationKey: 0.0])
        if let gray = grayscale {
            return UIImage(ciImage: gray)
        } else {
            return nil
        }
    }
}


Answer 6:

@FBente's convertToGrayScale() updated to Swift 5 (the Core Graphics variant that preserves the alpha channel):

extension UIImage
{
    /// Create a grayscale image with alpha channel. Is 5 times faster than grayscaleImage().
    /// - Returns: The grayscale image of self if available.
    var grayScaled: UIImage?
    {
        // Create image rectangle with current image width/height * scale
        let pixelSize = CGSize(width: self.size.width * self.scale, height: self.size.height * self.scale)
        let imageRect = CGRect(origin: CGPoint.zero, size: pixelSize)

        // Grayscale color space
        let colorSpace: CGColorSpace = CGColorSpaceCreateDeviceGray()

        // Create bitmap content with current image size and grayscale colorspace
        let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue)
        if let context: CGContext = CGContext(data: nil, width: Int(pixelSize.width), height: Int(pixelSize.height), bitsPerComponent: 8, bytesPerRow: 0, space: colorSpace, bitmapInfo: bitmapInfo.rawValue)
        {
            // Draw image into current context, with specified rectangle
            // using previously defined context (with grayscale colorspace)
            guard let cg = self.cgImage else {
                return nil
            }
            context.draw(cg, in: imageRect)

            // Create bitmap image info from pixel data in current context
            if let imageRef: CGImage = context.makeImage() {
                let bitmapInfoAlphaOnly = CGBitmapInfo(rawValue: CGImageAlphaInfo.alphaOnly.rawValue)
                guard let context = CGContext(data: nil, width: Int(pixelSize.width), height: Int(pixelSize.height), bitsPerComponent: 8, bytesPerRow: 0, space: colorSpace, bitmapInfo: bitmapInfoAlphaOnly.rawValue) else {
                    return nil
                }
                context.draw(cg, in: imageRect)
                if let mask: CGImage = context.makeImage() {
                    // Create a new UIImage object
                    if let newCGImage = imageRef.masking(mask) {
                        // Return the new grayscale image
                        return UIImage(cgImage: newCGImage, scale: self.scale, orientation: self.imageOrientation)
                    }
                }
            }
        }

        // A required variable was unexpectedly nil
        return nil
    }
}


Answer 7:

Swift 3.0 version:

extension UIImage {
    func convertedToGrayImage() -> UIImage? {
        let width = self.size.width
        let height = self.size.height
        let rect = CGRect(x: 0.0, y: 0.0, width: width, height: height)
        let colorSpace = CGColorSpaceCreateDeviceGray()
        let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue)

        guard let context = CGContext(data: nil, width: Int(width), height: Int(height), bitsPerComponent: 8, bytesPerRow: 0, space: colorSpace, bitmapInfo: bitmapInfo.rawValue) else {
            return nil
        }
        guard let cgImage = cgImage else { return nil }

        context.draw(cgImage, in: rect)
        guard let imageRef = context.makeImage() else { return nil }
        let newImage = UIImage(cgImage: imageRef.copy()!)

        return newImage
    }
}


Answer 8:

Swift 3 + GPUImage:

import GPUImage

extension UIImage {
    func blackWhite() -> UIImage? {
        guard let image: GPUImagePicture = GPUImagePicture(image: self) else {
            print("unable to create GPUImagePicture")
            return nil
        }
        let filter = GPUImageAverageLuminanceThresholdFilter()
        image.addTarget(filter)
        filter.useNextFrameForImageCapture()
        image.processImage()
        guard let processedImage: UIImage = filter.imageFromCurrentFramebuffer(with: UIImageOrientation.up) else {
            print("unable to obtain UIImage from filter")
            return nil
        }
        return processedImage
    }
}


Answer 9:

Swift 4 Solution

extension UIImage {            
    var withGrayscale: UIImage {    
        guard let ciImage = CIImage(image: self, options: nil) else { return self }    
        let paramsColor: [String: AnyObject] = [kCIInputBrightnessKey: NSNumber(value: 0.0), kCIInputContrastKey: NSNumber(value: 1.0), kCIInputSaturationKey: NSNumber(value: 0.0)]
        let grayscale = ciImage.applyingFilter("CIColorControls", parameters: paramsColor)    
        guard let processedCGImage = CIContext().createCGImage(grayscale, from: grayscale.extent) else { return self }
        return UIImage(cgImage: processedCGImage, scale: scale, orientation: imageOrientation)
    }
}


Answer 10:

Here is the Swift 1.2 version:

/// Convert the background image to grayscale
///
/// param: flag   if true, the image will be rendered in grayscale
func convertBackgroundColorToGrayScale(flag: Bool) {
    if flag == true {
        let imageRect = CGRect(x: 0, y: 0, width: self.myImage.frame.width, height: self.myImage.frame.height)

        let colorSpace = CGColorSpaceCreateDeviceGray()
        let width = imageRect.width
        let height = imageRect.height

        let bitmapInfo = CGBitmapInfo(CGImageAlphaInfo.None.rawValue)
        var context = CGBitmapContextCreate(nil, Int(width), Int(height), 8, 0, colorSpace, bitmapInfo)
        let image = self.musicBackgroundColor.image!.CGImage

        CGContextDrawImage(context, imageRect, image)
        let imageRef = CGBitmapContextCreateImage(context)

        let newImage = UIImage(CGImage: CGImageCreateCopy(imageRef))

        self.myImage.image = newImage
    } else {
        // do something else
    }
}


Answer 11:

In Swift 3.0:

func convertImageToGrayScale(image: UIImage) -> UIImage {
    // Create image rectangle with current image width/height
    let imageRect = CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height)
    // Grayscale color space
    let colorSpace = CGColorSpaceCreateDeviceGray()
    // Create bitmap content with current image size and grayscale colorspace
    let context = CGContext(data: nil, width: Int(image.size.width), height: Int(image.size.height), bitsPerComponent: 8, bytesPerRow: 0, space: colorSpace, bitmapInfo: CGImageAlphaInfo.none.rawValue)
    // Draw image into current context, with specified rectangle
    // using previously defined context (with grayscale colorspace)
    context?.draw(image.cgImage!, in: imageRect)

    // Create bitmap image info from pixel data in current context
    let imageRef = context!.makeImage()
    // Create a new UIImage object
    let newImage = UIImage(cgImage: imageRef!)
    // (No manual release is needed here: Core Foundation objects are memory-managed in Swift)

    // Return the new grayscale image
    return newImage
}


Answer 12:

This Objective-C code works:

CIImage *ciimage = ...;
CIFilter *filter = [CIFilter filterWithName:@"CIColorControls"
                        withInputParameters:@{kCIInputSaturationKey : @0.0,
                                              kCIInputContrastKey   : @10.0,
                                              kCIInputImageKey      : ciimage}];
CIImage *grayscale = [filter outputImage];

The kCIInputContrastKey : @10.0 setting is what pushes the result toward an almost pure black-and-white image rather than plain grayscale.
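
For completeness, here is a minimal Swift sketch of the same idea (illustrative, not from the original answer): it applies CIColorControls with the same saturation and contrast values, then renders through a CIContext so the returned UIImage is backed by a CGImage and displays correctly in a plain UIImageView.

import UIKit
import CoreImage

// Hypothetical helper: desaturate, boost contrast, then render to a CGImage-backed UIImage.
func almostBlackAndWhite(_ image: UIImage) -> UIImage? {
    guard let ciImage = CIImage(image: image) else { return nil }
    let filtered = ciImage.applyingFilter("CIColorControls",
                                          parameters: [kCIInputSaturationKey: 0.0,
                                                       kCIInputContrastKey: 10.0])
    let context = CIContext()
    guard let cgImage = context.createCGImage(filtered, from: filtered.extent) else { return nil }
    return UIImage(cgImage: cgImage, scale: image.scale, orientation: image.imageOrientation)
}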