UIImage becomes fuzzy when it is scaled. Why? (iOS 5.0)

Posted 2019-06-23 11:06

A UIImage always becomes blurry when it is scaled. What can I do to keep it sharp?

- (UIImage *)rescaleImageToSize:(CGSize)size {
    CGRect rect = CGRectMake(0.0, 0.0, size.width, size.height);
    UIGraphicsBeginImageContext(rect.size);
    [self drawInRect:rect];  // scales image to rect
    UIImage *resImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return resImage;
}
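For context, the method above is written as if it belongs to a UIImage category (it calls drawInRect: on self). A minimal sketch of such a declaration and a call site; the category name here is hypothetical:

@interface UIImage (Rescale)   // hypothetical category name
- (UIImage *)rescaleImageToSize:(CGSize)size;
@end

// Example call site: produce a 100x100 version of a bundled image.
UIImage *original = [UIImage imageNamed:@"photo"];
UIImage *thumbnail = [original rescaleImageToSize:CGSizeMake(100.0f, 100.0f)];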

Answer 1:

Rounding

First, make sure you round your sizes before scaling. drawInRect: can blur an otherwise usable image in this situation. Round the dimensions to integral values:

size.width = truncf(size.width);
size.height = truncf(size.height);

For some tasks, you may want to round down (floorf) or round up (ceilf) instead.
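If this comes up in several places, a small helper keeps the rounding policy in one spot. A sketch, assuming the question's rescaleImageToSize: method; the function name is made up here and it rounds down:

// Snap a CGSize to whole pixels before scaling; substitute ceilf or
// roundf if a different rounding policy suits the task better.
static CGSize MYIntegralSize(CGSize size)
{
    return CGSizeMake(floorf(size.width), floorf(size.height));
}

// e.g. UIImage *scaled = [image rescaleImageToSize:MYIntegralSize(targetSize)];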

CILanczosScaleTransform is not available

Next, disregard my earlier suggestion of CILanczosScaleTransform. Although parts of Core Image are available on iOS 5.0, Lanczos scaling is not. If it becomes available in a later release, use it. For those working on Mac OS, it is available there, so use it.
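Where Lanczos scaling is available (on the Mac, or once the filter appears on iOS), a minimal sketch might look like the following; the 0.5 scale factor is just an example value:

#import <CoreImage/CoreImage.h>

// Wrap the source bitmap in a CIImage and run it through the Lanczos filter.
CIImage *input = [CIImage imageWithCGImage:sourceImage.CGImage];

CIFilter *lanczos = [CIFilter filterWithName:@"CILanczosScaleTransform"];
[lanczos setValue:input forKey:kCIInputImageKey];
[lanczos setValue:[NSNumber numberWithFloat:0.5f] forKey:kCIInputScaleKey];
[lanczos setValue:[NSNumber numberWithFloat:1.0f] forKey:kCIInputAspectRatioKey];

// Render the filter output back into a UIImage.
CIImage *output = [lanczos valueForKey:kCIOutputImageKey];
CIContext *ciContext = [CIContext contextWithOptions:nil];
CGImageRef scaledRef = [ciContext createCGImage:output fromRect:[output extent]];
UIImage *scaled = [UIImage imageWithCGImage:scaledRef];
CGImageRelease(scaledRef);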

vImage scaling

However, vImage does provide a high-quality scaling algorithm. The pictures below show how the method using it (vImageScaledImage) compares with drawing to a context under the various interpolation options. Note also how those options behave differently at different scale factors.

In this picture, it preserves the most detail in the routes:

In this photo, compare the leaf in the lower left:

In this photo, compare the texture in the lower right:

Do not use it for pixel art; it creates odd scaling artifacts:

Although on some images it produces an interesting rounding effect:
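For reference, the context-based variants in those comparisons differ only in the interpolation quality set on the bitmap context before drawing. A minimal sketch of that knob, where image and size stand for the source UIImage and the rounded target size (the vImage path is the separate method listed further down):

CGRect rect = CGRectMake(0.0f, 0.0f, size.width, size.height);
UIGraphicsBeginImageContext(rect.size);
// Ask Core Graphics for its best interpolation before drawing.
CGContextSetInterpolationQuality(UIGraphicsGetCurrentContext(), kCGInterpolationHigh);
[image drawInRect:rect];
UIImage *scaled = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();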

Performance

Unsurprisingly, kCGInterpolationHigh is the slowest of the standard interpolation options. vImageScaledImage, as implemented here, is slower still: scaling a fractal image down to half its original size took 110% of the kCGInterpolationHigh time, and scaling down to a quarter took 340%.

You might think otherwise if you run it in the Simulator; there, it can be much faster than kCGInterpolationHigh. Presumably vImage's multi-core optimizations give it a relative advantage on the desktop.
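A rough way to reproduce such a timing comparison, assuming source is the UIImage being scaled and vImageScaledImage:withSize: is the method listed below (CACurrentMediaTime comes from QuartzCore):

#import <QuartzCore/QuartzCore.h>   // CACurrentMediaTime()

CGSize half = CGSizeMake(truncf(source.size.width / 2.0f),
                         truncf(source.size.height / 2.0f));

// Time the Core Graphics path with high-quality interpolation.
CFTimeInterval start = CACurrentMediaTime();
UIGraphicsBeginImageContext(half);
CGContextSetInterpolationQuality(UIGraphicsGetCurrentContext(), kCGInterpolationHigh);
[source drawInRect:CGRectMake(0.0f, 0.0f, half.width, half.height)];
UIImage *cgScaled = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CFTimeInterval cgTime = CACurrentMediaTime() - start;

// Time the vImage path from this answer.
start = CACurrentMediaTime();
UIImage *vScaled = [self vImageScaledImage:source withSize:half];
CFTimeInterval vTime = CACurrentMediaTime() - start;

NSLog(@"kCGInterpolationHigh: %.3fs, vImage: %.3fs (%.0f%% of the CG time)",
      cgTime, vTime, 100.0 * vTime / cgTime);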

#import <Accelerate/Accelerate.h>  // vImage_Buffer and vImageScale_ARGB8888

// Method: vImageScaledImage:(UIImage*) sourceImage withSize:(CGSize) destSize
// Returns even better scaling than drawing to a context with kCGInterpolationHigh.
// This employs the vImage routines in Accelerate.framework.
// For more information about vImage, see https://developer.apple.com/library/mac/#documentation/performance/Conceptual/vImage/Introduction/Introduction.html#//apple_ref/doc/uid/TP30001001-CH201-TPXREF101
// Large quantities of memory are manually allocated and (hopefully) freed here.  Test your application for leaks before and after using this method.
- (UIImage*) vImageScaledImage:(UIImage*) sourceImage withSize:(CGSize) destSize
{
    UIImage *destImage = nil;

    if (sourceImage)
    {
        // First, convert the UIImage to an array of bytes, in the format expected by vImage.
        // Thanks: http://stackoverflow.com/a/1262893/1318452
        CGImageRef sourceRef = [sourceImage CGImage];
        NSUInteger sourceWidth = CGImageGetWidth(sourceRef);
        NSUInteger sourceHeight = CGImageGetHeight(sourceRef);
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        unsigned char *sourceData = (unsigned char*) calloc(sourceHeight * sourceWidth * 4, sizeof(unsigned char));
        NSUInteger bytesPerPixel = 4;
        NSUInteger sourceBytesPerRow = bytesPerPixel * sourceWidth;
        NSUInteger bitsPerComponent = 8;
        CGContextRef context = CGBitmapContextCreate(sourceData, sourceWidth, sourceHeight,
                                                     bitsPerComponent, sourceBytesPerRow, colorSpace,
                                                     kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Big);
        CGContextDrawImage(context, CGRectMake(0, 0, sourceWidth, sourceHeight), sourceRef);
        CGContextRelease(context);

        // We now have the source data.  Construct a pixel array
        NSUInteger destWidth = (NSUInteger) destSize.width;
        NSUInteger destHeight = (NSUInteger) destSize.height;
        NSUInteger destBytesPerRow = bytesPerPixel * destWidth;
        unsigned char *destData = (unsigned char*) calloc(destHeight * destWidth * 4, sizeof(unsigned char));

        // Now create vImage structures for the two pixel arrays.
        // Thanks: https://github.com/dhoerl/PhotoScrollerNetwork
        vImage_Buffer src = {
            .data = sourceData,
            .height = sourceHeight,
            .width = sourceWidth,
            .rowBytes = sourceBytesPerRow
        };

        vImage_Buffer dest = {
            .data = destData,
            .height = destHeight,
            .width = destWidth,
            .rowBytes = destBytesPerRow
        };

        // Carry out the scaling.
        vImage_Error err = vImageScale_ARGB8888 (
                                                 &src,
                                                 &dest,
                                                 NULL,
                                                 kvImageHighQualityResampling 
                                                 );

        // The source bytes are no longer needed.
        free(sourceData);

        // Convert the destination bytes to a UIImage.
        CGContextRef destContext = CGBitmapContextCreate(destData, destWidth, destHeight,
                                                         bitsPerComponent, destBytesPerRow, colorSpace,
                                                         kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Big);
        CGImageRef destRef = CGBitmapContextCreateImage(destContext);

        // Store the result.
        destImage = [UIImage imageWithCGImage:destRef];

        // Free up the remaining memory.
        CGImageRelease(destRef);

        CGColorSpaceRelease(colorSpace);
        CGContextRelease(destContext);

        // The destination bytes are no longer needed.
        free(destData);

        if (err != kvImageNoError)
        {
            NSString *errorReason = [NSString stringWithFormat:@"vImageScale returned error code %ld", (long)err];
            NSDictionary *errorInfo = [NSDictionary dictionaryWithObjectsAndKeys:
                                       sourceImage, @"sourceImage", 
                                       [NSValue valueWithCGSize:destSize], @"destSize",
                                       nil];

            NSException *exception = [NSException exceptionWithName:@"HighQualityImageScalingFailureException" reason:errorReason userInfo:errorInfo];

            @throw exception;
        }
    }
    return destImage;
}
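Since the method throws an NSException when vImageScale_ARGB8888 reports an error, a call site might guard for that. A minimal sketch, using a half-size target as an example:

UIImage *scaled = nil;
@try {
    CGSize target = CGSizeMake(truncf(sourceImage.size.width / 2.0f),
                               truncf(sourceImage.size.height / 2.0f));
    scaled = [self vImageScaledImage:sourceImage withSize:target];
}
@catch (NSException *exception) {
    NSLog(@"Scaling failed: %@", [exception reason]);
}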


Source: UIImage become Fuzzy when it was scaled.Why?(iOS 5.0)