Creating a retina screenshot programmatically

Posted 2019-05-14 15:00

I am trying to take a retina screenshot programmatically and I have tried every approach found online, but I was not able to get the screenshot to be retina.

I understand the following private API:

UIGetScreenImage();

cannot be used, as Apple will reject your app. However, this function returns exactly what I need (a 640x960 screenshot of the screen).

I have tried the method below on my iPhone 4, as well as in the iPhone (Retina) simulator, but the resulting image is always 320x480.

- (UIImage *)captureView
{
    AppDelegate *appdelegate = (AppDelegate *)[[UIApplication sharedApplication] delegate];

    if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)])
        UIGraphicsBeginImageContextWithOptions(appdelegate.window.bounds.size, NO, 0.0);
    else
        UIGraphicsBeginImageContext(appdelegate.window.bounds.size);

    [appdelegate.window.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    NSLog(@"SIZE: %@", NSStringFromCGSize(image.size));
    NSLog(@"scale: %f", [UIScreen mainScreen].scale);

    return image;
}

I have also tried the Apple-recommended approach:

- (UIImage *)screenshot
{
    // Create a graphics context with the target size
    // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
    // On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
    CGSize imageSize = [[UIScreen mainScreen] bounds].size;
    if (NULL != UIGraphicsBeginImageContextWithOptions)
        UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
    else
        UIGraphicsBeginImageContext(imageSize);

    CGContextRef context = UIGraphicsGetCurrentContext();

    // Iterate over every window from back to front
    for (UIWindow *window in [[UIApplication sharedApplication] windows])
    {
        if (![window respondsToSelector:@selector(screen)] || [window screen] == [UIScreen mainScreen])
        {
            // -renderInContext: renders in the coordinate space of the layer,
            // so we must first apply the layer's geometry to the graphics context
            CGContextSaveGState(context);
            // Center the context around the window's anchor point
            CGContextTranslateCTM(context, [window center].x, [window center].y);
            // Apply the window's transform about the anchor point
            CGContextConcatCTM(context, [window transform]);
            // Offset by the portion of the bounds left of and above the anchor point
            CGContextTranslateCTM(context,
                                  -[window bounds].size.width * [[window layer] anchorPoint].x,
                                  -[window bounds].size.height * [[window layer] anchorPoint].y);

            // Render the layer hierarchy to the current context
            [[window layer] renderInContext:context];

            // Restore the context
            CGContextRestoreGState(context);
        }
    }

    // Retrieve the screenshot image
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();

    UIGraphicsEndImageContext();

    NSLog(@"Size: %@", NSStringFromCGSize(image.size));

    return image;
}

But it also returns a non-retina image: 2012-12-23 19:57:45.205 PostCard[3351:707] Size: {320, 480}

Is there something obvious I'm missing? How come these methods, which are supposed to take retina screenshots, return non-retina screenshots? Thanks in advance!

1 Answer

你好瞎i · 2019-05-14 15:38

I don't see anything wrong in your code. Apart from image.size, have you tried logging image.scale? Is it 1 or 2? If it's 2, it is actually a retina image.
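For example, a quick check along these lines (just a sketch, assuming you call the captureView method from your question) would tell you whether the pixels are really there:

// Sketch: compare the logical size (points) with the backing pixel dimensions.
UIImage *image = [self captureView];                     // the method from your question

NSLog(@"size:   %@", NSStringFromCGSize(image.size));    // e.g. {320, 480} in points
NSLog(@"scale:  %f", image.scale);                       // 2.0 on retina devices
NSLog(@"pixels: %zu x %zu",
      CGImageGetWidth(image.CGImage),
      CGImageGetHeight(image.CGImage));                  // e.g. 640 x 960 actual pixels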

UIImage.scale represents the scale factor of the image. So an image whose UIImage.size is 320×480 and whose UIImage.scale is 2 has an actual pixel size of 640×960. From Apple's documentation:

If you multiply the logical size of the image (stored in the size property) by the value in this property, you get the dimensions of the image in pixels.

It's the same idea as when you load an image into a UIImage with the @2x modifier. For example:

a.png (100×80)      => size=100×80 scale=1
b@2x.png (200×160)  => size=100×80 scale=2
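The same applies to your screenshot: if you actually need those pixels on disk, writing the image out preserves the full resolution. A rough sketch (the Documents directory and the file name here are just for illustration):

// Sketch: persisting the screenshot keeps every pixel (640x960 on an iPhone 4),
// even though image.size reports points (320x480).
NSData *pngData = UIImagePNGRepresentation(image);
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *path = [[paths objectAtIndex:0]
                  stringByAppendingPathComponent:@"screenshot.png"];   // hypothetical file name
[pngData writeToFile:path atomically:YES];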