iOS GPUImage: bad result of image processing after cropping

Posted 2020-03-06 21:00

I'm trying to prepare an image for OCR and I'm using GPUImage to do it. The code works fine until I crop the image! After cropping I get a bad result...

Crop area: https://www.dropbox.com/s/e3mlp25sl6m55yk/IMG_0709.PNG

Bad Result=( https://www.dropbox.com/s/wtxw7li6paltx21/IMG_0710.PNG

+ (UIImage *)doBinarize:(UIImage *)sourceImage
{
    // First, convert the image to grayscale using a Core Graphics luminosity blend.
    UIImage *grayScaledImg = [self grayImage:sourceImage];

    GPUImagePicture *imageSource = [[GPUImagePicture alloc] initWithImage:grayScaledImg];

    // Adaptive thresholding binarizes the grayscale image for OCR.
    GPUImageAdaptiveThresholdFilter *stillImageFilter = [[GPUImageAdaptiveThresholdFilter alloc] init];
    stillImageFilter.blurRadiusInPixels = 8.0;
    [stillImageFilter prepareForImageCapture];

    [imageSource addTarget:stillImageFilter];
    [imageSource processImage];
    UIImage *retImage = [stillImageFilter imageFromCurrentlyProcessedOutput];

    [imageSource removeAllTargets];

    return retImage;
}


+ (UIImage *)grayImage:(UIImage *)inputImage
{
    // Create a graphic context.
    UIGraphicsBeginImageContextWithOptions(inputImage.size, NO, 1.0);
    CGRect imageRect = CGRectMake(0, 0, inputImage.size.width, inputImage.size.height);

    // Draw the image with the luminosity blend mode.
    // On top of a white background, this will give a black and white image.
    [inputImage drawInRect:imageRect blendMode:kCGBlendModeLuminosity alpha:1.0];

    // Get the resulting image.
    UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return outputImage;
}

UPDATE:

From the answer below: "In the meantime, when you crop your images, do so to the nearest multiple of 8 pixels in width and you should see the correct result."

Thank you @Brad Larson! I resized the image width to the nearest multiple of 8 and got the result I wanted:

- (UIImage *)imageWithMultiple8ImageWidth:(UIImage *)image
{
    // Round the width up to the next multiple of 8 and redraw the image at
    // that width (this stretches it horizontally by at most 7 pixels).
    float fixedWidth = next8(image.size.width);

    CGSize newSize = CGSizeMake(fixedWidth, image.size.height);
    UIGraphicsBeginImageContext(newSize);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return newImage;
}

float next8(float n) {
    int bits = (int)n & 7; // remainder modulo 8, i.e. how far past the previous multiple of 8 we are
    if (bits == 0)
        return n;
    return n + (8 - bits);
}

Tags: ios ocr gpuimage
1 Answer

神经病院院长 · 2020-03-06 21:43

Before I even get to the core issue here, I should point out that the GPUImageAdaptiveThresholdFilter already does a conversion to grayscale as a first step, so your -grayImage: code in the above is unnecessary and will only slow things down. You can remove all that code and just pass your input image directly to the adaptive threshold filter.
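
As a minimal sketch (reusing only the method names and GPUImage calls already shown in the question), the simplified version could look like this:

+ (UIImage *)doBinarize:(UIImage *)sourceImage
{
    // Pass the source image straight in; the adaptive threshold filter
    // performs its own luminance (grayscale) reduction internally.
    GPUImagePicture *imageSource = [[GPUImagePicture alloc] initWithImage:sourceImage];

    GPUImageAdaptiveThresholdFilter *stillImageFilter = [[GPUImageAdaptiveThresholdFilter alloc] init];
    stillImageFilter.blurRadiusInPixels = 8.0;
    [stillImageFilter prepareForImageCapture];

    [imageSource addTarget:stillImageFilter];
    [imageSource processImage];
    UIImage *retImage = [stillImageFilter imageFromCurrentlyProcessedOutput];

    [imageSource removeAllTargets];
    return retImage;
}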

What I believe is the problem here is a recent set of changes to the way that GPUImagePicture pulls in image data. It appears that images whose width is not a multiple of 8 pixels end up looking like the above when imported. Some fixes for this have been proposed, but if the latest code from the repository (not CocoaPods, which is often out of date relative to the GitHub repository) still does this, more work may need to be done.

In the meantime, when you crop your images, do so to the nearest multiple of 8 pixels in width and you should see the correct result.
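
For illustration only, a hedged sketch of cropping to a multiple-of-8 width with Core Graphics; the method name and the assumption that cropRect is expressed in the pixel coordinates of image.CGImage are mine, not something from the original question:

- (UIImage *)croppedImage:(UIImage *)image inRect:(CGRect)cropRect
{
    // Round the crop width down to the nearest multiple of 8 so the result
    // stays inside the source image and imports cleanly into GPUImagePicture.
    // cropRect is assumed to be in the pixel coordinates of image.CGImage.
    cropRect.size.width = floorf(cropRect.size.width / 8.0f) * 8.0f;

    CGImageRef croppedRef = CGImageCreateWithImageInRect(image.CGImage, cropRect);
    UIImage *cropped = [UIImage imageWithCGImage:croppedRef
                                           scale:image.scale
                                     orientation:image.imageOrientation];
    CGImageRelease(croppedRef);
    return cropped;
}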
