iOS: How to convert the self-drawn content of a UIView to a UIImage?

Posted 2019-07-16 13:12

My business app requires a feature that lets the user draw a signature on a UIView with a finger and save it (via a button in the toolbar) so it can be attached to a unit. These units are uploaded to a server once the work is finished, and they already support camera picture attachments that are uploaded via Base64, so I simply want to convert the drawn signature to a UIImage.
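As background for the upload path mentioned above: Base64 maps every three input bytes to four characters from a 64-symbol alphabet. The sketch below is a hypothetical standalone helper in plain C++ illustrating that transformation for raw image bytes; it is not tied to any particular upload API, and on iOS itself `NSData`'s `base64EncodedStringWithOptions:` does this directly.

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Encode raw bytes (e.g. PNG data from a saved signature image) as Base64.
// Hypothetical helper; sketches the encoding used for the upload payload.
std::string base64Encode(const std::vector<uint8_t> &data) {
    static const char table[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    std::string out;
    size_t i = 0;
    // Full 3-byte groups become 4 output characters.
    for (; i + 3 <= data.size(); i += 3) {
        uint32_t n = (data[i] << 16) | (data[i + 1] << 8) | data[i + 2];
        out += table[(n >> 18) & 63];
        out += table[(n >> 12) & 63];
        out += table[(n >> 6) & 63];
        out += table[n & 63];
    }
    // A 1- or 2-byte tail is padded with '='.
    if (i < data.size()) {
        uint32_t n = data[i] << 16;
        bool two = (i + 1 < data.size());
        if (two) n |= data[i + 1] << 8;
        out += table[(n >> 18) & 63];
        out += table[(n >> 12) & 63];
        out += two ? table[(n >> 6) & 63] : '=';
        out += '=';
    }
    return out;
}
```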

First of all, I needed a solution to draw the signature, I quickly found some sample code from Apple that seemed to meet my requirements: GLPaint

I integrated this sample code into my project with slight modifications since I work with ARC and Storyboards and didn't want the sound effects and the color palette etc., but the drawing code is a straight copy.

The integration seemed to be successful since I was able to draw the signatures on the view. So, next step was to add a save/image conversion function for the drawn signatures.

I've done endless searches and read through dozens of threads with similar problems, and most of them pointed to the exact same solution:

(Assumptions)

  • drawingView: subclassed UIView that the drawing is done on
  • <QuartzCore/QuartzCore.h> and QuartzCore.framework are included
  • CoreGraphics.framework is included
  • OpenGLES.framework is included

    - (UIImage *) saveAsImage:(UIView *) drawingView
    {
        UIGraphicsBeginImageContext(drawingView.bounds.size);
        [drawingView.layer renderInContext:UIGraphicsGetCurrentContext()];
        UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return image;
    }
    
    

Finally, my problem: this code doesn't work for me, as it always returns a blank image. Since I've already integrated support for picture attachments taken with the iPhone camera, I initially assumed the image processing code would work on the signature images as well.

But after some fruitless searching I dropped that assumption, took the original GLPaint project, added just the few lines above plus some code to display the image, and it was also completely blank. So it is either an issue with that code not working on self-drawn content on UIViews, or something I'm missing.

I am basically out of ideas on this issue and hope some people can help me with it.

Best regards, Felix

1 Answer

成全新的幸福 · 2019-07-16 13:38

I believe your problem might be that you are trying to get an image from a GL context. You can search the web for details, but generally all you need is to call glReadPixels after all draw calls have been made. Something like this should work:

BOOL createSnapshot;
int viewWidth, viewHeight;
if (createSnapshot) {
    // Read the pixels straight out of the currently bound GL framebuffer.
    uint8_t *iData = new uint8_t[viewHeight * viewWidth * 4];
    glReadPixels(0, 0, viewWidth, viewHeight, GL_RGBA, GL_UNSIGNED_BYTE, iData);

    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, iData, (viewWidth * viewHeight * 4), NULL);
    CGColorSpaceRef cref = CGColorSpaceCreateDeviceRGB();
    CGImageRef cgImage = CGImageCreate(viewWidth, viewHeight, 8, 32, viewWidth * 4, cref,
                                       kCGBitmapByteOrderDefault | kCGImageAlphaLast,
                                       provider, NULL, NO, kCGRenderingIntentDefault);

    // glReadPixels returns rows bottom-up, so flip the image vertically
    // via the orientation parameter.
    UIImage *ret = [UIImage imageWithCGImage:cgImage
                                       scale:1.0f
                                 orientation:UIImageOrientationDownMirrored]; // the image you need

    CGImageRelease(cgImage);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(cref);
    delete [] iData; // free only after the UIImage has been drawn or copied

    createSnapshot = NO;
}

If you use multisampling, you will need to call this after the buffers have been resolved and the presenting framebuffer has been bound.
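One detail worth knowing: glReadPixels fills the buffer starting from the bottom row of the framebuffer, while CGImage expects rows top-down, so the resulting image appears vertically flipped. Instead of relying on UIImage orientation metadata, the row order can be reversed in the buffer itself before creating the CGImage. A minimal sketch in plain C++, assuming the same tightly packed RGBA layout as above (`flipRowsInPlace` is a hypothetical helper name):

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Reverse the row order of a tightly packed RGBA pixel buffer in place.
// glReadPixels fills the buffer bottom-up; CGImage expects top-down rows.
void flipRowsInPlace(uint8_t *pixels, int width, int height) {
    const size_t rowBytes = static_cast<size_t>(width) * 4; // 4 bytes per RGBA pixel
    std::vector<uint8_t> tmp(rowBytes);
    for (int y = 0; y < height / 2; ++y) {
        uint8_t *top = pixels + static_cast<size_t>(y) * rowBytes;
        uint8_t *bottom = pixels + static_cast<size_t>(height - 1 - y) * rowBytes;
        // Swap the top and bottom rows through a temporary row buffer.
        std::memcpy(tmp.data(), top, rowBytes);
        std::memcpy(top, bottom, rowBytes);
        std::memcpy(bottom, tmp.data(), rowBytes);
    }
}
```

Calling this on `iData` right after glReadPixels would let the CGImage be created without any orientation adjustment.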
