My business app requires a feature that lets the user draw a signature on a UIView with their finger and save it (via a button in the toolbar) so it can be attached to a unit. These units are uploaded to a server once the work is finished and already support camera picture attachments that are uploaded via Base64, so I simply want to convert the drawn signature to a UIImage.
First of all, I needed a solution for drawing the signature. I quickly found some sample code from Apple that seemed to meet my requirements: GLPaint.
I integrated this sample code into my project with slight modifications since I work with ARC and Storyboards and didn't want the sound effects and the color palette etc., but the drawing code is a straight copy.
The integration seemed to be successful, since I was able to draw signatures on the view. So the next step was to add a save/image-conversion function for the drawn signatures.
I've done endless searches and read through dozens of threads with similar problems, and most of them pointed to the exact same solution:
(Assumptions)
- drawingView: the subclassed UIView that the drawing is done on
- <QuartzCore/QuartzCore.h> is imported and QuartzCore.framework is included
- CoreGraphics.framework is included
- OpenGLES.framework is included

    - (UIImage *)saveAsImage:(UIView *)drawingView
    {
        UIGraphicsBeginImageContext(drawingView.bounds.size);
        [drawingView.layer renderInContext:UIGraphicsGetCurrentContext()];
        UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return image;
    }
Finally, my problem: this code doesn't work for me, as it always returns a blank image. Since I've already integrated support for picture attachments taken with the iPhone camera, I initially assumed that the image-processing code would work on the signature images as well.
But after some fruitless searching I dropped that assumption, took the original GLPaint project, added just the few lines above plus some code that simply displays the image, and the result was also completely blank. So it is either an issue with that code not working on content drawn this way onto a UIView, or something else I'm missing.
I am basically out of ideas on this issue and hope some people can help me with it.
Best regards, Felix
I believe your problem might be that you are trying to get an image from a GL context. You might search around the web for that, but generally all you need is to call glReadPixels after all draw calls have been made. Something like this should work:
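Roughly like the following; this is just a sketch based on Apple's OpenGL ES snapshot technique, the method name is made up, the headers assume OpenGL ES 2.0 (use the ES1 headers if your GLPaint copy still uses ES 1.1), and it assumes your paint view's EAGLContext is current when you call it:

    #import <UIKit/UIKit.h>
    #import <OpenGLES/ES2/gl.h>

    // CGDataProvider release callback: free the pixel buffer read back from GL.
    static void releaseScreenshotData(void *info, const void *data, size_t size)
    {
        free((void *)data);
    }

    // Reads the currently bound framebuffer into a UIImage. Assumes the paint
    // view's EAGLContext is current and all draw calls have been issued.
    - (UIImage *)snapshotOfGLView:(UIView *)glView
    {
        CGFloat scale = glView.contentScaleFactor;
        GLint width  = (GLint)(glView.bounds.size.width  * scale);
        GLint height = (GLint)(glView.bounds.size.height * scale);
        NSInteger dataLength = width * height * 4;

        // Read the raw RGBA pixels out of the GL framebuffer.
        GLubyte *pixels = (GLubyte *)malloc(dataLength);
        glPixelStorei(GL_PACK_ALIGNMENT, 4);
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

        // Wrap the pixels in a CGImage. GL's origin is bottom-left, so this
        // CGImage is vertically flipped.
        CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, pixels, dataLength, releaseScreenshotData);
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGImageRef imageRef = CGImageCreate(width, height, 8, 32, width * 4, colorSpace,
                                            kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                            provider, NULL, NO, kCGRenderingIntentDefault);

        // Draw into a UIKit image context; UIKit's flipped coordinate system
        // un-flips the image so it comes out the right way up.
        UIGraphicsBeginImageContextWithOptions(glView.bounds.size, NO, scale);
        CGContextRef context = UIGraphicsGetCurrentContext();
        CGContextSetBlendMode(context, kCGBlendModeCopy);
        CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, glView.bounds.size.width, glView.bounds.size.height), imageRef);
        UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        CGImageRelease(imageRef);
        CGColorSpaceRelease(colorSpace);
        CGDataProviderRelease(provider);

        return image;
    }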
If you use multisampling, you will need to call this after the buffers have been resolved and the presenting framebuffer has been bound.
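For example, something like this (assuming OpenGL ES 2.0 with the APPLE_framebuffer_multisample extension; sampleFramebuffer and resolveFramebuffer stand for whatever FBO names your view created):

    #import <OpenGLES/ES2/gl.h>
    #import <OpenGLES/ES2/glext.h>

    // Resolve the multisampled FBO into the resolve/display FBO, then leave the
    // resolved framebuffer bound so a following glReadPixels sees the final pixels.
    static void ResolveMultisampledFramebuffer(GLuint sampleFramebuffer, GLuint resolveFramebuffer)
    {
        glBindFramebuffer(GL_READ_FRAMEBUFFER_APPLE, sampleFramebuffer);
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER_APPLE, resolveFramebuffer);
        glResolveMultisampleFramebufferAPPLE();

        glBindFramebuffer(GL_FRAMEBUFFER, resolveFramebuffer);
    }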