Question:

I am wondering how I can scan an image on the iPhone, read the RGB value of each pixel, and from that compute the average RGB for the whole image. If anyone could push me in the right direction it would be greatly appreciated. I am new to image analysis and not sure where to start, or whether something like this is included in the iOS 5 APIs.
Answer 1:
Just paste this in; it detects the color at the touch point.
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    if (self.view.hidden == YES) {
        // Color wheel is hidden, so don't handle this as a color wheel event.
        [[self nextResponder] touchesEnded:touches withEvent:event];
        return;
    }
    UITouch *touch = [touches anyObject];
    CGPoint point = [touch locationInView:self.view]; // where the image was tapped
    UIColor *lastColor = [self getPixelColorAtLocation:point];
    NSLog(@"color %@", lastColor);

    // Show the picked color in a round swatch over the image.
    UIImageView *swatch = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 100, 100)];
    swatch.layer.cornerRadius = 50;
    swatch.backgroundColor = lastColor;
    swatch.center = CGPointMake(stillImageFilter.center.x * 320, (stillImageFilter.center.y * 320) - 125);
    [imageView addSubview:swatch];
    NSLog(@"stillImageCenter = %f,%f", stillImageFilter.center.x, stillImageFilter.center.y);
}
- (UIColor *)getPixelColorAtLocation:(CGPoint)point {
    UIColor *color = nil;
    CGImageRef inImage = imageView.image.CGImage;
    CGContextRef cgctx = [self createARGBBitmapContextFromImage:inImage];
    if (cgctx == NULL) { return nil; /* error creating context */ }

    size_t w = CGImageGetWidth(inImage);
    size_t h = CGImageGetHeight(inImage);
    CGRect rect = {{0, 0}, {w, h}};

    // Draw the image into the context; the raw ARGB bytes land in the
    // buffer that createARGBBitmapContextFromImage: allocated.
    CGContextDrawImage(cgctx, rect, inImage);

    unsigned char *data = CGBitmapContextGetData(cgctx);
    if (data != NULL) {
        // 4 bytes per pixel; rows are exactly w * 4 bytes (as set up below).
        size_t offset = 4 * ((w * (size_t)round(point.y)) + (size_t)round(point.x));
        int alpha = data[offset];
        int red   = data[offset + 1];
        int green = data[offset + 2];
        int blue  = data[offset + 3];
        NSLog(@"offset: %zu colors: RGB A %i %i %i %i", offset, red, green, blue, alpha);
        color = [UIColor colorWithRed:(red / 255.0f)
                                green:(green / 255.0f)
                                 blue:(blue / 255.0f)
                                alpha:(alpha / 255.0f)];
    }

    // The context does not own the malloc'd buffer, so release both.
    CGContextRelease(cgctx);
    if (data) { free(data); }
    return color;
}
- (CGContextRef)createARGBBitmapContextFromImage:(CGImageRef)inImage {
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    void *bitmapData;
    size_t bitmapByteCount;
    size_t bitmapBytesPerRow;

    size_t pixelsWide = CGImageGetWidth(inImage);
    size_t pixelsHigh = CGImageGetHeight(inImage);
    bitmapBytesPerRow = pixelsWide * 4;            // 4 bytes per pixel (ARGB)
    bitmapByteCount   = bitmapBytesPerRow * pixelsHigh;

    colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL)
    {
        fprintf(stderr, "Error allocating color space\n");
        return NULL;
    }

    // The caller is responsible for free()ing this buffer after releasing the context.
    bitmapData = malloc(bitmapByteCount);
    if (bitmapData == NULL)
    {
        fprintf(stderr, "Memory not allocated!\n");
        CGColorSpaceRelease(colorSpace);
        return NULL;
    }

    context = CGBitmapContextCreate(bitmapData,
                                    pixelsWide,
                                    pixelsHigh,
                                    8,               // bits per component
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaPremultipliedFirst);
    if (context == NULL)
    {
        free(bitmapData);
        fprintf(stderr, "Context not created!\n");
    }
    CGColorSpaceRelease(colorSpace);
    return context;
}
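
To get from a single pixel to the average RGB the question asks about, the same ARGB context can be reused: instead of reading one offset, sum every pixel and divide by the pixel count. A minimal sketch under the same assumptions as the code above (the method name averageColorOfImage: is hypothetical; it relies on the createARGBBitmapContextFromImage: helper, whose rows are exactly w * 4 bytes):

- (UIColor *)averageColorOfImage:(UIImage *)image {
    CGImageRef inImage = image.CGImage;
    CGContextRef cgctx = [self createARGBBitmapContextFromImage:inImage];
    if (cgctx == NULL) { return nil; }

    size_t w = CGImageGetWidth(inImage);
    size_t h = CGImageGetHeight(inImage);
    CGContextDrawImage(cgctx, CGRectMake(0, 0, w, h), inImage);

    UIColor *average = nil;
    unsigned char *data = CGBitmapContextGetData(cgctx);
    if (data != NULL && w > 0 && h > 0) {
        unsigned long long rTotal = 0, gTotal = 0, bTotal = 0;
        size_t pixelCount = w * h;
        for (size_t i = 0; i < pixelCount; i++) {
            size_t offset = 4 * i;   // ARGB layout: byte 0 is alpha
            rTotal += data[offset + 1];
            gTotal += data[offset + 2];
            bTotal += data[offset + 3];
        }
        // Note: the context stores premultiplied alpha, which is fine for
        // opaque photos but will skew the average for transparent images.
        average = [UIColor colorWithRed:(rTotal / (CGFloat)pixelCount) / 255.0f
                                  green:(gTotal / (CGFloat)pixelCount) / 255.0f
                                   blue:(bTotal / (CGFloat)pixelCount) / 255.0f
                                  alpha:1.0f];
    }
    CGContextRelease(cgctx);
    if (data) { free(data); }
    return average;
}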
Answer 2:
Look at Camera Programming Topics for iOS - Taking Pictures and Movies; this will get the image into your app.
After that, look at something like: how-to-get-the-rgb-values-for-a-pixel-on-an-image-on-the-iphone
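
For reference, a minimal sketch of that picker flow, assuming a view controller that conforms to UIImagePickerControllerDelegate and UINavigationControllerDelegate (imageView is the same ivar the other answers use):

- (void)showCamera {
    // Bail out gracefully where there is no camera (e.g. the simulator).
    if (![UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera]) {
        return;
    }
    UIImagePickerController *picker = [[UIImagePickerController alloc] init];
    picker.sourceType = UIImagePickerControllerSourceTypeCamera;
    picker.delegate = self;
    [self presentViewController:picker animated:YES completion:nil];
}

- (void)imagePickerController:(UIImagePickerController *)picker
didFinishPickingMediaWithInfo:(NSDictionary *)info {
    // Hand the captured photo to the pixel-reading code from the other answers.
    imageView.image = [info objectForKey:UIImagePickerControllerOriginalImage];
    [picker dismissViewControllerAnimated:YES completion:nil];
}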
Answer 3:
Getting the CGImage from a UIImage gives you access to this data:
CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
const UInt8 *data = CFDataGetBytePtr(pixelData);

int pixelInfo = ((image.size.width * y) + x) * 4; // assumes RGBA byte order, e.g. a typical PNG
UInt8 red   = data[pixelInfo];
UInt8 green = data[pixelInfo + 1];
UInt8 blue  = data[pixelInfo + 2];
UInt8 alpha = data[pixelInfo + 3];
CFRelease(pixelData);
More here: Getting pixel data from UIImageView -- works on simulator, not device
And here: Get Pixel color of UIImage
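
One caveat about the snippet in this answer: it assumes the rows are tightly packed (bytes per row equal to width * 4) and a fixed RGBA byte order, which is why the comment restricts it to PNGs. Rows copied out of a CGImage can carry padding, so a safer index (a sketch, not from the original answers) queries the actual row stride:

size_t bytesPerRow = CGImageGetBytesPerRow(image.CGImage);
size_t pixelInfo = (size_t)y * bytesPerRow + (size_t)x * 4; // still assumes 4 bytes per pixel, RGBA order
UInt8 red = data[pixelInfo];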