1. Get a CIImage object from the original UIImage.
2. Create a CIContext to be used for analyzing the image.
3. Create a CIDetector instance with the type and options parameters.
The type parameter specifies the kind of feature to detect. The options parameter sets the detection accuracy: low accuracy is faster, while high accuracy is more precise.
4. Get the array of feature objects (CIFaceFeature instances) that the detector finds in the image.
5. Use imageByCroppingToRect: to create a CIImage from the original image and the bounds of the last feature found in it. Those bounds are the CGRect where the face is located.
6. Create a UIImage from that CIImage and display it in the image view. The findFace: action below puts these steps together.
// self.mainImageView.image is the image the user selected
- (IBAction)findFace:(id)sender
{
    UIImage *image = self.mainImageView.image;
    CIImage *coreImage = [[CIImage alloc] initWithImage:image];
    CIContext *context = [CIContext contextWithOptions:nil];

    // Face detector; CIDetectorAccuracyHigh is slower but more precise.
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:context
                                              options:@{CIDetectorAccuracy : CIDetectorAccuracyHigh}];

    // One CIFaceFeature per face found in the image.
    NSArray *features = [detector featuresInImage:coreImage];

    if ([features count] > 0) {
        // Crop the original image to the bounds of the last face found.
        CIImage *faceImage = [coreImage imageByCroppingToRect:[[features lastObject] bounds]];
        UIImage *face = [UIImage imageWithCGImage:[context createCGImage:faceImage
                                                                 fromRect:faceImage.extent]];
        self.faceImageView.image = face;
        [self.findFaceButton setTitle:[NSString stringWithFormat:@"%lu Face(s) Found",
                                                                 (unsigned long)[features count]]
                             forState:UIControlStateNormal];
    } else {
        [self.findFaceButton setTitle:@"No Faces Found" forState:UIControlStateNormal];
    }
    self.findFaceButton.enabled = NO;
    self.findFaceButton.alpha = 0.6;
}
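The handler above only crops out the last face that was found. As a rough sketch (not part of the original post), the same detector results can also be used to outline every detected face directly on the image view. Note that Core Image reports bounds in a bottom-left coordinate system, so each rect has to be flipped before it is used in UIKit. The method name highlightFacesIn: and the assumption that the image view shows the image at its natural size are illustrative only.

// Sketch: outline every detected face instead of cropping only the last one.
- (void)highlightFacesIn:(UIImage *)image
{
    CIImage *coreImage = [[CIImage alloc] initWithImage:image];
    CIContext *context = [CIContext contextWithOptions:nil];
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:context
                                              options:@{CIDetectorAccuracy : CIDetectorAccuracyHigh}];

    CGFloat imageHeight = image.size.height; // assumes an image scale of 1

    for (CIFaceFeature *face in [detector featuresInImage:coreImage]) {
        CGRect faceRect = face.bounds;
        // Flip from Core Image (bottom-left origin) to UIKit (top-left origin).
        faceRect.origin.y = imageHeight - faceRect.origin.y - faceRect.size.height;

        // Assumes mainImageView shows the image at its natural size; with other
        // content modes the rect would also need to be scaled.
        UIView *box = [[UIView alloc] initWithFrame:faceRect];
        box.layer.borderColor = [UIColor redColor].CGColor;
        box.layer.borderWidth = 2.0;
        [self.mainImageView addSubview:box];
    }
}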
Original article: http://www.cnblogs.com/fengmin/p/5586957.html