When a user makes changes (cropping, red-eye removal, ...) to photos in the built-in Photos.app on iOS, those changes are not applied to the fullResolutionImage returned by the corresponding ALAssetRepresentation. However, the changes are applied to the thumbnail and to the fullScreenImage returned by the ALAssetRepresentation. In addition, information about the applied changes can be found in the ALAssetRepresentation's metadata dictionary under the key @"AdjustmentXMP".

I would like to apply these changes to the fullResolutionImage myself to keep things consistent. I found that on iOS 6+, CIFilter's filterArrayFromSerializedXMP:inputImageExtent:error: can convert this XMP metadata into an array of CIFilters:
ALAssetRepresentation *rep;
NSString *xmpString = rep.metadata[@"AdjustmentXMP"];
NSData *xmpData = [xmpString dataUsingEncoding:NSUTF8StringEncoding];

CIImage *image = [CIImage imageWithCGImage:rep.fullResolutionImage];

NSError *error = nil;
NSArray *filterArray = [CIFilter filterArrayFromSerializedXMP:xmpData
                                             inputImageExtent:image.extent
                                                        error:&error];
if (error) {
    NSLog(@"Error during CIFilter creation: %@", [error localizedDescription]);
}

CIContext *context = [CIContext contextWithOptions:nil];

for (CIFilter *filter in filterArray) {
    [filter setValue:image forKey:kCIInputImageKey];
    image = [filter outputImage];
}
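The post stops after running the filters, so the CIContext created above is never used. As a minimal sketch of the missing rendering step (not from the original post; the orientation cast is only a common convention, relying on ALAssetOrientation and UIImageOrientation sharing raw values):

// Render the filtered CIImage back into a CGImage via the CIContext created above.
// Sketch only: error handling is omitted, and rep.orientation is assumed to map
// directly onto UIImageOrientation.
CGImageRef adjustedCGImage = [context createCGImage:image fromRect:image.extent];
UIImage *adjustedImage = [UIImage imageWithCGImage:adjustedCGImage
                                             scale:1.0
                                       orientation:(UIImageOrientation)rep.orientation];
CGImageRelease(adjustedCGImage);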
However, this works only for some filters (cropping, auto-enhance) but not for others such as red-eye removal. In those cases the CIFilters have no visible effect. Hence my question: how can I create a red-eye removal CIFilter? (In a way that is consistent with Photos.app. The filter obtained with the key kCIImageAutoAdjustRedEye is not enough; for example, it does not take parameters for the positions of the eyes.)
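Not part of the original question, but one workaround worth sketching: CIImage's autoAdjustmentFiltersWithOptions: can return a pre-configured red-eye correction chain. It relies on Core Image's own face detection rather than the eye positions recorded in the AdjustmentXMP data, so it may not reproduce the Photos.app edit exactly:

// Ask Core Image for auto-adjustment filters, limited to red-eye correction.
// Sketch only: uses built-in face detection, not the positions stored in the XMP.
NSDictionary *options = @{kCIImageAutoAdjustEnhance : @NO,
                          kCIImageAutoAdjustRedEye  : @YES};
NSArray *redEyeFilters = [image autoAdjustmentFiltersWithOptions:options];
for (CIFilter *filter in redEyeFilters) {
    [filter setValue:image forKey:kCIInputImageKey];
    image = [filter outputImage];
}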
ALAssetRepresentation *representation = [[self assetAtIndex:index] defaultRepresentation];
// create a buffer to hold the data for the asset's image
uint8_t *buffer = (Byte *)malloc(representation.size);
// copy the data from the asset into the buffer
NSUInteger length = [representation getBytes:buffer fromOffset:0 length:representation.size error:nil];
if (length == 0) return nil;
// convert the buffer into an NSData object; the buffer is freed when the data is deallocated
NSData *adata = [[NSData alloc] initWithBytesNoCopy:buffer length:representation.size freeWhenDone:YES];
// set up a dictionary with a UTI hint. The UTI hint identifies the type of image
// we are dealing with (i.e. a JPEG, PNG, or possibly a RAW file)
NSDictionary *sourceOptionsDict = [NSDictionary dictionaryWithObjectsAndKeys:
                                   (id)[representation UTI], kCGImageSourceTypeIdentifierHint,
                                   nil];
// create a CGImageSource with the NSData. An image source can contain any number of thumbnails and full images.
CGImageSourceRef sourceRef = CGImageSourceCreateWithData((CFDataRef)adata, (CFDictionaryRef)sourceOptionsDict);
[adata release];
// get a copy of the image properties from the CGImageSourceRef
CFDictionaryRef imagePropertiesDictionary = CGImageSourceCopyPropertiesAtIndex(sourceRef, 0, NULL);
CFNumberRef imageWidth = (CFNumberRef)CFDictionaryGetValue(imagePropertiesDictionary, kCGImagePropertyPixelWidth);
CFNumberRef imageHeight = (CFNumberRef)CFDictionaryGetValue(imagePropertiesDictionary, kCGImagePropertyPixelHeight);
int w = 0;
int h = 0;
CFNumberGetValue(imageWidth, kCFNumberIntType, &w);
CFNumberGetValue(imageHeight, kCFNumberIntType, &h);
// cleanup memory
CFRelease(imagePropertiesDictionary);
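The snippet stops after reading the pixel dimensions and never releases the image source. As a hedged continuation (assuming the eventual goal is to decode the image itself; this step is not in the original code):

// Decode the first (full-size) image from the source, then clean up.
// Sketch only; error handling and EXIF-orientation handling are omitted.
CGImageRef fullImage = CGImageSourceCreateImageAtIndex(sourceRef, 0, NULL);
UIImage *result = nil;
if (fullImage) {
    result = [[[UIImage alloc] initWithCGImage:fullImage] autorelease];
    CGImageRelease(fullImage);
}
CFRelease(sourceRef);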
Interpreting the XMP metadata in an ALAssetRepresentation
Original post: http://www.cnblogs.com/allanliu/p/4191974.html