Leveling and matching exposure using CoreImage / GPUImage - objective-c

I was wondering if it's possible to level exposure across a set of images using either CoreImage or GPUImage, and how I would go about that.
Example:
Say you have 4 images, but the exposure is different on the third one. How could you level the exposure so all 4 images have the same exposure?
One idea I had was measuring and matching the exposure using AVCapture: if the input image is at -2.0 EV, simply add 2.0 EV back using CoreImage.
Another idea is to implement histogram equalization.
Has anyone ever dealt with the same task before? Any insights?

You can use CoreImage (#import <CoreImage/CoreImage.h>) and ImageIO (#import <ImageIO/ImageIO.h>) to extract EXIF metadata with the EXIF dictionary keys: kCGImagePropertyExifExposureTime, kCGImagePropertyExifExposureMode, kCGImagePropertyExifExposureProgram, kCGImagePropertyExifExposureBiasValue, kCGImagePropertyExifExposureIndex, kCGImagePropertyExifWhiteBalance.
- (NSDictionary *)exifData:(NSString *)path {
    NSDictionary *dic = nil;
    NSURL *url = [NSURL fileURLWithPath:path];
    if (url) {
        CGImageSourceRef source = CGImageSourceCreateWithURL((__bridge CFURLRef)url, NULL);
        if (source != NULL) {
            CFDictionaryRef metadataRef = CGImageSourceCopyPropertiesAtIndex(source, 0, NULL);
            if (metadataRef) {
                dic = [NSDictionary dictionaryWithDictionary:(__bridge NSDictionary *)metadataRef];
                CFRelease(metadataRef);
            }
            CFRelease(source);
        }
    }
    return dic;
}
Usage:
NSDictionary *dic = [self exifData:path];
if (dic)
{
    // The EXIF keys live in a sub-dictionary under kCGImagePropertyExifDictionary.
    NSDictionary *exif = [dic objectForKey:(NSString *)kCGImagePropertyExifDictionary];
    NSNumber *exposureTime = [exif objectForKey:(NSString *)kCGImagePropertyExifExposureTime];
    NSLog(@"Image : %@ - ExposureTime : %.2f", path, [exposureTime floatValue]);
}
And then changing the exposure or white balance with:
- (UIImage *)changeImageExposure:(NSString *)imagename exposure:(float)exposure {
    CIImage *inputImage = [[CIImage alloc] initWithImage:[UIImage imageNamed:imagename]];
    CIFilter *exposureAdjustmentFilter = [CIFilter filterWithName:@"CIExposureAdjust"];
    [exposureAdjustmentFilter setValue:inputImage forKey:kCIInputImageKey];
    [exposureAdjustmentFilter setValue:[NSNumber numberWithFloat:exposure] forKey:@"inputEV"];
    CIImage *outputImage = exposureAdjustmentFilter.outputImage;
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef cgImage = [context createCGImage:outputImage fromRect:outputImage.extent];
    UIImage *result = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage); // createCGImage returns a +1 reference; release it to avoid a leak
    return result;
}
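To actually level a set of images, you can combine the two pieces above: read each image's EXIF exposure time, pick one image as the reference, and push the others toward it with CIExposureAdjust. A minimal sketch, assuming ISO and aperture were constant across the shots (so shutter speed is the only exposure variable) and reusing the -exifData: method above; the helper name and the file-based loading are mine, not part of the original answer:
- (NSArray *)levelImagesAtPaths:(NSArray *)paths {
    // Hypothetical helper: levels every image in 'paths' against the first one.
    // When ISO and aperture are fixed, EV difference = log2(referenceTime / imageTime).
    NSMutableArray *leveled = [NSMutableArray array];
    NSDictionary *refExif = [[self exifData:[paths objectAtIndex:0]] objectForKey:(NSString *)kCGImagePropertyExifDictionary];
    double refTime = [[refExif objectForKey:(NSString *)kCGImagePropertyExifExposureTime] doubleValue];
    CIContext *context = [CIContext contextWithOptions:nil];
    for (NSString *path in paths) {
        NSDictionary *exif = [[self exifData:path] objectForKey:(NSString *)kCGImagePropertyExifDictionary];
        double time = [[exif objectForKey:(NSString *)kCGImagePropertyExifExposureTime] doubleValue];
        // A longer shutter time means a brighter frame, so the offset comes out negative (darken).
        float ev = (time > 0 && refTime > 0) ? (float)log2(refTime / time) : 0.0f; // log2 from <math.h>
        CIImage *input = [CIImage imageWithContentsOfURL:[NSURL fileURLWithPath:path]];
        CIFilter *adjust = [CIFilter filterWithName:@"CIExposureAdjust"];
        [adjust setValue:input forKey:kCIInputImageKey];
        [adjust setValue:[NSNumber numberWithFloat:ev] forKey:@"inputEV"];
        CGImageRef cgImage = [context createCGImage:adjust.outputImage fromRect:adjust.outputImage.extent];
        [leveled addObject:[UIImage imageWithCGImage:cgImage]];
        CGImageRelease(cgImage);
    }
    return leveled;
}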


Red Eye Correction

To add a red-eye correction effect, I followed this link:
How to remove red eye from image in iPhone?
I installed the library above via CocoaPods and added the following code in my class:
UIImage *redEyeImage = [UIImage imageNamed:@"Userimage"];
if (redEyeImage) {
    UIImage *newRemovedRedEyeImage = [redEyeImage redEyeCorrection];
    if (newRemovedRedEyeImage) {
        imgView.image = newRemovedRedEyeImage;
    }
}
but I didn't get the required output. Is there anything I am missing? Or is there any other library that implements this? Please suggest.
Thanks!
You could set breakpoints on the lines right after both if statements, then run your app and see whether they are hit (perhaps redEyeImage doesn't exist, or newRemovedRedEyeImage doesn't).
Another simple way to check is:
UIImage *redEyeImage;
UIImage *newRemovedRedEyeImage;
redEyeImage = [UIImage imageNamed:@"Userimage"];
if (redEyeImage) {
    newRemovedRedEyeImage = [redEyeImage redEyeCorrection];
    if (newRemovedRedEyeImage) {
        imgView.image = newRemovedRedEyeImage;
    }
}
NSLog(@"redEyeImage: %@", (redEyeImage != nil ? @"exists" : @"nil"));
NSLog(@"newRemovedRedEyeImage: %@", (newRemovedRedEyeImage != nil ? @"exists" : @"nil"));
then run this and see what the two logs at the bottom print.
Finally I resolved this issue with the following code:
UIImage *img = [UIImage imageNamed:@"images.png"];
CIImage *image = [[CIImage alloc] initWithImage:img];
NSLog(@"after ciimage: %@", kCIImageAutoAdjustEnhance);
// Disable the general enhancement filters so only red-eye correction is applied.
// Note: the option expects a boolean NSNumber, not the string @"NO".
NSDictionary *options = [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:NO] forKey:kCIImageAutoAdjustEnhance];
NSLog(@"options: %@", options);
NSArray *adjustments = [image autoAdjustmentFiltersWithOptions:options];
NSLog(@"adjustments: %@", adjustments);
for (CIFilter *filter in adjustments)
{
    [filter setValue:image forKey:kCIInputImageKey];
    image = filter.outputImage;
}
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgImage = [context createCGImage:image fromRect:image.extent];
UIImage *enhancedImage = [[UIImage alloc] initWithCGImage:cgImage];
CGImageRelease(cgImage);
self.img.image = enhancedImage;
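If the corrected image still looks unchanged, it is worth logging whether Core Image returned any filters at all: the auto-adjustment API relies on face detection, so when no red eyes are detected the array can come back empty. A quick diagnostic, not part of the original answer:
// If no filters were returned, red-eye detection found nothing to correct.
if ([adjustments count] == 0) {
    NSLog(@"No auto-adjustment filters returned; red-eye may not have been detected in this image.");
}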

Loading image from URL in iOS 8 (Objective-C)

I'm trying to obtain the image from this URL [@"file:///var/mobile/Media/DCIM/100APPLE/IMG_0158.JPG"], but I can't.
It is always nil.
This is my code:
NSData *data = [NSData dataWithContentsOfURL:[NSURL URLWithString:@"file:///var/mobile/Media/DCIM/100APPLE/IMG_0158.JPG"]];
UIImage *image = [UIImage imageWithData:data];
self.pruebaTmp.image = image;
I obtain the URL with this code:
if (asset) {
    // get photo info from this asset
    PHImageRequestOptions *imageRequestOptions = [[PHImageRequestOptions alloc] init];
    imageRequestOptions.synchronous = YES;
    [[PHImageManager defaultManager]
         requestImageDataForAsset:asset
                          options:imageRequestOptions
                    resultHandler:^(NSData *imageData, NSString *dataUTI,
                                    UIImageOrientation orientation,
                                    NSDictionary *info)
    {
        NSURL *path = [info objectForKey:@"PHImageFileURLKey"];
        // assign the path of the image selected in the gallery
        self.pathImagen = path;
    }];
}
If someone could help I would be very grateful, because I can't load the image from the URL I obtained.
You can't get a UIImage or metadata from that URL; the file URL from PHImageFileURLKey points into the Photos library's sandbox, which your app cannot read directly.
You can get a UIImage from the asset's local identifier:
PHFetchResult *savedAssets = [PHAsset fetchAssetsWithLocalIdentifiers:@[localIdentifier] options:nil];
[savedAssets enumerateObjectsUsingBlock:^(PHAsset *asset, NSUInteger idx, BOOL *stop) {
    // This gets called for every asset fetched from the localIdentifier you saved.
    PHImageRequestOptions *imageRequestOptions = [[PHImageRequestOptions alloc] init];
    imageRequestOptions.synchronous = NO;
    // The original code assigned a resize-mode constant here by mistake.
    imageRequestOptions.deliveryMode = PHImageRequestOptionsDeliveryModeHighQualityFormat;
    imageRequestOptions.resizeMode = PHImageRequestOptionsResizeModeFast;
    [[PHImageManager defaultManager] requestImageForAsset:asset
                                               targetSize:PHImageManagerMaximumSize
                                              contentMode:PHImageContentModeAspectFill
                                                  options:imageRequestOptions
                                            resultHandler:^(UIImage * _Nullable result, NSDictionary * _Nullable info) {
        if (result) {
            NSLog(@"got image from result");
        }
    }];
}];
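For completeness, a sketch of where localIdentifier comes from, plus how to get the raw NSData instead of a UIImage; the Photos calls are real API, the surrounding wiring is illustrative:
// 'asset' is the PHAsset you already have; store its localIdentifier and use it
// later with fetchAssetsWithLocalIdentifiers:options: as above.
NSString *localIdentifier = asset.localIdentifier;

// If you need NSData rather than a UIImage (e.g. to write the file yourself):
[[PHImageManager defaultManager] requestImageDataForAsset:asset
                                                  options:nil
                                            resultHandler:^(NSData *imageData, NSString *dataUTI,
                                                            UIImageOrientation orientation,
                                                            NSDictionary *info) {
    UIImage *image = [UIImage imageWithData:imageData];
    // hand 'image' to your UI on the main queue
}];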

Image in UITableView using too much memory

I created a class to download images from URLs for UITableViewCells (in this project I cannot use SDWebImage or other code from the internet), but it looks like it's using a lot of memory and my table view does not load quickly. Can anybody point out the problem?
Code:
//MyHelper class
+ (NSString *)pathForImage:(NSString *)urlImageString {
    if ([urlImageString isKindOfClass:[NSNull class]] || [urlImageString isEqualToString:@"<null>"] || [urlImageString isEqualToString:@""]) {
        return @"";
    }
    NSArray *pathsInString = [urlImageString componentsSeparatedByString:@"/"];
    NSString *eventCodeString = [pathsInString objectAtIndex:[pathsInString count] - 2];
    NSString *imageNameString = [pathsInString lastObject];
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSCachesDirectory, NSUserDomainMask, YES);
    NSString *cachePath = [paths objectAtIndex:0];
    cachePath = [MyHelper validateString:[cachePath stringByAppendingString:eventCodeString]];
    // stringByAppendingString: returns a new string; the original code discarded this result.
    cachePath = [cachePath stringByAppendingString:@"/"];
    return [cachePath stringByAppendingString:imageNameString];
}
+ (BOOL)imageExistsForURL:(NSString *)urlString {
    if (![urlString isKindOfClass:[NSNull class]]) {
        NSString *filePath = [MyHelper pathForImage:urlString];
        NSFileManager *fileManager = [NSFileManager defaultManager];
        return [fileManager fileExistsAtPath:filePath];
    }
    return NO;
}
+ (void)setAsyncImage:(UIImageView *)imageView forDownloadImage:(NSString *)urlString {
    CGRect activityFrame = CGRectMake(0, 0, 60, 60);
    UIActivityIndicatorView *activity = [[UIActivityIndicatorView alloc] initWithFrame:activityFrame];
    activity.layer.cornerRadius = activity.frame.size.width / 2;
    activity.clipsToBounds = YES;
    activity.activityIndicatorViewStyle = UIActivityIndicatorViewStyleGray;
    [imageView addSubview:activity];
    [activity startAnimating];
    dispatch_queue_t concurrentQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_async(concurrentQueue, ^{
        NSData *imageData;
        if ([urlString isKindOfClass:[NSNull class]]) {
            imageData = nil;
        } else {
            imageData = [[NSData alloc] initWithContentsOfURL:[NSURL URLWithString:urlString]];
        }
        dispatch_async(dispatch_get_main_queue(), ^{
            [activity stopAnimating];
            [activity removeFromSuperview];
            if (imageData)
            {
                [UIView animateWithDuration:0.9 animations:^{
                    imageView.alpha = 0;
                    imageView.image = [UIImage imageWithData:imageData];
                    imageView.alpha = 1;
                }];
                NSString *filePath = [MyHelper pathForImage:urlString];
                NSError *error;
                // Note: this write fails silently if the intermediate cache
                // directory does not exist yet; create it first if needed.
                [imageData writeToFile:filePath options:NSDataWritingAtomic error:&error];
            }
            else
            {
                imageView.image = [UIImage imageNamed:@"icn_male.png"];
            }
        });
    });
}
+ (NSString *)validateString:(NSString *)string {
    if (string == (id)[NSNull null] || string.length == 0)
        return @"";
    return string;
}
+ (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize
{
    float proportion;
    if (image.size.height > image.size.width) {
        proportion = image.size.height / newSize.height;
    } else {
        proportion = image.size.width / newSize.width;
    }
    UIGraphicsBeginImageContext(newSize);
    [image drawInRect:CGRectMake(newSize.width - (image.size.width / proportion),
                                 newSize.height / 2 - (image.size.height / proportion) / 2,
                                 image.size.width / proportion,
                                 image.size.height / proportion)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
Using this code:
- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
    NSString *cellIdentifier = @"MyCell";
    MyCell *cell = (MyCell *)[tableView dequeueReusableCellWithIdentifier:cellIdentifier forIndexPath:indexPath];
    if ([MyHelper imageExistsForURL:photoURLString]) {
        UIImage *image = [UIImage imageWithContentsOfFile:[MyHelper pathForImage:photoURLString]];
        eventImageView.image = [MyHelper imageWithImage:image scaledToSize:CGSizeMake(60, 60)];
    } else {
        [MyHelper setAsyncImage:eventImageView forDownloadImage:photoURLString];
    }
    return cell; // the original snippet was missing this
}
Since it is now clear that you are using oversized images, the solution is to figure out how big your images need to be in order to look good in your app.
There are several courses of action, depending on how much you can change the server-side portion of your system:
Use an image that is optimally sized for the highest-res case (3x) and let 2x and 1x devices do the scaling. This is again a bit wasteful.
Create some scheme whereby you can request the right size image for your device type (perhaps by appending 2x, 3x, etc. to the image file name). Arguably the best choice.
Do the resizing on the client side. This is somewhat CPU intensive and probably the worst approach in my opinion, because you will be doing a lot of work unnecessarily. However, if you can't change how your server works, it is your only option.
Another problem with your code is that you are doing the resizing on the main/UI thread, which blocks your UI; never perform long operations on the main thread.
You should do it on a background thread using dispatch_async, or perhaps NSOperation with a serial queue to limit memory usage. Note that this creates new problems: you have to update the image view when the image is ready, and consider whether the cell is still visible. I came across a nice blog post on this a while back, so I suggest searching the web; a minimal sketch of the idea follows at the end of this answer.
However, if the images are really huge, then maybe you could consider setting up a proxy server and getting resized images from there instead of from the main server. Of course, you would have to consider intellectual property issues in this case.
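Here is that sketch of the background decode-and-scale, reusing the MyHelper methods from the question; the queue wiring and the visibility caveat are illustrative, not the poster's code:
// Decode and scale on a background queue, then hop back to the main queue for UIKit.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    UIImage *fullImage = [UIImage imageWithContentsOfFile:[MyHelper pathForImage:photoURLString]];
    UIImage *thumbnail = [MyHelper imageWithImage:fullImage scaledToSize:CGSizeMake(60, 60)];
    dispatch_async(dispatch_get_main_queue(), ^{
        // Before assigning, check that the cell has not been reused for another row.
        eventImageView.image = thumbnail;
    });
});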

Creating an animated GIF in Cocoa - defining frame type

I've been able to adapt some code found on SO to produce an animated GIF from "screenshots" of my view, but the results are unpredictable. GIF frames are sometimes full images, full frames ("replace" mode, as GIMP labels it); other times they are just a diff from the previous frame ("combine" mode).
From what I've seen, when there are fewer and/or smaller frames involved, CG writes the GIF in "combine" mode but fails to get the colors right. Actually, the moving parts are colored correctly; the background is wrong.
When CG saves the GIF as full frames, the colors are OK. The file size is larger, but hey, obviously you cannot have the best of both worlds. :)
Is there a way to either:
a) force CG to create "full frames" when saving the GIF
b) fix the colors (color table?)
What I do is (ARC mode):
capture the visible part of the view with
[[scrollView contentView] dataWithPDFInsideRect:[[scrollView contentView] visibleRect]];
convert and resize it to an NSBitmapImageRep of PNG type
- (NSMutableDictionary *)pngImageProps:(int)quality {
    NSMutableDictionary *pngImageProps = [[NSMutableDictionary alloc] init];
    [pngImageProps setValue:[NSNumber numberWithBool:NO] forKey:NSImageInterlaced];
    double compressionF = 1;
    [pngImageProps setValue:[NSNumber numberWithFloat:compressionF] forKey:NSImageCompressionFactor];
    return pngImageProps;
}

- (NSData *)resizeImageToData:(NSData *)data toDimX:(int)xdim andDimY:(int)ydim withQuality:(int)quality {
    NSImage *image = [[NSImage alloc] initWithData:data];
    NSRect inRect = NSZeroRect;
    inRect.size = [image size];
    NSRect outRect = NSMakeRect(0, 0, xdim, ydim);
    NSImage *outImage = [[NSImage alloc] initWithSize:outRect.size];
    [outImage lockFocus];
    [image drawInRect:outRect fromRect:inRect operation:NSCompositeCopy fraction:1];
    NSBitmapImageRep *bitmapRep = [[NSBitmapImageRep alloc] initWithFocusedViewRect:outRect];
    [outImage unlockFocus];
    NSMutableDictionary *imageProps = [self pngImageProps:quality];
    NSData *imageData = [bitmapRep representationUsingType:NSPNGFileType properties:imageProps];
    return [imageData copy];
}
get the array of BitmapReps and create the GIF
- (CGImageRef)pngRepDataToCgImageRef:(NSData *)data {
    CFDataRef imgData = (__bridge CFDataRef)data;
    CGDataProviderRef imgDataProvider = CGDataProviderCreateWithCFData(imgData);
    CGImageRef image = CGImageCreateWithPNGDataProvider(imgDataProvider, NULL, true, kCGRenderingIntentDefault);
    CGDataProviderRelease(imgDataProvider); // the image retains the provider
    return image; // caller is responsible for CGImageRelease
}
////////// create GIF from
NSArray *images; // holds all BitmapReps
CGImageDestinationRef destination = CGImageDestinationCreateWithURL((__bridge CFURLRef)[NSURL fileURLWithPath:pot],
                                                                    kUTTypeGIF,
                                                                    allImages,
                                                                    NULL);
// set frame delay
NSDictionary *frameProperties = [NSDictionary
    dictionaryWithObject:[NSDictionary dictionaryWithObject:[NSNumber numberWithFloat:0.2f]
                                                     forKey:(NSString *)kCGImagePropertyGIFDelayTime]
                  forKey:(NSString *)kCGImagePropertyGIFDictionary];
// set gif color properties
NSMutableDictionary *gifPropsDict = [[NSMutableDictionary alloc] init];
[gifPropsDict setObject:(NSString *)kCGImagePropertyColorModelRGB forKey:(NSString *)kCGImagePropertyColorModel];
[gifPropsDict setObject:[NSNumber numberWithBool:YES] forKey:(NSString *)kCGImagePropertyGIFHasGlobalColorMap];
// set gif loop
NSDictionary *gifProperties = [NSDictionary dictionaryWithObject:gifPropsDict
                                                          forKey:(NSString *)kCGImagePropertyGIFDictionary];
// loop through frames and add them to GIF
for (int i = 0; i < [images count]; i++) {
    NSData *imageData = [images objectAtIndex:i];
    CGImageRef imageRef = [self pngRepDataToCgImageRef:imageData];
    CGImageDestinationAddImage(destination, imageRef, (__bridge CFDictionaryRef)frameProperties);
    CGImageRelease(imageRef); // was leaked once per frame in the original
}
// save the GIF
CGImageDestinationSetProperties(destination, (__bridge CFDictionaryRef)gifProperties);
CGImageDestinationFinalize(destination);
CFRelease(destination);
I've checked the bitmap reps; when saved as PNGs individually, they are fine.
As I understand it, the color tables should be handled by CG, or am I responsible for producing the dithered colors? How would I do that?
Even when running the same animation repeatedly, the GIFs produced may vary.
This is a single BitmapRep (source: andraz.eu)
And this is the GIF with the invalid colors ("combine" mode) (source: andraz.eu)
I read your code. Please double-check the "allImages" value you pass while creating the CGImageDestinationRef against "[images count]": the image count given to CGImageDestinationCreateWithURL should match the number of frames you actually add before finalizing.
The following test code works fine:
NSDictionary *prep = [NSDictionary dictionaryWithObject:[NSDictionary dictionaryWithObject:[NSNumber numberWithFloat:0.2f]
                                                                                    forKey:(NSString *)kCGImagePropertyGIFDelayTime]
                                                 forKey:(NSString *)kCGImagePropertyGIFDictionary];
CGImageDestinationRef dst = CGImageDestinationCreateWithURL((__bridge CFURLRef)fileURL, kUTTypeGIF, [filesArray count], nil);
for (int i = 0; i < [filesArray count]; i++)
{
    // load anImage from the array
    ...
    CGImageRef imageRef = [anImage CGImageForProposedRect:nil context:nil hints:nil];
    CGImageDestinationAddImage(dst, imageRef, (__bridge CFDictionaryRef)prep);
}
bool fileSave = CGImageDestinationFinalize(dst);
CFRelease(dst);

defaultRepresentation fullScreenImage on ALAsset does not return full screen image

In my application I save images to an album as assets. I also want to retrieve them and display them full screen. I use the following code:
ALAsset *lastPicture = [scrollArray objectAtIndex:iAsset];
ALAssetRepresentation *defaultRep = [lastPicture defaultRepresentation];
// Note: -fullScreenImage is already adjusted for orientation, so passing the
// representation's orientation here can rotate the result a second time.
UIImage *image = [UIImage imageWithCGImage:[defaultRep fullScreenImage]
                                     scale:[defaultRep scale]
                               orientation:(UIImageOrientation)[defaultRep orientation]];
The problem is that the image returned is nil. I have read in the ALAssetRepresentation reference that nil is returned when the image does not fit.
I put this image into a UIImageView which has the size of the iPad screen. I was wondering if you could help me with this issue?
Thank you in advance.
I'm not a fan of fullScreenImage or fullResolutionImage. I found that when you do this on multiple assets in a queue, even if you release the UIImage immediately, memory usage increases dramatically when it shouldn't. Also, with fullScreenImage or fullResolutionImage, the UIImage returned is still compressed, meaning it will be decompressed the first time it is drawn, on the main thread, which will block your UI.
I prefer to use this method.
- (UIImage *)fullSizeImageForAssetRepresentation:(ALAssetRepresentation *)assetRepresentation
{
    UIImage *result = nil;
    NSData *data = nil;
    uint8_t *buffer = (uint8_t *)malloc(sizeof(uint8_t) * [assetRepresentation size]);
    if (buffer != NULL) {
        NSError *error = nil;
        NSUInteger bytesRead = [assetRepresentation getBytes:buffer fromOffset:0 length:[assetRepresentation size] error:&error];
        data = [NSData dataWithBytes:buffer length:bytesRead];
        free(buffer);
    }
    if ([data length])
    {
        CGImageSourceRef sourceRef = CGImageSourceCreateWithData((__bridge CFDataRef)data, nil);
        NSMutableDictionary *options = [NSMutableDictionary dictionary];
        [options setObject:(id)kCFBooleanTrue forKey:(id)kCGImageSourceShouldAllowFloat];
        [options setObject:(id)kCFBooleanTrue forKey:(id)kCGImageSourceCreateThumbnailFromImageAlways];
        [options setObject:(id)[NSNumber numberWithFloat:640.0f] forKey:(id)kCGImageSourceThumbnailMaxPixelSize];
        //[options setObject:(id)kCFBooleanTrue forKey:(id)kCGImageSourceCreateThumbnailWithTransform];
        CGImageRef imageRef = CGImageSourceCreateThumbnailAtIndex(sourceRef, 0, (__bridge CFDictionaryRef)options);
        if (imageRef) {
            result = [UIImage imageWithCGImage:imageRef scale:[assetRepresentation scale] orientation:(UIImageOrientation)[assetRepresentation orientation]];
            CGImageRelease(imageRef);
        }
        if (sourceRef)
            CFRelease(sourceRef);
    }
    return result;
}
You can use it like this:
// Get the full image in a background thread
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
    UIImage *image = [self fullSizeImageForAssetRepresentation:asset.defaultRepresentation];
    dispatch_async(dispatch_get_main_queue(), ^{
        // Do something with the UIImage
    });
});
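One more thing worth checking for the original nil problem: objects returned by an ALAssetsLibrary are only valid for the lifetime of that library instance, so keep a strong reference to the library for as long as you use its assets. A sketch, with a property name of my own choosing:
// Keep the library alive as long as you hold ALAssets that came from it;
// once it is deallocated, their representations start returning nil.
@property (strong, nonatomic) ALAssetsLibrary *assetsLibrary;

// ... later, before enumerating assets:
if (!self.assetsLibrary) {
    self.assetsLibrary = [[ALAssetsLibrary alloc] init];
}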