Processing images using Leptonica in an Xcode project - objective-c

In Xcode, I am trying to preprocess an image before sending it off for OCR. The OCR engine, Tesseract, handles images via the Leptonica library.
As an example: the Leptonica function pixConvertTo8(). Is there a way to "transfer" the raw image data from a UIImage into a PIX (see pix.h from the Leptonica library), perform pixConvertTo8(), and then go back from PIX to UIImage - preferably without saving it to a file for the transition, i.e. all in memory?
- (void)processImage:(UIImage *)uiImage
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    // preprocess UIImage here with e.g. pixConvertTo8();
    CGSize imageSize = [uiImage size];
    int bytes_per_line  = (int)CGImageGetBytesPerRow([uiImage CGImage]);
    int bytes_per_pixel = (int)CGImageGetBitsPerPixel([uiImage CGImage]) / 8;
    CFDataRef data = CGDataProviderCopyData(CGImageGetDataProvider([uiImage CGImage]));
    const UInt8 *imageData = CFDataGetBytePtr(data);
    // this could take a while.
    char *text = tess->TesseractRect(imageData,
                                     bytes_per_pixel,
                                     bytes_per_line,
                                     0, 0,
                                     imageSize.width, imageSize.height);
    // ... use text, then clean up
    CFRelease(data);
    [pool release];
}

These two functions will do the trick:
- (void)startTesseract
{
    // code from http://robertcarlsen.net/2009/12/06/ocr-on-iphone-demo-1043
    NSString *dataPath =
        [[self applicationDocumentsDirectory] stringByAppendingPathComponent:@"tessdata"];
    /*
     Set up the data in the docs dir
     want to copy the data to the documents folder if it doesn't already exist
     */
    NSFileManager *fileManager = [NSFileManager defaultManager];
    // If the expected store doesn't exist, copy the default store.
    if (![fileManager fileExistsAtPath:dataPath]) {
        // get the path to the app bundle (with the tessdata dir)
        NSString *bundlePath = [[NSBundle mainBundle] bundlePath];
        NSString *tessdataPath = [bundlePath stringByAppendingPathComponent:@"tessdata"];
        if (tessdataPath) {
            [fileManager copyItemAtPath:tessdataPath toPath:dataPath error:NULL];
        }
    }
    NSString *dataPathWithSlash = [[self applicationDocumentsDirectory] stringByAppendingString:@"/"];
    setenv("TESSDATA_PREFIX", [dataPathWithSlash UTF8String], 1);
    // init the tesseract engine.
    tess = new tesseract::TessBaseAPI();
    tess->Init([dataPath cStringUsingEncoding:NSUTF8StringEncoding], "eng");
}
- (NSString *)ocrImage:(UIImage *)uiImage
{
    // code from http://robertcarlsen.net/2009/12/06/ocr-on-iphone-demo-1043
    CGSize imageSize = [uiImage size];
    int bytes_per_line  = (int)CGImageGetBytesPerRow([uiImage CGImage]);
    int bytes_per_pixel = (int)CGImageGetBitsPerPixel([uiImage CGImage]) / 8;
    CFDataRef data = CGDataProviderCopyData(CGImageGetDataProvider([uiImage CGImage]));
    const UInt8 *imageData = CFDataGetBytePtr(data);
    imageThresholder = new tesseract::ImageThresholder();
    imageThresholder->SetImage(imageData, (int)imageSize.width, (int)imageSize.height,
                               bytes_per_pixel, bytes_per_line);
    // this could take a while. maybe needs to happen asynchronously.
    tess->SetImage(imageThresholder->GetPixRect());
    char *text = tess->GetUTF8Text();
    // Do something useful with the text!
    NSString *result = [NSString stringWithCString:text encoding:NSUTF8StringEncoding];
    NSLog(@"Converted text: %@", result);
    delete[] text; // GetUTF8Text() returns a buffer the caller must free
    CFRelease(data);
    return result;
}
You will have to declare both tess and imageThresholder in the .h file:
tesseract::TessBaseAPI *tess;
tesseract::ImageThresholder *imageThresholder;

I've found some good code snippets in the Tesseract OCR engine showing how to do this, notably in the class ImageThresholder inside thresholder.cpp - see link below. I haven't tested it yet, but here is a short description:
The interesting part for me is the else block, where the depth is 32. Here
pixCreate()
pixGetData()
pixGetWpl() do the actual work.
The thresholder.cpp from the Tesseract engine uses the above-mentioned method.
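To make that concrete, here is a minimal, untested sketch of the same idea (the helper name pixFromUIImage is mine; it assumes 32 bpp CGImage data, and Leptonica's internal byte order may require swapping channels in practice):

- (PIX *)pixFromUIImage:(UIImage *)uiImage
{
    CGImageRef cgImage = [uiImage CGImage];
    int width  = (int)CGImageGetWidth(cgImage);
    int height = (int)CGImageGetHeight(cgImage);
    int bytesPerRow = (int)CGImageGetBytesPerRow(cgImage);

    CFDataRef data = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));
    const UInt8 *srcBytes = CFDataGetBytePtr(data);

    // Wrap the raw bytes in a 32 bpp Leptonica PIX, entirely in memory.
    PIX *pix = pixCreate(width, height, 32);
    l_uint32 *pixData = pixGetData(pix);
    int wpl = pixGetWpl(pix); // 32-bit words per raster line in the PIX

    for (int y = 0; y < height; y++) {
        memcpy(pixData + y * wpl, srcBytes + y * bytesPerRow, width * 4);
    }
    CFRelease(data);
    return pix; // caller is responsible for pixDestroy(&pix)
}

From there you can call PIX *pix8 = pixConvertTo8(pix, 0); and draw the converted raster back into a CGBitmapContext to produce a UIImage, never touching the file system.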

Related

NSPasteBoard _setData:forType NSImage fails for PNG file

I am using the code from this site to copy an image file to the clipboard. This is the full source code:
#import <Foundation/Foundation.h>
#import <Cocoa/Cocoa.h>
#import <unistd.h>

BOOL copy_to_clipboard(NSString *path)
{
    // http://stackoverflow.com/questions/2681630/how-to-read-png-image-to-nsimage
    NSImage *image;
    if ([path isEqualToString:@"-"])
    {
        // http://caiustheory.com/read-standard-input-using-objective-c
        NSFileHandle *input = [NSFileHandle fileHandleWithStandardInput];
        image = [[NSImage alloc] initWithData:[input readDataToEndOfFile]];
    }
    else
    {
        image = [[NSImage alloc] initWithContentsOfFile:path];
    }
    // http://stackoverflow.com/a/18124824/148668
    BOOL copied = false;
    if (image != nil)
    {
        NSPasteboard *pasteboard = [NSPasteboard generalPasteboard];
        [pasteboard clearContents];
        NSArray *copiedObjects = [NSArray arrayWithObject:image];
        copied = [pasteboard writeObjects:copiedObjects];
        // note: the general pasteboard is a shared object we don't own, so it must not be released
    }
    [image release];
    return copied;
}

int main(int argc, char * const argv[])
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    if (argc < 2)
    {
        printf("Usage:\n\n"
               "Copy file to clipboard:\n ./impbcopy path/to/file\n\n"
               "Copy stdin to clipboard:\n cat /path/to/file | ./impbcopy -");
        return EXIT_FAILURE;
    }
    NSString *path = [NSString stringWithUTF8String:argv[1]];
    BOOL success = copy_to_clipboard(path);
    [pool release];
    return (success ? EXIT_SUCCESS : EXIT_FAILURE);
}
When I run the compiled binary with a PNG file, I get this error
$ ~/bin/imgbcopy prof/combined.png
2017-10-25 16:24:50.373 imgbcopy[80618:4292276] -[NSPasteBoard _setData:forType:index:usesPboardTypes:] returns false. Type: public.tiff, index: 0 class: NSImage.
Copying the PNG image via a bash pipe also fails:
$ cat prof/combined.png | ~/bin/imgbcopy -
2017-10-25 16:27:52.856 imgbcopy[80690:4293881] -[NSPasteBoard _setData:forType:index:usesPboardTypes:] returns false. Type: public.tiff, index: 0 class: NSImage.
Testing with another random PNG screenshot works fine. I notice the error message above says Type: public.tiff. The PNG was originally converted from SVG using ImageMagick.
What is the problem with the code, or is it a malformed PNG?
PNG File in question.
I did some tests and the results are quite remarkable.
First test:
NSData *imageData = [NSData dataWithContentsOfFile:[@"~/Downloads/combined.png" stringByExpandingTildeInPath]];
NSPasteboard *pasteBoard = [NSPasteboard generalPasteboard];
[pasteBoard clearContents];
NSPasteboardItem *pasteboardItem = [[NSPasteboardItem alloc] init];
[pasteboardItem setData:imageData forType:NSPasteboardTypePNG];
NSLog(@"Write result: %i", [pasteBoard writeObjects:@[pasteboardItem]]);
NSImage *image = [[NSImage alloc] initWithContentsOfFile:[@"~/Downloads/combined.png" stringByExpandingTildeInPath]];
NSUInteger length = [[image TIFFRepresentation] length];
NSLog(@"%f MB (%lu)", length / 1000.0 / 1000.0, length);
[pasteBoard writeObjects:@[image]];
Output:
2017-11-30 15:57:53.317349+0100 Lolo[4927:639480] Write result: 1
2017-11-30 15:57:54.831260+0100 Lolo[4927:639480] 347.824566 MB (347824566)
2017-11-30 15:57:55.293900+0100 Lolo[4927:639480] -[NSPasteBoard _setData:forType:index:usesPboardTypes:] returns false. Type: public.tiff, index: 1 class: NSImage.
I could write the image as PNG data to the pasteboard, but the NSImage object can't be written, just as in your case. NSImage objects store themselves in multiple formats, including TIFF data, so just to test I printed the size of the image data it would write to the pasteboard. It's 347.824566 MB, which might be a bit too big.
Second test:
NSPasteboard *pasteBoard = [NSPasteboard generalPasteboard];
NSData *data = [pasteBoard dataForType:NSPasteboardTypeTIFF];
float length = [data length];
NSLog(@"%f MB (%f) - %@ - %@", length / 1000.0 / 1000.0, length, data, [pasteBoard types]);
Output:
2017-11-30 15:49:51.018708+0100 Lolo[4786:624058] 0.000000 MB (0.000000) - (null) - (
"public.tiff",
"NeXT TIFF v4.0 pasteboard type",
"dyn.ah62d4rv4gu8zazwuqm10c6xemf1gq54uqm10c6xenv61a3k",
PVPboardInfoPboardType
)
In the second test I opened the image in Preview and copied it; I got no error. Checking the clipboard (pasteboard) in the Finder says it has a TIFF image, but shows nothing. Printing the contents using the pasteboard code reveals that the pasteboard contains the TIFF type, but no data.
It seems that macOS is simply not capable of storing an image as big as the one used.
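If you just need the copy to work, one workaround suggested by the first test above is to put the file's PNG bytes on the pasteboard directly instead of an NSImage, avoiding the huge TIFF representation entirely. A minimal sketch, assuming path points to a PNG file:

// Write raw PNG data to the pasteboard; no TIFF representation is generated.
NSData *pngData = [NSData dataWithContentsOfFile:path];
NSPasteboardItem *item = [[NSPasteboardItem alloc] init];
[item setData:pngData forType:NSPasteboardTypePNG];

NSPasteboard *pb = [NSPasteboard generalPasteboard];
[pb clearContents];
BOOL copied = [pb writeObjects:@[item]]; // YES on success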

AVFoundation - why can't I get the video orientation right

I am using AVCaptureSession to capture video from a device's camera and then using AVAssetWriterInput and AVAssetTrack to compress/resize the video before uploading it to a server. The final videos will be viewed on the web via an HTML5 video element.
I'm running into multiple issues trying to get the orientation of the video correct. My app only supports landscape orientation and all captured videos should be in landscape orientation. However, I would like to allow the user to hold their device in either landscape direction (i.e. home button on either the left or the right hand side).
I am able to make the video preview show in the correct orientation with the following line of code:
_previewLayer.connection.videoOrientation = UIDevice.currentDevice.orientation;
The problems start when processing the video via AVAssetWriterInput and friends. The result does not seem to account for the left vs. right landscape mode the video was captured in. In other words, sometimes the video comes out upside down. After some googling, I found many people suggesting that the following line of code would solve this issue:
writerInput.transform = videoTrack.preferredTransform;
...but this doesn't seem to work. After a bit of debugging I found that videoTrack.preferredTransform is always the same value, regardless of the orientation the video was captured in.
I tried manually tracking what orientation the video was captured in and setting the writerInput.transform to CGAffineTransformMakeRotation(M_PI) as needed. Which solved the problem!!!
...sorta
When I viewed the results on the device this solution worked as expected. Videos were right-side-up regardless of left vs. right orientation while recording. Unfortunately, when I viewed the exact same videos in another browser (chrome on a mac book) they were all upside-down!?!?!?
What am I doing wrong?
EDIT
Here's some code, in case it's helpful...
-(void)compressFile:(NSURL*)inUrl;
{
    NSString* fileName = [@"compressed." stringByAppendingString:inUrl.lastPathComponent];
    NSError* error;
    NSURL* outUrl = [PlatformHelper getFilePath:fileName error:&error];
    NSDictionary* compressionSettings = @{ AVVideoProfileLevelKey: AVVideoProfileLevelH264Main31,
                                           AVVideoAverageBitRateKey: [NSNumber numberWithInt:2500000],
                                           AVVideoMaxKeyFrameIntervalKey: [NSNumber numberWithInt:30] };
    NSDictionary* videoSettings = @{ AVVideoCodecKey: AVVideoCodecH264,
                                     AVVideoWidthKey: [NSNumber numberWithInt:1280],
                                     AVVideoHeightKey: [NSNumber numberWithInt:720],
                                     AVVideoScalingModeKey: AVVideoScalingModeResizeAspectFill,
                                     AVVideoCompressionPropertiesKey: compressionSettings };
    NSDictionary* videoOptions = @{ (id)kCVPixelBufferPixelFormatTypeKey: [NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange] };
    AVAssetWriterInput* writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];
    writerInput.expectsMediaDataInRealTime = YES;
    AVAssetWriter* assetWriter = [AVAssetWriter assetWriterWithURL:outUrl fileType:AVFileTypeMPEG4 error:&error];
    assetWriter.shouldOptimizeForNetworkUse = YES;
    [assetWriter addInput:writerInput];
    AVURLAsset* asset = [AVURLAsset URLAssetWithURL:inUrl options:nil];
    AVAssetTrack* videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
    // !!! this line does not work as expected and causes all sorts of issues (videos display sideways in some cases) !!!
    //writerInput.transform = videoTrack.preferredTransform;
    AVAssetReaderTrackOutput* readerOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:videoTrack outputSettings:videoOptions];
    AVAssetReader* assetReader = [AVAssetReader assetReaderWithAsset:asset error:&error];
    [assetReader addOutput:readerOutput];
    [assetWriter startWriting];
    [assetWriter startSessionAtSourceTime:kCMTimeZero];
    [assetReader startReading];
    [writerInput requestMediaDataWhenReadyOnQueue:_processingQueue usingBlock:
    ^{
        /* snip */
    }];
}
The problem is that modifying the writerInput.transform property only adds a tag to the video file's metadata, which instructs the video player to rotate the file during playback. That's why the videos play in the correct orientation on your device (I'm guessing they also play correctly in QuickTime Player).
The pixel buffers captured by the camera are still laid out in the orientation in which they were captured. Many video players will not check for the preferred orientation metadata tag and will just play the file in the native pixel orientation.
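For what it's worth, you can inspect that metadata tag yourself. A small sketch (my own, not from the original post) that derives the rotation hint from the track's preferredTransform:

// Read the rotation hint stored in the track's preferredTransform.
// Players that honor the tag rotate by this angle; many web players don't.
CGAffineTransform t = videoTrack.preferredTransform;
CGFloat angle = atan2(t.b, t.a) * 180.0 / M_PI; // typically 0, 90, 180 or -90
NSLog(@"Track rotation hint: %.0f degrees", angle);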
If you want the user to be able to record video holding the phone in either landscape mode, you need to rectify this at the AVCaptureSession level before compression by performing a transform on the CVPixelBuffer of each video frame. This Apple Q&A covers it (look at the AVCaptureVideoOutput documentation as well):
https://developer.apple.com/library/ios/qa/qa1744/_index.html
Investigating the link above is the correct way to solve your problem. An alternate fast n' dirty way to solve the same problem would be to lock the recording UI of your app into only one landscape orientation and then to rotate all of your videos server-side using ffmpeg.
In case it's helpful for anyone, here's the code I ended up with. I ended up having to do the work on the video as it was being captured instead of as a post-processing step. This is a helper class that manages the capture.
Interface
#import <Foundation/Foundation.h>
#import <AVFoundation/AVFoundation.h>

@interface VideoCaptureManager : NSObject<AVCaptureVideoDataOutputSampleBufferDelegate>
{
    AVCaptureSession* _captureSession;
    AVCaptureVideoPreviewLayer* _previewLayer;
    AVCaptureVideoDataOutput* _videoOut;
    AVCaptureDevice* _videoDevice;
    AVCaptureDeviceInput* _videoIn;
    dispatch_queue_t _videoProcessingQueue;
    AVAssetWriter* _assetWriter;
    AVAssetWriterInput* _writerInput;
    BOOL _isCapturing;
    NSString* _gameId;
    NSString* _authToken;
}

-(void)setSettings:(NSString*)gameId authToken:(NSString*)authToken;
-(void)setOrientation:(AVCaptureVideoOrientation)orientation;
-(AVCaptureVideoPreviewLayer*)getPreviewLayer;
-(void)startPreview;
-(void)stopPreview;
-(void)startCapture;
-(void)stopCapture;

@end
Implementation (w/ a bit of editing and a few little TODO's)
@implementation VideoCaptureManager

-(id)init;
{
    self = [super init];
    if (self) {
        NSError* error;
        _videoProcessingQueue = dispatch_queue_create("VideoQueue", DISPATCH_QUEUE_SERIAL);
        _captureSession = [AVCaptureSession new];
        _videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
        _previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:_captureSession];
        [_previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
        _videoOut = [AVCaptureVideoDataOutput new];
        _videoOut.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey: [NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange] };
        _videoOut.alwaysDiscardsLateVideoFrames = YES;
        _videoIn = [AVCaptureDeviceInput deviceInputWithDevice:_videoDevice error:&error];
        // handle errors here
        [_captureSession addInput:_videoIn];
        [_captureSession addOutput:_videoOut];
    }
    return self;
}
-(void)setOrientation:(AVCaptureVideoOrientation)orientation;
{
    _previewLayer.connection.videoOrientation = orientation;
    for (AVCaptureConnection* item in _videoOut.connections) {
        item.videoOrientation = orientation;
    }
}

-(AVCaptureVideoPreviewLayer*)getPreviewLayer;
{
    return _previewLayer;
}

-(void)startPreview;
{
    [_captureSession startRunning];
}

-(void)stopPreview;
{
    [_captureSession stopRunning];
}
-(void)startCapture;
{
    if (_isCapturing) return;
    NSURL* url = ...; // put code here to create your output url
    NSDictionary* compressionSettings = @{ AVVideoProfileLevelKey: AVVideoProfileLevelH264Main31,
                                           AVVideoAverageBitRateKey: [NSNumber numberWithInt:2500000],
                                           AVVideoMaxKeyFrameIntervalKey: [NSNumber numberWithInt:1],
                                         };
    NSDictionary* videoSettings = @{ AVVideoCodecKey: AVVideoCodecH264,
                                     AVVideoWidthKey: [NSNumber numberWithInt:1280],
                                     AVVideoHeightKey: [NSNumber numberWithInt:720],
                                     AVVideoScalingModeKey: AVVideoScalingModeResizeAspectFill,
                                     AVVideoCompressionPropertiesKey: compressionSettings
                                   };
    _writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];
    _writerInput.expectsMediaDataInRealTime = YES;
    NSError* error;
    _assetWriter = [AVAssetWriter assetWriterWithURL:url fileType:AVFileTypeMPEG4 error:&error];
    // handle errors
    _assetWriter.shouldOptimizeForNetworkUse = YES;
    [_assetWriter addInput:_writerInput];
    [_videoOut setSampleBufferDelegate:self queue:_videoProcessingQueue];
    _isCapturing = YES;
}
-(void)stopCapture;
{
    if (!_isCapturing) return;
    // TODO: seems like there could be a race condition between this line and the next
    // (could end up trying to write a buffer after calling writingFinished)
    [_videoOut setSampleBufferDelegate:nil queue:nil];
    dispatch_async(_videoProcessingQueue, ^{
        [_assetWriter finishWritingWithCompletionHandler:^{
            [self writingFinished];
        }];
    });
}

-(void)writingFinished;
{
    // TODO: need to check _assetWriter.status to make sure everything completed successfully
    // do whatever post processing you need here
}

-(void)captureOutput:(AVCaptureOutput*)captureOutput didDropSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection*)connection;
{
    NSLog(@"Video frame was dropped.");
}
-(void)captureOutput:(AVCaptureOutput*)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    if (_assetWriter.status != AVAssetWriterStatusWriting) {
        CMTime lastSampleTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
        [_assetWriter startWriting]; // TODO: need to check the return value (a bool)
        [_assetWriter startSessionAtSourceTime:lastSampleTime];
    }
    if (!_writerInput.readyForMoreMediaData || ![_writerInput appendSampleBuffer:sampleBuffer]) {
        NSLog(@"Failed to write video buffer to output.");
    }
}

@end
For compressing/resizing the video, we can use AVAssetExportSession. We can upload a video of up to 3:30 minutes duration; if the video is longer than that, it will show a memory warning. As we are not applying any transform to the video here, the video stays as it was recorded. Below is sample code for compressing the video; we can check the video size before and after compression.
-(void)trimVideoWithURL:(NSURL *)inputURL
{
    NSString *path1 = [inputURL path];
    NSData *data = [[NSFileManager defaultManager] contentsAtPath:path1];
    NSLog(@"size before compress video is %lu", (unsigned long)data.length);
    AVURLAsset *asset = [AVURLAsset URLAssetWithURL:inputURL options:nil];
    AVAssetExportSession *exportSession = [[AVAssetExportSession alloc] initWithAsset:asset presetName:AVAssetExportPreset640x480];
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *outputURL = paths[0];
    NSFileManager *manager = [NSFileManager defaultManager];
    [manager createDirectoryAtPath:outputURL withIntermediateDirectories:YES attributes:nil error:nil];
    outputURL = [outputURL stringByAppendingPathComponent:@"output.mp4"];
    NSURL *fullPath = [NSURL fileURLWithPath:outputURL];
    // Remove Existing File
    [manager removeItemAtPath:outputURL error:nil];
    exportSession.outputURL = [NSURL fileURLWithPath:outputURL];
    exportSession.shouldOptimizeForNetworkUse = YES;
    exportSession.outputFileType = AVFileTypeQuickTimeMovie;
    // note: this exports only a one-second range starting at 1.0s; widen it to keep the whole video
    CMTime start = CMTimeMakeWithSeconds(1.0, 600);
    CMTime duration = CMTimeMakeWithSeconds(1.0, 600);
    CMTimeRange range = CMTimeRangeMake(start, duration);
    exportSession.timeRange = range;
    [exportSession exportAsynchronouslyWithCompletionHandler:^(void)
    {
        switch (exportSession.status) {
            case AVAssetExportSessionStatusCompleted: {
                NSString *path = [fullPath path];
                NSData *data = [[NSFileManager defaultManager] contentsAtPath:path];
                NSLog(@"size after compress video is %lu", (unsigned long)data.length);
                NSLog(@"Export Complete %d %@", exportSession.status, exportSession.error);
                /*
                 Do your necessary stuff here after compression
                 */
            }
                break;
            case AVAssetExportSessionStatusFailed:
                NSLog(@"Failed:%@", exportSession.error);
                break;
            case AVAssetExportSessionStatusCancelled:
                NSLog(@"Canceled:%@", exportSession.error);
                break;
            default:
                break;
        }
    }];
}

Cache Image From URL working, but returns blank image?

I have two methods: the first checks whether I've already downloaded the image; if not, it retrieves the image from a URL and caches it in the docs directory of my app. If it has been downloaded, it simply retrieves it and, if I have an internet connection, re-downloads it. Here are the two methods:
- (UIImage *)getImageFromUserIMagesFolderInDocsWithName:(NSString *)nameOfFile
{
    UIImage *image = [UIImage imageNamed:nameOfFile];
    if (!image) // image doesn't exist in bundle...
    {
        // Get Image
        NSString *cleanNameOfFile = [[[nameOfFile stringByReplacingOccurrencesOfString:@"." withString:@""]
                                      stringByReplacingOccurrencesOfString:@":" withString:@""]
                                      stringByReplacingOccurrencesOfString:@"/" withString:@""];
        NSString *filePath = [NSHomeDirectory() stringByAppendingPathComponent:[NSString stringWithFormat:@"Documents/%@.png", cleanNameOfFile]];
        image = [[UIImage alloc] initWithData:[NSData dataWithContentsOfFile:filePath]];
        if (!image)
        {
            // image isn't cached
            image = [UIImage imageWithData:[NSData dataWithContentsOfURL:[NSURL URLWithString:nameOfFile]]];
            [self saveImageToUserImagesFolderInDocsWithName:cleanNameOfFile andImage:image];
        }
        else
        {
            // if we have an internet connection, update the cached image
            /*if (isConnectedToInternet) {
                image = [UIImage imageWithData:[NSData dataWithContentsOfURL:[NSURL URLWithString:nameOfFile]]];
                [self saveImageToUserImagesFolderInDocsWithName:cleanNameOfFile andImage:image];
            }*/
            // otherwise just return it
        }
    }
    return image;
}
Here's the method to save the image:
- (void)saveImageToUserImagesFolderInDocsWithName:(NSString *)nameOfFile andImage:(UIImage *)image
{
    NSString *pngPath = [NSHomeDirectory() stringByAppendingPathComponent:[NSString stringWithFormat:@"Documents/%@.png", nameOfFile]];
    [UIImagePNGRepresentation(image) writeToFile:pngPath atomically:YES];
    NSLog(@"directory: %@", [[UIImage alloc] initWithContentsOfFile:pngPath]);
}
The image has already been successfully downloaded and cached to my documents directory (I know because I can see it in the file system), and it successfully reloads the image the first time I call this method. But once I go to another view and re-call this method when I come back to the same view, it's blank, even though the URL is correct. What's wrong here?
1) You should not be writing into a hardcoded path as you do (i.e. "Documents/xxx"), but rather ask for the Application Support directory, use it, and also mark the files so that they don't get uploaded to iCloud (unless you want that). See this link for the specifics. Create a subfolder in it and mark it as not for iCloud backup, as in the sketch below.
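A minimal sketch of that setup (the folder name CachedImages is just an illustration):

// Create a cache folder under Application Support and exclude it from iCloud backup.
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSApplicationSupportDirectory,
                                                     NSUserDomainMask, YES);
NSString *cacheDir = [paths[0] stringByAppendingPathComponent:@"CachedImages"];
[[NSFileManager defaultManager] createDirectoryAtPath:cacheDir
                          withIntermediateDirectories:YES
                                           attributes:nil
                                                error:NULL];
NSURL *cacheURL = [NSURL fileURLWithPath:cacheDir];
[cacheURL setResourceValue:@YES forKey:NSURLIsExcludedFromBackupKey error:NULL];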
2) Try changing:
image = [UIImage imageWithData:[NSData dataWithContentsOfURL:[NSURL URLWithString:nameOfFile]]];
to
image = [UIImage imageWithData:[NSData dataWithContentsOfURL:[NSURL URLWithString:filePath]]];
Maybe you should do:
image = [UIImage imageWithContentsOfFile: nameOfFile];

Remove curled corner from qlgenerator thumbnail

How do I remove the curled corner from a thumbnail created in a Quick Look plugin?
Screenshot of current icon:
Screenshot of what I want:
GeneratePreviewForURL.m:
#include <CoreFoundation/CoreFoundation.h>
#include <CoreServices/CoreServices.h>
#include <QuickLook/QuickLook.h>
#import "GenerateIcon.h"
OSStatus GeneratePreviewForURL(void *thisInterface, QLPreviewRequestRef preview, CFURLRef url, CFStringRef contentTypeUTI, CFDictionaryRef options);
void CancelPreviewGeneration(void *thisInterface, QLPreviewRequestRef preview);
/* -----------------------------------------------------------------------------
Generate a preview for file
This function's job is to create preview for designated file
----------------------------------------------------------------------------- */
OSStatus GeneratePreviewForURL(void *thisInterface, QLPreviewRequestRef preview, CFURLRef url, CFStringRef contentTypeUTI, CFDictionaryRef options)
{
    // To complete your generator please implement the function GeneratePreviewForURL in GeneratePreviewForURL.c
    [GenerateIcon generatePreviewWithRef:preview URL:url];
    return noErr;
}

void CancelPreviewGeneration(void *thisInterface, QLPreviewRequestRef preview)
{
    // Implement only if supported
}
GenerateIcon.m:
//
// GenerateIcon.m
// Windows Binary Icon
//
// Created by Asger Hautop Drewsen on 2/5/12.
// Copyright (c) 2012 Asger Drewsen. All rights reserved.
//
#import "GenerateIcon.h"
@implementation GenerateIcon

+(void)generateThumbnailWithRef:(QLThumbnailRequestRef)requestRef URL:(CFURLRef)url
{
    [GenerateIcon generateMultiWithThumbnailRef:requestRef PreviewRef:nil URL:url];
}

+(void)generatePreviewWithRef:(QLPreviewRequestRef)requestRef URL:(CFURLRef)url
{
    [GenerateIcon generateMultiWithThumbnailRef:nil PreviewRef:requestRef URL:url];
}

+(void)generateMultiWithThumbnailRef:(QLThumbnailRequestRef)thumbnail PreviewRef:(QLPreviewRequestRef)preview URL:(CFURLRef)url
{
    @autoreleasepool {
        NSString *tempDir = NSTemporaryDirectory();
        if (tempDir == nil)
            tempDir = @"/tmp";
        NSFileManager *fileManager = [[NSFileManager alloc] init];
        NSString *directory = [tempDir stringByAppendingFormat:@"%@-%.0f", @"exe-icons", [NSDate timeIntervalSinceReferenceDate] * 1000.0];
        //NSString *directory = [tempDir stringByAppendingPathComponent:@"com.tyilo.exe-icons"];
        /*for (NSString *file in [fileManager contentsOfDirectoryAtPath:directory error:nil])
        {
            [fileManager removeItemAtPath:file error:nil];
        }*/
        [fileManager createDirectoryAtPath:directory withIntermediateDirectories:YES attributes:nil error:nil];
        [[NSTask launchedTaskWithLaunchPath:@"/usr/local/bin/wrestool" arguments:[NSArray arrayWithObjects:
            @"-t",
            @"group_icon",
            @"-o",
            directory,
            @"-x",
            [(__bridge NSURL *)url path],
            nil]] waitUntilExit];
        NSArray *icons = [fileManager contentsOfDirectoryAtPath:directory error:nil];
        if (icons.count > 0)
        {
            NSImage *image = [[NSImage alloc] initWithContentsOfFile:[directory stringByAppendingPathComponent:[icons objectAtIndex:0]]];
            NSData *thumbnailData = [image TIFFRepresentation];
            CGSize size = image.size;
            NSDictionary *properties = [NSDictionary dictionaryWithObjectsAndKeys:
                                        [NSNumber numberWithInt:size.width], kQLPreviewPropertyWidthKey,
                                        [NSNumber numberWithInt:size.height], kQLPreviewPropertyHeightKey,
                                        nil];
            CGContextRef CGContext;
            if (thumbnail)
            {
                CGContext = QLThumbnailRequestCreateContext(thumbnail, size, TRUE, (__bridge CFDictionaryRef)properties);
            }
            else
            {
                CGContext = QLPreviewRequestCreateContext(preview, size, TRUE, (__bridge CFDictionaryRef)properties);
            }
            if (CGContext) {
                NSGraphicsContext* context = [NSGraphicsContext graphicsContextWithGraphicsPort:(void *)CGContext flipped:size.width > size.height];
                if (context) {
                    //These two lines of code are just good safe programming…
                    [NSGraphicsContext saveGraphicsState];
                    [NSGraphicsContext setCurrentContext:context];
                    NSBitmapImageRep *thumbnailBitmap = [NSBitmapImageRep imageRepWithData:thumbnailData];
                    [thumbnailBitmap draw];
                    //This line sets the context back to what it was when we're done
                    [NSGraphicsContext restoreGraphicsState];
                }
                // When we are done with our drawing code QLThumbnailRequestFlushContext() is called to flush the context
                if (thumbnail)
                {
                    QLThumbnailRequestFlushContext(thumbnail, CGContext);
                }
                else
                {
                    QLPreviewRequestFlushContext(preview, CGContext);
                }
                // Release the CGContext
                CFRelease(CGContext);
            }
            /*NSLog(@"%@", [directory stringByAppendingPathComponent:[icons objectAtIndex:0]]);
            CGImageRef image = (__bridge CGImageRef)[[NSImage alloc] initByReferencingFile:[directory stringByAppendingPathComponent:[icons objectAtIndex:0]]];
            QLThumbnailRequestSetImage(thumbnail, image, properties);*/
        }
        else
        {
            NSLog(@"Failed to generate thumbnail!");
        }
    }
}

@end
Edit: Added screenshots.
You need to add the undocumented "IconFlavor" key to the properties dictionary that you supply to QLThumbnailRequestCreateContext() or QLThumbnailRequestSetXXX(), and give it the value 1 for minimal decoration, as sketched below.
See here for an example. At the top of that file are some other values I've discovered for "IconFlavor".
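For instance, a minimal sketch of the properties dictionary (hedged, since the key is undocumented and its behavior could change; size is the image size from the code above):

// "IconFlavor" is undocumented; a value of 1 appears to mean "plain, no decoration".
NSDictionary *properties = @{ @"IconFlavor" : @1,
                              (__bridge NSString *)kQLPreviewPropertyWidthKey  : @(size.width),
                              (__bridge NSString *)kQLPreviewPropertyHeightKey : @(size.height) };
CGContextRef ctx = QLThumbnailRequestCreateContext(thumbnail, size, TRUE,
                                                   (__bridge CFDictionaryRef)properties);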
The aspect of your icons is automatically chosen by Quick Look and there is no public way to customize that. What is your type conformance tree?
For more information on UTIs see Uniform Type Identifiers Overview. Note that your type conformance tree won't necessarily translate to what you want from Quick Look but at least you will have a sane starting point.

Getting camera and lens from exif in Objective-c

I use this code to get EXIF information from a JPEG file. The data does not include information about the camera and lens manufacturer.
- (IBAction)readThis:(id)sender
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    NSString *fileName = @"/Users/joe/img.jpg";
    NSFileManager *fileManager = [NSFileManager defaultManager];
    if ([fileManager fileExistsAtPath:fileName])
    {
        NSBitmapImageRep *imageTest = [NSBitmapImageRep imageRepWithContentsOfFile:fileName];
        NSLog(@"Exif Data in %@ : %@", fileName, [imageTest valueForProperty:@"NSImageEXIFData"]);
    }
    else
        NSLog(@"File %@ not found !", fileName);
    [pool release];
}
How do I get data about the camera and lens manufacturer?
You can use the ImageIO framework. Refer to this link for setting and getting image metadata.
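A minimal sketch of reading those fields with ImageIO (the camera make/model live in the TIFF dictionary; the lens model, when the camera wrote one, is in the Exif dictionary under kCGImagePropertyExifLensModel, which requires a newer SDK; the file path is just the one from the question):

#import <ImageIO/ImageIO.h>

NSURL *url = [NSURL fileURLWithPath:@"/Users/joe/img.jpg"];
CGImageSourceRef source = CGImageSourceCreateWithURL((CFURLRef)url, NULL);
if (source) {
    CFDictionaryRef props = CGImageSourceCopyPropertiesAtIndex(source, 0, NULL);
    NSDictionary *tiff = [(NSDictionary *)props objectForKey:(NSString *)kCGImagePropertyTIFFDictionary];
    NSDictionary *exif = [(NSDictionary *)props objectForKey:(NSString *)kCGImagePropertyExifDictionary];
    NSLog(@"Camera: %@ %@", [tiff objectForKey:(NSString *)kCGImagePropertyTIFFMake],
                            [tiff objectForKey:(NSString *)kCGImagePropertyTIFFModel]);
    NSLog(@"Lens: %@", [exif objectForKey:(NSString *)kCGImagePropertyExifLensModel]); // may be nil
    if (props) CFRelease(props);
    CFRelease(source);
}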