I am creating a photo editing app, and so far I've managed to read the metadata from image files successfully (after getting an answer to this question: Reading Camera data from EXIF while opening NSImage on OS X).
source = CGImageSourceCreateWithURL((__bridge CFURLRef)url, NULL);
NSDictionary *props = (__bridge_transfer NSDictionary *) CGImageSourceCopyPropertiesAtIndex(source, 0, NULL);
This copies all the metadata of the image file to a dictionary, and it works fairly well. However, I couldn't find out how to write this metadata back to a newly created NSImage (or to an image file). Here is how I save my file (where img is an NSImage instance without metadata and self.cachedMetadata is the dictionary read from the initial image):
NSBitmapImageRep *rep = [NSBitmapImageRep imageRepWithData:[img TIFFRepresentation]];
[rep setProperty:NSImageEXIFData withValue:self.cachedMetadata];
NSData *data;
if([[fileName lowercaseString] rangeOfString:@".png"].location != NSNotFound){
data = [rep representationUsingType:NSPNGFileType properties:nil];
}else if([[fileName lowercaseString] rangeOfString:@".tif"].location != NSNotFound){
data = [rep representationUsingType:NSTIFFFileType properties:nil];
}else{ //assume jpeg
data = [rep representationUsingType:NSJPEGFileType properties:@{NSImageCompressionFactor: [NSNumber numberWithFloat:1], NSImageEXIFData: self.cachedMetadata}];
}
[data writeToFile:fileName atomically:YES];
How can I write the metadata? Previously, when the dictionary was EXIF-only, I wrote just the EXIF data for JPEGs successfully, but because EXIF lacked some of the fields the initial images had (IPTC and TIFF tags), I needed to change my reading method. Now I have all the data, but I don't know how to write it to the newly created image file.
Thanks,
Can.
Found an answer from another StackOverflow question: How do you overwrite image metadata?:
(code taken from that question itself and modified for my needs, which contains the answer to my question)
//assuming I've already loaded the image source and read the meta
CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef) data, NULL);
CFStringRef imageType = CGImageSourceGetType(source);
//new empty data to write the final image data to
NSMutableData *resultData = [NSMutableData data];
CGImageDestinationRef imgDest = CGImageDestinationCreateWithData((__bridge CFMutableDataRef)(resultData), imageType, 1, NULL);
//copy image data
CGImageDestinationAddImageFromSource(imgDest, source, 0, (__bridge CFDictionaryRef)(window.cachedMetadata));
BOOL success = CGImageDestinationFinalize(imgDest);
//clean up the Core Foundation objects, which ARC does not manage
CFRelease(imgDest);
CFRelease(source);
This worked perfectly for writing back all the metadata including EXIF, TIFF, and IPTC.
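For completeness, a minimal variant of the same approach (my own sketch, with outputURL standing in for wherever you want the file written) creates the destination with a file URL instead of an in-memory buffer, letting ImageIO write straight to disk:
//assumed: outputURL is an NSURL for the destination file
CGImageDestinationRef fileDest = CGImageDestinationCreateWithURL((__bridge CFURLRef)outputURL, imageType, 1, NULL);
CGImageDestinationAddImageFromSource(fileDest, source, 0, (__bridge CFDictionaryRef)(window.cachedMetadata));
BOOL ok = CGImageDestinationFinalize(fileDest);
CFRelease(fileDest);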
You could also try using the "exiv2" tool, or looking at its code.
Related
Before posting this question here, I read all the materials and similar posts on it, but I can't grasp the main "idea" of what is happening and how to fix it. In ten similar questions, everyone fixed this problem with @autoreleasepool; in my case I was unable to achieve my goal that way. So, while converting a cv::Mat to a UIImage, memory keeps increasing depending on the image size.
Below are the steps I perform before converting the Mat to a UIImage:
cv::Mat undistorted = cv::Mat(cvSize(maxWidth,maxHeight), CV_8UC1);
cv::Mat original = [MatStructure convertUIImageToMat:adjustedImage];
cv::warpPerspective(original, undistorted, cv::getPerspectiveTransform(src, dst), cvSize(maxWidth, maxHeight));
original.release();
adjustedImage = [MatStructure convertMatToUIImage:undistorted];
undistorted.release();
The problem is visible while I am converting my Mat to a UIImage: memory goes up to 400 MB, and it rises on every cycle.
+ (UIImage *) convertMatToUIImage: (cv::Mat) cvMat {
NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];
CGColorSpaceRef colorSpace;
if (cvMat.elemSize() == 1) {
colorSpace = CGColorSpaceCreateDeviceGray();
} else {
colorSpace = CGColorSpaceCreateDeviceRGB();
}
CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef) data);
CGBitmapInfo bmInfo = kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big;
CGImageRef imageRef = CGImageCreate(cvMat.cols, // width
cvMat.rows, // height
8, // bits per component
8 * cvMat.elemSize(), // bits per pixel
cvMat.step.p[0], // bytesPerRow
colorSpace, // colorspace
bmInfo, // bitmap info
provider, // CGDataProviderRef
NULL, // decode
false, // should interpolate
kCGRenderingIntentDefault // intent
);
UIImage *image = [[UIImage alloc] initWithCGImage:imageRef];
CGImageRelease(imageRef);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpace);
cvMat.release(); // this line is optional.
return image;
}
I have seen a lot of similar code, but every single example works like this one.
I believe the problem lies in (__bridge CFDataRef) and ARC being unable to clean up this data; if I try CFRelease((__bridge CFDataRef)data), a crash happens, because the program looks for allocated memory that has already been freed.
I am using OpenCV 3 and have tried its MatToUIImage method, but the problem still exists. The Leaks profiler shows no leaks at all, and the most memory-expensive task is convertMatToUIImage.
I have been reading about this all day but actually can't find any useful solution yet.
Currently I work in Swift 3.0 with a class that inherits from class XXX, and it uses the Objective-C class to crop something and then return a UIImage as well. In deinit I set this inherited class property to nil, but the problem still exists. I also think that dataWithBytes: duplicates memory: if I have 16 MB at the start, it will be 32 MB after creating the NSData.
And please, if you can suggest useful threads about this problem, I will be glad to read them all. Thanks for the help.
After working on this problem for more than three days, I had to rewrite the function, and it worked 100%; I have tested it on five different devices.
CFRelease, free(), and @autoreleasepool did not help me at all, so I implemented this:
data = UIImageJPEGRepresentation([[UIImage alloc] initWithCGImage:imageRef], 0.2f); // because images are 30MB and up
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
NSString *appFile = [documentsDirectory stringByAppendingPathComponent:@"MyFile.jpeg"];
[data writeToFile:appFile atomically:NO];
data = nil;
After this solution everything worked fine. So I grab the UIImage and convert it to NSData, then save it to the local directory; the only thing left is to read the data back from the directory. Hope this thread will help someone one day.
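As an aside on the dataWithBytes: duplication suspected in the question, here is a sketch (my own suggestion, not part of the accepted rewrite) that hands the pixel buffer to Core Graphics without the autoreleased NSData copy, so the buffer is freed deterministically when the image is released. It assumes an Objective-C++ (.mm) file, since it touches cv::Mat directly:
// Called by Core Graphics once it no longer needs the bytes;
// deletes the heap-allocated Mat that owns them.
static void ReleaseMatPixels(void *info, const void *data, size_t size) {
    delete static_cast<cv::Mat *>(info);
}

// In convertMatToUIImage:, instead of dataWithBytes: + CGDataProviderCreateWithCFData:
cv::Mat *owned = new cv::Mat(cvMat.clone()); // single copy, owned by the provider
CGDataProviderRef provider = CGDataProviderCreateWithData(owned,
                                                          owned->data,
                                                          owned->elemSize() * owned->total(),
                                                          ReleaseMatPixels);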
I'm using some tricks to try to read the raw output of an AVAssetWriter while it is being written to disk. When I reassemble the individual files by concatenating them, the resulting file is the same exact number of bytes as the AVAssetWriter's output file. However, the reassembled file will not play in QuickTime or be parsed by FFmpeg because there is data corruption. A few bytes here and there have been changed, rendering the resulting file unusable. I assume this is occurring on the EOF boundary of each read, but it isn't consistent corruption.
I plan to eventually use code similar to this to parse out individual H.264 NAL units from the encoder to packetize them and send them over RTP, however if I can't trust the data being read from disk I might have to use another solution.
Is there an explanation/fix for this data corruption? And are there any other resources/links you have found on how to parse the NAL units to packetize over RTP?
Full code here: AVAppleEncoder.m
// Modified from
// http://www.davidhamrick.com/2011/10/13/Monitoring-Files-With-GCD-Being-Edited-With-A-Text-Editor.html
- (void)watchOutputFileHandle
{
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
int fildes = open([[movieURL path] UTF8String], O_EVTONLY);
source = dispatch_source_create(DISPATCH_SOURCE_TYPE_VNODE,fildes,
DISPATCH_VNODE_DELETE | DISPATCH_VNODE_WRITE | DISPATCH_VNODE_EXTEND | DISPATCH_VNODE_ATTRIB | DISPATCH_VNODE_LINK | DISPATCH_VNODE_RENAME | DISPATCH_VNODE_REVOKE,
queue);
dispatch_source_set_event_handler(source, ^
{
unsigned long flags = dispatch_source_get_data(source);
if(flags & DISPATCH_VNODE_DELETE)
{
dispatch_source_cancel(source);
//[blockSelf watchStyleSheet:path];
}
if(flags & DISPATCH_VNODE_EXTEND)
{
//NSLog(#"File size changed");
NSError *error = nil;
NSFileHandle *fileHandle = [NSFileHandle fileHandleForReadingFromURL:movieURL error:&error];
if (error) {
[self showError:error];
}
[fileHandle seekToFileOffset:fileOffset];
NSData *newData = [fileHandle readDataToEndOfFile];
if ([newData length] > 0) {
NSLog(#"newData (%lld): %d bytes", fileOffset, [newData length]);
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *basePath = ([paths count] > 0) ? [paths objectAtIndex:0] : nil;
NSString *movieName = [NSString stringWithFormat:@"%d.%lld.%d.mp4", fileNumber, fileOffset, [newData length]];
NSString *path = [NSString stringWithFormat:@"%@/%@", basePath, movieName];
[newData writeToFile:path atomically:NO];
fileNumber++;
fileOffset = [fileHandle offsetInFile];
}
}
});
dispatch_source_set_cancel_handler(source, ^(void)
{
close(fildes);
});
dispatch_resume(source);
}
Here are some similar questions I have found, which don't exactly answer my question:
Get PTS from raw H264 mdat generated by iOS AVAssetWriter
streaming video FROM an iPhone
Parsing h.264 NAL units from a quicktime MOV file
Realtime Audio/Video Streaming FROM iPhone to another device (Browser, or iPhone)
When I eventually figure this out, I will release an open source library to assist people who try to do this in the future.
Thank you!
Update: The corruption doesn't happen at the EOF boundary. It seems like parts of the file are re-written after finishWriting is called. This first file was chunked at 4KB, so the area changed isn't anywhere near an EOF boundary. It seems to be corrupted near new "moov" elements as well when movieFragmentInterval is enabled.
Correct file on the left, broken file on the right.
I ended up abandoning the "read while it's written" approach in favor of a manual chunking approach where I call finishWriting every 5 seconds on a background thread. I was able to drop a negligible number of frames using a method originally described here:
- (void) segmentRecording:(NSTimer*)timer {
AVAssetWriter *tempAssetWriter = self.assetWriter;
AVAssetWriterInput *tempAudioEncoder = self.audioEncoder;
AVAssetWriterInput *tempVideoEncoder = self.videoEncoder;
self.assetWriter = queuedAssetWriter;
self.audioEncoder = queuedAudioEncoder;
self.videoEncoder = queuedVideoEncoder;
//NSLog(#"Switching encoders");
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^{
[tempAudioEncoder markAsFinished];
[tempVideoEncoder markAsFinished];
if (tempAssetWriter.status == AVAssetWriterStatusWriting) {
if(![tempAssetWriter finishWriting]) {
[self showError:[tempAssetWriter error]];
}
}
if (self.readyToRecordAudio && self.readyToRecordVideo) {
NSError *error = nil;
self.queuedAssetWriter = [[AVAssetWriter alloc] initWithURL:[self newMovieURL] fileType:(NSString *)kUTTypeMPEG4 error:&error];
if (error) {
[self showError:error];
}
self.queuedVideoEncoder = [self setupVideoEncoderWithAssetWriter:self.queuedAssetWriter formatDescription:videoFormatDescription bitsPerSecond:videoBPS];
self.queuedAudioEncoder = [self setupAudioEncoderWithAssetWriter:self.queuedAssetWriter formatDescription:audioFormatDescription bitsPerSecond:audioBPS];
//NSLog(#"Encoder switch finished");
}
});
}
Full source code: https://github.com/chrisballinger/FFmpeg-iOS-Encoder/blob/master/AVSegmentingAppleEncoder.m
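For reference, a minimal sketch of how segmentRecording: might be driven (the timer property name is illustrative, not from the linked source):
// Fire the segmenting method every 5 seconds, as described above.
self.segmentationTimer = [NSTimer scheduledTimerWithTimeInterval:5.0
                                                          target:self
                                                        selector:@selector(segmentRecording:)
                                                        userInfo:nil
                                                         repeats:YES];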
When reading a MOV file that is ACTIVELY recording on iOS, you MUST check the 4 bytes mentioned for changes, re-write those four bytes, then check for additional data in the file and send that additional data. When done, truncate the file to the file size written.
Obviously this depends on where you are sending the file. I use a send(offset, number of bytes) to the receiver. So I send "additional data", "more additional data", ..., the new data at (24, 4), "more additional data".
Typically iOS only writes the 4-byte record (the size of the data section) when the file is about to be closed, i.e. after the last media write (see the info on "QuickTime atoms"). Unfortunately, this also means the MOV file is not PLAYABLE until the recording is completed (and the movie descriptors are written at the END of the file).
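A minimal sketch of the checking step described above, assuming the size field sits at offset 24 as in the (24, 4) example (moviePath and lastSizeField are illustrative names):
NSFileHandle *fh = [NSFileHandle fileHandleForReadingAtPath:moviePath];
[fh seekToFileOffset:24];
NSData *sizeField = [fh readDataOfLength:4]; // the 4-byte size-of-data-section record
if (lastSizeField != nil && ![sizeField isEqualToData:lastSizeField]) {
    // the field changed: re-send bytes (24, 4) to the receiver
}
lastSizeField = sizeField;
[fh closeFile];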
In my iOS app I am listing out some data in a scroll view. When a button is clicked, the data on the page is generated in PDF file format. Following is the bit of code:
- (void)createPDFfromUIView:(UIView*)aView saveToDocumentsWithFileName:(NSString*)aFilename{
// Creates a mutable data object for updating with binary data, like a byte array
NSMutableData *pdfData = [NSMutableData data];
// Points the pdf converter to the mutable data object and to the UIView to be converted
UIGraphicsBeginPDFContextToData(pdfData, aView.bounds, nil);
UIGraphicsBeginPDFPage();
CGContextRef pdfContext = UIGraphicsGetCurrentContext();
// draws rect to the view and thus this is captured by UIGraphicsBeginPDFContextToData
[aView.layer renderInContext:pdfContext];
// remove PDF rendering context
UIGraphicsEndPDFContext();
// Retrieves the document directories from the iOS device
NSArray* documentDirectories = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask,YES);
NSString* documentDirectory = [documentDirectories objectAtIndex:0];
NSString* documentDirectoryFilename = [documentDirectory stringByAppendingPathComponent:aFilename];
// instructs the mutable data object to write its context to a file on disk
[pdfData writeToFile:documentDirectoryFilename atomically:YES];
NSLog(#"documentDirectoryFileName: %#",documentDirectoryFilename);
}
From the above code I am able to generate a PDF file, but I am facing the following issue.
If I have more than 50 items, in my app I am able to see only the first 10, and to see the others I need to scroll. In this case, the generated PDF captures only the first 10 items and not the others. How can I get the other data too?
renderInContext: will only render what you currently see - UIScrollView is smart and will only keep the views in the hierarchy that you need right now. You need a for-loop to manually scroll and redraw. This might be tricky to get right; maybe call [aView layoutSubviews] manually after scrolling to force the scrollView to rearrange the subviews instantly.
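A rough sketch of that loop (my reading of the approach, not steipete's code; scrollView and pdfData are assumed to exist):
CGFloat pageHeight = scrollView.frame.size.height;
int pages = (int)ceil(scrollView.contentSize.height / pageHeight);
UIGraphicsBeginPDFContextToData(pdfData, scrollView.bounds, nil);
for (int i = 0; i < pages; i++) {
    // Scroll one page further and force the scroll view to lay out
    // its lazily managed subviews immediately.
    [scrollView setContentOffset:CGPointMake(0, i * pageHeight) animated:NO];
    [scrollView layoutSubviews];
    UIGraphicsBeginPDFPage();
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    // Shift the context so the current page lands at the PDF page origin.
    CGContextTranslateCTM(ctx, 0, -i * pageHeight);
    [scrollView.layer renderInContext:ctx];
}
UIGraphicsEndPDFContext();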
As steipete said, you can use a loop to renderInContext:
- You need to calculate the number of loop iterations:
numOfLoop = webView.scrollView.contentSize.height / webView.frame.size.height
- Use a loop to renderInContext:; inside for(int i = 0; i < numOfLoop; i++):
1. UIGraphicsBeginPDFPage()
2. Get the current context
3. renderInContext:
This way runs correctly in my app, but I am hitting a BIG issue when numOfLoop gets larger:
- renderInContext: becomes slow
- memory keeps increasing -> the app closes
Given this, how can I remove/release the context within each loop iteration?
Alternatively, you can use https://github.com/iclems/iOS-htmltopdf/blob/master/NDHTMLtoPDF.h to generate the PDF.
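As an aside on the memory question above, a common mitigation (my suggestion, not from this answer) is to drain autoreleased objects once per page, so per-iteration allocations don't pile up until the loop ends:
for (int i = 0; i < numOfLoop; i++) {
    @autoreleasepool {
        // scroll, UIGraphicsBeginPDFPage(), get the context, renderInContext:
        // objects autoreleased while rendering this page are freed here
    }
}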
@steipete, @all: Hi, I tried the code below. It generates a multipage PDF, but the visible area of the view is repeated on all the pages. Please add any further ideas to improve the result.
-(void)createPDFfromUIView:(UIWebView*)aView saveToDocumentsWithFileName:(NSString*)aFilename
{
NSString *oldFilePath = [[NSBundle mainBundle] pathForResource:@"ABC" ofType:@"pdf"];
NSMutableData *pdfData = [NSMutableData data];
UIGraphicsBeginPDFContextToData(pdfData, aView.bounds, nil);
CFURLRef url = CFURLCreateWithFileSystemPath (NULL, (CFStringRef)oldFilePath, kCFURLPOSIXPathStyle, 0);
CGPDFDocumentRef templateDocument = CGPDFDocumentCreateWithURL(url);
CFRelease(url);
size_t count = CGPDFDocumentGetNumberOfPages(templateDocument);
for (size_t pageNumber = 1; pageNumber <= count; pageNumber++)
{
UIGraphicsBeginPDFPage();
CGContextRef pdfContext = UIGraphicsGetCurrentContext();
[aView.layer renderInContext:pdfContext];
}
CGPDFDocumentRelease(templateDocument);
// remove PDF rendering context
UIGraphicsEndPDFContext();
// Retrieves the document directories from the iOS device
NSArray* documentDirectories = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,NSUserDomainMask,YES);
NSString* documentDirectory = [documentDirectories objectAtIndex:0];
NSString* documentDirectoryFilename = [documentDirectory stringByAppendingPathComponent:aFilename];
[pdfData writeToFile:documentDirectoryFilename atomically:YES];
NSLog(#"documentDirectoryFileName: %#",documentDirectoryFilename);
}
Check the ScrollViewToPDF example.
It uses the same scroll view layer renderInContext: approach, but here the PDF is created according to your requirement, such as a one-page or multi-page PDF.
Note: it captures all visible as well as invisible parts of the scrollView.
So, this is what I need:
Take an NSImage
Compress it as much as possible (lossless compression, no obvious drop in quality)
Save it back to disk
In the past, I tried using OptiPNG as a compiled binary (getting results asynchronously), which worked.
However, what if I don't want a terminal-like solution with external apps, but something more Cocoa-native?
Any ideas?
NSImage* image; // your NSImage
// With compression
NSData* data = [image
TIFFRepresentationUsingCompression:NSTIFFCompressionLZW
factor:0.5f]; // you can change factor between 0 and 1.0 for preferred compression rate
if (data != nil) {
NSBitmapImageRep* bitmap = [[NSBitmapImageRep alloc] initWithData:data];
if (bitmap != nil) {
NSData* bitmapData = [bitmap
representationUsingType:NSBitmapImageFileTypePNG
properties:#{}];
if (bitmapData != nil) {
[bitmapData
writeToFile:<file_path> //path where to save png
atomically:true];
}
}
}
Update: As mentioned in the comments, this does not compress the PNG.
However, I have used this code in a Share Extension, and it compresses received screenshots well (from 1.1-1.5 MB when saved plainly down to 0.4-0.7 MB using representationUsingType:) and without quality loss, just as the OP asked.
Apple's reference for CGImageDestinationRef mentions that it "can contain thumbnail images as well as properties for each image". The properties part works nicely, by setting the properties parameter when adding an image to the CGImageDestinationRef, but I can't figure out how to add the thumbnail.
I want to add a thumbnail (or pass it directly from a CGImageSourceRef) to CGImageDestinationRef, such that when the resulting image is opened with CGImageSourceRef I can use CGImageSourceCreateThumbnailAtIndex to retrieve the original thumbnail.
NSDictionary* thumbOpts = [NSDictionary dictionaryWithObjectsAndKeys:
(id)kCFBooleanTrue, (id)kCGImageSourceCreateThumbnailWithTransform,
(id)kCFBooleanFalse, (id)kCGImageSourceCreateThumbnailFromImageIfAbsent,
[NSNumber numberWithInt:128], (id)kCGImageSourceThumbnailMaxPixelSize,
nil];
CGImageRef thumb = CGImageSourceCreateThumbnailAtIndex(iSource, 0, (CFDictionaryRef)thumbOpts);
The code above returns the thumbnail of the images that have it stored, but when I pass the image through CGImageDestinationRef, the thumbnail is lost.
Is there some way to keep the original thumbnail (or create a new one)? CGImageProperties Reference doesn't seem to have any keys where to store the thumbnail.
One way I've found to get a thumbnail back is to specify kCGImageSourceCreateThumbnailFromImageIfAbsent in the settings dictionary.
Specify the dictionary as follows
NSDictionary* thumbOpts = [NSDictionary dictionaryWithObjectsAndKeys:
(id)kCFBooleanTrue, (id)kCGImageSourceCreateThumbnailFromImageIfAbsent,
(id)kCFBooleanTrue, (id)kCGImageSourceCreateThumbnailWithTransform,
[NSNumber numberWithInt:128], (id)kCGImageSourceThumbnailMaxPixelSize,
nil];
Update: In order to add a secondary (thumbnail) image to the original CGImageDestinationRef, do the following after you've added the primary image:
size_t thumbWidth = 640;
size_t thumbHeight = imageHeight / (imageWidth / thumbWidth);
size_t thumbBytesPerRow = thumbWidth * 4;
size_t thumbTotalBytes = thumbBytesPerRow * thumbHeight;
void *thumbData = malloc(thumbTotalBytes);
CGColorSpaceRef thumbColorSpace = CGColorSpaceCreateDeviceRGB(); // keep a reference so it can be released
CGContextRef thumbContext = CGBitmapContextCreate(thumbData,
                                                  thumbWidth,
                                                  thumbHeight,
                                                  8,
                                                  thumbBytesPerRow,
                                                  thumbColorSpace,
                                                  kCGImageAlphaNoneSkipFirst);
CGColorSpaceRelease(thumbColorSpace);
CGRect thumbRect = CGRectMake(0.f, 0.f, thumbWidth, thumbHeight);
CGContextDrawImage(thumbContext, thumbRect, fullImage); // draw the full image scaled down into the thumbnail context
CGImageRef thumbImage = CGBitmapContextCreateImage(thumbContext);
CGContextRelease(thumbContext);
free(thumbData); // the backing buffer is no longer needed once the image has been created
CGImageDestinationAddImage(destination, thumbImage, nil);
CGImageRelease(thumbImage);
Then, in order to get the thumbnail back out, do the following:
CFURLRef imageFileURLRef = (__bridge CFURLRef)url;
NSDictionary* sourceOptions = @{(id)kCGImageSourceShouldCache: (id)kCFBooleanFalse,
                                (id)kCGImageSourceTypeIdentifierHint: (id)kUTTypeTIFF};
CFDictionaryRef sourceOptionsRef = (__bridge CFDictionaryRef)sourceOptions;
CGImageSourceRef imageSource = CGImageSourceCreateWithURL(imageFileURLRef, sourceOptionsRef);
CGImageRef thumb = CGImageSourceCreateImageAtIndex(imageSource, 1, sourceOptionsRef);
UIImage *thumbImage = [[UIImage alloc] initWithCGImage:thumb];
CFRelease(imageSource);
CGImageRelease(thumb);
OK, since no one responded (even during the bounty), and after the hours I spent trying to pass the thumbnail, my guess is that the documentation for CGImageDestination is misleading. There doesn't seem to be any (documented) way to pass the image thumbnail to a CGImageDestinationRef (at least not up to, and including, OS X 10.7.0 and iOS 5.0).
If anyone thinks otherwise, post an answer and I'll accept it (if it's correct).
There is a half-documented property for embedding EXIF thumbnails on iOS 8.0+: kCGImageDestinationEmbedThumbnail.
Apple's developer doc is empty, but the header file (CGImageDestination.h) contains the following comment:
/* Enable or disable thumbnail embedding for JPEG and HEIF.
* The value should be kCFBooleanTrue or kCFBooleanFalse. Defaults to kCFBooleanFalse */
IMAGEIO_EXTERN const CFStringRef kCGImageDestinationEmbedThumbnail IMAGEIO_AVAILABLE_STARTING(__MAC_10_10, __IPHONE_8_0);
However, you can't specify arbitrary thumbnail data; it will be created for you automatically.
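A minimal usage sketch, assuming an existing CGImageRef named image and a destination jpegURL (both names illustrative; kUTTypeJPEG needs MobileCoreServices on iOS):
NSDictionary *destOpts = @{(id)kCGImageDestinationEmbedThumbnail: (id)kCFBooleanTrue};
CGImageDestinationRef dest = CGImageDestinationCreateWithURL((__bridge CFURLRef)jpegURL,
                                                             kUTTypeJPEG, 1, NULL);
CGImageDestinationAddImage(dest, image, (__bridge CFDictionaryRef)destOpts);
CGImageDestinationFinalize(dest);
CFRelease(dest);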