Colored artifacts while converting PDF pages to PNG via NSPDFImageRep - Objective-C

I want to convert the individual pages of a PDF to PNG files. Afterwards I iterate over all pixels looking for colored ones; the goal is to find out which pages of the PDF contain color.
For most pages this works great, but on some pages I get colored artifacts: red fringes on the left side of a letter and blue fringes on the right side.
This is the original from the PDF:
(source: brebook.de)
And this is the converted letter in the PNG:
How can I prevent these ugly artifacts? My whole approach based on detecting colored pixels falls apart because of them.
This is the code that converts a single page to a PNG:
// Path for saving colored pages
NSURL *savePath = [self.homePath URLByAppendingPathComponent:@"brebook_temp/"];
//PDFDocument *thePDF = [[PDFDocument alloc] initWithURL:self.filename];
NSPDFImageRep *img = [NSPDFImageRep imageRepWithContentsOfURL:self.filename];
int coloredPages = 0;

for (int i = 0; i < self.getNumberOfPages; i++) {
    @autoreleasepool {
        // set current page label to current page
        self.currentPage = i + 1;
        // set current page to i
        [img setCurrentPage:i];
        // create a new NSImage instance
        NSImage *singlePage = [NSImage new];
        // add the current page as representation
        [singlePage addRepresentation:img];
        // get "old" size
        NSSize oldSize = [singlePage size];
        // scale the size up for 150 dpi output (NSImage sizes are in 72-dpi points)
        NSSize newSize;
        newSize.width  = oldSize.width  * 150 / 72;
        newSize.height = oldSize.height * 150 / 72;
        // make new image
        NSImage *resizedImage = [[NSImage alloc] initWithSize:NSMakeSize(newSize.width, newSize.height)];
        // draw into the new image
        [resizedImage lockFocus];
        [singlePage drawInRect:NSMakeRect(0, 0, newSize.width, newSize.height)
                      fromRect:NSMakeRect(0, 0, oldSize.width, oldSize.height)
                     operation:NSCompositeSourceOver
                      fraction:1.0];
        [resizedImage unlockFocus];
        // set URL for the single page
        NSURL *pageFilename = [savePath URLByAppendingPathComponent:[NSString stringWithFormat:@"Seite_%d.png", i + 1]];
        [[NSFileManager defaultManager] createFileAtPath:[pageFilename path]
                                                contents:[[NSBitmapImageRep imageRepWithData:[singlePage TIFFRepresentation]]
                                                             representationUsingType:NSPNGFileType properties:nil]
                                              attributes:nil];
        if ([self getColoredPixelOfImageFromURL:pageFilename] > 0) {
            coloredPages++;
            NSLog(@"SEITE %d ------------------------------------------------------------", i + 1);
            _coloredPages = coloredPages;
            [self.coloredPagesList appendString:[NSString stringWithFormat:@"%d,", i + 1]];
        } else {
            NSError *error = nil;
            [[NSFileManager defaultManager] removeItemAtURL:pageFilename error:&error];
            if (error) {
                NSLog(@"%@", error);
            }
        }
        resizedImage = nil;
        singlePage = nil;
    }
}
[self.appTimer invalidate];
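For context, getColoredPixelOfImageFromURL: is not shown in the question; a minimal sketch of such a per-pixel RGB check (an assumed implementation, using the slow but readable colorAtX:y:) might look like this:
- (NSUInteger)getColoredPixelOfImageFromURL:(NSURL *)url {
    // Sketch only (the actual helper is not shown): count pixels whose RGB
    // components differ, i.e. pixels that are not pure gray/black/white.
    NSBitmapImageRep *rep = [NSBitmapImageRep imageRepWithData:[NSData dataWithContentsOfURL:url]];
    if (!rep) return 0;

    NSUInteger coloredPixels = 0;
    for (NSInteger y = 0; y < rep.pixelsHigh; y++) {
        for (NSInteger x = 0; x < rep.pixelsWide; x++) {
            // Convert to a calibrated RGB color so getRed:green:blue:alpha: is safe.
            NSColor *c = [[rep colorAtX:x y:y] colorUsingColorSpaceName:NSCalibratedRGBColorSpace];
            CGFloat r = 0, g = 0, b = 0, a = 0;
            [c getRed:&r green:&g blue:&b alpha:&a];
            if (fabs(r - g) > 0.01 || fabs(r - b) > 0.01 || fabs(g - b) > 0.01) {
                coloredPixels++;
            }
        }
    }
    return coloredPixels;
}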
Thank you so much for helping!!!

Related

Objective-C - AVAssetReader and AVAssetWriter to overlay video

I am trying to overlay a recorded video with some images using AVAssetReader and AVAssetWriter. Following this tutorial, I am able to copy a video (and its audio) into a new file. Now my objective is to overlay some of the original video frames with images, using this code:
while ([assetWriterVideoInput isReadyForMoreMediaData] && !completedOrFailed)
{
    // Get the next video sample buffer, and append it to the output file.
    CMSampleBufferRef sampleBuffer = [assetReaderVideoOutput copyNextSampleBuffer];
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);

    EAGLContext *eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    CIContext *ciContext = [CIContext contextWithEAGLContext:eaglContext options:@{kCIContextWorkingColorSpace : [NSNull null]}];
    UIFont *font = [UIFont fontWithName:@"Helvetica" size:40];
    NSDictionary *attributes = @{NSFontAttributeName : font, NSForegroundColorAttributeName : [UIColor lightTextColor]};
    UIImage *img = [self imageFromText:@"test" :attributes];
    CIImage *filteredImage = [[CIImage alloc] initWithCGImage:img.CGImage];
    [ciContext render:filteredImage toCVPixelBuffer:pixelBuffer bounds:[filteredImage extent] colorSpace:CGColorSpaceCreateDeviceRGB()];

    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

    if (sampleBuffer != NULL)
    {
        BOOL success = [assetWriterVideoInput appendSampleBuffer:sampleBuffer];
        CFRelease(sampleBuffer);
        sampleBuffer = NULL;
        completedOrFailed = !success;
    }
    else
    {
        completedOrFailed = YES;
    }
}
And this creates the image from text:
- (UIImage *)imageFromText:(NSString *)text :(NSDictionary *)attributes {
    CGSize size = [text sizeWithAttributes:attributes];
    UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
    [text drawAtPoint:CGPointMake(0.0, 0.0) withAttributes:attributes];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
The video and audio are copied, but there is no text on my video.
Question 1: Why is this code not working?
Moreover, I want to be able to read the timecode of the frame currently being processed. For example, I would like to overlay text showing the current timecode on the video.
I tried this code, following this tutorial:
AVAsset *localAsset = [AVAsset assetWithURL:mURL];
NSError *localError;
AVAssetReader *assetReader = [[AVAssetReader alloc] initWithAsset:localAsset error:&localError];
BOOL success = (assetReader != nil);

// Create asset reader output for the first timecode track of the asset
if (success) {
    AVAssetTrack *timecodeTrack = nil;

    // Grab first timecode track, if the asset has one
    NSArray *timecodeTracks = [localAsset tracksWithMediaType:AVMediaTypeTimecode];
    if ([timecodeTracks count] > 0)
        timecodeTrack = [timecodeTracks objectAtIndex:0];

    if (timecodeTrack) {
        AVAssetReaderTrackOutput *timecodeOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:timecodeTrack outputSettings:nil];
        [assetReader addOutput:timecodeOutput];
    } else {
        NSLog(@"%@ has no timecode tracks", localAsset);
    }
}
But I get the log:
[...] has no timecode tracks
Question 2: Why doesn't my video have an AVMediaTypeTimecode track? And how can I get the current frame's timecode?
Thanks for your help
I found the solutions:
To overlay video frames, you need to fix the decompression settings:
NSString *key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
NSNumber *value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
NSDictionary *decompressionVideoSettings = [NSDictionary dictionaryWithObject:value forKey:key];

// Decompress to 32BGRA so Core Image can render into the pixel buffers, then create the asset reader output.
assetReaderVideoOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:assetVideoTrack outputSettings:decompressionVideoSettings];
To get the frame timestamp, you have to read the video information and then use a counter to compute the current timestamp:
durationSeconds = CMTimeGetSeconds(asset.duration);
timePerFrame = 1.0 / (Float64)assetVideoTrack.nominalFrameRate;
totalFrames = durationSeconds * assetVideoTrack.nominalFrameRate;
Then, inside this loop:
while ([assetWriterVideoInput isReadyForMoreMediaData] && !completedOrFailed)
you can compute the timestamp:
CMSampleBufferRef sampleBuffer = [assetReaderVideoOutput copyNextSampleBuffer];
if (sampleBuffer != NULL) {
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    if (pixelBuffer) {
        Float64 secondsIn = ((float)counter / totalFrames) * durationSeconds;
        CMTime imageTimeEstimate = CMTimeMakeWithSeconds(secondsIn, 600);
        mergeTime = CMTimeGetSeconds(imageTimeEstimate);
        counter++;
    }
}
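As an aside, instead of approximating the time with a frame counter, the presentation timestamp can also be read straight from the sample buffer (a sketch, not part of the original answer):
// Alternative sketch: read the presentation timestamp directly from the buffer.
CMTime pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
Float64 seconds = CMTimeGetSeconds(pts);
NSString *timecodeText = [NSString stringWithFormat:@"%02d:%02d:%05.2f",
                          (int)(seconds / 3600),
                          ((int)(seconds / 60)) % 60,
                          fmod(seconds, 60.0)];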
I hope it helps!

Creating an animated GIF in Cocoa - defining frame type

I've been able to adapt some code found on SO to produce an animated GIF from "screenshots" of my view, but the results are unpredictable. GIF frames are sometimes full images ("replace" mode, as GIMP labels it), and other times just a "diff" from the previous frame ("combine" mode).
From what I've seen, when fewer and/or smaller frames are involved, Core Graphics writes the GIF in "combine" mode but fails to get the colors right: the moving parts are colored correctly, while the background is wrong.
When CG saves the GIF as full frames, the colors are ok. The file size is larger, but hey, obviously you cannot have the best of both worlds. :)
Is there a way to either:
a) force CG to create "full frames" when saving the GIF
b) fix the colors (color table?)
What I do is (ARC mode):
capture the visible part of the view with
[[scrollView contentView] dataWithPDFInsideRect:[[scrollView contentView] visibleRect]];
convert and resize it to an NSBitmapImageRep of PNG type
- (NSMutableDictionary *)pngImageProps:(int)quality {
    NSMutableDictionary *pngImageProps = [[NSMutableDictionary alloc] init];
    [pngImageProps setValue:[NSNumber numberWithBool:NO] forKey:NSImageInterlaced];
    double compressionF = 1;
    [pngImageProps setValue:[NSNumber numberWithFloat:compressionF] forKey:NSImageCompressionFactor];
    return pngImageProps;
}

- (NSData *)resizeImageToData:(NSData *)data toDimX:(int)xdim andDimY:(int)ydim withQuality:(int)quality {
    NSImage *image = [[NSImage alloc] initWithData:data];
    NSRect inRect = NSZeroRect;
    inRect.size = [image size];
    NSRect outRect = NSMakeRect(0, 0, xdim, ydim);

    NSImage *outImage = [[NSImage alloc] initWithSize:outRect.size];
    [outImage lockFocus];
    [image drawInRect:outRect fromRect:inRect operation:NSCompositeCopy fraction:1];
    NSBitmapImageRep *bitmapRep = [[NSBitmapImageRep alloc] initWithFocusedViewRect:outRect];
    [outImage unlockFocus];

    NSMutableDictionary *imageProps = [self pngImageProps:quality];
    NSData *imageData = [bitmapRep representationUsingType:NSPNGFileType properties:imageProps];
    return [imageData copy];
}
get the array of BitmapReps and create the GIF
- (CGImageRef)pngRepDataToCgImageRef:(NSData *)data {
    CFDataRef imgData = (__bridge CFDataRef)data;
    CGDataProviderRef imgDataProvider = CGDataProviderCreateWithCFData(imgData);
    CGImageRef image = CGImageCreateWithPNGDataProvider(imgDataProvider, NULL, true, kCGRenderingIntentDefault);
    CGDataProviderRelease(imgDataProvider); // the image keeps what it needs; the caller owns the returned CGImageRef
    return image;
}

////////// create GIF from
NSArray *images; // holds all BitmapReps
CGImageDestinationRef destination = CGImageDestinationCreateWithURL((__bridge CFURLRef)[NSURL fileURLWithPath:pot],
                                                                    kUTTypeGIF,
                                                                    allImages,
                                                                    NULL);
// set frame delay
NSDictionary *frameProperties = [NSDictionary
    dictionaryWithObject:[NSDictionary
                             dictionaryWithObject:[NSNumber numberWithFloat:0.2f]
                                           forKey:(NSString *)kCGImagePropertyGIFDelayTime]
                  forKey:(NSString *)kCGImagePropertyGIFDictionary];

// set gif color properties
NSMutableDictionary *gifPropsDict = [[NSMutableDictionary alloc] init];
[gifPropsDict setObject:(NSString *)kCGImagePropertyColorModelRGB forKey:(NSString *)kCGImagePropertyColorModel];
[gifPropsDict setObject:[NSNumber numberWithBool:YES] forKey:(NSString *)kCGImagePropertyGIFHasGlobalColorMap];

// set gif loop
NSDictionary *gifProperties = [NSDictionary
    dictionaryWithObject:gifPropsDict
                  forKey:(NSString *)kCGImagePropertyGIFDictionary];

// loop through frames and add them to the GIF
for (int i = 0; i < [images count]; i++) {
    NSData *imageData = [images objectAtIndex:i];
    CGImageRef imageRef = [self pngRepDataToCgImageRef:imageData];
    CGImageDestinationAddImage(destination, imageRef, (__bridge CFDictionaryRef)frameProperties);
    CGImageRelease(imageRef); // balance the create inside pngRepDataToCgImageRef
}

// save the GIF
CGImageDestinationSetProperties(destination, (__bridge CFDictionaryRef)gifProperties);
CGImageDestinationFinalize(destination);
CFRelease(destination);
I've checked the bitmap reps; when saved as PNGs individually, they are just fine.
As I understand it, the color tables should be handled by Core Graphics, or am I responsible for producing the dithered colors myself? If so, how?
Even when doing the same animation repeatedly, the GIFs produced may vary.
This is a single BitmapRep
(source: andraz.eu)
And this is the GIF with the invalid colors ("combine" mode)
(source: andraz.eu)
I read your code. Please double-check the "allImages" count you pass when creating the CGImageDestinationRef against the "[images count]" you loop over; they need to match.
The following test code works fine:
NSDictionary *prep = [NSDictionary dictionaryWithObject:[NSDictionary dictionaryWithObject:[NSNumber numberWithFloat:0.2f]
                                                                                    forKey:(NSString *)kCGImagePropertyGIFDelayTime]
                                                 forKey:(NSString *)kCGImagePropertyGIFDictionary];

CGImageDestinationRef dst = CGImageDestinationCreateWithURL((__bridge CFURLRef)(fileURL), kUTTypeGIF, [filesArray count], nil);

for (int i = 0; i < [filesArray count]; i++)
{
    // load anImage from the array
    ...
    CGImageRef imageRef = [anImage CGImageForProposedRect:nil context:nil hints:nil];
    CGImageDestinationAddImage(dst, imageRef, (__bridge CFDictionaryRef)(prep));
}

bool fileSave = CGImageDestinationFinalize(dst);
CFRelease(dst);

CGImageRelease causing crash

I am using AGImagePickerController to pick multiple pictures from the album, and then push the selected assets to a view controller that converts each asset into a UIImage.
However, I found that if I select more than 20 images, I start getting low-memory warnings and the app exits. Here is my conversion code:
for (int i = 0; i < [self.selectedPictures count]; i++)
{
    NSLog(@"Object %d", i);
    ALAsset *asset = [self.selectedPictures objectAtIndex:i];
    ALAssetRepresentation *rep = [asset defaultRepresentation];
    CGImageRef iref = [rep fullResolutionImage];
    UIImage *anImage = [UIImage imageWithCGImage:iref scale:[rep scale] orientation:(UIImageOrientation)[rep orientation]];

    float newHeight = anImage.size.height / (anImage.size.width / 1280);
    UIImage *resizedImage = [anImage resizedImageWithContentMode:UIViewContentModeScaleAspectFit bounds:CGSizeMake(newHeight, 1280.f) interpolationQuality:kCGInterpolationHigh];
    UIImage *resizedThumbnailImage = [anImage resizedImageWithContentMode:UIViewContentModeScaleAspectFill bounds:CGSizeMake(290.0f, 300.f) interpolationQuality:kCGInterpolationHigh];

    // JPEG to decrease file size and enable faster uploads & downloads
    NSData *imageData = UIImageJPEGRepresentation(resizedImage, 0.6f);
    //NSData *thumbnailImageData = UIImagePNGRepresentation(thumbnailImage);
    NSData *thumbnailImageData = UIImageJPEGRepresentation(resizedThumbnailImage, 0.6f);

    PFFile *photoFile = [PFFile fileWithData:imageData];
    PFFile *thumbnailFile = [PFFile fileWithData:thumbnailImageData];
    [photoFile saveInBackground];
    [thumbnailFile saveInBackground];
}
So I figured I should add CGImageRelease(iref); after creating anImage to release iref, and the memory warnings are gone. However, my app now crashes after the last asset is converted to a UIImage, and so far I could not find out why.
You shouldn't be calling CGImageRelease(iref); unless you obtained the image yourself via CGImageCreate, CGImageCreateCopy, or CGImageRetain. That is why it is crashing.
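To illustrate the ownership rule this refers to, you only release what you created, copied, or retained yourself (a minimal sketch, not from the original code):
// You own (and must release) images you create, copy, or retain yourself:
CGImageRef myCopy = CGImageCreateCopy(iref);
// ... use myCopy ...
CGImageRelease(myCopy);   // balances CGImageCreateCopy

// iref itself came from -fullResolutionImage and is owned by the
// ALAssetRepresentation, so there is no matching release for it.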
I found a way to fix this: use @autoreleasepool.
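A minimal sketch of that fix, wrapping each iteration of the conversion loop from the question in its own pool so the autoreleased UIImage and NSData objects are drained after every pass (abbreviated, not the full original loop):
for (int i = 0; i < [self.selectedPictures count]; i++)
{
    @autoreleasepool {
        ALAsset *asset = [self.selectedPictures objectAtIndex:i];
        ALAssetRepresentation *rep = [asset defaultRepresentation];
        CGImageRef iref = [rep fullResolutionImage];   // owned by the representation, no CGImageRelease
        UIImage *anImage = [UIImage imageWithCGImage:iref
                                               scale:[rep scale]
                                         orientation:(UIImageOrientation)[rep orientation]];
        // ... resize, build the JPEG data and PFFiles, and save, as in the question ...
    } // autoreleased objects from this iteration are released here
}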

defaultRepresentation fullScreenImage on ALAsset does not return full screen image

In my application I save images to an album as assets. I also want to retrieve them and display them full screen. I use the following code:
ALAsset *lastPicture = [scrollArray objectAtIndex:iAsset];
ALAssetRepresentation *defaultRep = [lastPicture defaultRepresentation];
UIImage *image = [UIImage imageWithCGImage:[defaultRep fullScreenImage]
                                     scale:[defaultRep scale]
                               orientation:(UIImageOrientation)[defaultRep orientation]];
The problem is that the returned image is nil. I have read in the ALAssetRepresentation reference that nil is returned when the image does not fit.
I put this image into a UIImageView that has the size of the iPad screen. I was wondering if you could help me with this issue?
Thank you in advance.
I'm not a fan of fullScreenImage or fullResolutionImage. I found that when you call them on multiple assets in a queue, memory usage increases dramatically even if you release the UIImage immediately, when it shouldn't. Also, when using fullScreenImage or fullResolutionImage, the returned UIImage is still compressed, meaning it will be decompressed the first time it is drawn, which happens on the main thread and blocks your UI.
I prefer to use this method.
- (UIImage *)fullSizeImageForAssetRepresentation:(ALAssetRepresentation *)assetRepresentation
{
    UIImage *result = nil;
    NSData *data = nil;

    uint8_t *buffer = (uint8_t *)malloc(sizeof(uint8_t) * [assetRepresentation size]);
    if (buffer != NULL) {
        NSError *error = nil;
        NSUInteger bytesRead = [assetRepresentation getBytes:buffer fromOffset:0 length:[assetRepresentation size] error:&error];
        data = [NSData dataWithBytes:buffer length:bytesRead];
        free(buffer);
    }

    if ([data length])
    {
        CGImageSourceRef sourceRef = CGImageSourceCreateWithData((__bridge CFDataRef)data, nil);

        NSMutableDictionary *options = [NSMutableDictionary dictionary];
        [options setObject:(id)kCFBooleanTrue forKey:(id)kCGImageSourceShouldAllowFloat];
        [options setObject:(id)kCFBooleanTrue forKey:(id)kCGImageSourceCreateThumbnailFromImageAlways];
        [options setObject:(id)[NSNumber numberWithFloat:640.0f] forKey:(id)kCGImageSourceThumbnailMaxPixelSize];
        //[options setObject:(id)kCFBooleanTrue forKey:(id)kCGImageSourceCreateThumbnailWithTransform];

        CGImageRef imageRef = CGImageSourceCreateThumbnailAtIndex(sourceRef, 0, (__bridge CFDictionaryRef)options);
        if (imageRef) {
            result = [UIImage imageWithCGImage:imageRef scale:[assetRepresentation scale] orientation:(UIImageOrientation)[assetRepresentation orientation]];
            CGImageRelease(imageRef);
        }

        if (sourceRef)
            CFRelease(sourceRef);
    }

    return result;
}
You can use it like this:
// Get the full image on a background queue
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
    UIImage *image = [self fullSizeImageForAssetRepresentation:asset.defaultRepresentation];
    dispatch_async(dispatch_get_main_queue(), ^{
        // Do something with the UIImage
    });
});

Resizing image and saving it to the specified directory path in Cocoa

Using this code I am trying to resize the selected image and then save it to a specific path:
- (void)processImage:(NSString *)inputPath :(int)imageWidth :(int)imageHeight :(NSString *)outputPath {
    NSImage *img = [NSImage imageNamed:inputPath];
    [img setSize:NSMakeSize(imageWidth, imageHeight)];
}

- (void)startProcessingImages {
    int i; // Loop counter.

    // Loop through all the files and process them.
    for (i = 0; i < [files count]; i++)
    {
        inputFilePath = [[files objectAtIndex:i] retain];
        NSLog(@"filename::: %@", inputFilePath);

        // Do something with the filename.
        [selectedFile setStringValue:inputFilePath];
        NSLog(@"selectedFile:::: %@", selectedFile);
    }

    NSLog(@"curdir:::::%@", inputFilePath);

    NSString *aString = [[NSString stringWithFormat:@"%@%@%@", thumbnailDirPath, @"/", fileNameNumber] retain];
    fileNameJPG = [[aString stringByAppendingString:@".jpg"] retain];

    [self processImage:inputFilePath :66 :55 :thumbnailDirPath];
    [self processImage:inputFilePath :800 :600 :thumbnailDirPath];
    [self processImage:inputFilePath :320 :240 :thumbnailDirPath];
}
My issue is that I don't understand how to save the resized image to thumbnailDirPath.
NSDictionary *options = [NSDictionary dictionaryWithObject:[NSNumber numberWithFloat:0.8] forKey:NSImageCompressionFactor];
NSData *tiffData = [img TIFFRepresentation];
NSData *JPEGData = [[NSBitmapImageRep imageRepWithData:tiffData] representationUsingType:NSJPEGFileType properties:options];

NSError *anError;
if (![JPEGData writeToFile:outputPath options:0 error:&anError])
    MyLog(@"Error saving image: %@ to: %@", anError, outputPath);
Check the documentation for NSJPEGFileType, as it will show you the other format options for saving, such as PNG.
You should export your image to a file. Currently I only see how to store it as a TIFF image:
[[img TIFFRepresentation] writeToFile:outputPathName atomically:NO];
where outputPathName is the path, including the file name, for your thumbnail file.
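Putting the pieces together, a minimal sketch of a processImage: variant that loads the file from disk, scales it, and writes a JPEG to the given path (a sketch under manual reference counting, as in the question; not the original method):
- (void)processImage:(NSString *)inputPath
               width:(int)imageWidth
              height:(int)imageHeight
          outputPath:(NSString *)outputPath {
    // Load from the file path (imageNamed: is meant for bundle resources).
    NSImage *source = [[NSImage alloc] initWithContentsOfFile:inputPath];
    if (!source) return;

    // Draw the source scaled into a new image and grab a bitmap rep of the result.
    NSImage *scaled = [[NSImage alloc] initWithSize:NSMakeSize(imageWidth, imageHeight)];
    [scaled lockFocus];
    [source drawInRect:NSMakeRect(0, 0, imageWidth, imageHeight)
              fromRect:NSZeroRect   // NSZeroRect means "the whole source image"
             operation:NSCompositeCopy
              fraction:1.0];
    NSBitmapImageRep *rep = [[NSBitmapImageRep alloc] initWithFocusedViewRect:NSMakeRect(0, 0, imageWidth, imageHeight)];
    [scaled unlockFocus];

    // Encode as JPEG and write it to the output path.
    NSDictionary *props = [NSDictionary dictionaryWithObject:[NSNumber numberWithFloat:0.8f]
                                                      forKey:NSImageCompressionFactor];
    NSData *jpegData = [rep representationUsingType:NSJPEGFileType properties:props];
    [jpegData writeToFile:outputPath atomically:YES];

    [rep release];
    [scaled release];
    [source release];
}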