I want to create thumbnails using Core Graphics (CGImageSource), and it does create them.
Here I want a thumbnail whose longest side is 1024 pixels (preserving the aspect ratio). Is it possible to get a thumbnail of the desired size directly from Core Graphics?
In the options dictionary I can pass the maximum size of the thumbnail to be created, but is there any way to specify a minimum size as well?
NSURL *url = [NSURL fileURLWithPath:inPath];
CGImageSourceRef source = CGImageSourceCreateWithURL((CFURLRef)url, NULL);
CGImageRef image = nil;
if (source)
{
    NSDictionary *thumbOpts = [NSDictionary dictionaryWithObjectsAndKeys:
                               (id)kCFBooleanTrue, (id)kCGImageSourceCreateThumbnailWithTransform,
                               (id)kCFBooleanTrue, (id)kCGImageSourceCreateThumbnailFromImageIfAbsent,
                               [NSNumber numberWithInt:2048], (id)kCGImageSourceThumbnailMaxPixelSize,
                               nil];
    image = CGImageSourceCreateThumbnailAtIndex(source, 0, (CFDictionaryRef)thumbOpts);
    NSLog(@"image size = %zu x %zu", CGImageGetWidth(image), CGImageGetHeight(image));
    CFRelease(source);
}
If you want a thumbnail with size 1024 (maximum dimension), you should be passing 1024, not 2048. Also, if you want to make sure the thumbnail is created to your specifications, you should be asking for kCGImageSourceCreateThumbnailFromImageAlways, not kCGImageSourceCreateThumbnailFromImageIfAbsent, since the latter might cause an existing thumbnail to be used, and it could be smaller than you want.
So, here's code that does what you ask:
NSURL* url = // whatever;
NSDictionary *d = [NSDictionary dictionaryWithObjectsAndKeys:
                   (id)kCFBooleanTrue, (id)kCGImageSourceShouldAllowFloat,
                   (id)kCFBooleanTrue, (id)kCGImageSourceCreateThumbnailWithTransform,
                   (id)kCFBooleanTrue, (id)kCGImageSourceCreateThumbnailFromImageAlways,
                   [NSNumber numberWithInt:1024], (id)kCGImageSourceThumbnailMaxPixelSize,
                   nil];
CGImageSourceRef src = CGImageSourceCreateWithURL((CFURLRef)url, NULL);
CGImageRef imref = CGImageSourceCreateThumbnailAtIndex(src, 0, (CFDictionaryRef)d);
// memory management omitted
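Since the Core Foundation objects above are not managed by ARC, the cleanup that comment glosses over would look roughly like this (a minimal sketch, assuming you are finished with both objects):

if (imref) {
    // ... use the thumbnail here, e.g. wrap it in a UIImage/NSImage ...
    CGImageRelease(imref);
}
if (src) {
    CFRelease(src);
}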
Swift 3 version of the answer:
func loadImage(at url: URL, maxDimension max: Int) -> UIImage? {
    guard let imageSource = CGImageSourceCreateWithURL(url as CFURL, nil) else {
        return nil
    }

    let options: [CFString: Any] = [
        kCGImageSourceShouldAllowFloat: true,
        kCGImageSourceCreateThumbnailWithTransform: true,
        kCGImageSourceCreateThumbnailFromImageAlways: true,
        kCGImageSourceThumbnailMaxPixelSize: max
    ]

    guard let thumbnail = CGImageSourceCreateThumbnailAtIndex(imageSource, 0, options as CFDictionary) else {
        return nil
    }

    return UIImage(cgImage: thumbnail)
}
I have these functions, which are called in a thread that draws an NSView:
+(NSFont *)customFontWithName:(NSString *)fontName AndSize:(float)fontSize
{
    NSData *data = [[[NSDataAsset alloc] initWithName:fontName] data];
    CGDataProviderRef fontProvider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
    CGFontRef cgFont = CGFontCreateWithDataProvider(fontProvider);
    CGDataProviderRelease(fontProvider);
    NSDictionary *fontsizeAttr = [NSDictionary dictionaryWithObjectsAndKeys:
                                  [NSNumber numberWithFloat:fontSize], NSFontSizeAttribute,
                                  nil];
    CTFontDescriptorRef fontDescriptor = CTFontDescriptorCreateWithAttributes((__bridge CFDictionaryRef)fontsizeAttr);
    CTFontRef font = CTFontCreateWithGraphicsFont(cgFont, 0, NULL, fontDescriptor);
    CFRelease(fontDescriptor);
    CGFontRelease(cgFont);
    NSFont *retval = (__bridge NSFont *)font;
    CFRelease(font);
    return retval;
}
and this:
+(NSAttributedString *)createCurrentTextWithString:(NSString *)string AndMaxLength:(float)maxLength AndMaxHeight:(float)maxHeight AndColor:(NSColor *)color AndFontName:(NSString *)fontName
{
    float dim = 0.1;
    NSDictionary *dictionary = [NSDictionary dictionaryWithObjectsAndKeys:
                                [CustomFont customFontWithName:fontName AndSize:dim], NSFontAttributeName,
                                color, NSForegroundColorAttributeName,
                                nil];
    NSAttributedString *currentText = [[NSAttributedString alloc] initWithString:string attributes:dictionary];
    while ([currentText size].width < maxLength && [currentText size].height < maxHeight)
    {
        dictionary = [NSDictionary dictionaryWithObjectsAndKeys:
                      [CustomFont customFontWithName:fontName AndSize:dim], NSFontAttributeName,
                      color, NSForegroundColorAttributeName,
                      nil];
        currentText = [[NSAttributedString alloc] initWithString:string attributes:dictionary];
        dim += 0.1;
    }
    return currentText;
}
All the objects created in these functions are correctly deallocated and I can't find any memory leaks, but this code causes a huge amount of memory use (many gigabytes) and I can't understand why. Please help.
I found a solution. For some reason that I don't know, this code:
NSData *data = [[[NSDataAsset alloc]initWithName:fontName] data];
CGDataProviderRef fontProvider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
CGFontRef cgFont = CGFontCreateWithDataProvider(fontProvider);
allocates a large amount of memory that is never deallocated. So I created a static CGFontRef variable for each custom font I have. This is the only way I found:
static CGFontRef font1;
....
static CGFontRef fontn;

+(CGFontRef)getFontWithValue:(int)value
{
    switch (value)
    {
        case 1:
            return font1;
            break;
        ...
        case n:
            return fontn;
        default:
            return NULL;
    }
}
And
+(NSFont *)customFontWithName:(int)fontName AndSize:(float)fontSize
{
    NSDictionary *fontsizeAttr = [NSDictionary dictionaryWithObjectsAndKeys:
                                  [NSNumber numberWithFloat:fontSize], NSFontSizeAttribute,
                                  nil];
    CTFontDescriptorRef fontDescriptor = CTFontDescriptorCreateWithAttributes((__bridge CFDictionaryRef)fontsizeAttr);
    CTFontRef font = CTFontCreateWithGraphicsFont([CustomFont getFontWithValue:fontName], 0, NULL, fontDescriptor);
    CFRelease(fontDescriptor);
    NSFont *retval = (__bridge NSFont *)font;
    CFRelease(font);
    return retval;
}
I still don't understand why there is this memory leak, and this is not a solution, only a workaround, but it works.
I was wondering if it's possible to level exposure across a set of images using either Core Image or GPUImage, and how I would go about that.
Example:
Say you have 4 images, but the exposure is different on the third one. How could you level the exposure so all 4 images have the same exposure?
One idea I had was to measure and match the exposure using AVCapture, i.e. if the input image is at -2.0 EV, simply add 2.0 back using Core Image.
Another idea is to implement histogram equalization.
Has anyone ever dealt with the same task before? Any insights?
You can use Core Image (#import <CoreImage/CoreImage.h>) and ImageIO (CGImageSource) to extract the EXIF metadata using the EXIF dictionary keys: kCGImagePropertyExifExposureTime, kCGImagePropertyExifExposureMode, kCGImagePropertyExifExposureProgram, kCGImagePropertyExifExposureBiasValue, kCGImagePropertyExifExposureIndex, kCGImagePropertyExifWhiteBalance.
- (NSDictionary *)exifData:(NSString *)path {
    NSDictionary *dic = nil;
    NSURL *url = [NSURL fileURLWithPath:path];
    if (url)
    {
        CGImageSourceRef source = CGImageSourceCreateWithURL((CFURLRef)url, NULL);
        if (source != nil)
        {
            CFDictionaryRef metadataRef = CGImageSourceCopyPropertiesAtIndex(source, 0, NULL);
            if (metadataRef)
            {
                NSDictionary *immutableMetadata = (NSDictionary *)metadataRef;
                if (immutableMetadata)
                {
                    dic = [NSDictionary dictionaryWithDictionary:immutableMetadata];
                }
                CFRelease(metadataRef);
            }
            CFRelease(source);
            source = nil;
        }
    }
    return dic;
}
Usage:
NSDictionary *dic = [self exifData:path];
if (dic)
{
    NSDictionary *exif = [dic objectForKey:(NSString *)kCGImagePropertyExifDictionary];
    NSNumber *exposureTime = [exif objectForKey:(NSString *)kCGImagePropertyExifExposureTime];
    NSLog(@"Image : %@ - ExposureTime : %.2f", path, [exposureTime floatValue]);
}
And then changing the exposure or white balance with:
- (UIImage *)changeImageExposure:(NSString *)imagename exposure:(float)exposure {
    CIImage *inputImage = [[CIImage alloc] initWithImage:[UIImage imageNamed:imagename]];
    CIFilter *exposureAdjustmentFilter = [CIFilter filterWithName:@"CIExposureAdjust"];
    [exposureAdjustmentFilter setValue:inputImage forKey:kCIInputImageKey];
    [exposureAdjustmentFilter setValue:[NSNumber numberWithFloat:exposure] forKey:@"inputEV"];
    CIImage *outputImage = exposureAdjustmentFilter.outputImage;
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef cgImage = [context createCGImage:outputImage fromRect:outputImage.extent];
    UIImage *result = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage); // createCGImage returns a +1 reference, so release it
    return result;
}
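To level a whole set, one approach is to pick a reference image, read each image's EXIF exposure bias, and apply the difference via changeImageExposure:. This is only a sketch of the idea from the question; it assumes the images differ mainly in exposure compensation and that the same string can serve both as a file path for exifData: and as an image name for changeImageExposure: (imagePaths is a hypothetical list):

NSArray *imagePaths = ...; // hypothetical: the four images from the example
NSDictionary *refExif = [[self exifData:[imagePaths objectAtIndex:0]] objectForKey:(NSString *)kCGImagePropertyExifDictionary];
float referenceBias = [[refExif objectForKey:(NSString *)kCGImagePropertyExifExposureBiasValue] floatValue];

NSMutableArray *leveled = [NSMutableArray array];
for (NSString *path in imagePaths) {
    NSDictionary *exif = [[self exifData:path] objectForKey:(NSString *)kCGImagePropertyExifDictionary];
    float bias = [[exif objectForKey:(NSString *)kCGImagePropertyExifExposureBiasValue] floatValue];
    // e.g. if this image was exposed at -2.0 EV relative to the reference, push it back up by +2.0 EV
    [leveled addObject:[self changeImageExposure:path exposure:(referenceBias - bias)]];
}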
I am trying to overlay a recorded video, using AVAssetReader and AVAssetWriter, with some images. Following this tutorial, I am able to copy a video (and its audio) into a new file. Now my objective is to overlay some of the initial video frames with some images using this code:
while ([assetWriterVideoInput isReadyForMoreMediaData] && !completedOrFailed)
{
    // Get the next video sample buffer, and append it to the output file.
    CMSampleBufferRef sampleBuffer = [assetReaderVideoOutput copyNextSampleBuffer];
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    EAGLContext *eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    CIContext *ciContext = [CIContext contextWithEAGLContext:eaglContext options:@{kCIContextWorkingColorSpace : [NSNull null]}];
    UIFont *font = [UIFont fontWithName:@"Helvetica" size:40];
    NSDictionary *attributes = @{NSFontAttributeName: font, NSForegroundColorAttributeName: [UIColor lightTextColor]};
    UIImage *img = [self imageFromText:@"test" :attributes];
    CIImage *filteredImage = [[CIImage alloc] initWithCGImage:img.CGImage];
    [ciContext render:filteredImage toCVPixelBuffer:pixelBuffer bounds:[filteredImage extent] colorSpace:CGColorSpaceCreateDeviceRGB()];
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

    if (sampleBuffer != NULL)
    {
        BOOL success = [assetWriterVideoInput appendSampleBuffer:sampleBuffer];
        CFRelease(sampleBuffer);
        sampleBuffer = NULL;
        completedOrFailed = !success;
    }
    else
    {
        completedOrFailed = YES;
    }
}
And to create image from text:
-(UIImage *)imageFromText:(NSString *)text :(NSDictionary *)attributes {
    CGSize size = [text sizeWithAttributes:attributes];
    UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
    [text drawAtPoint:CGPointMake(0.0, 0.0) withAttributes:attributes];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
The video and audio are copied, but there is no text on my video.
Question 1: Why is this code not working?
Moreover, I want to be able to check the timecode of the frame currently being read. For example, I would like to insert text with the current timecode into the video.
I tried this code, following this tutorial:
AVAsset *localAsset = [AVAsset assetWithURL:mURL];
NSError *localError;
AVAssetReader *assetReader = [[AVAssetReader alloc] initWithAsset:localAsset error:&localError];
BOOL success = (assetReader != nil);

// Create asset reader output for the first timecode track of the asset
if (success) {
    AVAssetTrack *timecodeTrack = nil;

    // Grab first timecode track, if the asset has them
    NSArray *timecodeTracks = [localAsset tracksWithMediaType:AVMediaTypeTimecode];
    if ([timecodeTracks count] > 0)
        timecodeTrack = [timecodeTracks objectAtIndex:0];

    if (timecodeTrack) {
        AVAssetReaderTrackOutput *timecodeOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:timecodeTrack outputSettings:nil];
        [assetReader addOutput:timecodeOutput];
    } else {
        NSLog(@"%@ has no timecode tracks", localAsset);
    }
}
But I get the log:
[...] has no timecode tracks
Question 2: Why doesn't my video have any AVMediaTypeTimecode track? And how can I get the current frame's timecode?
Thanks for your help
I found the solutions:
To overlay video frames, you need to fix the decompression settings:
NSString *key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
NSNumber *value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
NSDictionary *decompressionVideoSettings = [NSDictionary dictionaryWithObject:value forKey:key];

// If there is a video track to read, set the decompression settings for BGRA and create the asset reader output.
assetReaderVideoOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:assetVideoTrack outputSettings:decompressionVideoSettings];
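With the reader output now delivering BGRA pixel buffers, the overlay loop from the question should render as expected. Here is a rough sketch of that loop (variable names are taken from the question; the text image and the Core Image context are created once outside the loop, which is also far cheaper than creating them per frame):

// Create the rendering objects once, not once per frame.
EAGLContext *eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
CIContext *ciContext = [CIContext contextWithEAGLContext:eaglContext options:@{kCIContextWorkingColorSpace : [NSNull null]}];
CGColorSpaceRef rgbSpace = CGColorSpaceCreateDeviceRGB();
UIImage *img = [self imageFromText:@"test" :attributes]; // attributes as in the question
CIImage *overlayImage = [[CIImage alloc] initWithCGImage:img.CGImage];

while ([assetWriterVideoInput isReadyForMoreMediaData] && !completedOrFailed)
{
    CMSampleBufferRef sampleBuffer = [assetReaderVideoOutput copyNextSampleBuffer];
    if (sampleBuffer != NULL)
    {
        CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        CVPixelBufferLockBaseAddress(pixelBuffer, 0);
        // The buffer is BGRA now, so Core Image can draw the text straight into it.
        [ciContext render:overlayImage toCVPixelBuffer:pixelBuffer bounds:[overlayImage extent] colorSpace:rgbSpace];
        CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

        BOOL success = [assetWriterVideoInput appendSampleBuffer:sampleBuffer];
        CFRelease(sampleBuffer);
        completedOrFailed = !success;
    }
    else
    {
        completedOrFailed = YES;
    }
}
CGColorSpaceRelease(rgbSpace);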
To get the frame timestamp, you have to read the video information and then use a counter to track the current timestamp:
durationSeconds = CMTimeGetSeconds(asset.duration);
timePerFrame = 1.0 / (Float64)assetVideoTrack.nominalFrameRate;
totalFrames = durationSeconds * assetVideoTrack.nominalFrameRate;
Then, inside this loop:
while ([assetWriterVideoInput isReadyForMoreMediaData] && !completedOrFailed)
you can compute the timestamp:
CMSampleBufferRef sampleBuffer = [assetReaderVideoOutput copyNextSampleBuffer];
if (sampleBuffer != NULL) {
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    if (pixelBuffer) {
        Float64 secondsIn = ((float)counter / totalFrames) * durationSeconds;
        CMTime imageTimeEstimate = CMTimeMakeWithSeconds(secondsIn, 600);
        mergeTime = CMTimeGetSeconds(imageTimeEstimate);
        counter++;
    }
}
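To actually draw that value into the frame, a small sketch reusing the imageFromText: helper from the question (attributes is assumed to be the text-attributes dictionary used there):

// Format the estimated frame time and turn it into an overlay image (sketch).
NSString *timecodeText = [NSString stringWithFormat:@"%.2f s", mergeTime];
UIImage *timecodeImage = [self imageFromText:timecodeText :attributes];
CIImage *timecodeOverlay = [[CIImage alloc] initWithCGImage:timecodeImage.CGImage];
// Render timecodeOverlay into pixelBuffer exactly as in the overlay loop above.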
I hope it helps!
I'm writing a custom movie-recording app and have implemented AVAssetWriter and AVAssetWriterInputPixelBufferAdaptor for writing frames to a file. In the DataOutputDelegate callback I am trying to apply a CIFilter to the sampleBuffer. First I get a CVPixelBufferRef and then create a CIImage. I then apply the CIFilter and grab the resulting CIImage:
CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)CMSampleBufferGetImageBuffer(sampleBuffer);
CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer];
CIFilter *hueAdjust = [CIFilter filterWithName:@"CIHueAdjust"];
[hueAdjust setDefaults];
[hueAdjust setValue:image forKey:@"inputImage"];
[hueAdjust setValue:[NSNumber numberWithFloat:2.094] forKey:@"inputAngle"];
CIImage *result = [hueAdjust valueForKey:@"outputImage"];
CVPixelBufferRef newBuffer = //Convert CIImage...
How would I go about converting that CIImage so I can:
[self.filteredImageWriter appendPixelBuffer:newBuffer withPresentationTime:lastSampleTime];
UPDATED ANSWER
While my previous method works, I was able to tweak it to simplify the code. It also seems to run slightly faster.
samplePixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(samplePixelBuffer, 0); // NOT SURE IF NEEDED // NO PERFORMANCE IMPROVEMENTS IF REMOVED
NSDictionary *options = [NSDictionary dictionaryWithObject:(__bridge id)rgbColorSpace forKey:kCIImageColorSpace];
outputImage = [CIImage imageWithCVPixelBuffer:samplePixelBuffer options:options];
//-----------------
// FILTER OUTPUT IMAGE
#autoreleasepool {
outputImage = [self applyEffectToCIImage:outputImage dict:dict];
}
CVPixelBufferUnlockBaseAddress(samplePixelBuffer, 0); // NOT SURE IF NEEDED // NO PERFORMANCE IMPROVEMENTS IF REMOVED
//-----------------
// RENDER OUTPUT IMAGE BACK TO PIXEL BUFFER
[self.filterContext render:outputImage toCVPixelBuffer:samplePixelBuffer bounds:[outputImage extent] colorSpace:CGColorSpaceCreateDeviceRGB()]; // DOES NOT SEEM TO WORK USING rgbColorSpace
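From there, the filtered buffer can go to the writer. A sketch of the append step, assuming filteredImageWriter is the AVAssetWriterInputPixelBufferAdaptor from the question and taking the presentation time from the sample buffer itself:

// Append the in-place-filtered buffer at the frame's own presentation time (sketch).
CMTime lastSampleTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
[self.filteredImageWriter appendPixelBuffer:samplePixelBuffer withPresentationTime:lastSampleTime];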
PREVIOUS ANSWER
I just implemented the following to get the pixel buffer from a CIImage. Make sure your pixel formats are consistent, otherwise you will have color issues. The CFDictionaries are also very important. http://allmybrain.com/2011/12/08/rendering-to-a-texture-with-ios-5-texture-cache-api/
// EMPTY IOSURFACE DICT
CFDictionaryRef empty = CFDictionaryCreate(kCFAllocatorDefault,
                                           NULL,
                                           NULL,
                                           0,
                                           &kCFTypeDictionaryKeyCallBacks,
                                           &kCFTypeDictionaryValueCallBacks);
CFMutableDictionaryRef attributes = CFDictionaryCreateMutable(kCFAllocatorDefault,
                                                              1,
                                                              &kCFTypeDictionaryKeyCallBacks,
                                                              &kCFTypeDictionaryValueCallBacks);
CFDictionarySetValue(attributes, kCVPixelBufferIOSurfacePropertiesKey, empty);

CVPixelBufferRef pixelBuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
                                      outputImage.extent.size.width,
                                      outputImage.extent.size.height,
                                      kCVPixelFormatType_32BGRA,
                                      attributes,
                                      &pixelBuffer);

if (status == kCVReturnSuccess && pixelBuffer != NULL) {
    CVPixelBufferLockBaseAddress(pixelBuffer, 0); // NOT SURE IF NEEDED // KEPT JUST IN CASE
    [self.filterContext render:outputImage toCVPixelBuffer:pixelBuffer bounds:[outputImage extent] colorSpace:CGColorSpaceCreateDeviceRGB()];
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0); // NOT SURE IF NEEDED // KEPT JUST IN CASE
}
Is there any way to get the Quick Look preview image for a file?
I'm looking for something like this:
NSImage *image = [QuickLookPreviewer quickLookPreviewForFile:path];
See QLThumbnailRequest in the docs: https://developer.apple.com/library/mac/#documentation/UserExperience/Reference/QLThumbnailRequest_Ref/Reference/reference.html
NSURL *path = aFileUrl;
NSDictionary *options = [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:NO] forKey:(NSString *)kQLThumbnailOptionIconModeKey];
CGImageRef ref = QLThumbnailImageCreate(kCFAllocatorDefault, (CFURLRef)path, CGSizeMake(600, 800 /* Or whatever size you want */), (CFDictionaryRef)options);
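To get the NSImage the question asks for, the resulting CGImageRef can then be wrapped and released, roughly like this (a minimal sketch; error handling kept to a minimum):

NSImage *image = nil;
if (ref) {
    // NSZeroSize makes NSImage use the CGImage's own pixel size.
    image = [[NSImage alloc] initWithCGImage:ref size:NSZeroSize];
    CGImageRelease(ref); // QLThumbnailImageCreate follows the Create rule, so release it
}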
In Swift I ended up with something like this (the force unwraps should be replaced):
let options = [
kQLThumbnailOptionIconModeKey: false
]
let ref = QLThumbnailCreate(
kCFAllocatorDefault,
url as NSURL,
CGSize(width: 150, height: 150),
options as CFDictionary
)
let thumbnail = ref!.takeRetainedValue()
let cgImageRef = QLThumbnailCopyImage(thumbnail)
let cgImage = cgImageRef!.takeRetainedValue()
let image = NSImage(cgImage: cgImage, size: CGSize(width: cgImage.width, height: cgImage.height))