I'm writing a custom movie recording app and have implemented AVAssetWriter and AVAssetWriterInputPixelBufferAdaptor for writing frames to a file. In the data output delegate callback I am trying to apply a CIFilter to the sampleBuffer. First I get a CVPixelBufferRef and then create a CIImage. I then apply the CIFilter and grab the resulting CIImage:
CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)CMSampleBufferGetImageBuffer(sampleBuffer);
CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer];
CIFilter *hueAdjust = [CIFilter filterWithName:@"CIHueAdjust"];
[hueAdjust setDefaults];
[hueAdjust setValue:image forKey:@"inputImage"];
[hueAdjust setValue:[NSNumber numberWithFloat:2.094] forKey:@"inputAngle"];
CIImage *result = [hueAdjust valueForKey:@"outputImage"];
CVPixelBufferRef newBuffer = //Convert CIImage...
How would I go about converting that CIImage so I can:
[self.filteredImageWriter appendPixelBuffer:newBuffer withPresentationTime:lastSampleTime];
UPDATED ANSWER
While my previous method works, I was able to tweak it to simplify the code. It also seems to run slightly faster.
samplePixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(samplePixelBuffer, 0); // NOT SURE IF NEEDED // NO PERFORMANCE IMPROVEMENTS IF REMOVED
NSDictionary *options = [NSDictionary dictionaryWithObject:(__bridge id)rgbColorSpace forKey:kCIImageColorSpace];
outputImage = [CIImage imageWithCVPixelBuffer:samplePixelBuffer options:options];
//-----------------
// FILTER OUTPUT IMAGE
@autoreleasepool {
outputImage = [self applyEffectToCIImage:outputImage dict:dict];
}
CVPixelBufferUnlockBaseAddress(samplePixelBuffer, 0); // NOT SURE IF NEEDED // NO PERFORMANCE IMPROVEMENTS IF REMOVED
//-----------------
// RENDER OUTPUT IMAGE BACK TO PIXEL BUFFER
[self.filterContext render:outputImage toCVPixelBuffer:samplePixelBuffer bounds:[outputImage extent] colorSpace:CGColorSpaceCreateDeviceRGB()]; // DOES NOT SEEM TO WORK USING rgbColorSpace
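For context, the rgbColorSpace and self.filterContext used above are assumed to be created once, outside the per-frame callback, and the filtered samplePixelBuffer is then appended through the adaptor. A minimal sketch of that surrounding code (an assumption, not part of the original answer):
// One-time setup, e.g. when the writer is configured (a sketch)
rgbColorSpace = CGColorSpaceCreateDeviceRGB();
self.filterContext = [CIContext contextWithOptions:nil];
// Per frame, after rendering back into samplePixelBuffer
[self.filteredImageWriter appendPixelBuffer:samplePixelBuffer withPresentationTime:lastSampleTime];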
PREVIOUS ANSWER
I just implemented the following to get the pixel buffer from a CIImage. Make sure your pixel formats are consistent otherwise you will have color issues. Also the CFDictionaries are very important. http://allmybrain.com/2011/12/08/rendering-to-a-texture-with-ios-5-texture-cache-api/
CFDictionaryRef empty = CFDictionaryCreate(kCFAllocatorDefault, // EMPTY IOSURFACE DICT
NULL,
NULL,
0,
&kCFTypeDictionaryKeyCallBacks,
&kCFTypeDictionaryValueCallBacks);
CFMutableDictionaryRef attributes = CFDictionaryCreateMutable(kCFAllocatorDefault,
1,
&kCFTypeDictionaryKeyCallBacks,
&kCFTypeDictionaryValueCallBacks);
CFDictionarySetValue(attributes,kCVPixelBufferIOSurfacePropertiesKey,empty);
CVPixelBufferRef pixelBuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
outputImage.extent.size.width,
outputImage.extent.size.height,
kCVPixelFormatType_32BGRA,
attributes,
&pixelBuffer);
if (status == kCVReturnSuccess && pixelBuffer != NULL) {
CVPixelBufferLockBaseAddress(pixelBuffer, 0); // NOT SURE IF NEEDED // KEPT JUST IN CASE
[self.filterContext render:outputImage toCVPixelBuffer:pixelBuffer bounds:[outputImage extent] colorSpace:CGColorSpaceCreateDeviceRGB()];
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0); // NOT SURE IF NEEDED // KEPT JUST IN CASE
}
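From there, appending the rendered buffer and cleaning up might look like this (a sketch using the adaptor and timestamp from the question):
[self.filteredImageWriter appendPixelBuffer:pixelBuffer withPresentationTime:lastSampleTime];
CVPixelBufferRelease(pixelBuffer); // CVPixelBufferCreate returns a +1 reference; ARC does not manage it
CFRelease(attributes);
CFRelease(empty);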
Related
I am trying to overlay a recorded video with AVAssetReader and AVAssetWriter with some images. Following this tutorial, I am able to copy a video (and audio) into a new file. Now my objective is to overlay some of the initial video frames with some images, using this code:
while ([assetWriterVideoInput isReadyForMoreMediaData] && !completedOrFailed)
{
// Get the next video sample buffer, and append it to the output file.
CMSampleBufferRef sampleBuffer = [assetReaderVideoOutput copyNextSampleBuffer];
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
EAGLContext *eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
CIContext *ciContext = [CIContext contextWithEAGLContext:eaglContext options:@{kCIContextWorkingColorSpace : [NSNull null]}];
UIFont *font = [UIFont fontWithName:@"Helvetica" size:40];
NSDictionary *attributes = @{NSFontAttributeName:font, NSForegroundColorAttributeName:[UIColor lightTextColor]};
UIImage *img = [self imageFromText:@"test" :attributes];
CIImage *filteredImage = [[CIImage alloc] initWithCGImage:img.CGImage];
[ciContext render:filteredImage toCVPixelBuffer:pixelBuffer bounds:[filteredImage extent] colorSpace:CGColorSpaceCreateDeviceRGB()];
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
if (sampleBuffer != NULL)
{
BOOL success = [assetWriterVideoInput appendSampleBuffer:sampleBuffer];
CFRelease(sampleBuffer);
sampleBuffer = NULL;
completedOrFailed = !success;
}
else
{
completedOrFailed = YES;
}
}
And to create an image from text:
-(UIImage *)imageFromText:(NSString *)text :(NSDictionary *)attributes{
CGSize size = [text sizeWithAttributes:attributes];
UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
[text drawAtPoint:CGPointMake(0.0, 0.0) withAttributes:attributes];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
The video and audio are copied, but there is no text on my video.
Question 1: Why is this code not working?
Moreover, I want to be able to check the timecode of the currently read frame. For example, I would like to insert a text with the current timecode in the video.
I tried this code, following this tutorial:
AVAsset *localAsset = [AVAsset assetWithURL:mURL];
NSError *localError;
AVAssetReader *assetReader = [[AVAssetReader alloc] initWithAsset:localAsset error:&localError];
BOOL success = (assetReader != nil);
// Create asset reader output for the first timecode track of the asset
if (success) {
AVAssetTrack *timecodeTrack = nil;
// Grab first timecode track, if the asset has them
NSArray *timecodeTracks = [localAsset tracksWithMediaType:AVMediaTypeTimecode];
if ([timecodeTracks count] > 0)
timecodeTrack = [timecodeTracks objectAtIndex:0];
if (timecodeTrack) {
AVAssetReaderTrackOutput *timecodeOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:timecodeTrack outputSettings:nil];
[assetReader addOutput:timecodeOutput];
} else {
NSLog(#"%# has no timecode tracks", localAsset);
}
}
But I get the log:
[...] has no timecode tracks
Question 2: Why doesn't my video have any AVMediaTypeTimecode track? And so, how can I get the current frame's timecode?
Thanks for your help
I found the solutions:
To overlay video frames, you need to fix the decompression settings:
NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey;
NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
NSDictionary* decompressionVideoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
// If there is a video track to read, set the decompression settings to BGRA and create the asset reader output.
assetReaderVideoOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:assetVideoTrack outputSettings:decompressionVideoSettings];
To get the frame timestamp, you have to read the video information and then use a counter to increment the current timestamp:
durationSeconds = CMTimeGetSeconds(asset.duration);
timePerFrame = 1.0 / (Float64)assetVideoTrack.nominalFrameRate;
totalFrames = durationSeconds * assetVideoTrack.nominalFrameRate;
Then in this loop
while ([assetWriterVideoInput isReadyForMoreMediaData] && !completedOrFailed)
you can find the timestamp:
CMSampleBufferRef sampleBuffer = [assetReaderVideoOutput copyNextSampleBuffer];
if (sampleBuffer != NULL){
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
if (pixelBuffer) {
Float64 secondsIn = ((float)counter/totalFrames)*durationSeconds;
CMTime imageTimeEstimate = CMTimeMakeWithSeconds(secondsIn, 600);
mergeTime = CMTimeGetSeconds(imageTimeEstimate);
counter++;
}
}
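As a side note, each sample buffer copied from the reader also carries its own presentation timestamp, so instead of deriving the time from a frame counter you can read it directly; a minimal sketch (using the same sampleBuffer as in the loop above):
CMTime pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
Float64 seconds = CMTimeGetSeconds(pts); // seconds into the asset for this frame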
I hope it helps!
I am writing an OS X application that will create a video using a series of images. It was developed using code from here: Make movie file with picture Array and song file, using AVAsset, but not including the audio portion.
The code functions and creates an mpg file.
The problem is memory pressure. It doesn't appear to free up any memory. Using Xcode Instruments, I found the biggest culprits are:
CVPixelBufferCreate
[image TIFFRepresentation];
CGImageSourceCreateWithData
CGImageSourceCreateImageAtIndex
I tried adding code to release, but ARC should already be doing that.
Eventually OS X will hang and/or crash.
Not sure how to handle the memory issue. There are no mallocs in the code.
I'm open to suggestions. It appears that many others have used this same code.
This is the code that is based on the link above:
- (void)ProcessImagesToVideoFile:(NSError **)error_p size:(NSSize)size videoFilePath:(NSString *)videoFilePath jpegs:(NSMutableArray *)jpegs fileLocation:(NSString *)fileLocation
{
AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:
[NSURL fileURLWithPath:videoFilePath]
fileType:AVFileTypeMPEG4
error:&(*error_p)];
NSParameterAssert(videoWriter);
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
AVVideoCodecH264, AVVideoCodecKey,
[NSNumber numberWithInt:size.width], AVVideoWidthKey,
[NSNumber numberWithInt:size.height], AVVideoHeightKey,
nil];
AVAssetWriterInput* videoWriterInput = [AVAssetWriterInput
assetWriterInputWithMediaType:AVMediaTypeVideo
outputSettings:videoSettings];
AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor
assetWriterInputPixelBufferAdaptorWithAssetWriterInput:videoWriterInput
sourcePixelBufferAttributes:nil];
NSParameterAssert(videoWriterInput);
NSParameterAssert([videoWriter canAddInput:videoWriterInput]);
videoWriterInput.expectsMediaDataInRealTime = YES;
[videoWriter addInput:videoWriterInput];
//Start a session:
[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:kCMTimeZero];
CVPixelBufferRef buffer = NULL;
//Write all picture array in movie file.
int frameCount = 0;
for(int i = 0; i<[jpegs count]; i++)
{
NSString *filePath = [NSString stringWithFormat:@"%@%@", fileLocation, [jpegs objectAtIndex:i]];
NSImage *jpegImage = [[NSImage alloc ]initWithContentsOfFile:filePath];
CMTime frameTime = CMTimeMake(frameCount,(int32_t) 24);
BOOL append_ok = NO;
int j = 0;
while (!append_ok && j < 30)
{
if (adaptor.assetWriterInput.readyForMoreMediaData)
{
if ((frameCount % 25) == 0)
{
NSLog(#"appending %d to %# attemp %d\n", frameCount, videoFilePath, j);
}
buffer = [self pixelBufferFromCGImage:jpegImage andSize:size];
append_ok = [adaptor appendPixelBuffer:buffer withPresentationTime:frameTime];
if (append_ok == NO) // fails on 3GS, but works on iPhone 4
{
NSLog(@"failed to append buffer");
NSLog(@"The error is %@", [videoWriter error]);
}
//CVPixelBufferPoolRef bufferPool = adaptor.pixelBufferPool;
//NSParameterAssert(bufferPool != NULL);
if(buffer)
{
CVPixelBufferRelease(buffer);
//CVBufferRelease(buffer);
}
}
else
{
printf("adaptor not ready %d, %d\n", frameCount, j);
[NSThread sleepForTimeInterval:0.1];
}
j++;
}
if (!append_ok)
{
printf("error appending image %d times %d\n", frameCount, j);
}
frameCount++;
//CVBufferRelease(buffer);
jpegImage = nil;
buffer = nil;
}
//Finish writing picture:
[videoWriterInput markAsFinished];
[videoWriter finishWritingWithCompletionHandler:^(){
NSLog (#"finished writing");
}];
}
- (CVPixelBufferRef) pixelBufferFromCGImage: (NSImage *) image andSize:(CGSize) size
{
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
[NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
nil];
CVPixelBufferRef pxbuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
size.width,
size.height,
kCVPixelFormatType_32ARGB,
(__bridge CFDictionaryRef) options,
&pxbuffer);
NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
NSParameterAssert(pxdata != NULL);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata, size.width, size.height,
8, 4*size.width, rgbColorSpace,
kCGImageAlphaPremultipliedFirst);
NSParameterAssert(context);
CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
CGImageRef imageRef = [self nsImageToCGImageRef:image];
CGRect imageRect = CGRectMake(0, 0, CGImageGetWidth(imageRef), CGImageGetHeight(imageRef));
CGContextDrawImage(context, imageRect, imageRef);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
imageRef = nil;
context = nil;
rgbColorSpace = nil;
return pxbuffer;
}
- (CGImageRef)nsImageToCGImageRef:(NSImage*)image;
{
NSData * imageData = [image TIFFRepresentation];// memory hog
CGImageRef imageRef;
if(!imageData) return nil;
CGImageSourceRef imageSource = CGImageSourceCreateWithData((__bridge CFDataRef)imageData, NULL);
imageRef = CGImageSourceCreateImageAtIndex(imageSource, 0, NULL);
imageData = nil;
imageSource = nil;
return imageRef;
}
Your code is using ARC but the libraries you are calling might not be using ARC. They might be relying on the older autorelease pool system to free up memory.
You should read up on how it works; this is fundamental stuff that every Obj-C developer needs to memorise, but basically any object can be added to the current "pool" of objects, which will be released when the pool is released.
By default, the pool on the main thread is emptied each time the app enters an idle state. This usually works fine, since the main thread should never be busy for more than a few hundredths of a second, and you can't really build up much memory in that amount of time.
When you do a lengthy and memory intensive operation you need to manually set up an autorelease pool, which is most commonly put inside a for or while loop (although you can actually put them anywhere you want, that's just the most useful scenario):
for ( ... ) {
@autoreleasepool {
// do some stuff
}
}
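Applied to the loop in the question, that would mean wrapping each iteration of the for loop over jpegs, roughly like this (a sketch reusing the names from the question's code):
for (int i = 0; i < [jpegs count]; i++)
{
    @autoreleasepool {
        NSString *filePath = [NSString stringWithFormat:@"%@%@", fileLocation, [jpegs objectAtIndex:i]];
        NSImage *jpegImage = [[NSImage alloc] initWithContentsOfFile:filePath];
        // ... create the pixel buffer, append it, CVPixelBufferRelease() it ...
    } // autoreleased temporaries (TIFF data, images) are drained here every iteration
}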
Also, ARC is only for Objective C code. It does not apply to objects created by C functions like CGColorSpaceCreateDeviceRGB() and CVPixelBufferCreate(). Make sure you are manually releasing all of those.
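In the posted code that applies in particular to nsImageToCGImageRef:, where the CGImageSourceRef created by CGImageSourceCreateWithData is never released, and the CGImageRef it returns is never released by the caller. A sketch of the fix (the method from the question with the missing releases added):
- (CGImageRef)nsImageToCGImageRef:(NSImage *)image
{
    NSData *imageData = [image TIFFRepresentation]; // autoreleased; freed when the surrounding pool drains
    if (!imageData) return NULL;
    CGImageSourceRef imageSource = CGImageSourceCreateWithData((__bridge CFDataRef)imageData, NULL);
    if (!imageSource) return NULL;
    CGImageRef imageRef = CGImageSourceCreateImageAtIndex(imageSource, 0, NULL);
    CFRelease(imageSource); // CF object: ARC does not release this for you
    return imageRef;        // caller owns this and must call CGImageRelease() when done
}
In pixelBufferFromCGImage:andSize: the caller would then add CGImageRelease(imageRef); after the CGContextDrawImage call.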
ARC works only for retainable object pointers. ARC documentation defines them as
A retainable object pointer (or "retainable pointer") is a value of a retainable object pointer type ("retainable type"). There are three kinds of retainable object pointer types:
- block pointers (formed by applying the caret (^) declarator sigil to a function type)
- Objective-C object pointers (id, Class, NSFoo*, etc.)
- typedefs marked with __attribute__((NSObject))
Other pointer types, such as int* and CFStringRef, are not subject to ARC's semantics and restrictions.
You already explicitly call release here
CGContextRelease(context);
You should do the same for the other Core Foundation objects, for example:
CVPixelBufferRelease(pxbuffer);
for pxbuffer, once the caller is finished with it.
I've been able to adapt some code found on SO to produce an animated GIF from "screenshots" of my view, but the results are unpredictable. GIF frames are sometimes full frames ("replace" mode, as GIMP marks it), other times just a "diff" from the previous layer ("combine" mode).
From what I've seen, when there are fewer and/or smaller frames involved, CG writes the GIF in "combine" mode but fails to get the colors right. Actually, the moving parts are colored correctly; the background is wrong.
When CG saves the GIF as full frames, the colors are ok. The file size is larger, but hey, obviously you cannot have the best of both worlds. :)
Is there a way to either:
a) force CG to create "full frames" when saving the GIF
b) fix the colors (color table?)
What I do is (ARC mode):
capture the visible part of the view with
[[scrollView contentView] dataWithPDFInsideRect:[[scrollView contentView] visibleRect]];
convert and resize it to an NSBitmapImageRep of PNG type
-(NSMutableDictionary*) pngImageProps:(int)quality {
NSMutableDictionary *pngImageProps;
pngImageProps = [[NSMutableDictionary alloc] init];
[pngImageProps setValue:[NSNumber numberWithBool:NO] forKey:NSImageInterlaced];
double compressionF = 1;
[pngImageProps setValue:[NSNumber numberWithFloat:compressionF] forKey:NSImageCompressionFactor];
return pngImageProps;
}
-(NSData*) resizeImageToData:(NSData*)data toDimX:(int)xdim andDimY:(int)ydim withQuality:(int)quality{
NSImage *image = [[NSImage alloc] initWithData:data];
NSRect inRect = NSZeroRect;
inRect.size = [image size];
NSRect outRect = NSMakeRect(0, 0, xdim, ydim);
NSImage *outImage = [[NSImage alloc] initWithSize:outRect.size];
[outImage lockFocus];
[image drawInRect:outRect fromRect:inRect operation:NSCompositeCopy fraction:1];
NSBitmapImageRep* bitmapRep = [[NSBitmapImageRep alloc] initWithFocusedViewRect:outRect];
[outImage unlockFocus];
NSMutableDictionary *imageProps = [self pngImageProps:quality];
NSData* imageData = [bitmapRep representationUsingType:NSPNGFileType properties:imageProps];
return [imageData copy];
}
get the array of BitmapReps and create the GIF
-(CGImageRef) pngRepDataToCgImageRef:(NSData*)data {
CFDataRef imgData = (__bridge CFDataRef)data;
CGDataProviderRef imgDataProvider = CGDataProviderCreateWithCFData (imgData);
CGImageRef image = CGImageCreateWithPNGDataProvider(imgDataProvider, NULL, true, kCGRenderingIntentDefault);
return image;
}
////////// create GIF from
NSArray *images; // holds all BitmapReps
CGImageDestinationRef destination = CGImageDestinationCreateWithURL((__bridge CFURLRef)[NSURL fileURLWithPath:pot],
kUTTypeGIF,
allImages,
NULL);
// set frame delay
NSDictionary *frameProperties = [NSDictionary
dictionaryWithObject:[NSDictionary
dictionaryWithObject:[NSNumber numberWithFloat:0.2f]
forKey:(NSString *) kCGImagePropertyGIFDelayTime]
forKey:(NSString *) kCGImagePropertyGIFDictionary];
// set gif color properties
NSMutableDictionary *gifPropsDict = [[NSMutableDictionary alloc] init];
[gifPropsDict setObject:(NSString *)kCGImagePropertyColorModelRGB forKey:(NSString *)kCGImagePropertyColorModel];
[gifPropsDict setObject:[NSNumber numberWithBool:YES] forKey:(NSString *)kCGImagePropertyGIFHasGlobalColorMap];
// set gif loop
NSDictionary *gifProperties = [NSDictionary
dictionaryWithObject:gifPropsDict
forKey:(NSString *) kCGImagePropertyGIFDictionary];
// loop through frames and add them to GIF
for (int i=0; i < [images count]; i++) {
NSData *imageData = [images objectAtIndex:i];
CGImageRef imageRef = [self pngRepDataToCgImageRef:imageData];
CGImageDestinationAddImage(destination, imageRef, (__bridge CFDictionaryRef) (frameProperties));
}
// save the GIF
CGImageDestinationSetProperties(destination, (__bridge CFDictionaryRef)(gifProperties));
CGImageDestinationFinalize(destination);
CFRelease(destination);
I've checked the NSBitmapImageReps; when saved as PNGs individually, they are just fine.
As I understand it, the color tables should be handled by CG, or am I responsible for producing the dithered colors? How would I do that?
Even when doing the same animation repeatedly, the GIFs produced may vary.
This is a single BitmapRep: [image: andraz.eu]
And this is the GIF with the invalid colors ("combine" mode): [image: andraz.eu]
I read your code. Please double-check the "allImages" count you pass while creating the CGImageDestinationRef against "[images count]".
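In other words, the frame count passed when the destination is created should match the number of frames you actually add in the loop; a minimal sketch of the corrected call (using the images array and the pot path from the question):
CGImageDestinationRef destination = CGImageDestinationCreateWithURL((__bridge CFURLRef)[NSURL fileURLWithPath:pot],
                                                                    kUTTypeGIF,
                                                                    [images count], // must match the number of frames added below
                                                                    NULL);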
The following test code works fine:
NSDictionary *prep = [NSDictionary dictionaryWithObject:[NSDictionary dictionaryWithObject:[NSNumber numberWithFloat:0.2f] forKey:(NSString *) kCGImagePropertyGIFDelayTime] forKey:(NSString *) kCGImagePropertyGIFDictionary];
CGImageDestinationRef dst = CGImageDestinationCreateWithURL((__bridge CFURLRef)(fileURL), kUTTypeGIF, [filesArray count], nil);
for (int i=0;i<[filesArray count];i++)
{
//load anImage from array
...
CGImageRef imageRef=[anImage CGImageForProposedRect:nil context:nil hints:nil];
CGImageDestinationAddImage(dst, imageRef,(__bridge CFDictionaryRef)(prep));
}
bool fileSave = CGImageDestinationFinalize(dst);
CFRelease(dst);
I'm trying to play a video (MP4/H.263) on iOS, but getting really fuzzy results.
Here's the code to initialize the asset reading:
mTextureHandle = [self createTexture:CGSizeMake(400,400)];
NSURL * url = [NSURL fileURLWithPath:file];
mAsset = [[AVURLAsset alloc] initWithURL:url options:NULL];
NSArray * tracks = [mAsset tracksWithMediaType:AVMediaTypeVideo];
mTrack = [tracks objectAtIndex:0];
NSLog(#"Tracks: %i", [tracks count]);
NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey;
NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
NSDictionary * settings = [[NSDictionary alloc] initWithObjectsAndKeys:value, key, nil];
mOutput = [[AVAssetReaderTrackOutput alloc]
initWithTrack:mTrack outputSettings:settings];
mReader = [[AVAssetReader alloc] initWithAsset:mAsset error:nil];
[mReader addOutput:mOutput];
So much for the reader init, now the actual texturing:
CMSampleBufferRef sampleBuffer = [mOutput copyNextSampleBuffer];
CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress( pixelBuffer, 0 );
glBindTexture(GL_TEXTURE_2D, mTextureHandle);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 600, 400, 0, GL_BGRA_EXT, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress( pixelBuffer ));
CVPixelBufferUnlockBaseAddress( pixelBuffer, 0 );
CFRelease(sampleBuffer);
Everything works well ... except the rendered image comes out sliced and skewed.
I've even tried looking into AVAssetTrack's preferred transformation matrix, to no avail, since it always returns CGAffineTransformIdentity.
Side-note: If I switch the source to camera, the image gets rendered fine. Am I missing some decompression step? Shouldn't that be handled by the asset reader?
Thanks!
Code: https://github.com/shaded-enmity/objcpp-opengl-video
I think the CMSampleBuffer uses padding for performance reasons, so you need to use the right width for the texture.
Try setting the width of the texture to: CVPixelBufferGetBytesPerRow(pixelBuffer) / 4
(if your video format uses 4 bytes per pixel; adjust if it uses a different layout)
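A sketch of what that looks like with the upload call from the question (assuming kCVPixelFormatType_32BGRA, i.e. 4 bytes per pixel):
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
GLsizei texWidth = (GLsizei)(bytesPerRow / 4); // padded row width, in pixels
GLsizei texHeight = (GLsizei)CVPixelBufferGetHeight(pixelBuffer);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texWidth, texHeight, 0,
             GL_BGRA_EXT, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(pixelBuffer));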
I want to create a thumbnail using CG, and it does create the thumbnails.
Here I want the thumbnail to have a size of 1024 (preserving the aspect ratio). Is it possible to get a thumbnail of the desired size directly from CG?
In the options dictionary I can pass the maximum size the thumbnail can be created at, but is there any way to specify a minimum size as well?
NSURL * url = [NSURL fileURLWithPath:inPath];
CGImageSourceRef source = CGImageSourceCreateWithURL((CFURLRef)url, NULL);
CGImageRef image=nil;
if (source)
{
NSDictionary* thumbOpts = [NSDictionary dictionaryWithObjectsAndKeys:
(id) kCFBooleanTrue, (id)kCGImageSourceCreateThumbnailWithTransform,
(id)kCFBooleanTrue, (id)kCGImageSourceCreateThumbnailFromImageIfAbsent,
[NSNumber numberWithInt:2048], kCGImageSourceThumbnailMaxPixelSize,
nil];
image = CGImageSourceCreateThumbnailAtIndex(source, 0, (CFDictionaryRef)thumbOpts);
NSLog(#"image width = %d %d", CGImageGetWidth(image), CGImageGetHeight(image));
CFRelease(source);
}
If you want a thumbnail with size 1024 (maximum dimension), you should be passing 1024, not 2048. Also, if you want to make sure the thumbnail is created to your specifications, you should be asking for kCGImageSourceCreateThumbnailFromImageAlways, not kCGImageSourceCreateThumbnailFromImageIfAbsent, since the latter might cause an existing thumbnail to be used, and it could be smaller than you want.
So, here's code that does what you ask:
NSURL* url = // whatever;
NSDictionary* d = [NSDictionary dictionaryWithObjectsAndKeys:
(id)kCFBooleanTrue, kCGImageSourceShouldAllowFloat,
(id)kCFBooleanTrue, kCGImageSourceCreateThumbnailWithTransform,
(id)kCFBooleanTrue, kCGImageSourceCreateThumbnailFromImageAlways,
[NSNumber numberWithInt:1024], kCGImageSourceThumbnailMaxPixelSize,
nil];
CGImageSourceRef src = CGImageSourceCreateWithURL((CFURLRef)url, NULL);
CGImageRef imref = CGImageSourceCreateThumbnailAtIndex(src, 0, (CFDictionaryRef)d);
// memory management omitted
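For reference, the omitted memory management is just releasing what the Create functions returned once you are done with them (a sketch):
if (src) CFRelease(src);          // release the image source
// ... use imref, e.g. wrap it in an NSImage/UIImage ...
CGImageRelease(imref);            // the thumbnail is owned by the caller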
Swift 3 version of the answer:
func loadImage(at url: URL, maxDimension max: Int) -> UIImage? {
guard let imageSource = CGImageSourceCreateWithURL(url as CFURL, nil)
else {
return nil
}
let options: [CFString: Any] = [
kCGImageSourceShouldAllowFloat: true,
kCGImageSourceCreateThumbnailWithTransform: true,
kCGImageSourceCreateThumbnailFromImageAlways: true,
kCGImageSourceThumbnailMaxPixelSize: max
]
guard let thumbnail = CGImageSourceCreateThumbnailAtIndex(imageSource, 0, options as CFDictionary)
else {
return nil
}
return UIImage(cgImage: thumbnail)
}