Video capture with 1:1 aspect ratio in iOS - objective-c

I want to capture video with the iOS camera at a 1:1 aspect ratio.
I tried UIImagePickerController, but it doesn't provide a way to change the aspect ratio.
Could anyone give me ideas?
For reference, the iPhone app "Viddy" provides 1:1 aspect ratio video capture:

AVCaptureVideoPreviewLayer *_preview = [AVCaptureVideoPreviewLayer layerWithSession:_session];
_preview.frame = CGRectMake(0,0,320,320);
_preview.videoGravity = AVLayerVideoGravityResizeAspectFill;
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
AVVideoCodecH264, AVVideoCodecKey,
[NSNumber numberWithInt:320], AVVideoWidthKey,
[NSNumber numberWithInt:320], AVVideoHeightKey,
AVVideoScalingModeResizeAspectFill,AVVideoScalingModeKey,
nil];
self.videoInput = [AVAssetWriterInput assetWriterInputWithMediaType: AVMediaTypeVideo
outputSettings: videoSettings];
self.videoInput.transform = CGAffineTransformMakeRotation(M_PI);
if([_writer canAddInput:_videoInput]) // AVAssetWriter *_writer
[_writer addInput:_videoInput];
Note:
_preview's videoGravity and the AVVideoScalingModeKey in videoSettings should be the same to get the output at 320 x 320.
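For completeness, here is a minimal sketch of the capture-side glue that could feed such a writer input. It assumes an AVCaptureVideoDataOutput attached to _session with self as its sample buffer delegate, plus the _writer and _videoInput ivars named above; none of this is from the original snippet.

// Hypothetical AVCaptureVideoDataOutputSampleBufferDelegate callback.
// The 320 x 320 output settings above make the writer scale/crop each frame,
// so the capture side just hands buffers through. Assumes [_writer startWriting]
// and startSessionAtSourceTime: have already been called.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    if (_writer.status == AVAssetWriterStatusWriting &&
        _videoInput.isReadyForMoreMediaData) {
        [_videoInput appendSampleBuffer:sampleBuffer];
    }
}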

GPUImageMovie* movieFile = [[GPUImageMovie alloc] initWithAsset:asset];
GPUImageCropFilter *cropFilter = [[GPUImageCropFilter alloc] initWithCropRegion:CGRectMake(0.0, 0.1, 1.0, 0.8)];
[movieFile addTarget:cropFilter];
GPUImageMovieWriter* movieWriter = [[GPUImageMovieWriter alloc] initWithMovieURL:movieURL size:CGSizeMake(320.0, 320.0)];
[cropFilter addTarget:movieWriter];
[movieWriter startRecording];
[movieFile startProcessing];
[movieWriter finishRecordingWithCompletionHandler:^{
NSLog(#"Completed Successfully");
[cropFilter removeTarget:movieWriter];
[movieFile removeTarget:cropFilter];
}];
where
asset is the input movie file,
cropRegion is the area to crop, and
movieURL is the target URL where the cropped movie is saved.

I don't think you can capture at a strict 1:1 aspect ratio directly; even with third-party help, the practical route is to capture the video normally and then crop it to 1:1 afterwards, for example with an AVAssetExportSession and a video composition, as sketched below.
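A rough sketch of that capture-then-crop route, using AVAssetExportSession with an AVMutableVideoComposition to center-crop an already recorded asset to a square. The names recordedURL and croppedURL, the 30 fps frame duration, and the preset are illustrative assumptions, and orientation handling via preferredTransform is omitted.

// Hypothetical post-capture square crop of a recorded movie.
AVAsset *asset = [AVAsset assetWithURL:recordedURL];
AVAssetTrack *track = [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
CGFloat side = MIN(track.naturalSize.width, track.naturalSize.height);

AVMutableVideoComposition *videoComposition = [AVMutableVideoComposition videoComposition];
videoComposition.renderSize = CGSizeMake(side, side);
videoComposition.frameDuration = CMTimeMake(1, 30);

AVMutableVideoCompositionInstruction *instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, asset.duration);

AVMutableVideoCompositionLayerInstruction *layerInstruction =
    [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:track];
// Shift the frame so the square render rect sits in the middle of the source.
[layerInstruction setTransform:CGAffineTransformMakeTranslation(-(track.naturalSize.width - side) / 2.0,
                                                                -(track.naturalSize.height - side) / 2.0)
                        atTime:kCMTimeZero];
instruction.layerInstructions = [NSArray arrayWithObject:layerInstruction];
videoComposition.instructions = [NSArray arrayWithObject:instruction];

AVAssetExportSession *exportSession = [[AVAssetExportSession alloc] initWithAsset:asset
                                                                       presetName:AVAssetExportPresetHighestQuality];
exportSession.videoComposition = videoComposition;
exportSession.outputURL = croppedURL;
exportSession.outputFileType = AVFileTypeQuickTimeMovie;
[exportSession exportAsynchronouslyWithCompletionHandler:^{
    NSLog(@"Crop export status: %ld, error: %@", (long)exportSession.status, exportSession.error);
}];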

Related

AVAssetWriterInput Settings Quality vs. Filesize

Here's a rather specific question that's left me stumped. I'm writing video recording software that captures video data from a webcam to an MP4. My software is going to replace a script that's already in place that triggers QuickTime Player to do the same, but outputs to a MOV.
I'm using AVFoundation and have the capturing and saving in place but, after repeated tweaks and tests, I've found that the script that's already in place consistently creates MOV files with a higher video quality and lower file sizes than my software.
Here is a link to two samples, a MOV created by the on-site script and an MP4 created by my software: https://www.dropbox.com/sh/1qnn5afmrquwfcr/AADQvDMWkbYJwVNlio9_vbeNa?dl=0
The MOV was created by a colleague and represents the quality and file size I'm trying to match with my software. The MP4 was recorded in my office, obviously a different lighting situation, but with the same camera as is used on-site.
Comparing the two videos, I can see that they have the same dimensions, duration, and video codec but differ in both file size and quality.
Here is the code where I set up my AVAssetWriter and AVAssetWriter input:
NSDictionary *settings = nil;
settings = [NSDictionary dictionaryWithObjectsAndKeys:
// Specify H264 (MPEG4) as the codec
AVVideoCodecH264, AVVideoCodecKey,
// Specify the video width and height
[NSNumber numberWithInt:self.recordSize.width], AVVideoWidthKey,
[NSNumber numberWithInt:self.recordSize.height], AVVideoHeightKey,
// Specify the video compression
[NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInteger:2500000], AVVideoAverageBitRateKey,
[NSNumber numberWithInt:1], AVVideoMaxKeyFrameIntervalKey,
//AVVideoProfileLevelH264Baseline30, AVVideoProfileLevelKey, Not available on 10.7
nil], AVVideoCompressionPropertiesKey,
// Specify the HD output color
[NSDictionary dictionaryWithObjectsAndKeys:
AVVideoColorPrimaries_ITU_R_709_2, AVVideoColorPrimariesKey,
AVVideoTransferFunction_ITU_R_709_2, AVVideoTransferFunctionKey,
AVVideoYCbCrMatrix_ITU_R_709_2, AVVideoYCbCrMatrixKey, nil], AVVideoColorPropertiesKey,
nil];
self.assetWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:settings];// sourceFormatHint:self.formatHint];
/*self.pixelBufferAdaptor = [[AVAssetWriterInputPixelBufferAdaptor alloc]
initWithAssetWriterInput:self.assetWriterInput
sourcePixelBufferAttributes:[NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange],//kCVPixelFormatType_32BGRA],
kCVPixelBufferPixelFormatTypeKey,
nil]];*/
NSError *error;
self.videoFile = [NSURL fileURLWithPath:file];
//[self.movieFileOutput startRecordingToOutputFileURL:self.videoFile recordingDelegate:self];
self.assetWriter = [[AVAssetWriter alloc]
initWithURL:self.videoFile
fileType:AVFileTypeMPEG4//AVFileTypeQuickTimeMovie//
error:&error];
if (self.assetWriter){
[self.assetWriter addInput:self.assetWriterInput];
self.assetWriterInput.expectsMediaDataInRealTime = YES;
And the code where I set up my AVCaptureVideoDataOutput and add it to my capture session:
AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
output.alwaysDiscardsLateVideoFrames = NO;
output.videoSettings = nil;
self.videoQueue = dispatch_queue_create("ca.blackboxsoftware.avcapturequeue", NULL);
[output setSampleBufferDelegate:self queue:self.videoQueue];
[self.session addOutput:output];
This quality issue is the big stumbling block of my software and I desperately need your help with it. I'll be happy to post any other code you might need to see and test out changes you feel would make the difference.
Thank you for your time.
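Not a full answer, but one thing stands out in the settings above: AVVideoMaxKeyFrameIntervalKey set to 1 forces every frame to be a keyframe, which at a fixed average bit rate tends to cost quality. A hedged sketch of a compression dictionary to experiment with; the interval, profile, and bit rate are starting points, not known-good values.

// Hypothetical variation on the compression settings: allow longer GOPs so the
// encoder can spend the bit budget more efficiently. Adjust values to taste.
NSDictionary *compressionProps = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithInteger:2500000], AVVideoAverageBitRateKey,
    [NSNumber numberWithInt:30], AVVideoMaxKeyFrameIntervalKey, // ~1 keyframe per second at 30 fps
    AVVideoProfileLevelH264Main31, AVVideoProfileLevelKey,      // if available on the deployment target
    nil];
NSDictionary *settings = [NSDictionary dictionaryWithObjectsAndKeys:
    AVVideoCodecH264, AVVideoCodecKey,
    [NSNumber numberWithInt:self.recordSize.width], AVVideoWidthKey,
    [NSNumber numberWithInt:self.recordSize.height], AVVideoHeightKey,
    compressionProps, AVVideoCompressionPropertiesKey,
    nil];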

How do I set the pixels per inch for an exported JPEG image in a Cocoa app?

I have a Cocoa Mac image editing app which lets users export JPEG images. I'm currently using the following code to export these images as JPEG files:
//this is user specified
NSInteger resolution;
NSImage* savedImage = [[NSImage alloc] initWithSize:NSMakeSize(600, 600)];
[savedImage lockFocus];
//draw here
[savedImage unlockFocus];
NSBitmapImageRep* savedImageBitmapRep = [NSBitmapImageRep imageRepWithData:[savedImage TIFFRepresentationUsingCompression:NSTIFFCompressionNone factor:1.0]];
NSDictionary* properties = [NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithFloat:1.0], NSImageCompressionFactor, nil];
//holds the jpeg file
NSData * imageData = nil;
imageData = [savedImageBitmapRep representationUsingType:NSJPEGFileType properties:properties];
However, I would like for the user to be able to provide the pixels per inch for this JPEG image (like you can in Photoshop's export options). What would I need to modify in the above code to adjust this value for the exported JPEG?
I couldn't find a way to do it with the NSImage APIs, but you can with CGImage by setting kCGImagePropertyDPIHeight/Width.
I also set kCGImageDestinationLossyCompressionQuality which I think is the same as NSImageCompressionFactor.
//this is user specified
NSInteger resolution = 100;
NSImage* savedImage = [[NSImage alloc] initWithSize:NSMakeSize(600, 600)];
[savedImage lockFocus];
//draw here
[savedImage unlockFocus];
NSBitmapImageRep* savedImageBitmapRep = [NSBitmapImageRep imageRepWithData:[savedImage TIFFRepresentationUsingCompression:NSTIFFCompressionNone factor:1.0]];
NSDictionary* properties = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithFloat:1.0], kCGImageDestinationLossyCompressionQuality,
[NSNumber numberWithInteger:resolution], kCGImagePropertyDPIHeight,
[NSNumber numberWithInteger:resolution], kCGImagePropertyDPIWidth,
nil];
NSMutableData* imageData = [NSMutableData data];
CGImageDestinationRef imageDest = CGImageDestinationCreateWithData((CFMutableDataRef) imageData, kUTTypeJPEG, 1, NULL);
CGImageDestinationAddImage(imageDest, [savedImageBitmapRep CGImage], (CFDictionaryRef) properties);
CGImageDestinationFinalize(imageDest);
CFRelease(imageDest); // the destination follows the Create rule, so release it explicitly
// Do something with imageData
if (![imageData writeToFile:[@"~/Desktop/test.jpg" stringByExpandingTildeInPath] atomically:NO])
    NSLog(@"Failed to write imageData");
For NSImage or NSImageRep you do not set the resolution directly but set the size instead.
For size, numberOfPixels and resolution the following equation holds:
size = numberOfPixels * 72.0 / resolution
size is a length, expressed in points (a point is 1/72 inch); size and resolution are floats. You can see that for an image at 72 dpi, size and numberOfPixels are numerically the same (but their meaning is very different).
After creating an NSBitmapImageRep the size with the desired resolution can be set:
NSBitmapImageRep* savedImageBitmapRep = . . . ; // create the new rep
NSSize newSize;
newSize.width = [savedImageBitmapRep pixelsWide] * 72.0 / resolution; // x-resolution
newSize.height = [savedImageBitmapRep pixelsHigh] * 72.0 / resolution; // y-resolution
[savedImageBitmapRep setSize:newSize];
// save the rep
Two remarks: do you really need the lockFocus / unlockFocus approach? The preferred way to build a new NSBitmapImageRep is to use NSGraphicsContext; see http://www.mail-archive.com/cocoa-dev@lists.apple.com/msg74857.html
And: going through TIFFRepresentation to build an NSBitmapImageRep is very time and space consuming. Since 10.6 another way exists and costs almost nothing, because lockFocus and unlockFocus create an object of class NSCGImageSnapshotRep, which under the hood is a CGImage. (In OS versions before 10.6 it was an NSCachedImageRep.) The following does it:
[anImage lockFocus];
// draw something
[anImage unlockFocus];
// now anImage contains an NSCGImageSnapshotRep
CGImageRef cg = [anImage CGImageForProposedRect:NULL context:nil hints:nil];
NSBitmapImageRep *newRep = [[NSBitmapImageRep alloc] initWithCGImage:cg];
// set the resolution
// here you may NSLog anImage, cg and newRep
// save the newRep
// release the newRep if needed
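Putting the pieces together, a small sketch of what the "set the resolution" and "save the newRep" steps might look like; the 300 dpi value and the output path are placeholders.

// Hypothetical: shrink the rep's point size so the saved JPEG reports the desired dpi.
CGFloat resolution = 300.0; // placeholder, would come from the user
NSSize newSize = NSMakeSize([newRep pixelsWide] * 72.0 / resolution,
                            [newRep pixelsHigh] * 72.0 / resolution);
[newRep setSize:newSize];

NSDictionary *jpegProps = [NSDictionary dictionaryWithObject:[NSNumber numberWithFloat:1.0]
                                                      forKey:NSImageCompressionFactor];
NSData *jpegData = [newRep representationUsingType:NSJPEGFileType properties:jpegProps];
[jpegData writeToFile:[@"~/Desktop/test300.jpg" stringByExpandingTildeInPath] atomically:YES];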

Embed color profile sRGB in JPEG created from cocoa mac app

I use the following code in my Cocoa Mac app to create a JPEG. How can I also embed an sRGB color profile in the created JPEG?
NSImage* savedImage = [[NSImage alloc] initWithSize:NSMakeSize(600, 600)];
[savedImage lockFocus];
//draw here
[savedImage unlockFocus];
NSBitmapImageRep* savedImageBitmapRep = [NSBitmapImageRep imageRepWithData:[savedImage TIFFRepresentationUsingCompression:NSTIFFCompressionNone factor:1.0]];
NSDictionary* properties = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithFloat:1.0], kCGImageDestinationLossyCompressionQuality,
nil];
NSMutableData* imageData = [NSMutableData data];
CGImageDestinationRef imageDest = CGImageDestinationCreateWithData((CFMutableDataRef) imageData, kUTTypeJPEG, 1, NULL);
CGImageDestinationAddImage(imageDest, [savedImageBitmapRep CGImage], (CFDictionaryRef) properties);
CGImageDestinationFinalize(imageDest);
// Do something with imageData
if (![imageData writeToFile:[@"~/Desktop/test.jpg" stringByExpandingTildeInPath] atomically:NO])
    NSLog(@"Failed to write imageData");
I have no idea how to do this with these high-level APIs; I have done it by adding the needed IFDs to the JPEG data myself.
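If you want to stay with the high-level APIs, one approach that may work is to retag the CGImage with an explicit sRGB color space before adding it to the destination, so ImageIO embeds the corresponding ICC profile when writing the JPEG. A sketch under that assumption, reusing the properties dictionary from the question:

// Hypothetical: copy the bitmap's CGImage with an sRGB color space so the
// destination writes an sRGB-tagged JPEG.
CGColorSpaceRef srgb = CGColorSpaceCreateWithName(kCGColorSpaceSRGB);
CGImageRef taggedImage = CGImageCreateCopyWithColorSpace([savedImageBitmapRep CGImage], srgb);

NSMutableData *imageData = [NSMutableData data];
CGImageDestinationRef imageDest = CGImageDestinationCreateWithData((CFMutableDataRef)imageData, kUTTypeJPEG, 1, NULL);
CGImageDestinationAddImage(imageDest, taggedImage, (CFDictionaryRef)properties);
CGImageDestinationFinalize(imageDest);

// Core Foundation objects are not managed by ARC, so release them explicitly.
CGImageRelease(taggedImage);
CGColorSpaceRelease(srgb);
CFRelease(imageDest);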

Playing video on iOS using OpenGL-ES

I'm trying to play a video (MP4/H.263) on iOS, but getting really fuzzy results.
Here's the code to initialize the asset reading:
mTextureHandle = [self createTexture:CGSizeMake(400,400)];
NSURL * url = [NSURL fileURLWithPath:file];
mAsset = [[AVURLAsset alloc] initWithURL:url options:NULL];
NSArray * tracks = [mAsset tracksWithMediaType:AVMediaTypeVideo];
mTrack = [tracks objectAtIndex:0];
NSLog(#"Tracks: %i", [tracks count]);
NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey;
NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
NSDictionary * settings = [[NSDictionary alloc] initWithObjectsAndKeys:value, key, nil];
mOutput = [[AVAssetReaderTrackOutput alloc]
initWithTrack:mTrack outputSettings:settings];
mReader = [[AVAssetReader alloc] initWithAsset:mAsset error:nil];
[mReader addOutput:mOutput];
So much for the reader init, now the actual texturing:
CMSampleBufferRef sampleBuffer = [mOutput copyNextSampleBuffer];
CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress( pixelBuffer, 0 );
glBindTexture(GL_TEXTURE_2D, mTextureHandle);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 600, 400, 0, GL_BGRA_EXT, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress( pixelBuffer ));
CVPixelBufferUnlockBaseAddress( pixelBuffer, 0 );
CFRelease(sampleBuffer);
Everything works well ... except that the rendered image comes out sliced and skewed.
I've even tried looking into AVAssetTrack's preferred transformation matrix, to no avail, since it always returns CGAffineTransformIdentity.
Side-note: If I switch the source to camera, the image gets rendered fine. Am I missing some decompression step? Shouldn't that be handled by the asset reader?
Thanks!
Code: https://github.com/shaded-enmity/objcpp-opengl-video
I think the CMSampleBuffer uses row padding for performance reasons, so you need to give the texture the right width.
Try setting the width of the texture to CVPixelBufferGetBytesPerRow(pixelBuffer) / 4
(assuming your video format uses 4 bytes per pixel; adjust if it uses a different size).
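In code, that suggestion looks roughly like this, assuming the kCVPixelFormatType_32BGRA output configured above (4 bytes per pixel):

// Hypothetical fix: upload using the buffer's padded row width instead of the
// nominal video width, so each row starts where OpenGL expects it.
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
size_t paddedWidth = CVPixelBufferGetBytesPerRow(pixelBuffer) / 4; // 4 bytes per BGRA pixel
size_t height = CVPixelBufferGetHeight(pixelBuffer);
glBindTexture(GL_TEXTURE_2D, mTextureHandle);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)paddedWidth, (GLsizei)height, 0,
             GL_BGRA_EXT, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(pixelBuffer));
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);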

CoreAnimation, AVFoundation and ability to make Video export

I'm looking for the correct way to export my picture sequence into a QuickTime video.
I know that AV Foundation has the ability to merge or recombine videos and also to add an audio track, building a single video asset.
Now ... my goal is a little bit different. I would like to create a video from scratch. I have a set of UIImage and I need to render all of them into a single video.
I read all the Apple documentation about AV Foundation and I found the AVVideoCompositionCoreAnimationTool class, which has the ability to take a Core Animation layer and re-encode it as a video. I also checked the AVEditDemo project provided by Apple, but something doesn't seem to work in my project.
Here are my steps:
1) I create the Core Animation layer
CALayer *animationLayer = [CALayer layer];
[animationLayer setFrame:CGRectMake(0, 0, 1024, 768)];
CALayer *backgroundLayer = [CALayer layer];
[backgroundLayer setFrame:animationLayer.frame];
[backgroundLayer setBackgroundColor:[UIColor blackColor].CGColor];
CALayer *anImageLayer = [CALayer layer];
[anImageLayer setFrame:animationLayer.frame];
CAKeyframeAnimation *changeImageAnimation = [CAKeyframeAnimation animationWithKeyPath:@"contents"];
[changeImageAnimation setDelegate:self];
changeImageAnimation.duration = [[albumSettings transitionTime] floatValue] * [uiImagesArray count];
changeImageAnimation.repeatCount = 1;
changeImageAnimation.values = [NSArray arrayWithArray:uiImagesArray];
changeImageAnimation.removedOnCompletion = YES;
[anImageLayer addAnimation:changeImageAnimation forKey:nil];
[animationLayer addSublayer:anImageLayer];
2) Then I instantiate the AVComposition
AVMutableComposition *composition = [AVMutableComposition composition];
composition.naturalSize = CGSizeMake(1024, 768);
CALayer *wrapLayer = [CALayer layer];
wrapLayer.frame = CGRectMake(0, 0, 1024, 768);
CALayer *videoLayer = [CALayer layer];
videoLayer.frame = CGRectMake(0, 0, 1024, 768);
[wrapLayer addSublayer:animationLayer];
[wrapLayer addSublayer:videoLayer];
AVMutableVideoComposition *videoComposition = [AVMutableVideoComposition videoComposition];
AVMutableVideoCompositionInstruction *videoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
videoCompositionInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, CMTimeMake([imagesFilePath count] * [[albumSettings transitionTime] intValue] * 25, 25));
AVMutableVideoCompositionLayerInstruction *layerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstruction];
videoCompositionInstruction.layerInstructions = [NSArray arrayWithObject:layerInstruction];
videoComposition.animationTool = [AVVideoCompositionCoreAnimationTool videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer inLayer:wrapLayer];
videoComposition.frameDuration = CMTimeMake(1, 25); // 25 fps
videoComposition.renderSize = CGSizeMake(1024, 768);
videoComposition.instructions = [NSArray arrayWithObject:videoCompositionInstruction];
3) I export the video to the Documents path
AVAssetExportSession *session = [[AVAssetExportSession alloc] initWithAsset:composition presetName:AVAssetExportPresetLowQuality];
session.videoComposition = videoComposition;
NSString *filePath = nil;
filePath = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
filePath = [filePath stringByAppendingPathComponent:@"Output.mov"];
session.outputURL = [NSURL fileURLWithPath:filePath];
session.outputFileType = AVFileTypeQuickTimeMovie;
[session exportAsynchronouslyWithCompletionHandler:^
{
dispatch_async(dispatch_get_main_queue(), ^{
NSLog(#"Export Finished: %#", session.error);
if (session.error) {
[[NSFileManager defaultManager] removeItemAtPath:filePath error:NULL];
}
});
}];
At the end of the export I get this error:
Export Finished: Error Domain=AVFoundationErrorDomain Code=-11822 "Cannot Open" UserInfo=0x49a97c0 {NSLocalizedFailureReason=This media format is not supported., NSLocalizedDescription=Cannot Open}
I found it inside documentation: AVErrorInvalidSourceMedia = -11822,
AVErrorInvalidSourceMedia
The operation could not be completed because some source media could not be read.
I'm totally sure that the Core Animation I built is right, because I rendered it into a test layer and could see the animation progress correctly.
Can anyone help me understand where my error is?
Maybe you need a fake movie that contains a totally black frame to fill the video layer, and then add a CALayer to manipulate the images.
I found the AVVideoCompositionCoreAnimationTool class, which has the ability to take a Core Animation layer and re-encode it as a video
My understanding was that this is only able to take Core Animation and add it to an existing video. I just checked the docs, and the only methods available require a video layer too.
EDIT: yep. Digging in docs and WWDC videos, I think you should be using AVAssetWriter instead, and appending images to the writer. Something like:
NSError *error = nil;
AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:somePath] fileType:AVFileTypeQuickTimeMovie error:&error];
// 320x480 H.264 output; adjust to the size of your images
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:AVVideoCodecH264, AVVideoCodecKey, [NSNumber numberWithInt:320], AVVideoWidthKey, [NSNumber numberWithInt:480], AVVideoHeightKey, nil];
AVAssetWriterInput *writerInput = [[AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSettings] retain];
[videoWriter addInput:writerInput];
[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:CMTimeMakeWithSeconds(0, 30)];
// append your frames here, one sample buffer (or pixel buffer) per image
[writerInput appendSampleBuffer:sampleBuffer];
[writerInput markAsFinished];
[videoWriter endSessionAtSourceTime:CMTimeMakeWithSeconds(60, 30)];
[videoWriter finishWriting];
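To append UIImages rather than camera sample buffers, the usual route is an AVAssetWriterInputPixelBufferAdaptor. A rough sketch of rendering one UIImage into a CVPixelBuffer and appending it; anImage, frameTime, and the 1024 x 768 size are illustrative, and the adaptor must be created before startWriting is called.

// Hypothetical: draw a UIImage into a CVPixelBuffer and hand it to the writer input.
AVAssetWriterInputPixelBufferAdaptor *adaptor =
    [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                                                     sourcePixelBufferAttributes:nil];

CVPixelBufferRef buffer = NULL;
CVPixelBufferCreate(kCFAllocatorDefault, 1024, 768, kCVPixelFormatType_32ARGB, NULL, &buffer);

CVPixelBufferLockBaseAddress(buffer, 0);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(buffer), 1024, 768, 8,
                                             CVPixelBufferGetBytesPerRow(buffer), colorSpace,
                                             (CGBitmapInfo)kCGImageAlphaNoneSkipFirst);
CGContextDrawImage(context, CGRectMake(0, 0, 1024, 768), anImage.CGImage); // one of the UIImages
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
CVPixelBufferUnlockBaseAddress(buffer, 0);

// frameTime advances by the per-image duration for each appended picture.
if (adaptor.assetWriterInput.isReadyForMoreMediaData)
    [adaptor appendPixelBuffer:buffer withPresentationTime:frameTime];
CVPixelBufferRelease(buffer);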