I'm trying to create a custom patch for Quartz Composer that captures from a FireWire input. I can open the device and add the input successfully, and I even disable the audio inputs as per the QTKit documentation. I suspect my problem is in adding the output, although it adds without error. Every time the execution method runs, it crashes because the imageBuffer appears to be empty: it has no dimensions or anything. I'm using the standard captureOutput:didOutputVideoFrame:withSampleBuffer:fromConnection: delegate method from the documentation.
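For reference, here is a minimal sketch of that delegate method, following the pattern in Apple's QTKit capture samples; the mCurrentImageBuffer ivar name is taken from my execution code below, the rest is assumed boilerplate:
- (void)captureOutput:(QTCaptureOutput *)captureOutput
  didOutputVideoFrame:(CVImageBufferRef)videoFrame
     withSampleBuffer:(QTSampleBuffer *)sampleBuffer
       fromConnection:(QTCaptureConnection *)connection
{
    CVImageBufferRef imageBufferToRelease;
    CVBufferRetain(videoFrame);
    @synchronized (self) {
        // Keep only the most recent frame; the execution method reads it from here.
        imageBufferToRelease = mCurrentImageBuffer;
        mCurrentImageBuffer = videoFrame;
    }
    CVBufferRelease(imageBufferToRelease);
}
And here is the relevant code from my patch: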
[mCaptureDecompressedVideoOutput release];
mCaptureDecompressedVideoOutput = [[QTCaptureDecompressedVideoOutput alloc] init];
NSLog(#"allocated mCaptureDecompressedVideoOutput");
[mCaptureDecompressedVideoOutput setPixelBufferAttributes:
[NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES], kCVPixelBufferOpenGLCompatibilityKey,
[NSNumber numberWithLong:k32ARGBPixelFormat], kCVPixelBufferPixelFormatTypeKey, nil]];
[mCaptureDecompressedVideoOutput setDelegate:self];
success = [mCaptureSession addOutput:mCaptureDecompressedVideoOutput error:&error];
if (!success) {
NSLog(#"Failed to add output");
self.outputImage = nil;
if (mCaptureSession) {
[mCaptureSession release];
mCaptureSession= nil;
}
if (mCaptureDeviceInput) {
[mCaptureDeviceInput release];
mCaptureDeviceInput= nil;
}
if (mCaptureDecompressedVideoOutput) {
[mCaptureDecompressedVideoOutput release];
mCaptureDecompressedVideoOutput= nil;
}
return YES;
}
[mCaptureSession startRunning];
_currentDevice= self.inputDevice;
}
CVImageBufferRef imageBuffer = CVBufferRetain(mCurrentImageBuffer);
if (imageBuffer) {
CVPixelBufferLockBaseAddress(imageBuffer, 0);
NSLog(#"ColorSpace: %#", CVImageBufferGetColorSpace(imageBuffer));
id provider= [context outputImageProviderFromBufferWithPixelFormat:QCPlugInPixelFormatARGB8
pixelsWide:CVPixelBufferGetWidth(imageBuffer)
pixelsHigh:CVPixelBufferGetHeight(imageBuffer)
baseAddress:CVPixelBufferGetBaseAddress(imageBuffer)
bytesPerRow:CVPixelBufferGetBytesPerRow(imageBuffer)
releaseCallback:_BufferReleaseCallback
releaseContext:imageBuffer
colorSpace:CVImageBufferGetColorSpace(imageBuffer)
shouldColorMatch:YES];
What could I be doing wrong? This code works great for video-only inputs, but not for FireWire (muxed) inputs.
I managed to get this patch to work with both Video and Muxed inputs by removing the kCVPixelBufferOpenGLCompatibilityKey from mCaptureDecompressedVideoOutput. While that allows the patch to work perfectly inside Quartz Composer, my intent is to run it in a composition used inside CamTwist, which appears not to need OpenGL support. Right now it just displays a black screen with either Video or Muxed inputs, where it was working with Video inputs before. So I'm going to convert my CVImageBufferRef to an OpenGL texture and see if I can get that to work with
outputImageProviderFromTextureWithPixelFormat:pixelsWide:pixelsHigh:name:flipped:releaseCallback:releaseContext:colorSpace:shouldColorMatch
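If I go that route, here is a rough, untested sketch of the conversion I have in mind; the _textureCache ivar and the _TextureReleaseCallback function are placeholders I would still need to add, and the callback would have to CVOpenGLTextureRelease() the texture passed as its release context:
if (!_textureCache) {
    CGLContextObj cgl_ctx = [context CGLContextObj];
    CVOpenGLTextureCacheCreate(kCFAllocatorDefault, NULL,
                               cgl_ctx, CGLGetPixelFormat(cgl_ctx),
                               NULL, &_textureCache);
}
CVOpenGLTextureRef texture = NULL;
CVReturn err = CVOpenGLTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                          _textureCache,
                                                          imageBuffer,
                                                          NULL,
                                                          &texture);
if (err == kCVReturnSuccess && texture) {
    id provider = [context outputImageProviderFromTextureWithPixelFormat:QCPlugInPixelFormatARGB8
                                  pixelsWide:CVPixelBufferGetWidth(imageBuffer)
                                  pixelsHigh:CVPixelBufferGetHeight(imageBuffer)
                                        name:CVOpenGLTextureGetName(texture)
                                     flipped:CVOpenGLTextureIsFlipped(texture)
                             releaseCallback:_TextureReleaseCallback
                              releaseContext:texture
                                  colorSpace:CVImageBufferGetColorSpace(imageBuffer)
                            shouldColorMatch:YES];
    self.outputImage = provider;
}
CVOpenGLTextureCacheFlush(_textureCache, 0);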
I've got an app that uses Metal to do some rendering to screen (with a CAMetalLayer, not a MTKView, by necessity), and I'd like to provide the user with the option of saving a snapshot of the result to disk. Attempting to follow the answer at https://stackoverflow.com/a/47632198/2752221 while translating to Objective-C, I first wrote a commandBuffer completion callback like so (note this is manual retain/release code, not ARC; sorry, legacy code):
[commandBuffer addCompletedHandler:^(id<MTLCommandBuffer> buffer) {
[self performSelectorOnMainThread:@selector(saveImageTakingDrawable:) withObject:[drawable retain] waitUntilDone:NO];
optionKeyPressedAndUnhandled_ = NO;
}];
I do this immediately after calling [commandBuffer presentDrawable:drawable]; my id <CAMetalDrawable> drawable is still in scope. Here is my implementation of saveImageTakingDrawable::
- (void)saveImageTakingDrawable:(id <CAMetalDrawable>)drawable
{
// We need to have an image to save
if (!drawable) { NSBeep(); return; }
id<MTLTexture> displayTexture = drawable.texture;
if (!displayTexture) { NSBeep(); return; }
CIImage *ciImage = [CIImage imageWithMTLTexture:displayTexture options:nil];
// release the metal texture as soon as we can, to free up the system resources
[drawable release];
if (!ciImage) { NSBeep(); return; }
NSCIImageRep *rep = [NSCIImageRep imageRepWithCIImage:ciImage];
if (!rep) { NSBeep(); return; }
NSImage *nsImage = [[[NSImage alloc] initWithSize:rep.size] autorelease];
[nsImage addRepresentation:rep];
NSData *tiffData = [nsImage TIFFRepresentation];
if (!tiffData) { NSBeep(); return; }
... filesystem cruft culminating in ...
if ([tiffData writeToFile:filePath options:NSDataWritingWithoutOverwriting error:nil])
{
// play a sound to acknowledge saving
[[NSSound soundNamed:@"Tink"] play];
return;
}
NSBeep();
return;
}
The result is a "Tink" sound and a 7.8 MB .tif file of sensible dimensions (1784x1090), but it's transparent, and there is no usable image data in it; viewing the file in Hex Fiend shows that the whole file is all zeros except fairly brief header and footer sections.
I suspect that the fundamental method is flawed for some reason. I get several console logs when I attempt this snapshot:
2020-06-04 18:20:40.203669-0400 MetalTest[37773:1065740] [CAMetalLayerDrawable texture] should not be called after already presenting this drawable. Get a nextDrawable instead.
Input Metal texture was created with a device that does not match the current context device.
Input Metal texture was created with a device that does not match the current context device.
2020-06-04 18:20:40.247637-0400 MetalTest[37773:1065740] [plugin] AddInstanceForFactory: No factory registered for id <CFUUID 0x600000297260> F8BB1C28-BAE8-11D6-9C31-00039315CD46
2020-06-04 18:20:40.281161-0400 MetalTest[37773:1065740] HALC_ShellDriverPlugIn::Open: Can't get a pointer to the Open routine
That first log seems to suggest that I'm really not even allowed to get the texture out of the drawable after it has been presented in the first place. So... what's the right way to do this?
UPDATE:
Note that I am not wedded to the later parts of saveImageTakingDrawable:'s code. I would be happy to write out a PNG instead of a TIFF, and if there's a way to get where I'm going without using CIImage, NSCIImageRep, or NSImage, so much the better. I just want to save the drawable's texture image out as a PNG or TIFF, somehow.
"I just want to save the drawable's texture image out as a PNG or TIFF, somehow."
Here is an alternative approach which you may test; you will need to set the path to wherever you want the image file saved.
- (void) windowCapture: (id)sender {
NSTask *task = [[NSTask alloc]init];
[task setLaunchPath:@"/bin/sh"];
NSArray *args = [NSArray arrayWithObjects: @"-c", @"screencapture -i -c -Jwindow", nil];
[task setArguments:args];
NSPipe *pipe = [NSPipe pipe];
[task setStandardOutput:pipe];
[task launch];
[task waitUntilExit];
int status = [task terminationStatus];
NSData *dataRead = [[pipe fileHandleForReading] readDataToEndOfFile];
NSString *pipeOutput = [[[NSString alloc] initWithData:dataRead encoding:NSUTF8StringEncoding]autorelease];
// Tell us if there was a problem
if (!(status == 0)){NSLog(@"Error: %@",pipeOutput);}
[task release];
// Get image data from pasteboard and write to file
NSPasteboard *pboard = [NSPasteboard generalPasteboard];
NSData *pngData = [pboard dataForType:NSPasteboardTypePNG];
NSError *err;
BOOL success = [pngData writeToFile:@"/Users/xxxx/Desktop/ABCD.png" options:NSDataWritingAtomic error:&err];
if(!success){NSLog(@"Unable to write to file: %@",err);} else {NSLog(@"File written to desktop.");}
}
I think you should inject a blit into the command buffer (before submitting it) to copy the texture to a texture of your own. (The drawable's texture is not safe to use after it has been presented, as you've found.)
One strategy is to set the layer's framebufferOnly to false for the pass, and then use a MTLBlitCommandEncoder to encode a copy from the drawable texture to a texture of your own. Or, if the pixel formats are different, encode a draw of a quad from the drawable texture to your own, using a render command encoder.
The other strategy is to substitute your own texture as the render target color attachment where your code is currently using the drawable's texture. Render to your texture primarily and then draw that to the drawable's texture.
Either way, your texture's storageMode can be MTLStorageModeManaged, so you can access its data with one of the -getBytes:... methods. You just have to make sure that the command buffer encodes a synchronizeResource: command of a blit encoder at the end.
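For example, here is a rough sketch of the first strategy, encoded just before presenting and committing. It is untested and assumes a BGRA8 drawable, manual retain/release, and a hypothetical _snapshotTexture ivar to hold the copy:
// Assumes the layer was configured with metalLayer.framebufferOnly = NO
// before this frame's drawable was obtained.
id<MTLTexture> source = drawable.texture;  // still valid: not presented yet
if (_snapshotTexture == nil ||
    _snapshotTexture.width != source.width ||
    _snapshotTexture.height != source.height)
{
    MTLTextureDescriptor *desc =
        [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:source.pixelFormat
                                                            width:source.width
                                                           height:source.height
                                                        mipmapped:NO];
    desc.storageMode = MTLStorageModeManaged;  // CPU-readable after a synchronize
    [_snapshotTexture release];
    _snapshotTexture = [source.device newTextureWithDescriptor:desc];  // owned by the ivar
}
id<MTLBlitCommandEncoder> blit = [commandBuffer blitCommandEncoder];
[blit copyFromTexture:source
          sourceSlice:0
          sourceLevel:0
         sourceOrigin:MTLOriginMake(0, 0, 0)
           sourceSize:MTLSizeMake(source.width, source.height, 1)
            toTexture:_snapshotTexture
     destinationSlice:0
     destinationLevel:0
    destinationOrigin:MTLOriginMake(0, 0, 0)];
[blit synchronizeResource:_snapshotTexture];  // flush GPU writes so -getBytes: sees them
[blit endEncoding];
// ... then presentDrawable:/commit/addCompletedHandler: as before, and read
// _snapshotTexture (not the drawable) in the completion handler.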
You can then use the bytes to construct an NSBitmapImageRep using the -initWithBitmapDataPlanes:pixelsWide:pixelsHigh:bitsPerSample:samplesPerPixel:hasAlpha:isPlanar:colorSpaceName:bitmapFormat:bytesPerRow:bitsPerPixel: method. Then, get PNG data from it using -representationUsingType:properties: and save that to file.
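Something along these lines should work for the readback and PNG writing, assuming BGRA8 data; the swizzle to RGBA lets a default-layout NSBitmapImageRep be used (a CGBitmapContext would also work). Again, a sketch rather than tested code, with a hypothetical helper name:
- (void)saveSnapshotTextureToURL:(NSURL *)url
{
    id<MTLTexture> tex = _snapshotTexture;      // filled by the blit above
    NSUInteger width = tex.width, height = tex.height;
    NSUInteger bytesPerRow = width * 4;
    NSMutableData *pixels = [NSMutableData dataWithLength:bytesPerRow * height];
    [tex getBytes:pixels.mutableBytes
      bytesPerRow:bytesPerRow
       fromRegion:MTLRegionMake2D(0, 0, width, height)
      mipmapLevel:0];

    // BGRA -> RGBA so the rep below can use the default sample order.
    uint8_t *p = (uint8_t *)pixels.mutableBytes;
    for (NSUInteger i = 0; i < bytesPerRow * height; i += 4) {
        uint8_t b = p[i]; p[i] = p[i + 2]; p[i + 2] = b;
    }

    unsigned char *planes[1] = { p };
    NSBitmapImageRep *rep =
        [[[NSBitmapImageRep alloc] initWithBitmapDataPlanes:planes
                                                 pixelsWide:width
                                                 pixelsHigh:height
                                              bitsPerSample:8
                                            samplesPerPixel:4
                                                   hasAlpha:YES
                                                   isPlanar:NO
                                             colorSpaceName:NSDeviceRGBColorSpace
                                               bitmapFormat:0
                                                bytesPerRow:bytesPerRow
                                               bitsPerPixel:32] autorelease];
    NSData *pngData = [rep representationUsingType:NSBitmapImageFileTypePNG properties:@{}];
    [pngData writeToURL:url options:NSDataWritingWithoutOverwriting error:NULL];
}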
I am using the iPhone/iPad camera to get a video stream and doing recognition on the stream, but with lighting changes it has a negative impact on the robustness. I have tested different settings in different light and can get it to work, but trying to get the settings to adjust at run time is what I need.
I can calculate a simple brightness check on each frame, but the camera adjusts and throws my results off. I can watch for sharp changes and run checks then, but gradual changes would throw my results off as well.
Ideally I'd like to access the camera/EXIF data for the stream and see what it is registering the unfiltered brightness as; is there a way to do this?
(I am working for devices iOS 5 and above)
Thank you
Available in iOS 4.0 and above. It's possible to get EXIF information from CMSampleBufferRef.
//Import ImageIO & include framework in your project.
#import <ImageIO/CGImageProperties.h>
In your sample buffer delegate, toll-free bridging will get you an NSDictionary of results from CoreMedia's CMGetAttachment.
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
NSDictionary* dict = (NSDictionary*)CMGetAttachment(sampleBuffer, kCGImagePropertyExifDictionary, NULL);
Complete code, as used in my own app:
- (void)setupAVCapture {
//-- Setup Capture Session.
_session = [[AVCaptureSession alloc] init];
[_session beginConfiguration];
//-- Set preset session size.
[_session setSessionPreset:AVCaptureSessionPreset1920x1080];
//-- Create a video device and input from that device. Add the input to the capture session.
AVCaptureDevice * videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
if(videoDevice == nil)
assert(0);
//-- Add the device to the session.
NSError *error;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];
if(error)
assert(0);
[_session addInput:input];
//-- Create the output for the capture session.
AVCaptureVideoDataOutput * dataOutput = [[AVCaptureVideoDataOutput alloc] init];
[dataOutput setAlwaysDiscardsLateVideoFrames:YES]; // Probably want to set this to NO when recording
//-- Set to YUV420.
[dataOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarFullRange]
forKey:(id)kCVPixelBufferPixelFormatTypeKey]]; // Necessary for manual preview
// Set dispatch to be on the main thread so OpenGL can do things with the data
[dataOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
[_session addOutput:dataOutput];
[_session commitConfiguration];
[_session startRunning];
}
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection
{
CFDictionaryRef metadataDict = CMCopyDictionaryOfAttachments(NULL,
sampleBuffer, kCMAttachmentMode_ShouldPropagate);
NSDictionary *metadata = [[NSMutableDictionary alloc]
initWithDictionary:(__bridge NSDictionary*)metadataDict];
CFRelease(metadataDict);
NSDictionary *exifMetadata = [[metadata
objectForKey:(NSString *)kCGImagePropertyExifDictionary] mutableCopy];
self.autoBrightness = [[exifMetadata
objectForKey:(NSString *)kCGImagePropertyExifBrightnessValue] floatValue];
float oldMin = -4.639957; // dark
float oldMax = 4.639957; // light
if (self.autoBrightness > oldMax) oldMax = self.autoBrightness; // adjust oldMax if brighter than expected oldMax
self.lumaThreshold = ((self.autoBrightness - oldMin) * ((3.0 - 1.0) / (oldMax - oldMin))) + 1.0;
NSLog(#"brightnessValue %f", self.autoBrightness);
NSLog(#"lumaThreshold %f", self.lumaThreshold);
}
The lumaThreshold variable is sent as a uniform to my fragment shader, which multiplies the Y sampler texture by it to find the ideal luminosity based on the brightness of the environment. Right now it uses the back camera; I'll probably switch to the front camera, since I'm only changing the "brightness" of the screen to adjust for indoor/outdoor viewing, and the user's eyes are on the front of the device (and not the back).
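For context, the uniform is fed to the shader along these lines each frame (the program handle and uniform name here are illustrative, not the exact ones in my project):
// After linking the GL program:
GLint lumaThresholdUniform = glGetUniformLocation(program, "lumaThreshold");
// Each frame, before drawing the textured quad:
glUseProgram(program);
glUniform1f(lumaThresholdUniform, self.lumaThreshold);
// In the fragment shader the Y (luma) sample is then scaled by this uniform, e.g.:
//   gl_FragColor = vec4(vec3(texture2D(SamplerY, texCoord).r * lumaThreshold), 1.0);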
I created a 'mirror'-like view in my app that uses the front camera to show a 'mirror' to the user. The problem I'm having is that I have not touched this code in weeks (and it did work then) but now I'm testing it again and it's not working. The code is the same as before, there are no errors coming up, and the view in the storyboard is exactly the same as before. I have no idea what is going on, so I was hoping that this website would help.
Here is my code:
if([UIImagePickerController isCameraDeviceAvailable:UIImagePickerControllerCameraDeviceFront]) {
//If the front camera is available, show the camera
AVCaptureSession *session = [[AVCaptureSession alloc] init];
AVCaptureOutput *output = [[AVCaptureStillImageOutput alloc] init];
[session addOutput:output];
//Setup camera input
NSArray *possibleDevices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
//You could check for front or back camera here, but for simplicity just grab the first device
AVCaptureDevice *device = [possibleDevices objectAtIndex:1];
NSError *error = nil;
// create an input and add it to the session
AVCaptureDeviceInput* input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error]; //Handle errors
//set the session preset
session.sessionPreset = AVCaptureSessionPresetHigh; //Or other preset supported by the input device
[session addInput:input];
AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:session];
//Now you can add this layer to a view of your view controller
[cameraView.layer addSublayer:previewLayer];
previewLayer.frame = self.cameraView.bounds;
[session startRunning];
if ([session isRunning]) {
NSLog(#"The session is running");
}
if ([session isInterrupted]) {
NSLog(#"The session has been interupted");
}
} else {
//Tell the user they don't have a front facing camera
}
Thank you in advance.
Not sure if this is the problem, but there is an inconsistency between your code and the comments. The inconsistency is in the following line of code:
AVCaptureDevice *device = [possibleDevices objectAtIndex:1];
The comment above it says "...for simplicity just grab the first device". However, the code is grabbing the second device; NSArray is indexed from 0. I believe the comment should be corrected, as I think you are assuming the front camera will be the second device in the array.
If you are working on the assumption that the first device is the back camera and the second device is the front camera then this is a dangerous assumption. It would be much safer and more future proof to check the list of possibleDevices for the device that is the front camera.
The following code will enumerate the list of possibleDevices and create input using the front camera.
// Find the front camera and create an input and add it to the session
AVCaptureDeviceInput* input = nil;
for(AVCaptureDevice *device in possibleDevices) {
if ([device position] == AVCaptureDevicePositionFront) {
NSError *error = nil;
input = [AVCaptureDeviceInput deviceInputWithDevice:device
error:&error]; //Handle errors
break;
}
}
Update: I have just cut and pasted the code exactly as it is in the question into a simple project and it is working fine for me. I am seeing the video from the front camera. You should probably look elsewhere for the issue. First, I would be inclined to check the cameraView and associated layers.
I've been able to use AVFoundation's AVAssetReader class to upload video frames into an OpenGL ES texture. It has a caveat, however, in that it fails when used with an AVURLAsset that points to remote media. This failure isn't well documented, and I'm wondering if there's any way around the shortcoming.
There's some API that was released with iOS 6 that I've been able to use to make the process a breeze. It doesn't use AVAssetReader at all, and instead relies on a class called AVPlayerItemVideoOutput. An instance of this class can be added to any AVPlayerItem instance via a new -addOutput: method.
Unlike AVAssetReader, this class works fine for AVPlayerItems that are backed by a remote AVURLAsset, and it also has the benefit of allowing a more sophisticated playback interface that supports non-linear playback via -copyPixelBufferForItemTime:itemTimeForDisplay: (instead of AVAssetReader's severely limiting -copyNextSampleBuffer method).
SAMPLE CODE
// Initialize the AVFoundation state
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:someUrl options:nil];
[asset loadValuesAsynchronouslyForKeys:[NSArray arrayWithObject:@"tracks"] completionHandler:^{
NSError* error = nil;
AVKeyValueStatus status = [asset statusOfValueForKey:@"tracks" error:&error];
if (status == AVKeyValueStatusLoaded)
{
NSDictionary* settings = @{ (id)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithInt:kCVPixelFormatType_32BGRA] };
AVPlayerItemVideoOutput* output = [[[AVPlayerItemVideoOutput alloc] initWithPixelBufferAttributes:settings] autorelease];
AVPlayerItem* playerItem = [AVPlayerItem playerItemWithAsset:asset];
[playerItem addOutput:output];
AVPlayer* player = [AVPlayer playerWithPlayerItem:playerItem];
// Assume some instance variables exist here. You'll need them to control the
// playback of the video (via the AVPlayer), and to copy sample buffers (via the AVPlayerItemVideoOutput).
[self setPlayer:player];
[self setPlayerItem:playerItem];
[self setOutput:output];
}
else
{
NSLog(#"%# Failed to load the tracks.", self);
}
}];
// Now at any later point in time, you can get a pixel buffer
// that corresponds to the current AVPlayer state like this:
CVPixelBufferRef buffer = [[self output] copyPixelBufferForItemTime:[[self playerItem] currentTime] itemTimeForDisplay:nil];
Once you've got your buffer, you can upload it to OpenGL however you want. I recommend the horribly documented CVOpenGLESTextureCacheCreateTextureFromImage() function, because you'll get hardware acceleration on all the newer devices, which is much faster than glTexSubImage2D(). See Apple's GLCameraRipple and RosyWriter demos for examples.
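For reference, the per-frame upload with the texture cache looks roughly like this for a BGRA buffer. This is an untested sketch: eaglContext stands in for your existing EAGLContext, and buffer is the pixel buffer returned by -copyPixelBufferForItemTime:itemTimeForDisplay: above.
// One-time setup:
CVOpenGLESTextureCacheRef textureCache = NULL;
CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, eaglContext, NULL, &textureCache);

// Per frame:
CVOpenGLESTextureRef texture = NULL;
CVReturn err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                            textureCache,
                                                            buffer,
                                                            NULL,              // texture attributes
                                                            GL_TEXTURE_2D,
                                                            GL_RGBA,           // internal format
                                                            (GLsizei)CVPixelBufferGetWidth(buffer),
                                                            (GLsizei)CVPixelBufferGetHeight(buffer),
                                                            GL_BGRA,           // matches kCVPixelFormatType_32BGRA
                                                            GL_UNSIGNED_BYTE,
                                                            0,                 // plane index
                                                            &texture);
if (err == kCVReturnSuccess) {
    glBindTexture(CVOpenGLESTextureGetTarget(texture), CVOpenGLESTextureGetName(texture));
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    // ... draw with it ...
    CFRelease(texture);
}
CVOpenGLESTextureCacheFlush(textureCache, 0);
CVBufferRelease(buffer);  // the copy... method follows the Create Rule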
As of 2023, CGLTexImageIOSurface2D is much faster than CVOpenGLESTextureCacheCreateTextureFromImage() for getting CVPixelBuffer data into an OpenGL texture.
Ensure the CVPixelBuffers are IOSurface backed and in the right format:
videoOutput = [[AVPlayerItemVideoOutput alloc] initWithPixelBufferAttributes:@{
(id)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_32BGRA),
(id)kCVPixelBufferIOSurfacePropertiesKey: @{},
}];
videoOutput.suppressesPlayerRendering = YES;
GLuint texture;
glGenTextures(1, &texture);
Then to get each frame
pixelBuffer = [output copyPixelBufferForItemTime:currentTime itemTimeForDisplay:NULL];
if (NULL == pixelBuffer) return;
IOSurfaceRef surface = CVPixelBufferGetIOSurface(pixelBuffer);
if (!surface) return;
glBindTexture(GL_TEXTURE_RECTANGLE, texture);
CGLError error = CGLTexImageIOSurface2D(CGLGetCurrentContext(),
GL_TEXTURE_RECTANGLE,
GL_RGBA,
(int)IOSurfaceGetWidthOfPlane(surface, 0),
(int)IOSurfaceGetHeightOfPlane(surface, 0),
GL_BGRA,
GL_UNSIGNED_INT_8_8_8_8_REV,
surface,
0);
In my program I want the user to be able to:
record his voice,
pause the recording process,
listen to what he recorded
and then continue recording.
I have managed to get to the point where I can record and play the recordings with AVAudioRecorder and AVAudioPlayer. But whenever I try to record, pause recording and then play, the playing part fails with no error.
I can guess that the reason it's not playing is because the audio file hasn't been saved yet and is still in memory or something.
Is there a way I can play paused recordings? If there is, please tell me how.
I'm using Xcode 4.3.2.
If you want to play the recording, then yes, you have to stop the recording before you can load the file into an AVAudioPlayer instance.
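For that simple case, something along these lines works (a sketch: error handling is omitted and the variable names are illustrative):
[recorder stop];  // finalizes the file on disk
NSError *playError = nil;
AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithContentsOfURL:recorder.url
                                                               error:&playError];
[player prepareToPlay];
[player play];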
If you want to play back some of the recording and then add more to it after listening, or record into the middle, then you're in for some trouble.
You have to create a new audio file and then combine them together.
This was my solution:
// Generate a composition of the two audio assets that will be combined into
// a single track
AVMutableComposition* composition = [AVMutableComposition composition];
AVMutableCompositionTrack* audioTrack = [composition addMutableTrackWithMediaType:AVMediaTypeAudio
preferredTrackID:kCMPersistentTrackID_Invalid];
// grab the two audio assets as AVURLAssets according to the file paths
AVURLAsset* masterAsset = [[AVURLAsset alloc] initWithURL:[NSURL fileURLWithPath:self.masterFile] options:nil];
AVURLAsset* activeAsset = [[AVURLAsset alloc] initWithURL:[NSURL fileURLWithPath:self.newRecording] options:nil];
NSError* error = nil;
// grab the portion of interest from the master asset
[audioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, masterAsset.duration)
ofTrack:[[masterAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0]
atTime:kCMTimeZero
error:&error];
if (error)
{
// report the error
return;
}
// append the entirety of the active recording
[audioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, activeAsset.duration)
ofTrack:[[activeAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0]
atTime:masterAsset.duration
error:&error];
if (error)
{
// report the error
return;
}
// now export the two files
// create the export session
// no need for a retain here, the session will be retained by the
// completion handler since it is referenced there
AVAssetExportSession* exportSession = [AVAssetExportSession
exportSessionWithAsset:composition
presetName:AVAssetExportPresetAppleM4A];
if (nil == exportSession)
{
// report the error
return;
}
NSString* combined = @"combined file path";// create a new file for the combined file
// configure export session output with all our parameters
exportSession.outputURL = [NSURL fileURLWithPath:combined]; // output path
exportSession.outputFileType = AVFileTypeAppleM4A; // output file type
[exportSession exportAsynchronouslyWithCompletionHandler:^{
// export status changed, check to see if it's done, errored, waiting, etc
switch (exportSession.status)
{
case AVAssetExportSessionStatusFailed:
break;
case AVAssetExportSessionStatusCompleted:
break;
case AVAssetExportSessionStatusWaiting:
break;
default:
break;
}
NSError* error = nil;
// your code for dealing with the now combined file
}];
I can't take full credit for this work, but it was pieced together from the input of a couple of others:
AVAudioRecorder / AVAudioPlayer - append recording to file
(I can't find the other link at the moment)
We had the same requirements for our app as the OP described, and ran into the same issues (i.e., the recording has to be stopped, instead of paused, if the user wants to listen to what she has recorded up to that point). Our app (project's Github repo) uses AVQueuePlayer for playback and a method similar to kermitology's answer to concatenate the partial recordings, with some notable differences:
implemented in Swift
concatenates multiple recordings into one
no messing with tracks
The rationale behind the last item is that simple recordings with AVAudioRecorder will have one track, and the main reason for this whole workaround is to concatenate those single tracks in the assets (see Addendum 3). So why not use AVMutableComposition's insertTimeRange method instead, which takes an AVAsset instead of an AVAssetTrack?
Relevant parts: (full code)
import UIKit
import AVFoundation
class RecordViewController: UIViewController {
/* App allows volunteers to record newspaper articles for the
blind and print-impaired, hence the name.
*/
var articleChunks = [AVURLAsset]()
func concatChunks() {
let composition = AVMutableComposition()
/* `CMTimeRange` to store total duration and know when to
insert subsequent assets.
*/
var insertAt = CMTimeRange(start: kCMTimeZero, end: kCMTimeZero)
repeat {
let asset = self.articleChunks.removeFirst()
let assetTimeRange =
CMTimeRange(start: kCMTimeZero, end: asset.duration)
do {
try composition.insertTimeRange(assetTimeRange,
of: asset,
at: insertAt.end)
} catch {
NSLog("Unable to compose asset track.")
}
let nextDuration = insertAt.duration + assetTimeRange.duration
insertAt = CMTimeRange(start: kCMTimeZero, duration: nextDuration)
} while self.articleChunks.count != 0
let exportSession =
AVAssetExportSession(
asset: composition,
presetName: AVAssetExportPresetAppleM4A)
exportSession?.outputFileType = AVFileType.m4a
exportSession?.outputURL = /* create URL for output */
// exportSession?.metadata = ...
exportSession?.exportAsynchronously {
switch exportSession?.status {
case .unknown?: break
case .waiting?: break
case .exporting?: break
case .completed?: break
case .failed?: break
case .cancelled?: break
case .none: break
}
}
/* Clean up (delete partial recordings, etc.) */
}
This diagram helped me get my head around what expects what and what inherits from where. (NSObject is implicitly the superclass wherever there is no inheritance arrow.)
Addendum 1: I had my reservations regarding the switch part instead of using KVO on AVAssetExportSessionStatus, but the docs are clear that exportAsynchronously's callback block "is invoked when writing is complete or in the event of writing failure".
Addendum 2: Just in case someone has issues with AVQueuePlayer: 'An AVPlayerItem cannot be associated with more than one instance of AVPlayer'
Addendum 3: Unless you are recording in stereo, but mobile devices have one input as far as I know. Also, using fancy audio mixing would also require the use of AVCompositionTrack. A good SO thread: Proper AVAudioRecorder Settings for Recording Voice?
RecordAudioViewController.h
#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>
#import <CoreAudio/CoreAudioTypes.h>
@interface record_audio_testViewController : UIViewController <AVAudioRecorderDelegate> {
IBOutlet UIButton * btnStart;
IBOutlet UIButton * btnPlay;
IBOutlet UIActivityIndicatorView * actSpinner;
BOOL toggle;
//Variables setup for access in the class:
NSURL * recordedTmpFile;
AVAudioRecorder * recorder;
NSError * error;
}
@property (nonatomic,retain)IBOutlet UIActivityIndicatorView * actSpinner;
@property (nonatomic,retain)IBOutlet UIButton * btnStart;
@property (nonatomic,retain)IBOutlet UIButton * btnPlay;
- (IBAction) start_button_pressed;
- (IBAction) play_button_pressed;
@end
RecordAudioViewController.m
#import "RecordAudioViewController.h"

@implementation record_audio_testViewController

@synthesize actSpinner, btnStart, btnPlay;
- (void)viewDidLoad {
[super viewDidLoad];
//Start the toggle in true mode.
toggle = YES;
btnPlay.hidden = YES;
//Instantiate an instance of the AVAudioSession object.
AVAudioSession * audioSession = [AVAudioSession sharedInstance];
//Setup the audioSession for playback and record.
//We could just use record and then switch it to playback later, but
//since we are going to do both lets set it up once.
[audioSession setCategory:AVAudioSessionCategoryPlayAndRecord error: &error];
//Activate the session
[audioSession setActive:YES error: &error];
}
- (IBAction) start_button_pressed{
if(toggle)
{
toggle = NO;
[actSpinner startAnimating];
[btnStart setTitle:#"Stop Recording" forState: UIControlStateNormal ];
btnPlay.enabled = toggle;
btnPlay.hidden = !toggle;
//Begin the recording session.
//Error handling removed. Please add to your own code.
//Setup the dictionary object with all the recording settings that this
//Recording session will use.
//It's not clear to me which of these are required and which are the bare minimum.
//This is a good resource: http://www.totodotnet.net/tag/avaudiorecorder/
NSMutableDictionary* recordSetting = [[NSMutableDictionary alloc] init];
[recordSetting setValue :[NSNumber numberWithInt:kAudioFormatAppleIMA4] forKey:AVFormatIDKey];
[recordSetting setValue:[NSNumber numberWithFloat:44100.0] forKey:AVSampleRateKey];
[recordSetting setValue:[NSNumber numberWithInt: 2] forKey:AVNumberOfChannelsKey];
//Now that we have our settings, we are going to instantiate our recorder instance.
//Generate a temp file for use by the recording.
//This sample was one I found online and seems to be a good choice for making a tmp file that
//will not overwrite an existing one.
//I know this is a mess of collapsed things into 1 call. I can break it out if need be.
recordedTmpFile = [NSURL fileURLWithPath:[NSTemporaryDirectory() stringByAppendingPathComponent: [NSString stringWithFormat: @"%.0f.%@", [NSDate timeIntervalSinceReferenceDate] * 1000.0, @"caf"]]];
NSLog(#"Using File called: %#",recordedTmpFile);
//Setup the recorder to use this file and record to it.
recorder = [[ AVAudioRecorder alloc] initWithURL:recordedTmpFile settings:recordSetting error:&error];
//Use the recorder to start the recording.
//I'm not sure why we set the delegate to self yet.
//Found this in another example, but I'm fuzzy on this still.
[recorder setDelegate:self];
//We call this to start the recording process and initialize
//the subsystems so that when we actually say "record" it starts right away.
[recorder prepareToRecord];
//Start the actual Recording
[recorder record];
//There is an optional method for doing the recording for a limited time see
//[recorder recordForDuration:(NSTimeInterval) 10]
}
else
{
toggle = YES;
[actSpinner stopAnimating];
[btnStart setTitle:#"Start Recording" forState:UIControlStateNormal ];
btnPlay.enabled = toggle;
btnPlay.hidden = !toggle;
NSLog(#"Using File called: %#",recordedTmpFile);
//Stop the recorder.
[recorder stop];
}
}
- (void)didReceiveMemoryWarning {
// Releases the view if it doesn't have a superview.
[super didReceiveMemoryWarning];
// Release any cached data, images, etc that aren't in use.
}
-(IBAction) play_button_pressed{
//The play button was pressed...
//Setup the AVAudioPlayer to play the file that we just recorded.
AVAudioPlayer * avPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:recordedTmpFile error:&error];
[avPlayer prepareToPlay];
[avPlayer play];
}
- (void)viewDidUnload {
// Release any retained subviews of the main view.
// e.g. self.myOutlet = nil;
//Clean up the temp file.
NSFileManager * fm = [NSFileManager defaultManager];
[fm removeItemAtPath:[recordedTmpFile path] error:&error];
//Release the remaining objects (never call -dealloc directly).
[recorder release];
recorder = nil;
recordedTmpFile = nil;
}
- (void)dealloc {
[super dealloc];
}
@end
RecordAudioViewController.xib
Add two buttons: one to begin recording and another to play the recording.