AVPlayerItem with HLS URL can't step forward/backward - objective-c

I have an HLS URL which plays in my AVPlayer, but I'm unable to use the [playerItem stepByCount:] method. When I call it, it doesn't do anything.
Also, if I call [playerItem canStepBackward], it always returns NO. Is there some other way to step forward and backward when I'm playing an HLS stream?
My setup is:
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:_URL options:nil];
self.playerItem = [AVPlayerItem playerItemWithAsset:asset];
[self.player replaceCurrentItemWithPlayerItem:self.playerItem];

// later on...
AVPlayerItem *playerItem = self.player.currentItem;
if ([playerItem canStepBackward]) {
    if ([self isPlaying]) {
        [self.player pause];
    }
    [playerItem stepByCount:-1];
}
This all works perfectly when I use MP4s, but with HLS canStepBackward always returns NO, and if I ignore it and just call [playerItem stepByCount:-1]; anyway, it does nothing.

If your player's item doesn't support stepping backward or forward, you can use the seek method of AVPlayer for this purpose instead, e.g. (in Swift):
// Step forward by 1 second
player.seek(to: CMTime(seconds: player.currentTime().seconds + 1, preferredTimescale: CMTimeScale(NSEC_PER_SEC)))
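Since the question is in Objective-C, a rough equivalent would be the following sketch (it assumes the same self.player from the question):
// Step roughly one second forward from the current position.
// Zero tolerances ask AVPlayer for as accurate a seek as the stream allows.
CMTime oneSecond = CMTimeMakeWithSeconds(1.0, NSEC_PER_SEC);
CMTime target = CMTimeAdd(self.player.currentTime, oneSecond);
[self.player seekToTime:target toleranceBefore:kCMTimeZero toleranceAfter:kCMTimeZero];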

Related

Memory leak from looping SKVideoNode (only on actual device)

I have a memory leak that I cannot diagnose. I have tried multiple approaches to creating a seamlessly looped video. Besides AVPlayerLooper, all of the approaches I've encountered and tried involve creating an observer to watch AVPlayerItemDidPlayToEndTimeNotification and then either seeking to the beginning of the video (in the case of AVPlayer) or inserting the video to be looped into the queue (in the case of AVQueuePlayer). Both seem to have similar performance, but both also have a consistent memory leak related to the seekToTime method (in the case of AVPlayer) and the insertItem method (in the case of AVQueuePlayer). My end goal is to create a subclass of SKVideoNode that loops by default. Below is my code for the subclass:
#import "SDLoopingVideoNode.h"
#import <AVFoundation/AVFoundation.h>

@interface SDLoopingVideoNode()
@property AVQueuePlayer *avQueuePlayer;
@property AVPlayerLooper *playerLooper;
@end

@implementation SDLoopingVideoNode

-(instancetype)initWithPathToResource:(NSString *)path withFiletype:(NSString *)filetype
{
    if (self = [super init])
    {
        NSString *resourcePath = [[NSBundle mainBundle] pathForResource:path ofType:filetype];
        NSURL *videoURL = [NSURL fileURLWithPath:resourcePath];
        AVAsset *videoAsset = [AVAsset assetWithURL:videoURL];
        AVPlayerItem *videoItem = [AVPlayerItem playerItemWithAsset:videoAsset];
        self.avQueuePlayer = [[AVQueuePlayer alloc] initWithItems:@[videoItem]];

        NSNotificationCenter *noteCenter = [NSNotificationCenter defaultCenter];
        [noteCenter addObserverForName:AVPlayerItemDidPlayToEndTimeNotification
                                object:nil
                                 queue:nil
                            usingBlock:^(NSNotification *note) {
                                AVPlayerItem *video = [[AVPlayerItem alloc] initWithURL:videoURL];
                                [self.avQueuePlayer insertItem:video afterItem:nil];
                                NSLog(@"Video changed");
                            }];

        self = (SDLoopingVideoNode *)[[SKVideoNode alloc] initWithAVPlayer:self.avQueuePlayer];
        return self;
    }
    return nil;
}

@end
And here is how the subclass is initialized in didMoveToView:
SDLoopingVideoNode *videoNode = [[SDLoopingVideoNode alloc] initWithPathToResource:@"147406" withFiletype:@"mp4"];
[videoNode setSize:CGSizeMake(self.size.width, self.size.height)];
[videoNode setAnchorPoint:CGPointMake(0.5, 0.5)];
[videoNode setPosition:CGPointMake(0, 0)];
[self addChild:videoNode];
[videoNode play];
The short answer is that you will not be able to get that working with AVPlayer. Believe me, I have tried. Instead, it is possible to do seamless looping by using the H.264 hardware to decode and then re-encode each video frame as a keyframe (github link here). I have also built a seamless looping layer that supports a full alpha channel. Performance, even for full screen 1x1 video on an iPad or iPad Pro, is great. Also, there are no memory leaks with this code.
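(For reference, the AVPlayerLooper route that the question mentions but never actually wires up would look roughly like this on iOS 10+. This is only a sketch, not the approach recommended in the answer above; videoURL, queuePlayer, and playerLooper are assumed to be your own strong properties.)
// Sketch: AVPlayerLooper-based looping (iOS 10+). The looper and the queue
// player must be kept in strong properties, otherwise looping silently stops.
AVPlayerItem *item = [AVPlayerItem playerItemWithURL:videoURL];
self.queuePlayer = [AVQueuePlayer queuePlayerWithItems:@[item]];
self.playerLooper = [AVPlayerLooper playerLooperWithPlayer:self.queuePlayer
                                              templateItem:item];
SKVideoNode *node = [SKVideoNode videoNodeWithAVPlayer:self.queuePlayer];
[node play];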

How to stop an infinite loop and allow touches to be accepted during the loop

while (true)
{
    endtime = CFAbsoluteTimeGetCurrent();
    double difftime = endtime - starttime;
    NSLog(@"The time difference = %f", difftime);
    if (difftime >= 10)
    {
        NSURL *url = [NSURL fileURLWithPath:[NSString stringWithFormat:@"%@/a0.wav", [[NSBundle mainBundle] resourcePath]]];
        NSError *error;
        audioPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:&error];
        audioPlayer.numberOfLoops = 0;
        [audioPlayer play];
        starttime = endtime;
    }
}
When I call this method, my application enters an infinite loop and doesn't accept any touches. How can I solve this, please?
It looks like you want it to play a sound every 10 seconds. You would be better off using a timer, like this...
self.yourTimer = [NSTimer scheduledTimerWithTimeInterval:10.0
                                                  target:self
                                                selector:@selector(repeatedAction)
                                                userInfo:nil
                                                 repeats:YES];
Then have your repeating function...
- (void)repeatedAction
{
    // do your stuff in here
}
The UI will be responsive all the time and you can have a cancel button with an action like this...
- (void)cancelRepeatedAction
{
    [self.yourTimer invalidate];
}
This will stop the timer and the repeated action.
HOWEVER
You are also creating a new AVAudioPlayer and reloading the file EVERY time you run your action.
A better approach would be to load the file once, keep the player around, and just replay it, as shown below.
(If the file really did come from the network, you would download it once, asynchronously, and cache it.)
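A rough sketch of that idea (the names audioPlayer and yourTimer are illustrative; it assumes both are strong properties on the controller):
- (void)viewDidLoad {
    [super viewDidLoad];

    // Load the bundled file once and keep the player around.
    NSURL *url = [[NSBundle mainBundle] URLForResource:@"a0" withExtension:@"wav"];
    NSError *error = nil;
    self.audioPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:&error];
    [self.audioPlayer prepareToPlay];

    // Fire every 10 seconds instead of spinning in a while loop.
    self.yourTimer = [NSTimer scheduledTimerWithTimeInterval:10.0
                                                      target:self
                                                    selector:@selector(repeatedAction)
                                                    userInfo:nil
                                                     repeats:YES];
}

- (void)repeatedAction {
    // The player already has the file loaded; just replay it.
    [self.audioPlayer play];
}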
This is not how you structure an event-driven application. You are blocking the event processing queue (NSRunLoop) by not returning out of this function. So no touch events will get a chance to be processed until you return from this loop.
You need to play the sounds asynchronously - which AVAudioPlayer can do for you. Read up on the ADC documentation about queueing and playing music in AVFoundation.
For a start, you don't need the while loop at all. Just start the playback and register for notifications of playback progress. Make _audioPlayer an ivar of your controller class, so that it outlives the method that loads the file and initiates playback:
NSURL *url = [NSURL fileURLWithPath:[NSString stringWithFormat:@"%@/a0.wav", [[NSBundle mainBundle] resourcePath]]];
NSError *error;
_audioPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:&error];
_audioPlayer.numberOfLoops = 0;
[_audioPlayer play];
Then make your controller conform to the AVAudioPlayerDelegate protocol, set it as the player's delegate, and provide at least this method:
- (void)audioPlayerDidFinishPlaying:(AVAudioPlayer *)player successfully:(BOOL)flag {
}
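For the original goal (a sound roughly every 10 seconds), one possible body for that callback is to schedule the next playback instead of looping. This is only a sketch; it assumes the controller adopts AVAudioPlayerDelegate and that _audioPlayer.delegate was set to self after creating the player:
- (void)audioPlayerDidFinishPlaying:(AVAudioPlayer *)player successfully:(BOOL)flag {
    // Wait ten seconds, then play the same sound again. dispatch_after returns
    // immediately, so the run loop stays free to process touches.
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(10 * NSEC_PER_SEC)),
                   dispatch_get_main_queue(), ^{
        [player play];
    });
}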
You can also provide a touch event handler in your view, which calls a pauseAudio: method in the controller class, which then calls:
[audioPlayer pause];
Have a good read through the docs, and study the sample code:
iOS Developer Library - ADC AVAudioPlayer Class Reference
Have a look at how this app is structured, with a controller registering for notifications and using a delegate for callbacks. It also uses a timer and features GUI updates.
iOS Developer Library - avTouch - media player sample code

AVPlayer plays on the simulator but not on a real device

I'm implementing a basic audio player in order to play remote audio files. The files are in MP3 format.
The code I wrote works fine on the simulator but doesn't work on a real device. However, the same URL works fine if I load it in Safari on the same device, so I'm not sure what I'm missing.
Below is my code:
self.musicPlayer = [AVPlayer playerWithURL:[NSURL URLWithString:urlTrack]];
[self.musicPlayer play];
Something extremely simple. The music player property is defined as:
@property (nonatomic, retain) AVPlayer *musicPlayer;
I also tried using an AVPlayerItem but the result is the same. Here is the code I have used
AVPlayerItem *playerItem = [AVPlayerItem playerItemWithURL:[NSURL URLWithString:urlTrack]];
self.musicPlayer = [AVPlayer playerWithPlayerItem:playerItem];
[self.musicPlayer play];
Finally I tried to use the code below
self.musicPlayer = [AVPlayer playerWithURL:[NSURL URLWithString:urlTrack]];
NSLog(@"Player created: %d", self.musicPlayer.status);
[self.musicPlayer addObserver:self forKeyPath:@"status" options:0 context:nil];

- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context {
    NSLog(@"Player created: %d", self.musicPlayer.status);
    if (object == self.musicPlayer && [keyPath isEqualToString:@"status"]) {
        if (self.musicPlayer.status == AVPlayerStatusReadyToPlay) {
            [self.musicPlayer play];
        } else if (self.musicPlayer.status == AVPlayerStatusFailed) {
            // something went wrong
        }
    }
}
When observeValueForKeyPath is invoked, the player status is 1 (i.e. AVPlayerStatusReadyToPlay) and play is executed, but there is still no sound.
I tried several files like:
http://www.nimh.nih.gov/audio/neurogenesis.mp3
http://www.robtowns.com/music/blind_willie.mp3
Any idea?
Tnx
Check the spelling of your filename. The device's file system is case-sensitive, the simulator's is not...
Also, check whether the ring/silent switch is set to silent; you won't hear any sound when it is. To prevent that, use
NSError *_error = nil;
[[AVAudioSession sharedInstance] setCategory: AVAudioSessionCategoryPlayback error: &_error];
right before you initialize the player.
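If it still fails on the device, it can also help to look at the error AVFoundation attaches when the status goes to AVPlayerStatusFailed. A small sketch, assuming the same musicPlayer property from the question:
- (void)logPlayerFailure {
    // Both the player and its current item can carry an NSError describing
    // what went wrong (bad URL, transport security policy, unsupported codec, ...).
    NSLog(@"Player error: %@", self.musicPlayer.error);
    NSLog(@"Item error: %@", self.musicPlayer.currentItem.error);
    NSLog(@"Item error log: %@", self.musicPlayer.currentItem.errorLog);
}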
I tried all of these things but they did not work; the only thing that worked for me was setting the AVAudioSession category to playback.
Here is the code I used:
private var audioPlayer: AVAudioPlayer!

guard let path = Bundle.main.path(forResource: name, ofType: "mp3") else {
    print("can not find path")
    return
}
let url = URL(fileURLWithPath: path)
do {
    try AVAudioSession.sharedInstance().setCategory(.playback)
    audioPlayer = try AVAudioPlayer(contentsOf: url)
    audioPlayer.prepareToPlay()
    audioPlayer.play()
} catch {
    print("something went wrong: \(error)")
}
Please do not ignore this line in the above code:
try AVAudioSession.sharedInstance().setCategory(.playback)

Can I use AVFoundation to stream downloaded video frames into an OpenGL ES texture?

I've been able to use AVFoundation's AVAssetReader class to upload video frames into an OpenGL ES texture. It has a caveat, however, in that it fails when used with an AVURLAsset that points to remote media. This failure isn't well documented, and I'm wondering if there's any way around the shortcoming.
There's some API that was released with iOS 6 that I've been able to use to make the process a breeze. It doesn't use AVAssetReader at all, and instead relies on a class called AVPlayerItemVideoOutput. An instance of this class can be added to any AVPlayerItem instance via a new -addOutput: method.
Unlike AVAssetReader, this class will work fine for AVPlayerItems that are backed by a remote AVURLAsset, and also has the benefit of allowing for a more sophisticated playback interface that supports non-linear playback via -copyPixelBufferForItemTime:itemTimeForDisplay: (instead of AVAssetReader's severely limiting -copyNextSampleBuffer method).
SAMPLE CODE
// Initialize the AVFoundation state
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:someUrl options:nil];
[asset loadValuesAsynchronouslyForKeys:[NSArray arrayWithObject:@"tracks"] completionHandler:^{
    NSError *error = nil;
    AVKeyValueStatus status = [asset statusOfValueForKey:@"tracks" error:&error];
    if (status == AVKeyValueStatusLoaded)
    {
        NSDictionary *settings = @{ (id)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithInt:kCVPixelFormatType_32BGRA] };
        AVPlayerItemVideoOutput *output = [[[AVPlayerItemVideoOutput alloc] initWithPixelBufferAttributes:settings] autorelease];
        AVPlayerItem *playerItem = [AVPlayerItem playerItemWithAsset:asset];
        [playerItem addOutput:output];
        AVPlayer *player = [AVPlayer playerWithPlayerItem:playerItem];
        // Assume some instance variables exist here. You'll need them to control the
        // playback of the video (via the AVPlayer), and to copy sample buffers (via the AVPlayerItemVideoOutput).
        [self setPlayer:player];
        [self setPlayerItem:playerItem];
        [self setOutput:output];
    }
    else
    {
        NSLog(@"%@ Failed to load the tracks.", self);
    }
}];

// Now at any later point in time, you can get a pixel buffer
// that corresponds to the current AVPlayer state like this:
CVPixelBufferRef buffer = [[self output] copyPixelBufferForItemTime:[[self playerItem] currentTime] itemTimeForDisplay:nil];
Once you've got your buffer, you can upload it to OpenGL however you want. I recommend the horribly documented CVOpenGLESTextureCacheCreateTextureFromImage() function, because you'll get hardware acceleration on all the newer devices, which is much faster than glTexSubImage2D(). See Apple's GLCameraRipple and RosyWriter demos for examples.
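For completeness, here is a rough sketch of that texture-cache path (a sketch only; it assumes an EAGLContext named context plus the output/playerItem properties from the sample above, and omits error handling — see GLCameraRipple for the real thing):
// Create the texture cache once, up front.
CVOpenGLESTextureCacheRef textureCache = NULL;
CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, context, NULL, &textureCache);

// Per frame: copy the current pixel buffer and wrap it in a GL texture.
CVPixelBufferRef pixelBuffer = [[self output] copyPixelBufferForItemTime:[[self playerItem] currentTime]
                                                      itemTimeForDisplay:nil];
if (pixelBuffer) {
    CVOpenGLESTextureRef texture = NULL;
    CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                 textureCache,
                                                 pixelBuffer,
                                                 NULL,
                                                 GL_TEXTURE_2D,
                                                 GL_RGBA,
                                                 (GLsizei)CVPixelBufferGetWidth(pixelBuffer),
                                                 (GLsizei)CVPixelBufferGetHeight(pixelBuffer),
                                                 GL_BGRA,
                                                 GL_UNSIGNED_BYTE,
                                                 0,
                                                 &texture);
    glBindTexture(CVOpenGLESTextureGetTarget(texture), CVOpenGLESTextureGetName(texture));
    // ... draw with the texture, then release it and the buffer ...
    CFRelease(texture);
    CVPixelBufferRelease(pixelBuffer);
}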
As of 2023, CGLTexImageIOSurface2D is much faster than CVOpenGLESTextureCacheCreateTextureFromImage() for getting CVPixelBuffer data into an OpenGL texture.
Ensure the CVPixelBuffers are IOSurface backed and in the right format:
videoOutput = [[AVPlayerItemVideoOutput alloc] initWithPixelBufferAttributes:@{
    (id)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_32BGRA),
    (id)kCVPixelBufferIOSurfacePropertiesKey: @{},
}];
videoOutput.suppressesPlayerRendering = YES;

GLuint texture;
glGenTextures(1, &texture);
Then, to get each frame:
pixelBuffer = [output copyPixelBufferForItemTime:currentTime itemTimeForDisplay:NULL];
if (NULL == pixelBuffer) return;

IOSurfaceRef surface = CVPixelBufferGetIOSurface(pixelBuffer);
if (!surface) return;

glBindTexture(GL_TEXTURE_RECTANGLE, texture);
CGLError error = CGLTexImageIOSurface2D(CGLGetCurrentContext(),
                                        GL_TEXTURE_RECTANGLE,
                                        GL_RGBA,
                                        (int)IOSurfaceGetWidthOfPlane(surface, 0),
                                        (int)IOSurfaceGetHeightOfPlane(surface, 0),
                                        GL_BGRA,
                                        GL_UNSIGNED_INT_8_8_8_8_REV,
                                        surface,
                                        0);
more info

Play a paused AVAudioRecorder file

In my program I want the user to be able to:
record his voice,
pause the recording process,
listen to what he recorded
and then continue recording.
I have managed to get to the point where I can record and play the recordings with AVAudioRecorder and AVAudioPlayer. But whenever I try to record, pause recording and then play, the playing part fails with no error.
I can guess that the reason it's not playing is because the audio file hasn't been saved yet and is still in memory or something.
Is there a way I can play paused recordings?
If there is, please tell me how.
I'm using Xcode 4.3.2.
If you want to play the recording, then yes you have to stop recording before you can load the file into the AVAudioPlayer instance.
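A minimal sketch of that simple case (assuming recorder is your AVAudioRecorder and player is a strong property so the player isn't deallocated mid-playback):
// Stop recording so the file is finalized on disk, then play it back
// from the same URL the recorder was writing to.
[recorder stop];

NSError *error = nil;
self.player = [[AVAudioPlayer alloc] initWithContentsOfURL:recorder.url error:&error];
[self.player play];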
If you want to be able to play back some of the recording, then add more to the recording after listening to it, or, say, record in the middle... then you're in for some trouble.
You have to create a new audio file and then combine them together.
This was my solution:
// Generate a composition of the two audio assets that will be combined into
// a single track
AVMutableComposition* composition = [AVMutableComposition composition];
AVMutableCompositionTrack* audioTrack = [composition addMutableTrackWithMediaType:AVMediaTypeAudio
preferredTrackID:kCMPersistentTrackID_Invalid];
// grab the two audio assets as AVURLAssets according to the file paths
AVURLAsset* masterAsset = [[AVURLAsset alloc] initWithURL:[NSURL fileURLWithPath:self.masterFile] options:nil];
AVURLAsset* activeAsset = [[AVURLAsset alloc] initWithURL:[NSURL fileURLWithPath:self.newRecording] options:nil];
NSError* error = nil;
// grab the portion of interest from the master asset
[audioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, masterAsset.duration)
ofTrack:[[masterAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0]
atTime:kCMTimeZero
error:&error];
if (error)
{
// report the error
return;
}
// append the entirety of the active recording
[audioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, activeAsset.duration)
ofTrack:[[activeAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0]
atTime:masterAsset.duration
error:&error];
if (error)
{
// report the error
return;
}
// now export the two files
// create the export session
// no need for a retain here, the session will be retained by the
// completion handler since it is referenced there
AVAssetExportSession* exportSession = [AVAssetExportSession
exportSessionWithAsset:composition
presetName:AVAssetExportPresetAppleM4A];
if (nil == exportSession)
{
// report the error
return;
}
NSString* combined = @"combined file path"; // create a new file for the combined file
// configure export session output with all our parameters
exportSession.outputURL = [NSURL fileURLWithPath:combined]; // output path
exportSession.outputFileType = AVFileTypeAppleM4A; // output file type
[exportSession exportAsynchronouslyWithCompletionHandler:^{
// export status changed, check to see if it's done, errored, waiting, etc
switch (exportSession.status)
{
case AVAssetExportSessionStatusFailed:
break;
case AVAssetExportSessionStatusCompleted:
break;
case AVAssetExportSessionStatusWaiting:
break;
default:
break;
}
NSError* error = nil;
// your code for dealing with the now combined file
}];
I can't take full credit for this work, but it was pieced together from the input of a couple of others:
AVAudioRecorder / AVAudioPlayer - append recording to file
(I can't find the other link at the moment)
We had the same requirements for our app as the OP described, and ran into the same issues (i.e., the recording has to be stopped, instead of paused, if the user wants to listen to what she has recorded up to that point). Our app (project's Github repo) uses AVQueuePlayer for playback and a method similar to kermitology's answer to concatenate the partial recordings, with some notable differences:
implemented in Swift
concatenates multiple recordings into one
no messing with tracks
The rationale behind the last item is that simple recordings with AVAudioRecorder will have one track, and the main reason for this whole workaround is to concatenate those single tracks in the assets (see Addendum 3). So why not use AVMutableComposition's insertTimeRange method instead, that takes an AVAsset instead of an AVAssetTrack?
Relevant parts: (full code)
import UIKit
import AVFoundation
class RecordViewController: UIViewController {
/* App allows volunteers to record newspaper articles for the
blind and print-impaired, hence the name.
*/
var articleChunks = [AVURLAsset]()
func concatChunks() {
let composition = AVMutableComposition()
/* `CMTimeRange` to store total duration and know when to
insert subsequent assets.
*/
var insertAt = CMTimeRange(start: kCMTimeZero, end: kCMTimeZero)
repeat {
let asset = self.articleChunks.removeFirst()
let assetTimeRange =
CMTimeRange(start: kCMTimeZero, end: asset.duration)
do {
try composition.insertTimeRange(assetTimeRange,
of: asset,
at: insertAt.end)
} catch {
NSLog("Unable to compose asset track.")
}
let nextDuration = insertAt.duration + assetTimeRange.duration
insertAt = CMTimeRange(start: kCMTimeZero, duration: nextDuration)
} while self.articleChunks.count != 0
let exportSession =
AVAssetExportSession(
asset: composition,
presetName: AVAssetExportPresetAppleM4A)
exportSession?.outputFileType = AVFileType.m4a
exportSession?.outputURL = /* create URL for output */
// exportSession?.metadata = ...
exportSession?.exportAsynchronously {
switch exportSession?.status {
case .unknown?: break
case .waiting?: break
case .exporting?: break
case .completed?: break
case .failed?: break
case .cancelled?: break
case .none: break
}
}
/* Clean up (delete partial recordings, etc.) */
}
This diagram helped me get my head around what expects what and what inherits from where. (NSObject is implicitly the superclass wherever there is no inheritance arrow.)
Addendum 1: I had my reservations regarding the switch part instead of using KVO on AVAssetExportSessionStatus, but the docs are clear that exportAsynchronously's callback block "is invoked when writing is complete or in the event of writing failure".
Addendum 2: Just in case someone has issues with AVQueuePlayer: 'An AVPlayerItem cannot be associated with more than one instance of AVPlayer'
Addendum 3: Unless you are recording in stereo, but mobile devices have one input as far as I know. Also, using fancy audio mixing would also require the use of AVCompositionTrack. A good SO thread: Proper AVAudioRecorder Settings for Recording Voice?
RecordAudioViewController.h
#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>
#import <CoreAudio/CoreAudioTypes.h>
@interface record_audio_testViewController : UIViewController <AVAudioRecorderDelegate> {
IBOutlet UIButton * btnStart;
IBOutlet UIButton * btnPlay;
IBOutlet UIActivityIndicatorView * actSpinner;
BOOL toggle;
//Variables setup for access in the class:
NSURL * recordedTmpFile;
AVAudioRecorder * recorder;
NSError * error;
}
@property (nonatomic, retain) IBOutlet UIActivityIndicatorView * actSpinner;
@property (nonatomic, retain) IBOutlet UIButton * btnStart;
@property (nonatomic, retain) IBOutlet UIButton * btnPlay;
- (IBAction) start_button_pressed;
- (IBAction) play_button_pressed;
@end
RecordAudioViewController.m
@synthesize actSpinner, btnStart, btnPlay;
- (void)viewDidLoad {
[super viewDidLoad];
//Start the toggle in true mode.
toggle = YES;
btnPlay.hidden = YES;
//Instantiate an instance of the AVAudioSession object.
AVAudioSession * audioSession = [AVAudioSession sharedInstance];
//Set up the audioSession for playback and record.
//We could just use record and then switch it to playback later, but
//since we are going to do both, let's set it up once.
[audioSession setCategory:AVAudioSessionCategoryPlayAndRecord error: &error];
//Activate the session
[audioSession setActive:YES error: &error];
}
- (IBAction) start_button_pressed{
if(toggle)
{
toggle = NO;
[actSpinner startAnimating];
[btnStart setTitle:@"Stop Recording" forState:UIControlStateNormal];
btnPlay.enabled = toggle;
btnPlay.hidden = !toggle;
//Begin the recording session.
//Error handling removed. Please add to your own code.
//Setup the dictionary object with all the recording settings that this
//recording session will use.
//It's not clear to me which of these are required and which are the bare minimum.
//This is a good resource: http://www.totodotnet.net/tag/avaudiorecorder/
NSMutableDictionary *recordSetting = [[NSMutableDictionary alloc] init];
[recordSetting setValue:[NSNumber numberWithInt:kAudioFormatAppleIMA4] forKey:AVFormatIDKey];
[recordSetting setValue:[NSNumber numberWithFloat:44100.0] forKey:AVSampleRateKey];
[recordSetting setValue:[NSNumber numberWithInt:2] forKey:AVNumberOfChannelsKey];
//Now that we have our settings we are going to instantiate the recorder.
//Generate a temp file for use by the recording.
//This sample was one I found online and seems to be a good choice for making a tmp file that
//will not overwrite an existing one.
//I know this is a mess of collapsed things into one call. I can break it out if need be.
recordedTmpFile = [NSURL fileURLWithPath:[NSTemporaryDirectory() stringByAppendingPathComponent:[NSString stringWithFormat:@"%.0f.%@", [NSDate timeIntervalSinceReferenceDate] * 1000.0, @"caf"]]];
NSLog(@"Using File called: %@", recordedTmpFile);
//Setup the recorder to use this file and record to it.
recorder = [[ AVAudioRecorder alloc] initWithURL:recordedTmpFile settings:recordSetting error:&error];
//Use the recorder to start the recording.
//Set self as the delegate so this controller receives AVAudioRecorderDelegate
//callbacks (e.g. audioRecorderDidFinishRecording:successfully:).
[recorder setDelegate:self];
//We call this to start the recording process and initialize
//the subsystems so that when we actually say "record" it starts right away.
[recorder prepareToRecord];
//Start the actual Recording
[recorder record];
//There is an optional method for doing the recording for a limited time see
//[recorder recordForDuration:(NSTimeInterval) 10]
}
else
{
toggle = YES;
[actSpinner stopAnimating];
[btnStart setTitle:@"Start Recording" forState:UIControlStateNormal];
btnPlay.enabled = toggle;
btnPlay.hidden = !toggle;
NSLog(@"Using File called: %@", recordedTmpFile);
//Stop the recorder.
[recorder stop];
}
}
- (void)didReceiveMemoryWarning {
// Releases the view if it doesn't have a superview.
[super didReceiveMemoryWarning];
// Release any cached data, images, etc that aren't in use.
}
-(IBAction) play_button_pressed{
//The play button was pressed...
//Setup the AVAudioPlayer to play the file that we just recorded.
AVAudioPlayer * avPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:recordedTmpFile error:&error];
[avPlayer prepareToPlay];
[avPlayer play];
}
- (void)viewDidUnload {
// Release any retained subviews of the main view.
// e.g. self.myOutlet = nil;
//Clean up the temp file.
NSFileManager * fm = [NSFileManager defaultManager];
[fm removeItemAtPath:[recordedTmpFile path] error:&error];
//Release the remaining objects (never call dealloc directly).
[recorder release];
recorder = nil;
recordedTmpFile = nil;
}
- (void)dealloc {
[super dealloc];
}
@end
RecordAudioViewController.xib
Add two buttons: one to begin recording and another to play the recording.