AVURLAsset for long sound on iPod - Objective-C

long time reader, first time asker...
I am making a music app which uses AVAssetReader to read MP3 data from the iTunes library. I need precise timing, so when I create an AVURLAsset I pass the AVURLAssetPreferPreciseDurationAndTimingKey option to extract accurate timing data. This has some overhead (I have no problems when I don't use it, but I need it!).
Everything works fine on iPhone (4) and iPad (1). I would also like it to work on my iPod touch (2nd gen), but it doesn't: if the sound file is too long (> ~7 minutes), the AVAssetReader cannot start reading and throws an error (AVFoundationErrorDomain error -11800).
It appears that I am hitting a wall in terms of the more limited resources of the iPod touch. Any ideas what is happening, or how to manage the overhead of creating the AVURLAsset so that it can handle long files?
(I tried running this with the performance tools, and I don't see a major spike in memory.)
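For reference, the asset is created along these lines (a simplified sketch rather than my exact code; it assumes the song URL comes from the media item's MPMediaItemPropertyAssetURL):
MPMediaItem *song = <#an item picked from the iTunes library#>;
NSURL *songURL = [song valueForProperty:MPMediaItemPropertyAssetURL];
// Request precise duration/timing when creating the asset (this is the expensive part).
NSDictionary *options = [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES]
                                                    forKey:AVURLAssetPreferPreciseDurationAndTimingKey];
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:songURL options:options];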
Thanks, Dan

Maybe you're starting to read too soon? As far as I understand, for MP3 it will need to go through the entire file in order to enable precise timing. So, try delaying the reading.
You can also try registering as an observer for some of the AVAsset properties. iOS 4.3 adds a 'readable' property. I've never tried it, but my guess would be that it's initially set to NO and gets set to YES as soon as the AVAsset has finished loading.
EDIT:
Actually, I just looked into the docs. You're supposed to use the AVAsynchronousKeyValueLoading protocol for that, and Apple provides an example:
NSURL *url = <#A URL that identifies an audiovisual asset such as a movie file#>;
AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:url options:nil];
NSArray *keys = [NSArray arrayWithObject:@"duration"];
[asset loadValuesAsynchronouslyForKeys:keys completionHandler:^() {
    NSError *error = nil;
    AVKeyValueStatus durationStatus = [asset statusOfValueForKey:@"duration" error:&error];
    switch (durationStatus) {
        case AVKeyValueStatusLoaded:
            [self updateUserInterfaceForDuration];
            break;
        case AVKeyValueStatusFailed:
            [self reportError:error forAsset:asset];
            break;
        case AVKeyValueStatusCancelled:
            // Do whatever is appropriate for cancellation.
            break;
    }
}];
If 'duration' won't help, try 'readable' (but as I mentioned before, 'readable' requires iOS 4.3). Maybe this will solve your issue.
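The same pattern should work for 'readable'; a minimal, untested sketch (assumes iOS 4.3 or later):
[asset loadValuesAsynchronouslyForKeys:[NSArray arrayWithObject:@"readable"] completionHandler:^{
    NSError *error = nil;
    AVKeyValueStatus readableStatus = [asset statusOfValueForKey:@"readable" error:&error];
    if (readableStatus == AVKeyValueStatusLoaded && asset.readable) {
        // The asset has finished loading; it should now be safe to create the AVAssetReader.
        [self startReadingAsset:asset];   // hypothetical helper in your own code
    }
}];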

Related

Issues Playing A Sound Multiple Times

I'm using SpriteKit for my Mac OS project with Objective-C, and I'm trying to play a certain sound over and over again when contact between two nodes occurs. I don't want the player to wait for the sound to complete before playing it again. The sound is only about 1 second long, but it can repeat as often as every 0.5 seconds. I've tried two different methods, and they both have issues. I'm probably not setting something up correctly.
Method #1 - SKAction
I tried getting one of the sprites to play the sound using the following code:
[playBarNode runAction:[SKAction playSoundFileNamed:@"metronome" waitForCompletion:NO]];
The sound plays perfectly on time, but the sound is modified. It sounds like reverb (echo) was applied to it and it has lost a lot of volume as well.
Method #2 - AVAudioPlayer
Here's the code I used to set this up:
-(void) initAudio {
    NSString *path = [NSString stringWithFormat:@"%@/metronome.mp3", [[NSBundle mainBundle] resourcePath]];
    NSURL *metronomeSound = [NSURL fileURLWithPath:path];
    _audioPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:metronomeSound error:nil];
    [_audioPlayer prepareToPlay];
}
Later on in the code, it is called like this:
[_audioPlayer play];
The issue with this one is that it seems to wait until it has finished playing before it will play the sound again. Basically, it fails to play many of the times it is triggered.
Am I setting this up incorrectly? How can I fix this? Thanks in advance.
Retry method #1, but instead of having the sprite play the sound, have the scene play it via
[self runAction:[SKAction playSoundFileNamed:@"metronome" waitForCompletion:NO]];
inside your SKScene subclass.
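A further refinement you could try (I haven't measured whether it matters here) is to build the sound action once and reuse it, so nothing is set up per contact. A rough sketch, with an illustrative scene class name:
#import <SpriteKit/SpriteKit.h>

@interface GameScene : SKScene <SKPhysicsContactDelegate>   // illustrative class name
@end

@implementation GameScene {
    SKAction *_metronomeTick;   // the cached sound action
}

- (void)didMoveToView:(SKView *)view {
    // Build the sound action once and reuse it for every contact.
    _metronomeTick = [SKAction playSoundFileNamed:@"metronome" waitForCompletion:NO];
}

- (void)didBeginContact:(SKPhysicsContact *)contact {
    [self runAction:_metronomeTick];
}
@end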

AVAudioPlayer on background thread

I know very little about using background threads, but this seems to play my sound the way I need it to, as follows:
1) I need this very short sound effect to play repeatedly even if the sound overlaps.
2) I need the sound to be played perfectly on time.
3) I need the loading of the sound not to make the on-screen graphics stutter.
I am currently just trying out this method with one sound, but if successful, I will roll it out to other sound effects that need the same treatment. My question is this: Am I using the background thread properly? Will there be any sort of memory leaks?
Here's the code:
-(void) playAudio {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
        NSString *path = [NSString stringWithFormat:@"%@/metronome.mp3", [[NSBundle mainBundle] resourcePath]];
        NSURL *metronomeSound = [NSURL fileURLWithPath:path];
        _audioPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:metronomeSound error:nil];
        [_audioPlayer prepareToPlay];
        [_audioPlayer play];
    });
}
// Handles collision detection.
-(void) didBeginContact:(SKPhysicsContact *)contact {
    uint32_t categoryA = contact.bodyA.categoryBitMask;
    uint32_t categoryB = contact.bodyB.categoryBitMask;
    if (categoryA == kLineCategory || categoryB == kLineCategory) {
        NSLog(@"line contact");
        [self playAudio];
    }
}
I use AVAudioPlayer asynchronously and on background threads without any problems, and with no leaks as far as I can tell. However, I have implemented a singleton class that handles all the allocations and keeps an array of AVAudioPlayer instances that also play asynchronously as needed. If you need to play a sound repeatedly, you should allocate an AVAudioPlayer instance for every time you want to play it. In that case, latency will be negligible and you can even play the same sound simultaneously.
Concerning your strategy, I think it needs some refinement, in particular if you want to prevent any delays. The main problem is always reading from disk, which is the slowest operation of all and is your limiting step.
Thus, I would implement an array of AVAudioPlayers, each already initialized to play a specific sound, in particular for sounds that are played often and repeatedly. If memory starts to grow, you can remove from the array the players that are used less often, and reload them a few seconds ahead of time if you can tell which ones will be needed.
And one more thing: don't forget to lock and unlock the array if you are going to access it from multiple threads, or better yet, create a GCD queue to handle all accesses to the array.
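For illustration, a minimal sketch of that kind of manager (all names are made up; it assumes ARC and sounds bundled as .mp3 files):
#import <AVFoundation/AVFoundation.h>

// Hypothetical singleton: one preloaded AVAudioPlayer per sound, plus throwaway
// instances for overlapping playback, all accessed through a serial GCD queue.
@interface SoundManager : NSObject
+ (instancetype)sharedManager;
- (void)playSoundNamed:(NSString *)name;
@end

@implementation SoundManager {
    NSMutableDictionary *_players;      // sound name -> preloaded AVAudioPlayer
    NSMutableArray      *_extraPlayers; // throwaway players kept alive while they play
    dispatch_queue_t     _soundQueue;   // serializes all access to the collections
}

+ (instancetype)sharedManager {
    static SoundManager *shared;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{ shared = [[SoundManager alloc] init]; });
    return shared;
}

- (instancetype)init {
    if ((self = [super init])) {
        _players = [NSMutableDictionary dictionary];
        _extraPlayers = [NSMutableArray array];
        _soundQueue = dispatch_queue_create("com.example.soundmanager", DISPATCH_QUEUE_SERIAL);
    }
    return self;
}

- (void)playSoundNamed:(NSString *)name {
    dispatch_async(_soundQueue, ^{
        NSURL *url = [[NSBundle mainBundle] URLForResource:name withExtension:@"mp3"];
        AVAudioPlayer *player = _players[name];
        if (!player) {
            // First request for this sound: create and cache a prepared player.
            player = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:nil];
            [player prepareToPlay];
            if (player) _players[name] = player;
        } else if (player.playing) {
            // The cached player is busy: spin up an extra instance so sounds can overlap.
            player = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:nil];
            // Drop extra players that have finished, then keep the new one alive.
            NSIndexSet *done = [_extraPlayers indexesOfObjectsPassingTest:
                ^BOOL(AVAudioPlayer *p, NSUInteger idx, BOOL *stop) { return !p.playing; }];
            [_extraPlayers removeObjectsAtIndexes:done];
            if (player) [_extraPlayers addObject:player];
        }
        [player play];
    });
}
@end
With something like this, didBeginContact: would just call [[SoundManager sharedManager] playSoundNamed:@"metronome"]; and never touch the disk or the players directly.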

Cache thousands of images

I've been developing a music player recently, and I'm writing my own pickers.
I'm trying to test my code to its limits, so I have around 1600 albums on my iPhone.
I'm using AQGridView for the albums view, and since MPMediaItemArtwork is a subclass of NSObject, you need to call a method on it to get an image out of it, and that method scales the image.
Scaling for each cell uses too much CPU, as you can guess, so my album grid view is laggy despite all my efforts at manually managing each cell's contents.
So I thought of starting the scaling with GCD at app launch, saving the results to a file, and reading that file for each cell.
But my code
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    MPMediaQuery *allAlbumsQuery = [MPMediaQuery albumsQuery];
    NSArray *albumsArray = allAlbumsQuery.collections;
    for (MPMediaItemCollection *collection in albumsArray) {
        @autoreleasepool {
            MPMediaItem *currentItem = [collection representativeItem];
            MPMediaItemArtwork *artwork = [currentItem valueForProperty:MPMediaItemPropertyArtwork];
            UIImage *artworkImage = [artwork imageWithSize:CGSizeMake(90, 90)];
            if (artworkImage) [toBeCached addObject:artworkImage];
            else [toBeCached addObject:blackImage];
            NSLog(@"%@", [currentItem valueForProperty:MPMediaItemPropertyAlbumTitle]);
            artworkImage = nil;
        }
    }
    dispatch_async(dispatch_get_main_queue(), ^{
        [[NSUserDefaults standardUserDefaults] setObject:[NSKeyedArchiver archivedDataWithRootObject:albumsArray] forKey:@"covers"];
    });
    NSLog(@"finished saving, sir");
});
in AppDelegate's application:didFinishLaunchingWithOptions: method makes my app crash, without any console log, etc.
This seems to be a memory problem: so many images are kept in the NSArray in RAM before saving that iOS force-closes my app.
Do you have any suggestions on what to do?
Cheers
Take a look at the recently-released SYCache, which combines NSCache and on-disk caching. It's probably a bad idea to get to a memory-warning state as soon as you launch the app, but that's better than force closing.
As the commenter above suggested, mapped data is a technique (using mmap or its equivalent) for loading data from disk as if it were all in memory at once, which could help with UIImage loading later on down the road. The inverse (with NSMutableData) is also true: a file can be written to as if it were directly in RAM. As a technique, it could be useful.
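For illustration, a minimal sketch of the disk-plus-mapped-data idea (helper names and the 90x90 size are assumptions, not code from the question): scale each artwork once, write it to Caches, then memory-map it when configuring a cell instead of keeping every UIImage in an array.
#import <UIKit/UIKit.h>
#import <MediaPlayer/MediaPlayer.h>

// Hypothetical helper: where the scaled cover for a given album key lives on disk.
static NSString *CoverCachePath(NSString *albumKey) {
    NSString *caches = NSSearchPathForDirectoriesInDomains(NSCachesDirectory, NSUserDomainMask, YES)[0];
    return [caches stringByAppendingPathComponent:
               [NSString stringWithFormat:@"cover-%@.png", albumKey]];
}

// Background pass: scale once and write to disk instead of accumulating UIImages.
static void CacheCover(MPMediaItemCollection *collection, NSString *albumKey) {
    MPMediaItemArtwork *artwork = [[collection representativeItem]
                                      valueForProperty:MPMediaItemPropertyArtwork];
    UIImage *image = [artwork imageWithSize:CGSizeMake(90, 90)];
    if (image) [UIImagePNGRepresentation(image) writeToFile:CoverCachePath(albumKey) atomically:YES];
}

// Cell configuration: memory-map the file so iOS can page it in and out as needed.
static UIImage *CachedCover(NSString *albumKey) {
    NSData *data = [NSData dataWithContentsOfFile:CoverCachePath(albumKey)
                                          options:NSDataReadingMappedIfSafe
                                            error:NULL];
    return data ? [UIImage imageWithData:data] : nil;
}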

How can I handle separate tracks of an AVURLAsset independently?

Here's my goal: I would like to load a .3gp movie file into an AVURLAsset. I would then like to take the video track and pump the output frames into an OpenGL ES texture. This will be the video playback. I would then like to continue leveraging AVFoundation to play back the audio. The framework is pretty vast, so I'm hoping for some veteran assistance on this one.
I actually have both parts working separately, but something always goes wrong when I try to do both at the same time. Here's my current attempt, in a nutshell (All error handling is omitted for brevity):
I load the .3gp file into the AVURLAsset and load the tracks:
NSURL* fileURL = [[NSBundle mainBundle] URLForResource:someName withExtension:someExtension];
AVURLAsset* asset = [AVURLAsset URLAssetWithURL:fileURL options:nil];
[asset loadValuesAsynchronouslyForKeys:[NSArray arrayWithObject:@"tracks"] completionHandler:^{ /* More code */ }];
In the completion handler, I get a reference to the audio and video track:
// Tracks loaded, grab the audio and video tracks.
AVAssetTrack* videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVAssetTrack* audioTrack = [[asset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
Next, I create separate AVMutableCompositions that contain just the audio track and just the video track. I'm not sure if this is completely necessary, but it seems like a good idea and it does also seem to work:
// Make a composition with the video track.
AVMutableComposition* videoComposition = [AVMutableComposition composition];
AVMutableCompositionTrack* videoCompositionTrack = [videoComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
[videoCompositionTrack insertTimeRange:[videoTrack timeRange] ofTrack:videoTrack atTime:CMTimeMake(0, 1) error:nil];
// Make a composition with the audio track.
AVMutableComposition* audioComposition = [AVMutableComposition composition];
AVMutableCompositionTrack* audioCompositionTrack = [audioComposition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
[audioCompositionTrack insertTimeRange:[audioTrack timeRange] ofTrack:audioTrack atTime:CMTimeMake(0, 1) error:nil];
Now I get into the specifics of how to handle each track. I'm fairly confident that I have the one-and-only way of handling the video track, which is to create an AVAssetReader for the video composition and add an AVAssetReaderTrackOutput created with the video composition track. By keeping a reference to that track output, I can call its -copyNextSampleBuffer method to get the info I need to pump the video output into an OpenGL ES texture. This works well enough by itself:
// Create Asset Reader and Output for the video track.
NSDictionary* settings = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(NSString *)kCVPixelBufferPixelFormatTypeKey];
_assetReader = [[AVAssetReader assetReaderWithAsset:vComposition error:nil] retain];
_videoTrackOutput = [[AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:vCompositionTrack outputSettings:settings] retain];
[_assetReader addOutput:_videoTrackOutput];
[_assetReader startReading];
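For context, the per-frame read looks roughly like this (a simplified sketch; the pacing logic and the actual OpenGL ES texture upload are omitted):
// Called once per display frame (sketch only).
CMSampleBufferRef sampleBuffer = [_videoTrackOutput copyNextSampleBuffer];
if (sampleBuffer) {
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    if (pixelBuffer) {
        CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
        // ... upload CVPixelBufferGetBaseAddress(pixelBuffer) into the GL ES texture ...
        CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
    }
    CFRelease(sampleBuffer);   // copyNextSampleBuffer follows the Copy rule
}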
What seems to spoil the whole thing is attempting to play back the audio in any way. I'm not really sure which approach to take for the remaining audio track. Just sticking to the realm of AVFoundation, I see two possible approaches. The first is to use an AVPlayer to play the audio composition:
// Create a player for the audio.
AVPlayerItem* audioPlayerItem = [AVPlayerItem playerItemWithAsset:aComposition];
AVPlayer* audioPlayer = [[AVPlayer playerWithPlayerItem:audioPlayerItem] retain];
[audioPlayer play];
This works, inasmuch as I can hear the desired audio. Unfortunately creating this player guarantees that the AVAssetReaderTrackOutput for the video composition fails with a cryptic error when calling -copyNextSampleBuffer:
AVAssetReaderStatusFailed
Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed"
UserInfo=0x456e50 {NSLocalizedFailureReason=An unknown error occurred (-12785),
NSUnderlyingError=0x486570 "The operation couldn’t be completed. (OSStatus error -12785.)",
NSLocalizedDescription=The operation could not be completed}
I'm confused about how they might be interfering with each other, but regardless, that approach seems to be a dead end.
The other option I considered for the audio playback was the AVAudioPlayer class, but I could not get it to work with an AVAsset as a starting point. I attempted to use its -initWithData:error: method with an NSData built by aggregating the contents of CMSampleBufferRefs taken with an approach identical to the one I use on the video track, but it does not appear to be formatted correctly.
At this point, I feel like I'm flailing around blindly, and would love it so very much if someone could tell me if this approach is even feasible. If it's not I would, of course, appreciate a feasible one.
Creating AVMutableCompositions (basically new AVAssets) for each track seems roundabout to me; I'd simply use an AVAssetReader on the audio track. Also, your videoComposition doesn't seem to be used anywhere, so why create it?
In any case, to get either solution to work, set your audio session category to kAudioSessionCategory_MediaPlayback and enable kAudioSessionProperty_OverrideCategoryMixWithOthers.
I've never found any documentation that explains why this is necessary.
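In case it helps, a minimal sketch of that setup using the (old-style) AudioSession C API, presumably called once before starting the reader and the player; error handling omitted:
#include <AudioToolbox/AudioToolbox.h>

// Sketch: media-playback category with mixing enabled.
static void ConfigureAudioSession(void) {
    AudioSessionInitialize(NULL, NULL, NULL, NULL);

    UInt32 category = kAudioSessionCategory_MediaPlayback;
    AudioSessionSetProperty(kAudioSessionProperty_AudioCategory,
                            sizeof(category), &category);

    UInt32 mixWithOthers = true;
    AudioSessionSetProperty(kAudioSessionProperty_OverrideCategoryMixWithOthers,
                            sizeof(mixWithOthers), &mixWithOthers);

    AudioSessionSetActive(true);
}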

How to make QTMovie play file from URL with forced (MP3) type?

I'm using QTKit to progressively download and play an MP3 from a URL. According to this documentation, this is the code I should use to accomplish that:
NSURL *mp3URL = [NSURL URLWithString:@"http://foo.com/bar.mp3"];
NSError *error = nil;
QTMovie *sound = [[QTMovie alloc] initWithURL:mp3URL error:&error];
[sound play];
This works, and does exactly what I want — the MP3 URL is lazily downloaded and starts playing immediately. However, if the URL does not have the ".mp3" path extension, it fails:
NSURL *mp3URL = [NSURL URLWithString:@"http://foo.com/bar"];
NSError *error = nil;
QTMovie *sound = [[QTMovie alloc] initWithURL:mp3URL error:&error];
[sound play];
No error is given, no exception is raised; the duration of the sound is just set to zero, and nothing plays.
The only way I have found to work around this is to force a type by loading the data manually and using a QTDataReference:
NSURL *mp3URL = [NSURL URLWithString:@"http://foo.com/bar"];
NSData *mp3Data = [NSData dataWithContentsOfURL:mp3URL];
QTDataReference *dataReference = [QTDataReference dataReferenceWithReferenceToData:mp3Data
                                                                              name:@"bar.mp3"
                                                                          MIMEType:nil];
NSError *error = nil;
QTMovie *sound = [[QTMovie alloc] initWithDataReference:dataReference error:&error];
[sound play];
However, this forces me to completely download ALL of the MP3 synchronously before I can start playing it, which is obviously undesirable. Is there any way around this?
Thanks.
Edit
Actually, it seems that the path extension has nothing to do with it; the Content-Type is simply not being set in the HTTP header. Even so, the latter code works and the former does not. Anyone know of a way to fix this, without having access to the server?
Edit 2
Anyone? I can't find information about this anywhere, and Google frustratingly now shows this page as the top result for most of my queries...
Two ideas (the first one being a bit hacky):
To work around the missing content type, you could embed a small Cocoa webserver that supplements the missing header field and route your NSURL over that "proxy".
Some Cocoa http server implementations:
http://code.google.com/p/cocoahttpserver/
http://cocoawithlove.com/2009/07/simple-extensible-http-server-in-cocoa.html
http://culturedcode.com/cocoa/
The second would be to switch to a lower-level framework (from QTKit to AudioToolbox).
You'd need more code, but there are some very good resources out there on how to stream MP3 using AudioToolbox.
e.g.:
http://cocoawithlove.com/2008/09/streaming-and-playing-live-mp3-stream.html
Personally I'd go with the second option. AudioToolbox isn't as straightforward as QTKit, but it offers a clean solution to your problem. It's also available on both iOS and Mac OS, so you will find plenty of information.
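A minimal sketch of the AudioToolbox angle, just to show where the forced MP3 type goes (callback bodies and the AudioQueue playback side are omitted; all names are illustrative):
#include <AudioToolbox/AudioToolbox.h>

// These callbacks would normally feed parsed packets into an AudioQueue.
static void MyPropertyListener(void *clientData, AudioFileStreamID stream,
                               AudioFileStreamPropertyID propertyID, UInt32 *flags) {}
static void MyPacketsProc(void *clientData, UInt32 numberBytes, UInt32 numberPackets,
                          const void *inputData, AudioStreamPacketDescription *packetDescriptions) {}

static AudioFileStreamID StartStreamParser(void) {
    AudioFileStreamID stream = NULL;
    // The file type hint replaces the missing Content-Type / file extension.
    AudioFileStreamOpen(NULL, MyPropertyListener, MyPacketsProc,
                        kAudioFileMP3Type, &stream);
    return stream;
}

// As each chunk of the HTTP response arrives (e.g. in an NSURLConnection
// delegate callback), hand it to the parser:
//     AudioFileStreamParseBytes(stream, (UInt32)[data length], [data bytes], 0);
The cocoawithlove link above (Matt Gallagher's AudioStreamer article) shows the full pipeline, including pushing the parsed packets into an AudioQueue.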
Update:
Did you try to use another initializer? e.g.
+ (id)movieWithAttributes:(NSDictionary *)attributes error:(NSError **)errorPtr
You can insert your URL for the key QTMovieURLAttribute, and maybe you can compensate for the missing content type by providing other attributes in that dictionary.
This open source project has a QTMovie category that contains methods to accomplish similar things:
http://vidnik.googlecode.com/svn-history/r63/trunk/Source/Categories/QTMovie+Async.m
If you thought weichsel's first solution was hacky, you're going to love this one:
The culprit is the Content-Type header, as you have determined. Had QTKit.framework used Objective-C internally, this would be a trivial matter of overriding -[NSHTTPURLResponse allHeaderFields] with a category of your choosing. However, QTKit.framework (for better or worse) uses Core Foundation (and Core Services) internally. These are both C-based frameworks and there is no elegant way of overriding functions in C.
That said, there is a method, just not a pretty one. Function interposition is even documented by Apple, but seems to be a bit behind the times, compared to the remainder of their documentation.
In essence, you want something along the following lines:
typedef struct interpose_s {
    void *new_func;
    void *orig_func;
} interpose_t;

CFStringRef myCFHTTPMessageCopyHeaderFieldValue(CFHTTPMessageRef message,
                                                CFStringRef headerField);

static const interpose_t interposers[] __attribute__((section("__DATA, __interpose"))) = {
    { (void *)myCFHTTPMessageCopyHeaderFieldValue, (void *)CFHTTPMessageCopyHeaderFieldValue }
};

CFStringRef myCFHTTPMessageCopyHeaderFieldValue(CFHTTPMessageRef message,
                                                CFStringRef headerField) {
    if (CFStringCompare(headerField, CFSTR("Content-Type"), 0) == kCFCompareEqualTo) {
        return CFSTR("audio/x-mpeg");
    } else {
        return CFHTTPMessageCopyHeaderFieldValue(message, headerField);
    }
}
You might want to add logic specific to your application in terms of handling the Content-Type field lest your application break in weird and wonderful ways when every HTTP request is determined to be an audio file.
Try replacing http:// with icy://.
Just create an instance like this...
QTMovie *aPlayer = [QTMovie movieWithAttributes:[NSDictionary dictionaryWithObjectsAndKeys:
                       fileUrl, QTMovieURLAttribute,
                       [NSNumber numberWithBool:YES], QTMovieOpenForPlaybackAttribute,
                       /* [NSNumber numberWithBool:YES], QTMovieOpenAsyncOKAttribute, */
                       nil] error:error];