I know very little about using background threads, but this seems to play my sound the way I need it to, as follows:
1) I need this very short sound effect to play repeatedly even if the sound overlaps.
2) I need the sound to be played perfectly on time.
3) I need the loading of the sound to not affect the on-screen graphics by stuttering.
I am currently just trying out this method with one sound, but if successful, I will roll it out to other sound effects that need the same treatment. My question is this: Am I using the background thread properly? Will there be any sort of memory leaks?
Here's the code:
-(void) playAudio {
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
NSString *path = [NSString stringWithFormat:@"%@/metronome.mp3", [[NSBundle mainBundle] resourcePath]];
NSURL *metronomeSound = [NSURL fileURLWithPath:path];
_audioPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:metronomeSound error:nil];
[_audioPlayer prepareToPlay];
[_audioPlayer play];
});
}
//handles collision detection
-(void) didBeginContact:(SKPhysicsContact *)contact {
uint32_t categoryA = contact.bodyA.categoryBitMask;
uint32_t categoryB = contact.bodyB.categoryBitMask;
if (categoryA == kLineCategory || categoryB == kLineCategory) {
NSLog(@"line contact");
[self playAudio];
}
}
I use AVAudioPlayer asynchronously and on background threads without any problems, and with no leaks as far as I can tell. However, I have implemented a singleton class that handles all the allocations and keeps an array of AVAudioPlayer instances that also play asynchronously as needed. If you need to play a sound repeatedly, you should allocate an AVAudioPlayer instance for every time you want to play it; that way latency is negligible and you can even play the same sound simultaneously.
Concerning your strategy, I think it needs some refinement, in particular if you want to prevent any delays. The main problem is always reading from disk, which is the slowest operation of all and your limiting step.
So I would also implement an array of AVAudioPlayers, each already initialized to play a specific sound, in particular for sounds that are played often and repeatedly. If memory starts to grow, you could remove from the array those players that are used less often, and reload them a few seconds ahead of time if you can tell which ones will be needed.
And one more thing... don't forget to lock and unlock the array if you are going to access it from multiple threads, or better yet, create a GCD queue to handle all access to the array.
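A minimal sketch of that approach, assuming ARC, a hypothetical SoundManager singleton, a hypothetical queue label, and .mp3 files in the main bundle; every access to the players dictionary goes through one serial GCD queue:

#import <AVFoundation/AVFoundation.h>

@interface SoundManager : NSObject
+ (instancetype)sharedManager;
- (void)preloadSoundNamed:(NSString *)name;
- (void)playSoundNamed:(NSString *)name;
@end

@implementation SoundManager {
    NSMutableDictionary *_players;   // sound name -> prepared AVAudioPlayer (strong references)
    dispatch_queue_t _soundQueue;    // serial queue: the only place _players is touched
}

+ (instancetype)sharedManager {
    static SoundManager *shared = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{ shared = [[SoundManager alloc] init]; });
    return shared;
}

- (instancetype)init {
    if ((self = [super init])) {
        _players = [NSMutableDictionary dictionary];
        _soundQueue = dispatch_queue_create("com.example.sounds", DISPATCH_QUEUE_SERIAL);
    }
    return self;
}

- (void)preloadSoundNamed:(NSString *)name {
    dispatch_async(_soundQueue, ^{
        NSURL *url = [[NSBundle mainBundle] URLForResource:name withExtension:@"mp3"];
        AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:nil];
        [player prepareToPlay];                  // pay the disk read up front, not at play time
        if (player) [_players setObject:player forKey:name];
    });
}

- (void)playSoundNamed:(NSString *)name {
    dispatch_async(_soundQueue, ^{
        AVAudioPlayer *player = [_players objectForKey:name];
        player.currentTime = 0;                  // retrigger from the start
        [player play];                           // already prepared, so playback starts almost immediately
    });
}
@end

Usage would be something like [[SoundManager sharedManager] preloadSoundNamed:@"metronome"] at launch and [[SoundManager sharedManager] playSoundNamed:@"metronome"] from didBeginContact:. For true overlap of the same sound you would still allocate extra players for that sound, as described above.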
I'm using SpriteKit for my Mac OS project with Objective-C, and I'm trying to play a certain sound over and over again when contact occurs between two nodes. I don't want the player to wait for the sound to complete before playing it again. The sound is only about 1 second long, but it can repeat as fast as every 0.5 seconds. I've tried two different methods and they both have issues. I'm probably not setting something up correctly.
Method #1 - SKAction
I tried getting one of the sprites to play the sound using the following code:
[playBarNode runAction:[SKAction playSoundFileNamed:#"metronome" waitForCompletion:NO]];
The sound plays perfectly on time, but it is modified: it sounds like reverb (echo) was applied to it, and it has lost a lot of volume as well.
Method #2 - AVAudioPlayer
Here's the code I used to set this up:
-(void) initAudio {
NSString *path = [NSString stringWithFormat:@"%@/metronome.mp3", [[NSBundle mainBundle] resourcePath]];
NSURL *metronomeSound = [NSURL fileURLWithPath:path];
_audioPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:metronomeSound error:nil];
[_audioPlayer prepareToPlay];
}
Later on in code it is called like this:
[_audioPlayer play];
The issue with this one is that it seems to wait until it has finished playing before it will play the sound again. Basically, many of the triggered plays never happen.
Am I setting this up incorrectly? How can I fix this? Thanks in advance.
Retry method 1, but instead of having the sprite play the sound, have the scene play the sound via
[self runAction:[SKAction playSoundFileNamed:@"metronome" waitForCompletion:NO]];
inside the game SKScene class
This is the code I'm using now, and it's not working (nothing happens when I press the button that calls this method). Previously, I had a property for audioPlayer and it worked (all the audioPlayers below were self.audioPlayer obviously). The problem was that when I tried to play the sound twice, it would end the first sound playing.
This is no good because I'm making a soundboard and want sounds to be able to overlap. I thought I could just make audioPlayer a local variable instead of a property and all would be OK, but now the sound doesn't work at all and I can't figure out why. In all the tutorials I've found for AVAudioPlayer, a property is made, but no one explains why. If this can't work, what alternatives do I have for making sounds that can overlap?
- (void)loadSound:(NSString *)sound ofType:(NSString *)type withDelegate:(BOOL)delegate {
NSURL *url = [NSURL fileURLWithPath:[[NSBundle mainBundle]
pathForResource:sound
ofType:type]];
AVAudioPlayer *audioPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:nil];
if (delegate) audioPlayer.delegate = self;
[audioPlayer prepareToPlay];
[audioPlayer play];
}
The reason you need a property or ivar is for the strong reference it provides. When using ARC, any object without a strong pointer to it is fair game for deallocation, and in fact that is what you are seeing.
You are also correct that a single strong AVAudioPlayer pointer will only allow one audio player to be referenced at a time.
The solution, if you choose to continue to use AVAudioPlayer is to use some sort of collection object to hold strong reference to all the player instances. You could use an NSMutableArray as shown here:
Edit: I tweaked the code slightly so the method that plays the sound takes an NSString soundName parameter.
@synthesize audioPlayers = _audioPlayers;
-(NSMutableArray *)audioPlayers{
if (!_audioPlayers){
_audioPlayers = [[NSMutableArray alloc] init];
}
return _audioPlayers;
}
-(void)audioPlayerDidFinishPlaying:(AVAudioPlayer *)player successfully:(BOOL)flag{
[self.audioPlayers removeObject:player];
}
-(void)playSoundNamed:(NSString *)soundName{
NSURL *url = [NSURL fileURLWithPath:[[NSBundle mainBundle]
pathForResource:soundName
ofType:@"wav"]];
AVAudioPlayer *audioPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:nil];
if (audioPlayer){
[audioPlayer setDelegate:self];
[audioPlayer prepareToPlay];
[audioPlayer play];
[self.audioPlayers addObject:audioPlayer];
}
}
Generally an AVAudioPlayer is overkill for a sound-effect/soundboard application. For quick sound "drops" you will likely find the Audio Toolbox framework useful, as outlined in my answer to this question.
From looking at the System Sound class reference, it seems like you
can only play one sound at a time.
It can only play one SystemSoundID at a time. So for example if you have soundOne and soundTwo. You can play soundOne while soundTwo is playing, but you cannot play more than one instance of either sound at a time.
What's the best way to be able to play sounds that can overlap while
still being efficient with the amount of code and memory?
Best is opinion.
If you need two instances of the same sound to play at the same time, then I would say the code posted in this answer would be the code to use. Because each overlapping instance of the same sound requires creating a new resource, code like this, with its audioPlayerDidFinishPlaying: cleanup, is much more manageable (the memory can easily be reclaimed).
If overlapping instances of the same sound are not a deal-breaker then I think just using AudioServicesCreateSystemSoundID() to create one instance of each sound is more efficient.
I definitely would not try to manage the creation of and disposal of SystemSoundIDs with each press of a button. That would go wrong in a hurry. In that instance AVAudioPlayer is the clear winner on just maintainability alone.
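For reference, here is a minimal sketch of the one-SystemSoundID-per-sound approach mentioned above, assuming ARC and a short metronome.wav in the bundle (System Sound Services works best with short, uncompressed files, so the format here is an assumption):

#import <AudioToolbox/AudioToolbox.h>

static SystemSoundID gMetronomeSound = 0;

// Create the sound ID once, e.g. when the view or scene loads.
- (void)loadMetronomeSound {
    NSURL *url = [[NSBundle mainBundle] URLForResource:@"metronome" withExtension:@"wav"];
    AudioServicesCreateSystemSoundID((__bridge CFURLRef)url, &gMetronomeSound);
}

// Call this every time the contact fires; different sound IDs can play at the same time,
// but two plays of the SAME ID will not overlap.
- (void)playMetronomeSound {
    AudioServicesPlaySystemSound(gMetronomeSound);
}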
I am assuming you are using ARC. The reason that the audio player doesn't work is because the AVAudioPlayer object is being released and then subsequently destroyed once the loadSound: method terminates. This is happening due to ARC's object management. Before ARC, the code:
AVAudioPlayer *audioPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:nil];
if (delegate) audioPlayer.delegate = self;
[audioPlayer prepareToPlay];
[audioPlayer play];
would play the sound as expected. However, the AVAudioPlayer object would still exist long after the loadSound: method terminates. That means every time you play a sound, you would be leaking memory.
A little something about properties
Properties were introduced to reduce the amount of code the developer had to write and maintain. Before properties, a developer would have to hand write the setters and getters for each of their instance variables. That's a lot of redundant code. A fortunate side-effect of properties was that they took care of a lot of the memory management code needed to write setters/getters for object-based instance variables. This meant that a lot of developers started using properties exclusively, even for variables that didn't need to be public.
Since ARC handles all the memory management details for you, properties should only be used for their original purpose: cutting down on the amount of redundant code. Traditional ivars will be strongly referenced by default, which means a simple assignment such as:
title = [NSString stringWithFormat:@"hello"];
is essentially the same as the code:
self.title = [NSString stringWithFormat:@"hello"];
OK, back to your question.
If you are going to be creating AVAudioPlayer instances in the loadSound: method, you'll need to keep a strong reference to each AVAudioPlayer instance or else ARC will destroy it. I suggest adding the newly created AVAudioPlayer objects to an NSMutableArray. If you adopt the AVAudioPlayerDelegate protocol, you can implement the audioPlayerDidFinishPlaying:successfully: method, in which you can remove the AVAudioPlayer object from the array, letting ARC know that it's OK to destroy the object.
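A minimal sketch of that change to the original loadSound: method, assuming a strong audioPlayers NSMutableArray property and that the class adopts AVAudioPlayerDelegate:

- (void)loadSound:(NSString *)sound ofType:(NSString *)type withDelegate:(BOOL)delegate {
    NSURL *url = [NSURL fileURLWithPath:[[NSBundle mainBundle] pathForResource:sound ofType:type]];
    AVAudioPlayer *audioPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:nil];
    if (delegate) audioPlayer.delegate = self;
    [self.audioPlayers addObject:audioPlayer];   // the strong reference keeps the player alive under ARC
    [audioPlayer prepareToPlay];
    [audioPlayer play];
}

- (void)audioPlayerDidFinishPlaying:(AVAudioPlayer *)player successfully:(BOOL)flag {
    [self.audioPlayers removeObject:player];     // let ARC reclaim the finished player
}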
I've been developing a music player recently, and I'm writing my own pickers.
I'm trying to test my code to its limits, so I have around 1600 albums on my iPhone.
I'm using AQGridView for the albums view, and since MPMediaItemArtwork is a subclass of NSObject, you need to call a method on it to get an image out of it, and that method scales the image.
Scaling for each cell uses too much CPU, as you can guess, so my grid album view is laggy despite all my efforts at manually managing what each cell loads.
So I thought I would start scaling with GCD at app launch, then save the results to a file and read that file for each cell.
But, my code
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^ {
MPMediaQuery *allAlbumsQuery = [MPMediaQuery albumsQuery];
NSArray *albumsArray = allAlbumsQuery.collections;
for (MPMediaItemCollection *collection in albumsArray) {
@autoreleasepool {
MPMediaItem *currentItem = [collection representativeItem];
MPMediaItemArtwork *artwork = [currentItem valueForProperty:MPMediaItemPropertyArtwork];
UIImage *artworkImage = [artwork imageWithSize:CGSizeMake(90, 90)];
if (artworkImage) [toBeCached addObject:artworkImage];
else [toBeCached addObject:blackImage];
NSLog(@"%@", [currentItem valueForProperty:MPMediaItemPropertyAlbumTitle]);
artworkImage = nil;
}
}
dispatch_async(dispatch_get_main_queue(), ^{
[[NSUserDefaults standardUserDefaults] setObject:[NSKeyedArchiver archivedDataWithRootObject:albumsArray] forKey:@"covers"];
});
NSLog(@"finished saving, sir");
});
in AppDelegate's application:didFinishLaunchingWithOptions: method makes my app crash, without any console log, etc.
This seems to be a memory problem: so many images are kept in an NSArray in RAM until the save happens that iOS force-closes my app.
Do you have any suggestions on what to do?
Cheers
Take a look at the recently released SYCache, which combines NSCache and on-disk caching. It's probably a bad idea to hit a memory-warning state as soon as you launch the app, but that's better than being force-closed.
As the commenter above suggested, mapped data is a technique (using mmap or its equivalent) for loading data from disk as if it were all in memory at once, which could help with UIImage loading later on down the road. The inverse (with NSMutableData) is also true: a file can be written to as if it were directly in RAM. As a technique, it could be useful.
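A rough sketch of that direction, assuming the scaled covers are written as individual JPEGs to the Caches directory instead of being archived into NSUserDefaults; the first part would replace the addObject: calls inside the loop above, and the second part shows a cell reading its cover back with mapped data:

// Inside the background loop, once artworkImage has been produced:
unsigned long long albumID =
    [[currentItem valueForProperty:MPMediaItemPropertyAlbumPersistentID] unsignedLongLongValue];
NSString *cachesDir = [NSSearchPathForDirectoriesInDomains(NSCachesDirectory, NSUserDomainMask, YES) objectAtIndex:0];
NSString *coverPath = [cachesDir stringByAppendingPathComponent:
                       [NSString stringWithFormat:@"cover-%llu.jpg", albumID]];
[UIImageJPEGRepresentation(artworkImage, 0.8) writeToFile:coverPath atomically:YES];

// Later, when configuring a cell: map the file instead of copying it all into RAM.
NSData *coverData = [NSData dataWithContentsOfFile:coverPath
                                           options:NSDataReadingMappedIfSafe
                                             error:NULL];
UIImage *coverImage = coverData ? [UIImage imageWithData:coverData] : nil;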
I have a UITableViewController that when opened displays a table of the following object:
class {
NSString *stringVal;
int value;
}
However, whenever this controller opens, I want it to download the data from the internet, display "Connecting..." in the status bar, and refresh the stringVal and value of all of the objects. I do this by refreshing the array in the UITableViewController. While this happens, though, the UI sometimes hangs or even displays blank table cells until the operation has ended. I'm using an NSOperationQueue to download the data, but I'm wondering if there's a better way to refresh the data without those weird UI bugs.
EDIT:
The UI no longer displays blank cells; that was because cellForRowAtIndexPath: was setting nil values for my cell text. However, it still seems somewhat laggy when tableView.reloadData is called, even though I'm using an NSOperationQueue.
EDIT2:
Moreover, I have two problems: 1. scrolling prevents the UI from being updated, and 2. when the scrolling does stop and the UI starts to update, it hangs a little bit. A perfect example of what I'm trying to do can be found in the native Mail app when you view a list of folders with their unread counts. If you constantly scroll the table view, the folders' unread counts are updated without any hanging at all.
Based on your response in the question comments, it sounds like you are calling [tableView reloadData] from a background thread.
Do not do this. UIKit methods, unless otherwise specified, always need to be called from the main thread. Failing to do so can cause no end of problems, and you are probably seeing one of them.
EDIT: I misread your comment. It sounds like you are not updating the UI from a background thread. But my comments about the architecture still stand (i.e., why are you updating in a background thread AFTER the download has finished?).
You state that "when the data comes back from the server, I call a background operation..." This sounds backwards. Normally you would have your NSURLConnection (or whatever you are using for the download) run on a background thread so as not to block the UI, then call out to the main thread to update the data model and refresh the UI. Alternatively, use an asynchronous NSURLConnection (which manages its own background thread/queue), e.g.:
[NSURLConnection sendAsynchronousRequest:(NSURLRequest *)request
                                   queue:(NSOperationQueue *)queue
                       completionHandler:(void (^)(NSURLResponse*, NSData*, NSError*))handler];
And just make sure to use [NSOperationQueue mainQueue] for the queue.
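For example, a sketch of that pattern for the table view case, where parseItemsFromData:, self.items, self.tableView, and the URL are hypothetical stand-ins for your own model and UI; the request runs off the main thread, and the completion handler runs on the main queue, so reloading the table there is safe:

NSURLRequest *request = [NSURLRequest requestWithURL:[NSURL URLWithString:@"http://example.com/items"]];
[NSURLConnection sendAsynchronousRequest:request
                                   queue:[NSOperationQueue mainQueue]
                       completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
    // This block runs on the main queue, so touching the model and UIKit here is safe.
    if (data) {
        self.items = [self parseItemsFromData:data];   // hypothetical parsing method
        [self.tableView reloadData];
    }
}];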
You can also use GCD, i.e., nested dispatch_async() calls (the outer to a background queue for handling a synchronous connection, the inner on the main queue to handle the connection response).
Finally, I will note that you can, in principle, update your data model on the background thread and just refresh the UI from the main thread. But this means you need to take care to make your model code thread-safe, which you are likely to mess up at least a couple of times. Since updating the model is probably not a time-consuming step, I would just do it on the main thread too.
EDIT:
I am adding an example of how one might use GCD and synchronous requests to accomplish this. Clearly there are many ways to accomplish non-blocking URL requests, and I do not assert that this is the best one. It does, in my opinion, have the virtue of keeping all the code for processing a request in one place, making it easier to read.
The code has plenty of rough edges. For example, creating a custom dispatch queue is not generally necessary. It blindly assumes UTF-8 encoding of the returned web page. And none of the content (save the HTTP error description) is localized. But it does demonstrate how to run non-blocking requests and detect errors (both at the network and HTTP layers). Hope this is helpful.
NSURL *url = [NSURL URLWithString:@"http://www.google.com"];
NSURLRequest *request = [NSURLRequest requestWithURL:url];
dispatch_queue_t netQueue = dispatch_queue_create("com.mycompany.netqueue", DISPATCH_QUEUE_SERIAL);
dispatch_async(netQueue,
^{
// We are on a background thread, so we won't block UI events (or, generally, the main run loop)
NSHTTPURLResponse *response;
NSError *error;
NSData *data = [NSURLConnection sendSynchronousRequest:request
returningResponse:&response
error:&error];
dispatch_async(dispatch_get_main_queue(),
^{
// We are now back on the main thread
UIAlertView *alertView = [[UIAlertView alloc] init];
[alertView addButtonWithTitle:@"OK"];
if (data) {
if ([response statusCode] == 200) {
NSMutableString *body = [[NSMutableString alloc] initWithData:data
encoding:NSUTF8StringEncoding];
[alertView setTitle:#"Success"];
[alertView setMessage:body];
}
else {
[alertView setTitle:#"HTTP Error"];
NSString *status = [NSHTTPURLResponse localizedStringForStatusCode:[response statusCode]];
[alertView setMessage:status];
}
}
else {
[alertView setTitle:#"Error"];
[alertView setMessage:#"Unable to load URL"];
}
[alertView show];
[alertView release];
});
});
dispatch_release(netQueue);
EDIT:
Oh, one more big rough edge. The above code assumes that any HTTP status code != 200 is an error. This is not necessarily the case, but handling this is beyond the scope of this question.
long time reader, first time asker...
I am making a music app which uses AVAssetReader to read MP3 data from the iTunes library. I need precise timing, so when I create an AVURLAsset, I use AVURLAssetPreferPreciseDurationAndTimingKey to extract timing data. This has some overhead (I have no problems when I don't use it, but I need it!).
Everything works fine on iPhone (4) and iPad (1). I would like it to work on my iPod touch (2nd gen), but it doesn't: if the sound file is too long (> ~7 minutes), the AVAssetReader cannot start reading and throws an error (AVFoundationErrorDomain error -11800).
It appears that I am hitting a wall in terms of the scantier resources of the iPod touch. Any ideas what is happening, or how to manage the overhead of creating the AVURLAsset so that it can handle long files?
(I tried running this with the performance tools, and I don't see a major spike in memory).
Thanks, Dan
Maybe you're starting to read too soon? As far as I understand, for MP3 the asset will need to go through the entire file in order to enable precise timing. So, try delaying the reading.
You can also try registering as an observer for some of the AVAsset properties. iOS 4.3 has a 'readable' property. I've never tried it, but my guess would be that it's initially set to NO and gets set to YES as soon as the AVAsset has finished loading.
EDIT:
Actually, I just looked into the docs. You're supposed to use the AVAsynchronousKeyValueLoading protocol for that, and Apple provides an example:
NSURL *url = <#A URL that identifies an audiovisual asset such as a movie file#>;
AVURLAsset *anAsset = [[AVURLAsset alloc] initWithURL:url options:nil];
NSArray *keys = [NSArray arrayWithObject:@"duration"];
[anAsset loadValuesAsynchronouslyForKeys:keys completionHandler:^() {
    NSError *error = nil;
    AVKeyValueStatus durationStatus = [anAsset statusOfValueForKey:@"duration" error:&error];
    switch (durationStatus) {
        case AVKeyValueStatusLoaded:
            [self updateUserInterfaceForDuration];
            break;
        case AVKeyValueStatusFailed:
            [self reportError:error forAsset:anAsset];
            break;
        case AVKeyValueStatusCancelled:
            // Do whatever is appropriate for cancelation.
            break;
    }
}];
If 'duration' doesn't help, try 'readable' (but as I mentioned before, 'readable' requires iOS 4.3). Maybe this will solve your issue.
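A hedged variation on the same example for the 'readable' key (iOS 4.3+); only the key name and the status check change:

NSArray *keys = [NSArray arrayWithObject:@"readable"];
[anAsset loadValuesAsynchronouslyForKeys:keys completionHandler:^() {
    NSError *error = nil;
    AVKeyValueStatus readableStatus = [anAsset statusOfValueForKey:@"readable" error:&error];
    if (readableStatus == AVKeyValueStatusLoaded && [anAsset isReadable]) {
        // Safe to create and start the AVAssetReader now.
    }
}];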