I am building an application that plays back a song in a multi-track format (drums, vocals, guitar, piano, etc.). I don't need to do any fancy audio processing on each track; all I need is to play, pause, and mute/unmute each track.
I had been using multiple instances of AVAudioPlayer, but during device testing I noticed that the tracks start playing very slightly out of sync, and they drift further out of sync each time I pause and resume them. After a bit of research I've realized that AVAudioPlayer simply has too much latency and won't work for my application.
In my application I basically have an NSArray of AVAudioPlayers that I loop through to play, pause, or stop each one; I'm sure this is what causes them to fall out of sync on the device.
It seemed like Apple's audio mixer would work well for me, but when I try implementing it I get an EXC_BAD_ACCESS error that I can't figure out.
I know the answer is to use OpenAL or Audio Units, but it just seems unnecessary to spend weeks learning those when all I need is to play around five .mp3 tracks at the same time. Does anyone have any suggestions on how to accomplish this? Thanks.
Thanks to admsyn's suggestion I was able to come up with a solution.
AVAudioPlayer has a currentTime property that returns the current playback time of a track and can also be set.
So I implemented startSynchronizedPlayback as suggested by admsyn, and then added the following when stopping the tracks:
- (void)stopAll
{
    NSUInteger count = [tracksArr count];
    for (NSUInteger i = 0; i < count; i++)
    {
        trackModel = [tracksArr objectAtIndex:i];
        // Grab the current time from the first track...
        if (i == 0)
        {
            currentTime = [trackModel currentTime];
        }
        [trackModel stop];
        // ...and rewind every track to that same position
        [trackModel setCurrentTime:currentTime];
    }
}
This code basically loops through my array of tracks, each of which holds its own AVAudioPlayer, grabs the current time from the first track, and then sets all of the following tracks to that time. Now when I use the startSynchronizedPlayback method they all play in sync, and pausing and unpausing keeps them in sync as well. Hope this is helpful to someone else trying to keep tracks in sync.
If you're issuing individual play messages to each AVAudioPlayer, it is entirely likely that the messages are arriving at different times, or that the AVAudioPlayers finish their warm up phase out of sync with each other. You should be using playAtTime: and the deviceCurrentTime property to achieve proper synchronization. Note the description of deviceCurrentTime:
Use this property to indicate “now” when calling the playAtTime: instance method. By configuring multiple audio players to play at a specified offset from deviceCurrentTime, you can perform precise synchronization—as described in the discussion for that method.
Also note the example code in the playAtTime: discussion:
// Before calling this method, instantiate two AVAudioPlayer objects and
// assign each of them a sound.
- (void)startSynchronizedPlayback {
    NSTimeInterval shortStartDelay = 0.01; // seconds
    NSTimeInterval now = player.deviceCurrentTime;
    [player playAtTime:now + shortStartDelay];
    [secondPlayer playAtTime:now + shortStartDelay];
    // Here, update state and user interface for each player, as appropriate
}
If you are able to decode the files to disk, then audio units are probably the solution which would provide the best latency. If you decide to use such an architecture, you should also check out Novocaine:
https://github.com/alexbw/novocaine
That framework takes a lot of the headache out of dealing with audio units.
I've looked at a lot of topics but I still can't figure it out.
I have a UITableView whose content is downloaded online. Each cell has an image, and I use GCD to download the images. Each downloaded image is saved to disk, and before a cell is loaded I check whether the file already exists; if not, I dispatch a GCD block that fetches it with NSData, and so on.
Everything goes well when the user has a good internet connection (Wi-Fi), but when I hop back and forth between views on my crappy 3G connection, the queue keeps working through its current jobs (about 4 cells) while it is continually assigned new ones, and eventually the user has to wait a long time for downloads they will never see before the actual UITableView gets populated. With NSLog I can see that even when I'm in a different view, it is still downloading and creating UIImages that were only visible on the previous screen. Each task is roughly 100 KB, and with a slow (or even no) internet connection that can take a while if there are many of them.
I know it's not possible to cancel a block once dispatched, but I read in other topics about using a BOOL flag, and I don't really get it. Even if the BOOL changes when the user leaves the screen, the blocks are already in the queue, right?
Is it possible that when the user taps the back button in my UINavigationController and leaves the view, I change (empty) the data the queued blocks use, so there is nothing left to download and the blocks finish right away? Something like setting every value in my newsItems array to nil? Is it possible to change the data source like that, or do the waiting blocks already capture their data when they are created?
Then there is another problem: even that would have no effect on the block that is currently executing.
Can someone point me in a good direction?
Thank you.
Prastow
You can make use of NSBlockOperation and NSOperationQueue to create a cancellable download task. You create an NSBlockOperation by giving it a block which performs some work. In your case the block would download the contents of the URL.
In your view controller, you would store a list of the operations that have been submitted to the queue. If the user decides to leave the current view, you can then call cancel on each of the pending operations to prevent any needless work from taking place. The currently running operation will run to completion, however. In order to cancel the currently running operation, you need to store a weak reference to the NSOperation object in the block doing the work. Then, at appropriate intervals within the body of the block, you can check whether the operation has been cancelled and exit early.
// Create a queue on which to run the downloads
NSOperationQueue* queue = [NSOperationQueue new];

// Create an operation without any work to do
NSBlockOperation* downloadImageOperation = [NSBlockOperation new];

// Make a weak reference to the operation. This is used to check if the operation
// has been cancelled from within the block
__weak NSBlockOperation* operation = downloadImageOperation;

// The URL from which to download the image
NSURL* imageURL = [NSURL URLWithString:@"http://www.someaddress.com/image.png"];

// Give the operation some work to do
[downloadImageOperation addExecutionBlock:^{
    // Download the image
    NSData* imageData = [NSData dataWithContentsOfURL:imageURL];

    // Make sure the operation was not cancelled whilst the download was in progress
    if (operation.isCancelled) {
        return;
    }

    // Do something with the image
}];

// Schedule the download by adding the download operation to the queue
[queue addOperation:downloadImageOperation];

// As necessary:
// Cancel the operation if it is not already running
[downloadImageOperation cancel];
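To tie that back to the table view scenario, you might keep the submitted operations in an array on the view controller and cancel them when the view disappears. A sketch only, assuming hypothetical downloadQueue and pendingOperations properties:

// In the view controller (property names are illustrative, not from the original code)
@property (nonatomic, strong) NSOperationQueue* downloadQueue;
@property (nonatomic, strong) NSMutableArray* pendingOperations;

// When scheduling a download
[self.pendingOperations addObject:downloadImageOperation];
[self.downloadQueue addOperation:downloadImageOperation];

// When the user leaves the view
- (void)viewWillDisappear:(BOOL)animated
{
    [super viewWillDisappear:animated];
    // Cancel everything that has not started yet; the running operation
    // will notice isCancelled at its next check and return early
    [self.pendingOperations makeObjectsPerformSelector:@selector(cancel)];
    [self.pendingOperations removeAllObjects];
}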
A good talk on this exact topic was given at WWDC this year entitled "Building Concurrent User Interfaces on iOS". You can find the video and slides here
I faced similar issues with an app I developed a while back and found that the best way to do everything you require, and more, is to use https://github.com/MugunthKumar/MKNetworkKit
It took me the best part of a day to learn and understand the conversion and then a couple more days to tweak it to exactly what I needed.
If you do decide to use it or would like a thorough overview of the capabilities start here
http://blog.mugunthkumar.com/products/ios-framework-introducing-mknetworkkit/
Here is some background information; otherwise skip ahead to the question in bold. I am building an app and I would like it to have access to the remote control/lock screen events. The tricky part is that this app does not play audio itself; it controls the audio of another device nearby. The communication between devices is not a problem when the app is in the foreground. As I just found out, an app does not assume control of the remote controls until it has played audio with a playback audio session and was the last app to do so. This presents a problem because, like I said, the app controls ANOTHER device's audio and has no need to play its own.
My first inclination is to have the app play a silent clip every time it is opened in order to assume control of the remote controls. The fact that I have to do this makes me wonder if I am even going to be allowed to do it by Apple or if there is another way to achieve this without fooling the system with fake audio clips.
QUESTION(S): Will Apple approve an app that plays a silent audio clip in order to assume control of the remote/lock screen controls for the purpose of controlling another device's audio? Is there any way of assuming control of the remote controls without an audio session?
P.S. I would prefer to have this functionality on iOS 4.0 and up.
P.P.S I have seen this similar question and it has gotten me brainstorming but the answer provided is not specific to what I need to know.
NOTE: As of iOS 7.1, you should be using MPRemoteCommandCenter instead of the answer below.
You attach handlers to the various system-provided MPRemoteCommand objects exposed as properties of [MPRemoteCommandCenter sharedCommandCenter].
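For example, a minimal sketch of hooking up the play and pause commands (the handler bodies are placeholders for your app-specific logic):

#import <MediaPlayer/MediaPlayer.h>

// Somewhere during setup, e.g. in your audio controller
MPRemoteCommandCenter *commandCenter = [MPRemoteCommandCenter sharedCommandCenter];

// React to the lock screen / control center play and pause buttons
[commandCenter.playCommand addTargetWithHandler:^MPRemoteCommandHandlerStatus(MPRemoteCommandEvent *event) {
    // Tell the nearby device to start playing (app-specific, illustrative)
    return MPRemoteCommandHandlerStatusSuccess;
}];
[commandCenter.pauseCommand addTargetWithHandler:^MPRemoteCommandHandlerStatus(MPRemoteCommandEvent *event) {
    // Tell the nearby device to pause (app-specific, illustrative)
    return MPRemoteCommandHandlerStatusSuccess;
}];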
I'm keeping the rest of this around for historical reference, but the following is not guaranteed to work on recent iOS versions. In fact, it just might not.
You definitely do need an audio player but not necessarily an explicit session to take control of the remote control events. (AVAudioSession is implicit to any app that plays audio.) I spent a decent amount of time playing with this to confirm this.
I've seen a lot of confusion on the internet about where to implement the remoteControlReceivedWithEvent: method and about various approaches to the responder chain. I know this method works on iOS 6 and iOS 7; other approaches have not. Don't waste your time handling remote control events in the app delegate (where they used to work) or in a view controller, which may go away during the lifecycle of your app.
I made a demo project to show how to do this.
Here's a quick rundown of what has to happen:
You need to create a subclass of UIApplication. When the documentation says UIResponder, it means UIApplication, since your application class is a subclass of UIResponder. In this subclass, you're going to implement the remoteControlReceivedWithEvent: and canBecomeFirstResponder methods. You want to return YES from canBecomeFirstResponder. In the remote control method, you'll probably want to notify your audio player that something's changed.
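A sketch of what that subclass might look like (the class name matches the RCApplication used below; the notification name is purely illustrative):

// RCApplication.h
#import <UIKit/UIKit.h>

@interface RCApplication : UIApplication
@end

// RCApplication.m
#import "RCApplication.h"

@implementation RCApplication

- (BOOL)canBecomeFirstResponder {
    return YES;
}

- (void)remoteControlReceivedWithEvent:(UIEvent *)event {
    if (event.type == UIEventTypeRemoteControl) {
        // Forward the event to whatever object controls playback,
        // e.g. via NSNotificationCenter (notification name is illustrative)
        [[NSNotificationCenter defaultCenter] postNotificationName:@"RCRemoteControlEvent"
                                                            object:event];
    }
}

@end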
You need to tell iOS to use your custom class to run the app, instead of the default UIApplication. To do so, open main.m and change this:
return UIApplicationMain(argc, argv, nil, NSStringFromClass([RCAppDelegate class]));
to look like this:
return UIApplicationMain(argc, argv, NSStringFromClass([RCApplication class]), NSStringFromClass([RCAppDelegate class]));
In my case RCApplication is the name of my custom class. Use the name of your subclass instead. Don't forget to #import the appropriate header.
OPTIONAL: You should configure an audio session. It's not required, but if you don't, audio won't play if the phone is muted. I do this in the demo app's delegate, but do so where appropriate.
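A minimal sketch of that session setup, assuming AVFoundation is linked (error handling kept to a bare minimum):

#import <AVFoundation/AVFoundation.h>

// In the app delegate, or wherever is appropriate for your app
NSError *error = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];

// The Playback category keeps audio going even when the ringer switch is muted
[session setCategory:AVAudioSessionCategoryPlayback error:&error];
[session setActive:YES error:&error];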
Play something. Until you do, the remote controls will ignore your app. I just took an AVPlayer and gave it the URL of a streaming site that I expect to be up. If you find that it fails, put your own URL in there and play with it to your heart's content.
This example has a little bit more code in there to log out remote events, but it's not all that complicated. I just define and pass around some string constants.
I bet that a silent looping MP3 file would help work towards your goal.
Moshe's solution worked great for me! However, one issue I noticed is that when you paused the audio, the media controls would go away and you wouldn't be able to play it again without going back into the app. If you set the now-playing info for the lock screen when you start playing the audio, this won't happen:
NSDictionary *mediaInfo = @{MPMediaItemPropertyTitle: @"My Title",
                            MPMediaItemPropertyAlbumTitle: @"My Album Name",
                            MPMediaItemPropertyPlaybackDuration: [NSNumber numberWithFloat:0.30f]};
[[MPNowPlayingInfoCenter defaultCenter] setNowPlayingInfo:mediaInfo];
I have tried this project on both Android and iOS with little success. There is a good chance that this stuff is just over my head, but I figured I would post my question here as a last effort.
I'm trying to figure out when a device is rotated or flipped. My app should know when it does a 180 or a 360, or when the device is flipped vertically.
In an attempt to understand the way it's supposed to work, I downloaded two example projects: AccelerometerGraph and CoreMotionTeapot. With these and a mix of other things I figured out, I was trying this:
motionManager = [[CMMotionManager alloc] init];
motionManager.accelerometerUpdateInterval = 0.01;
motionManager.deviceMotionUpdateInterval = 0.01;

[motionManager startDeviceMotionUpdates];

if (motionManager.gyroAvailable) {
    motionManager.gyroUpdateInterval = 1.0/60.0;
    motionManager.deviceMotionUpdateInterval = 0.01;

    [motionManager startGyroUpdatesToQueue:[NSOperationQueue currentQueue]
                               withHandler:^(CMGyroData *gyroData, NSError *error)
     {
         CMRotationRate rotate = gyroData.rotationRate;
         NSLog(@"rotation rate = [%f, %f, %f]", rotate.x, rotate.y, rotate.z);
     }];
} else {
    NSLog(@"No gyroscope on device.");
}
But I do not know how to derive the requested information (horizontal and vertical rotations) from these three values (x, y, z).
What you're attempting is not trivial, but is certainly possible. This video should be very helpful in understanding the capabilities of the device and how to get closer to your goal:
http://www.youtube.com/watch?v=C7JQ7Rpwn2k
While he's talking about Android, the same concepts apply to the iPhone.
From Apple's documentation, the CMMotionManager Class Reference (sorry, lots of reading; I've bolded some sentences for quick skimming):
After creating an instance of CMMotionManager, an application can use it to receive four types of motion: raw accelerometer data, raw gyroscope data, raw magnetometer data, and processed device-motion data (which includes accelerometer, rotation-rate, and attitude measurements). The processed device-motion data provided by Core Motion’s sensor fusion algorithms gives the device’s attitude, rotation rate, calibrated magnetic fields, the direction of gravity, and the acceleration the user is imparting to the device.
Important An application should create only a single instance of the CMMotionManager class. Multiple instances of this class can affect the rate at which data is received from the accelerometer and gyroscope.
An application can take one of two approaches when receiving motion data, by handling it at specified update intervals or periodically sampling the motion data. With both of these approaches, the application should call the appropriate stop method (stopAccelerometerUpdates, stopGyroUpdates, stopMagnetometerUpdates, and stopDeviceMotionUpdates) when it has finished processing accelerometer, rotation-rate, magnetometer, or device-motion data.
Handling Motion Updates at Specified Intervals
To receive motion data at specific intervals, the application calls a “start” method that takes an operation queue (instance of NSOperationQueue) and a block handler of a specific type for processing those updates. The motion data is passed into the block handler. The frequency of updates is determined by the value of an “interval” property.
Accelerometer. Set the accelerometerUpdateInterval property to specify an update interval. Call the startAccelerometerUpdatesToQueue:withHandler: method, passing in a block of type CMAccelerometerHandler. Accelerometer data is passed into the block as CMAccelerometerData objects.
Gyroscope. Set the gyroUpdateInterval property to specify an update interval. Call the startGyroUpdatesToQueue:withHandler: method, passing in a block of type CMGyroHandler. Rotation-rate data is passed into the block as CMGyroData objects.
Magnetometer. Set the magnetometerUpdateInterval property to specify an update interval. Call the startMagnetometerUpdatesToQueue:withHandler: method, passing a block of type CMMagnetometerHandler. Magnetic-field data is passed into the block as CMMagnetometerData objects.
Device motion. Set the deviceMotionUpdateInterval property to specify an update interval. Call either the startDeviceMotionUpdatesUsingReferenceFrame:toQueue:withHandler: or the startDeviceMotionUpdatesToQueue:withHandler: method, passing in a block of type CMDeviceMotionHandler. With the former method (new in iOS 5.0), you can specify a reference frame to be used for the attitude estimates. Device-motion data is passed into the block as CMDeviceMotion objects.
Periodic Sampling of Motion Data
To handle motion data by periodic sampling, the application calls a “start” method taking no arguments and periodically accesses the motion data held by a property for a given type of motion data. This is the recommended approach for applications such as games. Handling accelerometer data in a block introduces additional overhead, and most game applications are interested only in the latest sample of motion data when they render a frame. (A sketch of this approach follows the list below.)
Accelerometer. Call startAccelerometerUpdates to begin updates and periodically access CMAccelerometerData objects by reading the accelerometerData property.
Gyroscope. Call startGyroUpdates to begin updates and periodically access CMGyroData objects by reading the gyroData property.
Magnetometer. Call startMagnetometerUpdates to begin updates and periodically access CMMagnetometerData objects by reading the magnetometerData property.
Device motion. Call the startDeviceMotionUpdatesUsingReferenceFrame: or startDeviceMotionUpdates method to begin updates and periodically access CMDeviceMotion objects by reading the deviceMotion property. The startDeviceMotionUpdatesUsingReferenceFrame: method (new in iOS 5.0) lets you specify a reference frame to be used for the attitude estimates.
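A sketch of that periodic-sampling approach, assuming you poll from something like a CADisplayLink callback (the update: method and motionManager ivar are placeholders):

// Start updates once, e.g. when the view appears
[motionManager startDeviceMotionUpdates];

// Then poll the latest sample each frame, e.g. from a CADisplayLink callback
- (void)update:(CADisplayLink *)link
{
    CMDeviceMotion *motion = motionManager.deviceMotion;
    if (motion == nil) {
        return; // no sample available yet
    }
    CMAttitude *attitude = motion.attitude;
    NSLog(@"roll=%f pitch=%f yaw=%f", attitude.roll, attitude.pitch, attitude.yaw);
}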
About gathering the data :
@property(readonly) CMGyroData *gyroData
Discussion
If no gyroscope data is available, the value of this property is nil. An application that is receiving gyroscope data after calling startGyroUpdates periodically checks the value of this property and processes the gyroscope data.
So you should have something like
gyroData.rotationRate.x
gyroData.rotationRate.y
gyroData.rotationRate.z
by storing them and comparing them periodically you should be able to see if the device flipped around an axis, etc.
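As a rough illustration of that idea, you could integrate the gyro's rotation rate over time and watch the accumulated angle around one axis. A sketch only; the axis choice, thresholds, and reset logic are assumptions you would have to tune:

// Accumulate rotation around the device's z axis by integrating the gyro's
// rotation rate (radians/second) over the update interval.
__block double accumulatedZ = 0.0;    // radians rotated so far
NSTimeInterval interval = 1.0 / 60.0; // must match gyroUpdateInterval

motionManager.gyroUpdateInterval = interval;
[motionManager startGyroUpdatesToQueue:[NSOperationQueue currentQueue]
                           withHandler:^(CMGyroData *gyroData, NSError *error)
 {
     accumulatedZ += gyroData.rotationRate.z * interval;

     if (fabs(accumulatedZ) >= 2.0 * M_PI) {
         NSLog(@"Device did a full 360 around the z axis");
         accumulatedZ = 0.0; // reset and wait for the next trick
     } else if (fabs(accumulatedZ) >= M_PI) {
         NSLog(@"Device has turned at least 180 around the z axis");
     }
 }];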
It all depends on the iPhone's position. Say the phone gets flipped 360° around the y axis: the compass won't change, because it will still be pointing the same way throughout the flip. And that's not all. My hint is to log the accelerometer data, compare what you've collected with the movement that was performed, and then identify the stages of the trick and build a list of stages for each trick.
Then maybe what you're looking for is just the device orientation. You should look at the UIDevice Class Reference. In particular the
– beginGeneratingDeviceOrientationNotifications
– endGeneratingDeviceOrientationNotifications
methods.
and use it like this:
[UIDevice currentDevice].orientation
In return you'll get one of these possible values:
typedef enum {
    UIDeviceOrientationUnknown,
    UIDeviceOrientationPortrait,
    UIDeviceOrientationPortraitUpsideDown,
    UIDeviceOrientationLandscapeLeft,
    UIDeviceOrientationLandscapeRight,
    UIDeviceOrientationFaceUp,
    UIDeviceOrientationFaceDown
} UIDeviceOrientation;
So you'll be able to check if it's in portrait (up or down) or landscape (left or right) and if it has been flipped.
You'll also be able to implement the following UIViewController methods:
- willRotateToInterfaceOrientation:duration:
- didRotateFromInterfaceOrientation:
You can look in this link to check how you can implement the methods.
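If you go the orientation-notification route, the setup might look roughly like this (the orientationChanged: selector is just a placeholder):

// Start generating orientation notifications and observe them, e.g. in viewDidLoad
[[UIDevice currentDevice] beginGeneratingDeviceOrientationNotifications];
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(orientationChanged:)
                                             name:UIDeviceOrientationDidChangeNotification
                                           object:nil];

// Placeholder handler
- (void)orientationChanged:(NSNotification *)notification
{
    UIDeviceOrientation orientation = [UIDevice currentDevice].orientation;
    if (orientation == UIDeviceOrientationFaceDown) {
        NSLog(@"The device has been flipped face down");
    }
}

// When you no longer need the updates, e.g. in dealloc
[[NSNotificationCenter defaultCenter] removeObserver:self
                                                name:UIDeviceOrientationDidChangeNotification
                                              object:nil];
[[UIDevice currentDevice] endGeneratingDeviceOrientationNotifications];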
I'm familiar with how to stream audio data from the ipod library using AVAssetReader, but I'm at a loss as to how to seek within the track. e.g. start playback at the halfway point, etc. Starting from the beginning and then sequentially getting successive samples is easy, but surely there must be a way to have random access?
AVAssetReader has a property, timeRange, which determines the time range of the asset from which media data will be read.
@property(nonatomic) CMTimeRange timeRange
The intersection of the value of this property and CMTimeRangeMake(kCMTimeZero, asset.duration) determines the time range of the asset from which media data will be read.
The default value is CMTimeRangeMake(kCMTimeZero, kCMTimePositiveInfinity). You cannot change the value of this property after reading has started.
So, if you want to seek to the middle of the track, you'd create a CMTimeRange spanning from asset.duration/2 to asset.duration and set that as the timeRange on the AVAssetReader.
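For example, assuming you already have the AVAsset and are creating the reader (a sketch; output setup is omitted):

#import <AVFoundation/AVFoundation.h>

NSError *error = nil;
AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset error:&error];

// Start halfway through and read until the end of the asset
CMTime halfway = CMTimeMultiplyByRatio(asset.duration, 1, 2);
CMTime remaining = CMTimeSubtract(asset.duration, halfway);
reader.timeRange = CMTimeRangeMake(halfway, remaining);

// Add outputs and call startReading as usual; timeRange must be set
// before startReading is called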
AVAssetReader is amazingly slow when seeking. If you try to recreate an AVAssetReader to seek while the user is dragging a slider, your app will bring iOS to its knees.
Instead, you should use an AVAssetReader for fast forward only access to video frames, and then also use an AVPlayerItem and AVPlayerItemVideoOutput when the user wants to seek with a slider.
It would be nice if Apple combined AVAssetReader and AVPlayerItem / AVPlayerItemVideoOutput into a new class that was performant and was able to seek quickly.
Be aware that AVPlayerItemVideoOutput will not give back pixel buffers unless there is an AVPlayer attached to the AVPlayerItem. This is obviously a strange implementation detail, but it is what it is.
If you are using AVPlayer and AVPlayerLayer, then you can simply use the seek methods on AVPlayer itself. The above details are only important if you are doing custom rendering with the pixel buffers and/or need to send the pixel buffers to an AVAssetWriter.
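For the simple AVPlayer case, a seek might look like this (a sketch; the zero tolerances request precise positioning at the cost of speed):

// Jump to the halfway point of the current item
CMTime halfway = CMTimeMultiplyByRatio(player.currentItem.duration, 1, 2);
[player seekToTime:halfway
   toleranceBefore:kCMTimeZero
    toleranceAfter:kCMTimeZero
 completionHandler:^(BOOL finished) {
     if (finished) {
         NSLog(@"Seek completed");
     }
 }];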
I just began learning about AudioQueues from the CoreAudio book (rough cuts).
I did the AudioQueue playback example tutorial, which is basically the same as Apple's tutorial example. Everything is working fine.
The problems start when I try to implement the code in an app with a GUI. I tested it by pasting the code into the init method of an NSObject subclass. The only way I can get the queue to run its callback is by inserting an empty DO...WHILE loop at the end of my init, but that makes the GUI freeze (obviously...)!
Apparently the AudioQueue is supposed to run on its own separate thread automatically as long as AudioQueueNewOutput is passed NULL for the inCallbackRunLoop and inCallbackRunLoopMode arguments. That's just not happening: I only hear the 1.5 seconds of audio from the priming of the buffers.
Clearly, there is something fundamental that I don't understand about how things work...
Kasper
- (void)start
{
    CheckError(AudioQueueStart(queue, NULL),
               "AudioQueueStart failed");

    printf("Playing...\n");

    do {
    } while (0 == 0); // WHY IS THIS MAKING IT PLAY???
}
Turns out that my userData struct wasn't declared as an ivar in the header file. Rookie mistake...
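In other words, the struct the callback receives has to outlive the method that creates the queue. A sketch of the fix, with illustrative names loosely following the Core Audio book's example:

// MyPlayer.h
#import <Foundation/Foundation.h>
#import <AudioToolbox/AudioToolbox.h>

// Illustrative user-data struct, roughly as in the playback example
typedef struct MyPlayerState {
    AudioFileID                  playbackFile;
    SInt64                       packetPosition;
    UInt32                       numPacketsToRead;
    AudioStreamPacketDescription *packetDescs;
    Boolean                      isDone;
} MyPlayerState;

@interface MyPlayer : NSObject
{
    // Declared as ivars so they live as long as the object,
    // not just as long as the method that created the queue
    MyPlayerState playerState;
    AudioQueueRef queue;
}

- (void)start;

@end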