How to seek within an audio track using AVAssetReader? - objective-c

I'm familiar with how to stream audio data from the iPod library using AVAssetReader, but I'm at a loss as to how to seek within the track -- e.g. start playback at the halfway point. Starting from the beginning and then sequentially reading successive samples is easy, but surely there must be a way to get random access?

AVAssetReader has a timeRange property, which determines the time range of the asset from which media data will be read:
@property(nonatomic) CMTimeRange timeRange
The intersection of the value of this property and CMTimeRangeMake(kCMTimeZero, asset.duration) determines the time range of the asset from which media data will be read. The default value is CMTimeRangeMake(kCMTimeZero, kCMTimePositiveInfinity). You cannot change the value of this property after reading has started.
So, if you want to seek to the middle of the track, you'd create a CMTimeRange that starts at half of asset.duration and runs to the end of the asset, and set that as the timeRange on the AVAssetReader before you start reading.
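A minimal sketch of that (assuming asset is your AVURLAsset and output is a freshly created AVAssetReaderTrackOutput for its audio track):

NSError *error = nil;
AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset error:&error];
[reader addOutput:output];

// Start halfway through and read to the end of the track.
CMTime duration = asset.duration;
CMTime start = CMTimeMake(duration.value / 2, duration.timescale);
reader.timeRange = CMTimeRangeMake(start, kCMTimePositiveInfinity);

if ([reader startReading]) {
    CMSampleBufferRef sampleBuffer = NULL;
    while ((sampleBuffer = [output copyNextSampleBuffer])) {
        // ... process/play the audio samples ...
        CFRelease(sampleBuffer);
    }
}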

AVAssetReader is amazingly slow when seeking. If you try to recreate an AVAssetReader to seek while the user is dragging a slider, your app will bring iOS to its knees.
Instead, you should use AVAssetReader for fast, forward-only access to video frames, and use an AVPlayerItem plus AVPlayerItemVideoOutput when the user wants to seek with a slider.
It would be nice if Apple combined AVAssetReader and AVPlayerItem / AVPlayerItemVideoOutput into a new class that was performant and was able to seek quickly.
Be aware that AVPlayerItemVideoOutput will not give back pixel buffers unless there is an AVPlayer attached to the AVPlayerItem. This is obviously a strange implementation detail, but it is what it is.
If you are using AVPlayer and AVPlayerLayer, then you can simply use the seek methods on AVPlayer itself. The above details are only important if you are doing custom rendering with the pixel buffers and/or need to send the pixel buffers to an AVAssetWriter.
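For instance, a rough sketch of slider-driven seeking (player and slider here are assumed to be your AVPlayer and a UISlider with a 0..1 range):

// Rough sketch: scrub with AVPlayer while the user drags a slider.
CMTime duration = player.currentItem.duration;
Float64 seconds = CMTimeGetSeconds(duration) * slider.value;
CMTime target = CMTimeMakeWithSeconds(seconds, duration.timescale);

// Zero tolerance gives the most accurate seek; pass kCMTimePositiveInfinity
// for both tolerances if you prefer faster, keyframe-aligned seeking.
[player seekToTime:target
   toleranceBefore:kCMTimeZero
    toleranceAfter:kCMTimeZero];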

Related

Efficient Cocoa Animation w/raw bitmap data

I have a raw bitmap image of RGBA malloc-ed data; rows are obviously a multiple of 4 bytes. This data actually originates from an AVI (24-bit BGR format), but I convert it to 32-bit ARGB. There's about 8mb of 32-bit data (1920x1080) per frame.
For each frame:
I convert that frame's data into an NSData object via -[NSData initWithBytes:length:].
I then convert that into a CIImage object via +[CIImage imageWithBitmapData:bytesPerRow:size:format:colorSpace:].
From that CIImage, I draw it into my final NSOpenGLView context using drawImage:inRect:fromRect:. Due to the "mosaic" nature of the target images, there are approximately 15-20 such calls per frame with various source/destination rects.
Using a 30 Hz NSTimer that calls [self setNeedsDisplay:YES] on the NSOpenGLView, I can attain about 20-25 fps on a 2012 Mac mini (2.6 GHz i7) -- it's not rock solid at 30 Hz. This is to be expected with an NSTimer instead of a CVDisplayLink.
But... ignoring the NSTimer issue for now, are there any suggestions/pointers on making this frame-by-frame rendering a little more efficient?
Thanks!
NB: I would like to stick with CIImage objects as I'll want to access transition effects at some point.
Every frame, the call to NSData's initWithBytes:length: causes an 8 MB memory allocation and an 8 MB copy.
You can get rid of this per-frame allocation/copy by replacing the NSData object with a persistent NSMutableData object (set up once at the beginning), and using its mutableBytes as the destination buffer for the frame's 24- to 32-bit conversion.
(Alternatively, if you prefer to manage the destination-buffer memory yourself, keep the object as plain NSData, but initialize it with initWithBytesNoCopy:length:freeWhenDone: and pass NO as the last parameter.)
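A rough sketch of that approach (width, height, aviFramePtr, and ConvertBGRToARGB are placeholders for your own frame size, source pointer, and conversion routine):

// Set up once, outside the per-frame path.
size_t bytesPerRow = width * 4;
NSMutableData *frameBuffer = [NSMutableData dataWithLength:bytesPerRow * height];
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); // release when done

// Per frame: convert straight into the existing buffer (no alloc, no copy)...
ConvertBGRToARGB(aviFramePtr, frameBuffer.mutableBytes, width, height);

// ...then wrap the same buffer in a CIImage for drawing.
CIImage *image = [CIImage imageWithBitmapData:frameBuffer
                                  bytesPerRow:bytesPerRow
                                         size:CGSizeMake(width, height)
                                       format:kCIFormatARGB8
                                   colorSpace:colorSpace];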

multi track mp3 playback for iOS application

I am doing an application that involves playing back a song in a multi track format (drums, vocals, guitar, piano, etc...). I don't need to do any fancy audio processing to each track, all I need to be able to do is play, pause, and mute/unmute each track.
I had been using multiple instances of AVAudioPlayer, but when performing device testing I noticed that the tracks play very slightly out of sync when they are first started. Furthermore, when I pause and resume the tracks they continue to drift further out of sync. After a bit of research I've realized that AVAudioPlayer just has too much latency and won't work for my application.
In my application I basically had an NSArray of AVAudioPlayers that I would loop through and play each one or pause/stop each one, I'm sure this is what caused it to get out of sync on the device.
It seemed like Apple's audio mixer would work well for me, but when I try implementing it I get an EXC_BAD_ACCESS error that I can't figure out.
I know the answer is to use OpenAL or Audio Units, but it just seems unnecessary to spend weeks learning about these when all I need to do is play around 5 .mp3 tracks at the same time. Does anyone have any suggestions on how to accomplish this? Thanks
Thanks to admsyn's suggestion I was able to come up with a solution.
AVAudioPlayer has a currentTime property that returns the current time of a track and can also be set.
So I implemented startSynchronizedPlayback as suggested by admsyn and then added the following when I stopped the tracks:
- (void)stopAll
{
    int count = [tracksArr count];
    for (int i = 0; i < count; i++)
    {
        trackModel = [tracksArr objectAtIndex:i];
        if (i == 0)
        {
            currentTime = [trackModel currentTime];
        }
        [trackModel stop];
        [trackModel setCurrentTime:currentTime];
    }
}
This code basically loops through my array of tracks (each of which holds its own AVAudioPlayer), grabs the current time from the first track, then sets all of the following tracks to that time. Now when I use the startSynchronizedPlayback method they all play in sync, and pausing and unpausing keeps them in sync as well. Hope this is helpful to someone else trying to keep tracks in sync.
If you're issuing individual play messages to each AVAudioPlayer, it is entirely likely that the messages are arriving at different times, or that the AVAudioPlayers finish their warm up phase out of sync with each other. You should be using playAtTime: and the deviceCurrentTime property to achieve proper synchronization. Note the description of deviceCurrentTime:
Use this property to indicate “now” when calling the playAtTime: instance method. By configuring multiple audio players to play at a specified offset from deviceCurrentTime, you can perform precise synchronization—as described in the discussion for that method.
Also note the example code in the playAtTime: discussion:
// Before calling this method, instantiate two AVAudioPlayer objects and
// assign each of them a sound.
- (void)startSynchronizedPlayback {
    NSTimeInterval shortStartDelay = 0.01; // seconds
    NSTimeInterval now = player.deviceCurrentTime;
    [player playAtTime:now + shortStartDelay];
    [secondPlayer playAtTime:now + shortStartDelay];
    // Here, update state and user interface for each player, as appropriate
}
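The same idea generalized to an array of players might look roughly like this (players is assumed to hold one prepared AVAudioPlayer per track):

- (void)startSynchronizedPlaybackOfAllTracks {
    NSTimeInterval shortStartDelay = 0.01; // seconds
    AVAudioPlayer *firstPlayer = [players objectAtIndex:0];
    NSTimeInterval now = firstPlayer.deviceCurrentTime; // all players share this clock
    for (AVAudioPlayer *player in players) {
        [player prepareToPlay];
        [player playAtTime:now + shortStartDelay];
    }
}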
If you are able to decode the files to disk, then audio units are probably the solution which would provide the best latency. If you decide to use such an architecture, you should also check out Novocaine:
https://github.com/alexbw/novocaine
That framework takes a lot of the headache out of dealing with audio units.

Get Volume of AVPlayer in iOS

There are plenty of questions asking how to set the volume of an AVPlayer, but how do you get the current volume of the player in iOS?
For example, I am trying to fade a song out from its current level. I could save the volume elsewhere and refer to it, but would rather read the value directly from the AVPlayer.
An AVPlayer contains one or more AVPlayerItem objects, and it is through these objects that you can get and set audio levels for media played by an AVPlayer. Head to the AVPlayerItem docs and look at the audioMix property, and also check out my answer to a slightly different question that should still provide some info.
Following up after your comment, this is (I think) how you would get the volume values from the - (BOOL)getVolumeRampForTime:(CMTime)time startVolume:(float *)startVolume endVolume:(float *)endVolume timeRange:(CMTimeRange *)timeRange method:
// Get your AVAudioMixInputParameters instance, here called audioMixInputParameters
// currentTime is the current playhead time of your media
float startVolume;
float endVolume;
CMTimeRange timeRange;
BOOL success = [audioMixInputParameters getVolumeRampForTime:currentTime
                                                 startVolume:&startVolume
                                                   endVolume:&endVolume
                                                   timeRange:&timeRange];
// startVolume and endVolume should now be set
NSLog(@"Start volume: %f | End volume: %f", startVolume, endVolume);
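Since the goal is to fade out from the current level, a rough, untested sketch of the other half (setting a ramp rather than reading one) might look like this; currentLevel is whatever value you obtained above, and it assumes the asset has a single audio track:

// Fade the current item to silence over 2 seconds, starting now.
AVPlayerItem *item = player.currentItem;
AVAssetTrack *audioTrack =
    [[item.asset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];

AVMutableAudioMixInputParameters *params =
    [AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:audioTrack];
[params setVolumeRampFromStartVolume:currentLevel
                         toEndVolume:0.0
                           timeRange:CMTimeRangeMake(item.currentTime,
                                                     CMTimeMakeWithSeconds(2.0, 600))];

AVMutableAudioMix *audioMix = [AVMutableAudioMix audioMix];
audioMix.inputParameters = [NSArray arrayWithObject:params];
item.audioMix = audioMix;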
Apple's AVPlayer documentation for OS X lists a volume property, but the documentation for the same class on iOS doesn't show one. Would your project allow you to use AVAudioPlayer instead? That class does have a synthesized volume property on iOS that's much more easily set and retrieved.
You could use the volume property of the AVPlayer class. Here's the AVPlayer class reference link. Quoting from it:
volume
Indicates the current audio volume of the player.
@property(nonatomic) float volume
Discussion
0.0 means “silence all audio,” 1.0 means “play at the full volume of the current item.”
Availability
Available in OS X v10.7 and later.
Declared In
AVPlayer.h
edit:
You could try getting the system volume instead. This link provides two ways.

Frame synchronization with AVPlayer

I'm having an issue syncing external content in a CALayer with an AVPlayer at high precision.
My first thought was to lay out an array of frames (equal to the number of frames in the video) within a CAKeyframeAnimation and sync with an AVSynchronizedLayer. However, upon stepping through the video frame-by-frame, it appears that AVPlayer and Core Animation redraw on different cycles, as there is a slight (but noticeable) delay between them before they sync up.
Short of processing and displaying through Core Video, is there a way to accurately sync with an AVPlayer on the frame level?
Update: February 5, 2012
So far the best way I've found to do this is to pre-render through AVAssetExportSession coupled with AVVideoCompositionCoreAnimationTool and a CAKeyframeAnimation.
I'm still very interested in learning of any real-time ways to do this, however.
What do you mean by 'high precision'?
Although the docs claim that an AVAssetReader is not designed for real-time usage, in practice I have had no problems reading video in real-time using it (cf https://stackoverflow.com/a/4216161/42961). The returned frames come with a 'Presentation timestamp' which you can fetch using CMSampleBufferGetPresentationTimeStamp.
You'll want one part of the project to be the 'master' timekeeper here. Assuming your CALayer animation is quick to compute and doesn't involve potentially blocking things like disk access, I'd use that as the master time source. When you need to draw content (e.g. in the draw selector on your UIView subclass), read currentTime from the CALayer animation, advance through the AVAssetReader's video frames using copyNextSampleBuffer until CMSampleBufferGetPresentationTimeStamp returns a value >= currentTime, draw that frame, and then draw the CALayer animation content over the top.
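A very rough sketch of that catch-up loop (readerOutput is your AVAssetReaderTrackOutput, and masterTime is the master-clock time in seconds, e.g. the animation's currentTime):

CMSampleBufferRef frame = NULL;
CMSampleBufferRef candidate = NULL;
while ((candidate = [readerOutput copyNextSampleBuffer])) {
    CMTime pts = CMSampleBufferGetPresentationTimeStamp(candidate);
    if (frame) CFRelease(frame);      // drop frames we've fallen behind on
    frame = candidate;
    if (CMTimeGetSeconds(pts) >= masterTime) {
        break;                        // first frame at or after the master clock
    }
}
if (frame) {
    // ... draw this frame, then draw the CALayer animation content on top ...
    CFRelease(frame);
}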
If your player is using an AVURLAsset, did you load it with the precise duration flag set? I.e. something like:
NSDictionary *options = [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES] forKey:AVURLAssetPreferPreciseDurationAndTimingKey];
AVURLAsset *urlAsset = [AVURLAsset URLAssetWithURL:aUrl options:options];

motion sensor, reading rotations

I have tried this project on both Android and iOS with little success. There is a good chance that this stuff is just over my head. However, I figured I would post my question on here as a last effort.
I'm trying to figure out when a device is rotated or flipped. My app should know when it did a 180, 360 or if the device was flipped vertically.
In an attempt to understand the way it's supposed to work, I tried downloading two example projects: AccelerometerGraph and CoreMotionTeapot. With these and a mix of other things I figured out, I was trying this:
motionManager = [[CMMotionManager alloc] init];
motionManager.accelerometerUpdateInterval = 0.01;
motionManager.deviceMotionUpdateInterval = 0.01;
[motionManager startDeviceMotionUpdates];

if (motionManager.gyroAvailable) {
    motionManager.gyroUpdateInterval = 1.0 / 60.0;
    motionManager.deviceMotionUpdateInterval = 0.01;
    [motionManager startGyroUpdatesToQueue:[NSOperationQueue currentQueue]
                               withHandler:^(CMGyroData *gyroData, NSError *error)
    {
        CMRotationRate rotate = gyroData.rotationRate;
        NSLog(@"rotation rate = [%f, %f, %f]", rotate.x, rotate.y, rotate.z);
    }];
} else {
    NSLog(@"No gyroscope on device.");
}
But I do not know how to derive the information I'm after (horizontal and vertical rotations) from these three values (x, y, z).
What you're attempting is not trivial, but is certainly possible. This video should be very helpful in understanding the capabilities of the device and how to get closer to your goal:
http://www.youtube.com/watch?v=C7JQ7Rpwn2k
While he's talking about Android, the same concepts apply to the iPhone.
From Apple's documentation, the CMMotionManager Class Reference (sorry, it's a lot of reading; I've bolded some sentences for quick skimming):
After creating an instance of CMMotionManager, an application can use it to receive four types of motion: raw accelerometer data, raw gyroscope data, raw magnetometer data, and processed device-motion data (which includes accelerometer, rotation-rate, and attitude measurements). The processed device-motion data provided by Core Motion’s sensor fusion algorithms gives the device’s attitude, rotation rate, calibrated magnetic fields, the direction of gravity, and the acceleration the user is imparting to the device.
Important An application should create only a single instance of the CMMotionManager class. Multiple instances of this class can affect the rate at which data is received from the accelerometer and gyroscope.
An application can take one of two approaches when receiving motion data, by handling it at specified update intervals or periodically sampling the motion data. With both of these approaches, the application should call the appropriate stop method (stopAccelerometerUpdates, stopGyroUpdates, stopMagnetometerUpdates, and stopDeviceMotionUpdates) when it has finished processing accelerometer, rotation-rate, magnetometer, or device-motion data.
Handling Motion Updates at Specified Intervals
To receive motion data at specific intervals, the application calls a “start” method that takes an operation queue (instance of NSOperationQueue) and a block handler of a specific type for processing those updates. The motion data is passed into the block handler. The frequency of updates is determined by the value of an “interval” property.
Accelerometer. Set the accelerometerUpdateInterval property to specify an update interval. Call the startAccelerometerUpdatesToQueue:withHandler: method, passing in a block of type CMAccelerometerHandler. Accelerometer data is passed into the block as CMAccelerometerData objects.
Gyroscope. Set the gyroUpdateInterval property to specify an update interval. Call the startGyroUpdatesToQueue:withHandler: method, passing in a block of type CMGyroHandler. Rotation-rate data is passed into the block as CMGyroData objects.
Magnetometer. Set the magnetometerUpdateInterval property to specify an update interval. Call the startMagnetometerUpdatesToQueue:withHandler: method, passing a block of type CMMagnetometerHandler. Magnetic-field data is passed into the block as CMMagnetometerData objects.
Device motion. Set the deviceMotionUpdateInterval property to specify an update interval. Call either the startDeviceMotionUpdatesUsingReferenceFrame:toQueue:withHandler: or the startDeviceMotionUpdatesToQueue:withHandler: method, passing in a block of type CMDeviceMotionHandler. With the former method (new in iOS 5.0), you can specify a reference frame to be used for the attitude estimates. Device-motion data is passed into the block as CMDeviceMotion objects.
Periodic Sampling of Motion Data
To handle motion data by periodic sampling, the application calls a "start" method taking no arguments and periodically accesses the motion data held by a property for a given type of motion data. This is the recommended approach for applications such as games. Handling accelerometer data in a block introduces additional overhead, and most game applications are interested only in the latest sample of motion data when they render a frame.
Accelerometer. Call startAccelerometerUpdates to begin updates and periodically access CMAccelerometerData objects by reading the accelerometerData property.
Gyroscope. Call startGyroUpdates to begin updates and periodically access CMGyroData objects by reading the gyroData property.
Magnetometer. Call startMagnetometerUpdates to begin updates and periodically access CMMagnetometerData objects by reading the magnetometerData property.
Device motion. Call the startDeviceMotionUpdatesUsingReferenceFrame: or startDeviceMotionUpdates method to begin updates and periodically access CMDeviceMotion objects by reading the deviceMotion property. The startDeviceMotionUpdatesUsingReferenceFrame: method (new in iOS 5.0) lets you specify a reference frame to be used for the attitude estimates.
About gathering the data :
@property(readonly) CMGyroData *gyroData
Discussion
If no gyroscope data is available, the value of this property is nil. An application that is receiving gyroscope data after calling startGyroUpdates periodically checks the value of this property and processes the gyroscope data.
So you should have something like
gyroData.rotationRate.x
gyroData.rotationRate.y
gyroData.rotationRate.z
By storing these values and comparing them over time, you should be able to tell whether the device has rotated around an axis, etc.
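For example, a rough sketch that integrates the z-axis rotation rate to spot a full turn (accumulatedZ and lastTimestamp are assumed to be instance variables; the result is approximate and will drift over time):

[motionManager startGyroUpdatesToQueue:[NSOperationQueue mainQueue]
                           withHandler:^(CMGyroData *gyroData, NSError *error)
{
    if (error) return;
    if (lastTimestamp > 0) {
        NSTimeInterval dt = gyroData.timestamp - lastTimestamp;
        accumulatedZ += gyroData.rotationRate.z * dt;   // radians turned so far
    }
    lastTimestamp = gyroData.timestamp;

    if (fabs(accumulatedZ) >= 2 * M_PI) {               // roughly a full 360
        NSLog(@"Completed a full turn around the z axis");
        accumulatedZ = 0.0;
    }
}];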
It all depends on the device's position. Say the phone gets flipped 360 degrees around the y axis: the compass won't change, because it will still be pointing the same way throughout the flip. And that's not all. My hint is to log the accelerometer data, compare what you've collected with the movement that was actually made, and then identify the stages of each trick and build a list of stages per trick.
Then maybe what you're looking for is just the device orientation. You should look at the UIDevice Class Reference. In particular the
– beginGeneratingDeviceOrientationNotifications
– endGeneratingDeviceOrientationNotifications
methods.
and read it like this:
[UIDevice currentDevice].orientation
You'll get in return these possible values :
typedef enum {
    UIDeviceOrientationUnknown,
    UIDeviceOrientationPortrait,
    UIDeviceOrientationPortraitUpsideDown,
    UIDeviceOrientationLandscapeLeft,
    UIDeviceOrientationLandscapeRight,
    UIDeviceOrientationFaceUp,
    UIDeviceOrientationFaceDown
} UIDeviceOrientation;
So you'll be able to check if it's in portrait (up or down) or landscape (left or right) and if it has been flipped.
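A minimal sketch of that (orientationChanged: is just a hypothetical handler name):

- (void)startWatchingOrientation
{
    [[UIDevice currentDevice] beginGeneratingDeviceOrientationNotifications];
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(orientationChanged:)
                                                 name:UIDeviceOrientationDidChangeNotification
                                               object:nil];
}

- (void)orientationChanged:(NSNotification *)note
{
    UIDeviceOrientation orientation = [UIDevice currentDevice].orientation;
    if (orientation == UIDeviceOrientationFaceDown) {
        NSLog(@"Device was flipped face down");
    } else if (UIDeviceOrientationIsLandscape(orientation)) {
        NSLog(@"Device rotated to landscape");
    }
}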
You'll also be able to implement the following UIViewController methods:
- willRotateToInterfaceOrientation:duration:
- didRotateFromInterfaceOrientation:
You can look at this link to see how to implement those methods.