I am running a countdown timer which updates the time on a label. I want to play a tick sound every second. I have the sound file, but it does not sync perfectly with the time. How do I sync the time with the audio? Also, if I use UIImagePickerController the sound stops; how do I manage this? If someone has a tick sound like a clock makes, that would be great.
The best way to sync up your sound and time would be to actually play short (less than a second long) tick sound files once per second as your NSTimer fires. It won't sound as nice as a real clock or chronometer ticking, but it is easy to do, and if the sounds are that small you don't have to worry too much about latency. To be realistic, I think you need to play two ticks per second: the first and second ticks about 0.3 seconds apart, then the third starting on the next second with the fourth again about 0.3 seconds after it, and so on.
For even tighter integration of sounds and GUI, you should read up on Audio Toolbox:
Use the Audio Toolbox framework to play audio with synchronization capabilities, access packets of incoming audio, parse audio streams, convert audio formats, and record audio with access to individual packets. For details, see Audio Toolbox Framework Reference and the SpeakHere sample code project.
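For a concrete starting point, here is a minimal sketch of the approach above, assuming a short "tick.caf" bundled with the app (the file name, the 0.3 s gap, and the choice of System Sound Services are illustrative, not requirements):

    import AudioToolbox
    import Foundation

    // Load the short tick sound once; System Sound Services (part of
    // Audio Toolbox) is designed for sub-second UI sounds like this.
    var tick: SystemSoundID = 0
    let tickURL = Bundle.main.url(forResource: "tick", withExtension: "caf")! as CFURL
    AudioServicesCreateSystemSoundID(tickURL, &tick)

    // Fire once per second, alongside the countdown label update.
    let timer = Timer.scheduledTimer(withTimeInterval: 1.0, repeats: true) { _ in
        AudioServicesPlaySystemSound(tick)                    // first tick
        DispatchQueue.main.asyncAfter(deadline: .now() + 0.3) {
            AudioServicesPlaySystemSound(tick)                // second tick, ~0.3 s later
        }
    }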
Related
In my app users record themselves singing over a backing track, and then later playback the recorded audio and this backing track at the same time. I use expo-av for my audio system. The problem is that at the playback stage the audio is often out of sync because expo only really supports asynchronous audio. Does anyone have any advice on how to approach this problem at a high level?
A few of my ideas:
Mix the two audio files into a single file for playback. This almost works except for the fact that the recording and backing track are also out of sync. If I knew exactly how much they were offset, I could just add that amount of silence to one of the files when mixing. However, I haven't found a way to accurately calculate this offset.
Reduce the time it takes for recording and playback to start, so that the latency is not noticeable. Some things I've found that help here are recording at lower quality and using smaller audio files. Any other tips here would be appreciated.
Use a different audio library than expo-av. Is there one that comes to mind that better supports synchronous audio? Ideally it would also be supported by Expo or at least React Native.
I have a heart-rate monitor, and a short wav file which sounds like a heart beating. This wav file has a duration of 1 second.
I want to play it in time with the user's heart rate. So if their heart rate is 100bpm, the sound should play for 0.6 seconds. If their heart rate is 60bpm, the sound should play for 1.0 seconds. And if their heart rate is 30bpm they are probably about to die.
Is it possible to alter the duration of time it takes to play a sound file? And what framework should I use? AVPlayer?
Check this one: How can I use AVAudioPlayer to play audio faster *and* higher pitched?
It says you can change the .rate property of an AVAudioPlayer to play it faster.
PS: it won't change the pitch of the sound. If you also need that, check the question I mentioned.
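A minimal sketch of that approach, assuming a 1-second "heartbeat.wav" in the bundle (the file name and the helper function are illustrative). Since the file is 1 second long, a rate of bpm/60 yields the target duration of 60/bpm seconds:

    import AVFoundation

    let beatURL = Bundle.main.url(forResource: "heartbeat", withExtension: "wav")!
    let player = try! AVAudioPlayer(contentsOf: beatURL)
    player.enableRate = true      // must be enabled for .rate to take effect
    player.prepareToPlay()

    // For a 1 s file, playing at rate bpm/60 makes it last 60/bpm seconds
    // (100 bpm -> 0.6 s, 60 bpm -> 1.0 s).
    func playBeat(atBPM bpm: Double) {
        player.rate = Float(bpm / 60.0)
        player.currentTime = 0
        player.play()
    }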
I am trying to record Kinect files in .oni format, which I will later try to synchronize with other sensors. As such, it is very important that I get a consistent fps, even if some frames are repeats.
From what I can see now, WaitAndUpdateAll does not guarantee that the frame rate is consistent. I will be recording for several minutes (20+), so I need to make sure there is no drift!
Does anyone know if it's possible to lock down the fps of the recording, and if not, how stable the recording fps of the Kinect is? Thanks!
After some investigation of this issue, I put together the following write-up on the topic:
http://denislantsman.com/?p=50
Putting it here so interested people can find it and not have to wrestle with this issue.
My guess would be to go with the PCL library, since its developers also work with the ROS team, where they have to sync sensors a lot. But be warned: I wasn't able to capture XYZRGB clouds at 30 FPS on Windows 7. If you only need XYZ captured you should be fine. Worst case, you will have to timestamp and sync all your data yourself.
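If you do end up syncing the data yourself, the core of it is just pairing frames by nearest timestamp. A language-agnostic sketch (written here in Swift; `Stamped` and `nearestMatch` are illustrative names, not a PCL or OpenNI API):

    // Each sensor reading is tagged with a time from a shared clock.
    struct Stamped<Frame> {
        let timestamp: Double   // seconds since a common epoch
        let frame: Frame
    }

    // For a reference frame, find the closest-in-time frame from the
    // other sensor's stream (linear scan; use binary search for long runs).
    func nearestMatch<A, B>(for reference: Stamped<A>,
                            in stream: [Stamped<B>]) -> Stamped<B>? {
        stream.min { abs($0.timestamp - reference.timestamp)
                   < abs($1.timestamp - reference.timestamp) }
    }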
I have an array of floats which represent time values based on when events were triggered over a period of time.
0: time stored: 1.68
1: time stored: 2.33
2: time stored: 2.47
3: time stored: 2.57
4: time stored: 2.68
5: time stored: 2.73
6: time stored: 2.83
7: time stored: 2.92
8: time stored: 2.98
9: time stored: 3.05
I would now like to start a timer, and when the timer hits 1 second and 687 milliseconds (the first position in the array), an event should be triggered / a method executed.
When the timer hits 2 seconds and 337 milliseconds, a second method execution should be triggered, and so on, right up to the last element in the array at 3 seconds and 56 milliseconds, when the last event is triggered.
How can I mimic something like this? I need something with high accuracy.
I guess what I'm essentially asking is: how do I create a metronome with high-precision method calls to play the sound back on time?
…how do I create a metronome with high-precision method calls to play the sound back on time?
You would use the audio clock, not an NSTimer; the audio clock has all the accuracy you would typically want (the sample rate of audio playback, e.g. 44.1 kHz).
Specifically, you can use a sampler (e.g. AudioUnit) and schedule MIDI events, or you can fill buffers with your (preloaded) click sounds' sample data in your audio streaming callback at the sample positions determined by the tempo.
To maintain accuracy of 1 ms or better, you will need to base your timing on the audio clock at all times. This is actually quite easy, because your tempo dictates a fixed interval of frames: at 44.1 kHz, for example, an event every 0.5 seconds lands every 22,050 frames.
The tough part (for most people) is getting used to working in real-time contexts and using the audio frameworks, if you have not worked in that domain before.
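As a sketch of the buffer-filling approach (shown here with AVFoundation's AVAudioSourceNode rather than a raw Audio Unit render callback; the click data and event times are placeholders):

    import AVFoundation

    let sampleRate = 44_100.0
    let eventTimes: [Double] = [1.68, 2.33, 2.47, 2.57, 2.68]
    let eventFrames = eventTimes.map { AVAudioFramePosition($0 * sampleRate) }

    var click = [Float](repeating: 0, count: 512)   // preloaded click sample data
    var nextEvent = 0                               // index of the next event to fire
    var playhead: AVAudioFramePosition = 0          // frames rendered so far
    var clickPos = click.count                      // past the end = not ticking

    // The render block runs on the audio thread; it counts frames and mixes
    // the click in, starting exactly at each event's frame position.
    let source = AVAudioSourceNode { _, _, frameCount, audioBufferList -> OSStatus in
        let buffers = UnsafeMutableAudioBufferListPointer(audioBufferList)
        let out = buffers[0].mData!.assumingMemoryBound(to: Float.self)
        for i in 0..<Int(frameCount) {
            if nextEvent < eventFrames.count, playhead >= eventFrames[nextEvent] {
                clickPos = 0          // start the click on this exact frame
                nextEvent += 1
            }
            out[i] = clickPos < click.count ? click[clickPos] : 0
            if clickPos < click.count { clickPos += 1 }
            playhead += 1
        }
        return noErr
    }

    let engine = AVAudioEngine()
    engine.attach(source)
    engine.connect(source, to: engine.mainMixerNode,
                   format: AVAudioFormat(standardFormatWithSampleRate: sampleRate,
                                         channels: 1))
    try! engine.start()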
Look into dispatch_after(). You'd create a target time for it using something like dispatch_time(DISPATCH_TIME_NOW, 1.687000 * NSEC_PER_SEC).
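For the stored times, that boils down to something like this sketch (simple, but at the mercy of the scheduler's jitter; see the update below):

    import Foundation

    let storedTimes: [Double] = [1.68, 2.33, 2.47, 2.57, 2.68]
    for t in storedTimes {
        DispatchQueue.main.asyncAfter(deadline: .now() + t) {
            // trigger the event / play the tick for this slot
        }
    }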
Update: if you only want to play sounds at specific times, rather than do arbitrary work, then you should use an audio API that allows you to schedule playback at specific times. I'm most familiar with the Audio Queue API. You would create a queue and create 2 or 3 buffers. (2 if the audio is always the same. 3 if you dynamically load or compute it.) Then, you'd use AudioQueueEnqueueBufferWithParameters() to queue each buffer with a specific start time. The audio queue will then take care of playing as close as possible to that requested start time. I doubt you're going to beat the precision of that by manually coding an alternative. As the queue returns processed buffers to you, you refill it if necessary and queue it at the next time.
I'm sure that AVFoundation must have a similar facility for scheduling playback at specific time, but I'm not familiar with it.
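AVFoundation does offer one: AVAudioPlayerNode can schedule a buffer to start at a sample-accurate AVAudioTime. A sketch, with the file name and times as placeholders:

    import AVFoundation

    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    engine.attach(player)

    let file = try! AVAudioFile(forReading: Bundle.main.url(forResource: "tick",
                                                            withExtension: "caf")!)
    let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat,
                                  frameCapacity: AVAudioFrameCount(file.length))!
    try! file.read(into: buffer)

    engine.connect(player, to: engine.mainMixerNode, format: file.processingFormat)
    try! engine.start()
    player.play()   // starts the node's sample-time clock at 0

    // One scheduled playback per stored event time, expressed in frames
    // on the player node's own timeline.
    let sr = file.processingFormat.sampleRate
    for t in [1.687, 2.337, 3.056] {
        let when = AVAudioTime(sampleTime: AVAudioFramePosition(t * sr), atRate: sr)
        player.scheduleBuffer(buffer, at: when, options: [], completionHandler: nil)
    }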
To get high-precision timing you'd have to jump down a programming level or two and utilise something like the Core Audio Unit framework, which offers sample-accurate timing (at 44.1 kHz, samples occur roughly every 0.023 ms).
The drawback to this approach is that to get such timing performance, Core Audio Unit programming eschews Objective-C for a C/C++ approach, which is (in my opinion) tougher to code than Objective-C. The way Core Audio Units work is also quite confusing on top of that, especially if you don't have a background in audio DSP.
Staying in Objective-C, you probably know that NSTimers are not an option here. Why not check out the AVFoundation framework? It can be used for precise media sequencing, and with a bit of creative sideways thinking plus the AVURLAssetPreferPreciseDurationAndTimingKey option of AVURLAsset, you might be able to achieve what you want without using Core Audio Units.
Just to fill out more about AVFoundation: you can place instances of AVAsset into an AVMutableComposition (via AVMutableCompositionTrack objects), and then use AVPlayerItem objects with an AVPlayer instance to control the result. The AVPlayerItem notification AVPlayerItemDidPlayToEndTimeNotification can be used to determine when individual assets finish, and the AVPlayer methods addBoundaryTimeObserverForTimes:queue:usingBlock: and addPeriodicTimeObserverForInterval:queue:usingBlock: can provide notifications at arbitrary times.
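For instance, a boundary observer for the question's stored event times might look like this sketch (the asset URL is a placeholder):

    import AVFoundation

    let assetURL = Bundle.main.url(forResource: "countdown", withExtension: "m4a")!
    let player = AVPlayer(url: assetURL)

    let boundaries = [1.687, 2.337, 3.056].map {
        NSValue(time: CMTime(seconds: $0, preferredTimescale: 600))
    }
    let token = player.addBoundaryTimeObserver(forTimes: boundaries, queue: .main) {
        // fire the event for the boundary just crossed
    }
    player.play()
    // Keep `token` alive; pass it to player.removeTimeObserver(_:) when done.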
With iOS, if your app will be playing audio, you can also get all of this to run in the background, meaning you can keep time whilst your app is in the background (though be warned: if it does not actually play audio, Apple might not accept your app using this background mode). Check out the UIBackgroundModes docs for more info.
I am making an application that will do things like pitch shifting and time stretching to audio files, and play them back in real time. Is OpenAL the right library for this? Or is there something that could do this better, and would be easy to reuse for different platforms?
OpenAL can't do pitch shifting or time stretching. For that, you'll need a third-party library such as SoundTouch.
As well, OpenAL doesn't support realtime audio processing. You can sort of fake it using buffer queues, but it's a bit hokey: you need to keep polling to see when a buffer has finished playing and then queue the next processed buffer, and you need to keep your buffers very small or risk laggy audio response. Small queued buffers, however, bring their own performance, timing, and clicking issues.
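To make the hokeyness concrete, the polling loop looks roughly like this (written in Swift against OpenAL's C API; `renderNextChunk()` is a hypothetical stand-in for the real pitch-shift/time-stretch step, producing 16-bit mono samples):

    import OpenAL

    // Hypothetical stand-in for the real DSP step (e.g. SoundTouch output).
    func renderNextChunk() -> [Int16] { [Int16](repeating: 0, count: 2_048) }

    // Call this frequently (e.g. from a timer): reclaim finished buffers,
    // refill them with freshly processed audio, and queue them again.
    func serviceQueue(source: ALuint) {
        var processed: ALint = 0
        alGetSourcei(source, AL_BUFFERS_PROCESSED, &processed)
        while processed > 0 {
            var buffer: ALuint = 0
            alSourceUnqueueBuffers(source, 1, &buffer)
            let chunk = renderNextChunk()
            chunk.withUnsafeBytes { raw in
                alBufferData(buffer, AL_FORMAT_MONO16, raw.baseAddress,
                             ALsizei(raw.count), 44_100)
            }
            alSourceQueueBuffers(source, 1, &buffer)
            processed -= 1
        }
    }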