I have an array of floats representing the times at which events were triggered over a period of time:
0: time stored: 1.68
1: time stored: 2.33
2: time stored: 2.47
3: time stored: 2.57
4: time stored: 2.68
5: time stored: 2.73
6: time stored: 2.83
7: time stored: 2.92
8: time stored: 2.98
9: time stored: 3.05
I would now like to start a timer, and when it hits 1 second 687 milliseconds (the first position in the array) trigger an event/method execution; when it hits 2 seconds 337 milliseconds, trigger a second method execution; and so on, right up to the last element in the array at 3 seconds 56 milliseconds.
How can I mimic something like this? I need something with high accuracy.
I guess what I'm essentially asking is how to create a metronome with high precision method calls to play the sound back on time?
…how to create a metronome with high precision method calls to play the sound back on time?
You would use the audio clock, not an NSTimer; it has all the accuracy you would typically want, since it runs at the audio playback sample rate (e.g. 44.1 kHz).
Specifically, you can use a sampler (e.g. AudioUnit) and schedule MIDI events, or you can fill buffers with your (preloaded) click sounds' sample data in your audio streaming callback at the sample positions determined by the tempo.
To maintain 1 ms accuracy or better, you will need to base all of your timing on the audio clock. This is actually quite easy, because the tempo dictates an interval measured in frames.
The tough part (for most people) is getting used to working in realtime contexts and using the audio frameworks, if you have not worked in that domain previously.
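To make the frame arithmetic concrete, here is a minimal sketch (the sample rate, the trigger callback, and the helper name are illustrative placeholders, not a complete render callback):

#include <stdint.h>
#include <stddef.h>

#define kSampleRate 44100.0  /* assumed playback rate */

/* The event times from the question, in seconds. */
static const double kEventTimes[] = { 1.68, 2.33, 2.47, 2.57, 2.68,
                                      2.73, 2.83, 2.92, 2.98, 3.05 };
static const size_t kEventCount = sizeof(kEventTimes) / sizeof(kEventTimes[0]);

/* Call this from the audio render callback for each buffer.
   firstFrame is the absolute frame index of the buffer's first sample;
   numFrames is the buffer length in frames. Fires every event whose
   frame position falls inside (or before) this buffer. */
static void fireDueEvents(uint64_t firstFrame, uint32_t numFrames,
                          void (*trigger)(size_t eventIndex))
{
    static size_t next = 0;  /* index of the next unfired event */
    while (next < kEventCount) {
        uint64_t eventFrame =
            (uint64_t)(kEventTimes[next] * kSampleRate + 0.5);
        if (eventFrame >= firstFrame + numFrames)
            break;           /* not due within this buffer yet */
        trigger(next++);     /* due now (or overdue): mix the click in */
    }
}

At 44.1 kHz a frame is about 0.023 ms, so deciding inside the render callback which frame an event lands on is what buys you that accuracy.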
Look into dispatch_after(). You'd create a target time for it using something like dispatch_time(DISPATCH_TIME_NOW, 1.687000 * NSEC_PER_SEC).
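For example, scheduling every entry against the same starting reference (a minimal sketch; fireEvent is a hypothetical handler, and note that GCD gives good but not hard real-time precision):

#include <dispatch/dispatch.h>
#include <stddef.h>

void fireEvent(size_t index);  /* hypothetical per-event handler */

void scheduleEvents(void)
{
    /* First, second, and last values from the question. */
    static const double times[] = { 1.687, 2.337, 3.056 };
    /* Anchor every event to one "now" so per-iteration drift doesn't accumulate. */
    dispatch_time_t start = dispatch_time(DISPATCH_TIME_NOW, 0);
    for (size_t i = 0; i < sizeof(times) / sizeof(times[0]); i++) {
        dispatch_time_t when =
            dispatch_time(start, (int64_t)(times[i] * NSEC_PER_SEC));
        dispatch_after(when, dispatch_get_main_queue(), ^{
            fireEvent(i);  /* i is captured by value in the block */
        });
    }
}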
Update: if you only want to play sounds at specific times, rather than do arbitrary work, then you should use an audio API that allows you to schedule playback at specific times. I'm most familiar with the Audio Queue API. You would create a queue and create 2 or 3 buffers. (2 if the audio is always the same. 3 if you dynamically load or compute it.) Then, you'd use AudioQueueEnqueueBufferWithParameters() to queue each buffer with a specific start time. The audio queue will then take care of playing as close as possible to that requested start time. I doubt you're going to beat the precision of that by manually coding an alternative. As the queue returns processed buffers to you, you refill it if necessary and queue it at the next time.
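In outline, the scheduling call looks like this (a sketch; queue, buffer, and sampleRate are assumed to be set up elsewhere, and the queue's sample timeline starts when you call AudioQueueStart):

#include <AudioToolbox/AudioToolbox.h>

/* Enqueue a preloaded buffer so that it starts playing at 'seconds'
   on the queue's sample timeline. */
static OSStatus enqueueAt(AudioQueueRef queue, AudioQueueBufferRef buffer,
                          Float64 seconds, Float64 sampleRate)
{
    AudioTimeStamp startTime = { 0 };
    startTime.mSampleTime = seconds * sampleRate;
    startTime.mFlags      = kAudioTimeStampSampleTimeValid;

    return AudioQueueEnqueueBufferWithParameters(
        queue, buffer,
        0, NULL,      /* no packet descriptions (constant-rate PCM) */
        0, 0,         /* no frames trimmed at start or end */
        0, NULL,      /* no parameter events */
        &startTime,   /* requested start time */
        NULL);        /* actual start time out-param (optional) */
}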
I'm sure that AVFoundation must have a similar facility for scheduling playback at specific time, but I'm not familiar with it.
To get high precision timing you'd have to jump down a programming level or two and utilise something like the Core Audio Unit framework, which offers sample-accurate timing (at 44.1 kHz, samples occur roughly every 0.02 ms).
The drawback to this approach is that to get such timing performance, Core Audio Unit programming eschews Objective-C for a C/C++ approach, which is (in my opinion) tougher to code than Objective-C. The way Core Audio Units work is also quite confusing on top of that, especially if you don't have a background in audio DSP.
Staying in Objective-C, you probably know that NSTimers are not an option here. Why not check out the AVFoundation framework? It can be used for precise media sequencing, and with a bit of creative sideways thinking plus the AVURLAssetPreferPreciseDurationAndTimingKey option of AVURLAsset, you might be able to achieve what you want without using Core Audio Units.
Just to fill out more about AVFoundation, you can place instances of AVAsset into an AVMutableComposition (via AVMutableCompositionTrack objects), and then use AVPlayerItem objects with an AVPlayer instance to control the result. The AVPlayerItem notification AVPlayerItemDidPlayToEndTimeNotification (docs) can be used to determine when individual assets finish, and the AVPlayer methods addBoundaryTimeObserverForTimes:queue:usingBlock: and addPeriodicTimeObserverForInterval:queue:usingBlock: can provide notifications at arbitrary times.
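For instance, the boundary observer maps naturally onto the array of event times in the question (a sketch; player is an assumed AVPlayer instance):

#import <AVFoundation/AVFoundation.h>

NSArray *times = @[ [NSValue valueWithCMTime:CMTimeMakeWithSeconds(1.687, 1000)],
                    [NSValue valueWithCMTime:CMTimeMakeWithSeconds(2.337, 1000)],
                    [NSValue valueWithCMTime:CMTimeMakeWithSeconds(3.056, 1000)] ];
id observer = [player addBoundaryTimeObserverForTimes:times
                                                queue:dispatch_get_main_queue()
                                           usingBlock:^{
    // Playback has crossed one of the boundary times; fire the event here.
}];
// Retain 'observer' and pass it to -removeTimeObserver: when finished.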
With iOS, if your app will be playing audio, you can also get all of this to run in the background, meaning you can keep time while your app is backgrounded (though be warned: if your app does not actually play audio, Apple might not accept it using this background mode). Check out the UIBackgroundModes docs for more info.
Related
I'm trying to output isochronous data (generated programmatically) over High Speed USB 2 with very low latency. Ideally around 1-2 ms. On Windows I'm using WinUsb, and on OSX I'm using IOKit.
There are two approaches I have thought of. I'm wondering which is best.
1-frame transfers
WinUsb is quite restrictive in what it allows, and requires each isochronous transfer to be a whole number of frames (1 frame = 1 ms). Therefore, to minimise latency, I use transfers of one frame each, in a loop something like this:
for (;;)
{
    // Submit a 1-frame transfer ASAP.
    WinUsb_WriteIsochPipeAsap(..., &overlapped[i]);

    // Wait for the transfer from 2 frames ago to complete, for timing
    // purposes. This keeps the loop in sync with the USB frames.
    WinUsb_GetOverlappedResult(..., &overlapped[i-2], ..., /* bWait = */ TRUE);
}
This works fairly well and gives a latency of 2 ms. On OSX I can do a similar thing, though it is quite a bit more complicated. This is the gist of the code - the full code is too long to post here:
uint64_t frame = ...->GetBusFrameNumber(...) + 1;
for (;;)
{
    // Submit at the next available frame ("a few attempts";
    // kMaxAttempts is some small bound).
    for (int attempt = 0; attempt < kMaxAttempts; attempt++)
    {
        kr = ...->LowLatencyWriteIsochPipeAsync(...,
                 frame,          // Start on this frame.
                 &transfer[i]);  // Callback.
        if (kr == kIOReturnIsoTooOld)
            frame++;            // Too late; try the next frame.
        else if (kr == kIOReturnSuccess)
            break;
        else
            abort();
    }

    // The callback is passed a reference to a condition_variable. When the
    // transfer completes, the condition_variable is triggered and wakes this up:
    transfer[i-5].waitForResult();
    // I have to wait for the transfer from 5 frames ago on OSX,
    // otherwise it skips frames.
}
Again this kind of works and gives a latency of around 3.5 ms. But it's not super-reliable.
Race the kernel
OSX's low latency isochronous functions allow you to submit long transfers (e.g. 64 frames), and then regularly (max once per millisecond) update the frame list which says where the kernel has got to in reading the write buffer.
I think the idea is that you somehow wake up every N milliseconds (or microseconds), read the frame list, work out where you need to write to and do that. I haven't written code for this yet but I'm not entirely sure how to proceed, and there are no examples I can find.
It doesn't seem to provide a callback when the frame list is updated, so I suppose you have to use your own timer (CFRunLoopTimerCreate()) and read the frame list from that callback?
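Something along these lines, perhaps (an untested sketch of the timer half only; checkFrameList is a hypothetical stand-in for whatever reads the shared frame list and tops up the buffer, and CFRunLoopTimer itself offers only roughly millisecond precision):

#include <CoreFoundation/CoreFoundation.h>

static void checkFrameList(void)
{
    /* Hypothetical: read the shared frame list to see how far the kernel
       has got through the transfer, then write fresh data just ahead. */
}

static void timerFired(CFRunLoopTimerRef timer, void *info)
{
    (void)timer; (void)info;
    checkFrameList();
}

static void installPollTimer(void)
{
    CFRunLoopTimerRef timer = CFRunLoopTimerCreate(
        kCFAllocatorDefault,
        CFAbsoluteTimeGetCurrent() + 0.001,  /* first fire in ~1 ms */
        0.001,                               /* then every 1 ms */
        0, 0, timerFired, NULL);
    CFRunLoopAddTimer(CFRunLoopGetCurrent(), timer, kCFRunLoopDefaultMode);
    CFRelease(timer);
}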
Also I'm wondering if WinUsb allows a similar thing, because it also forces you to register a buffer so it can be simultaneously accessed by the kernel and user-space. I can't find any examples that explicitly say you can write to the buffer while the kernel is reading it though. Are you meant to use WinUsb_GetCurrentFrameNumber in a regular callback to work out where the kernel has got to in a transfer?
That would require getting a regular callback on Windows, which seems a bit tricky. The only way I've seen is to use multimedia timers, which have a minimum period of 1 millisecond (unless you use the undocumented NtSetTimerResolution?).
So my question is: can I improve the "1-frame transfers" approach, or should I switch to a 1 kHz callback that tries to race the kernel? Example code very much appreciated!
(Too long for a comment, so…)
I can only address the OS X side of things. This part of the question:
I think the idea is that you somehow wake up every N milliseconds (or
microseconds), read the frame list, work out where you need to write
to and do that. I haven't written code for this yet but I'm not
entirely sure how to proceed, and there are no examples I can find.
It doesn't seem to provide a callback when the frame list is updated
so I suppose you have to use your own timer - CFRunLoopTimerCreate()
and read the frame list from that callback?
Has me scratching my head over what you're trying to do. Where is your data coming from, such that latency is critical but the data source does not already notify you when data is ready?
The idea is that your data is being streamed from some source, and as soon as any data becomes available, presumably when some completion for that data source gets called, you write all available data into the user/kernel shared data buffer at the appropriate location.
So maybe you could explain in a little more detail what you're trying to do and I might be able to help.
The wife asked for a device to make the xmas lights 'rock' along with the music. I am going to use an Arduino microcontroller to control relays hooked up to the lights, sending down six signals from a C# WinForms app to turn them off and on. I want to use NAudio to separate amplitude and rhythm into the six signals: a specific range of hertz for each signal, like an equalizer with six bars, plus the timing from the rhythm. I have seen the WPF demo, and the waveform seems like the answer. I want to know how to get those values in real time while the song is playing.
I'm thinking ...
1. Create a simple mp3 player and load all my songs.
2. Start the songs playing.
3. Sample the current dynamics of the song and turn them into integers that I can send to the appropriate channels on the Arduino microcontroller via USB.
I'm not sure how to capture the current sound information in real time and produce integer values for that moment. I can read the e.MaxSampleValues[0] values in real time while the song is playing, but I want to be able to distinguish which frequency range is active at that moment.
Any help or direction would be appreciated for this interesting project.
Thank you
Sounds like a fun signal processing project.
Using the NAudio.Wave.WasapiLoopbackCapture object you can get the audio data being produced from the sound card on the local computer. This lets you skip the 'create an MP3 player' step, although at the cost of a slight delay between sound and lights. To get better synchronization you can do the MP3 decoding and pre-calculate the beat patterns and output states during playback. This will let you adjust the delay between sending the outputs and playing the audio block those outputs were generated from, getting near perfect synchronization between lights and music.
Once you have the samples, the next step is to use an FFT to find the frequency components. Fortunately NAudio includes a class to help with this: NAudio.Dsp.FastFourierTransform. (Thank you Mark!) Take the output of the FFT() function and sum out the frequency ranges you want for each controlled light.
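The bin arithmetic is the same whatever FFT you use: bin k covers roughly k * sampleRate / N Hz. Here is a rough sketch of the summing step (written in C for brevity rather than C#; the band edges are arbitrary example values to tune to taste):

#include <math.h>
#include <stddef.h>

/* Example band edges in Hz for six output channels. */
static const double kBandEdges[7] = { 60, 250, 500, 2000, 4000, 8000, 16000 };

/* Sum FFT magnitudes into six frequency bands. re/im are the real and
   imaginary FFT outputs; fftSize is the transform length N. */
static void sumBands(const float *re, const float *im, size_t fftSize,
                     double sampleRate, double bands[6])
{
    for (int b = 0; b < 6; b++)
        bands[b] = 0.0;

    /* Bins above fftSize/2 mirror the ones below Nyquist; skip them. */
    for (size_t k = 1; k < fftSize / 2; k++) {
        double freq = (double)k * sampleRate / (double)fftSize;
        double mag  = sqrt((double)re[k] * re[k] + (double)im[k] * im[k]);
        for (int b = 0; b < 6; b++) {
            if (freq >= kBandEdges[b] && freq < kBandEdges[b + 1]) {
                bands[b] += mag;
                break;
            }
        }
    }
}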
The next step is Beat Detection. There's an interesting article on this here. The main difference is that instead of doing energy detection on a stream of sample blocks you'll be using the data from your spectral analysis stage to feed the beat detection algorithm. Those ranges you summed become inputs into individual beat detection processors, giving you one output for each frequency range you defined. You might want to add individual scaling/threshold factors for each frequency group, with some sort of on-screen controls to adjust these for best effect.
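A per-band detector can be as simple as comparing the current band energy against a running average of its recent history (again a rough sketch in C, in the spirit of the linked article; the history length and threshold are tunables):

#define kHistoryLen 43  /* ~1 s of history at ~43 blocks per second */

typedef struct {
    double history[kHistoryLen];  /* recent energies for this band */
    int    pos;                   /* ring-buffer write position */
    double threshold;             /* e.g. 1.3 to 1.5; tune per band */
} BeatDetector;

/* Returns nonzero when 'energy' spikes above the scaled running average. */
static int detectBeat(BeatDetector *d, double energy)
{
    double avg = 0.0;
    for (int i = 0; i < kHistoryLen; i++)
        avg += d->history[i];
    avg /= kHistoryLen;

    d->history[d->pos] = energy;
    d->pos = (d->pos + 1) % kHistoryLen;

    return energy > d->threshold * avg;
}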
At the end of the process you will have a stream of sample blocks, each with a set of output flags. Push the flags out to your Arduino and queue the samples to play, with a delay on either of those operations to achieve your synchronization.
I'm working on an iOS app that uses an NSTimer for a countdown. This is prone to user tampering: if, for example, the user switches out of the app, closes the app manually, changes the device clock, and comes back in, the timer will have to be recreated. Another scenario: the user locks the device, it goes into low-power mode (which requires timers to be recreated), and the clock auto-sets before the game is opened again. If that happens, I won't have an accurate way of determining how much time has passed since the app was closed, since the device clock has changed.
Tl;dr: countdown timers sometimes have to be recreated after a device clock change. How is this problem usually handled?
Any time you're relying on the system clock for accurate timing you're going to have troubles, even if the user isn't deliberately tampering with the clock. Typically clock drift is corrected by slightly increasing or decreasing the length of a second to allow the clock to drift back into alignment over a period of minutes. If you need accurate timing, you can either use something like mach_absolute_time() which is related to the system uptime rather than the system clock, or you can use Grand Central Dispatch. The dispatch_after() function takes a dispatch_time_t which can either be expressed using wall time (e.g. system clock) or as an offset against DISPATCH_TIME_NOW (which ignores wall clock).
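For reference, elapsed time from mach_absolute_time() needs a timebase conversion (a small sketch; note that on iOS this counter does not advance while the device is asleep, so it measures uptime, not wall time):

#include <mach/mach_time.h>
#include <stdint.h>

/* Nanoseconds elapsed since 'startTicks' (a prior mach_absolute_time()
   reading), immune to changes of the wall clock. */
static uint64_t elapsedNanos(uint64_t startTicks)
{
    static mach_timebase_info_data_t tb;
    if (tb.denom == 0)
        mach_timebase_info(&tb);  /* ratio of ticks to nanoseconds */
    uint64_t ticks = mach_absolute_time() - startTicks;
    return ticks * tb.numer / tb.denom;
}

/* Usage: uint64_t start = mach_absolute_time();
          ... later ...
          uint64_t ns = elapsedNanos(start); */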
For future reference, in regard to different systems of timekeeping in OSX (and consequently iOS):
One way to measure the speed of any operation, including launch times, is to use system routines to get the current time at the beginning and end of the operation. Once you have the two time values, you can take the difference and log the results.

The advantage of this technique is that it lets you measure the duration of specific blocks of code. Mac OS X includes several different ways to get the current time:

- mach_absolute_time reads the CPU time base register and is the basis for other time measurement functions.
- The Core Services UpTime function provides nanosecond resolution for time measurements.
- The BSD gettimeofday function (declared in <sys/time.h>) provides microsecond resolution. (Note, this function incurs some overhead but is still accurate for most uses.)
- In Cocoa, you can create an NSDate object with the current time at the beginning of the operation and then use the timeIntervalSinceDate: method to get the time difference.
Source: Launch Time Performance Guidelines, "Gathering Launch Time Metrics".
I'm using AV Foundation to play an MP3 file loaded over the network, with code that is almost identical to the playback example here: Putting it all Together: Playing a Video File Using AVPlayerLayer, except without attaching a layer for video playback. I was trying to make my app respond to the playback buffer becoming empty on a slow network connection. To do this, I planned to use key-value observing on the AVPlayerItem's playbackBufferEmpty property, but the documentation did not say whether that was possible. I thought it might be, because the status property can be observed (and is in the example above) even though the documentation doesn't say so.
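For reference, the observation I had planned looks like this (fragments, not a complete class; the registration happens wherever the AVPlayerItem is created):

// Register for changes on the item's playbackBufferEmpty property.
[playerItem addObserver:self
             forKeyPath:@"playbackBufferEmpty"
                options:NSKeyValueObservingOptionNew
                context:NULL];

// ...and react in the standard KVO callback:
- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object
                        change:(NSDictionary *)change context:(void *)context
{
    if ([keyPath isEqualToString:@"playbackBufferEmpty"]) {
        // The buffer has run dry (or refilled); update the UI accordingly.
    }
}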
So, in an attempt to create conditions where the buffer would empty, I added code on the server to sleep for two seconds after serving up each 8k chunk of the MP3 file. Much to my surprise, this caused my app's UI (updated using NSTimer) to freeze completely for long periods, despite the fact that it shows almost no CPU usage in the profiler. I tried loading the tracks on another queue with dispatch_async, but that didn't help at all.
Even without the sleep on the server, I've noticed that loading streams using AVPlayerItem keeps the UI from updating for the short time that the stream is being downloaded. I can't see why a slow file download should ever block the responsiveness of the UI. Any idea why this is happening or what I can do about it?
Okay, problem solved. It looks like passing AVURLAssetPreferPreciseDurationAndTimingKey in the options to URLAssetWithURL:options: causes the slowdown. This also only happens when the AVURLAsset's duration property or some other property relating to the stream's timing is accessed from the selector fired by the NSTimer. So if you can avoid polling for timing information, this problem may not affect you, but that's not an option for me. If precise timing is not requested, there's still a delay of around 0.75 seconds to 1 second, but that's all.
Looking back through it, the documentation does warn that precise timing might cause slower performance, but I never imagined 10+ second delays. Why the delay should scale with the loading time of the media is beyond me; it seems like it should only scale with the size of the file. Maybe iOS is doing some kind of heavy polling for new data and/or processing the same bytes over and over.
So now, without "precise timing and duration," the duration of the asset is permanently 0.0, even when it's fully loaded. I can also report back on my original goal of doing KVO on AVPlayerItem.isPlaybackBufferEmpty. It seems KVO would be useless anyway, since the property starts out NO, changes to YES as soon as I start playback, and remains YES even as the media plays for minutes at a time. The documentation says this about the property:
Indicates whether playback has consumed all buffered media and that playback will stall or end.
So I guess that's not accurate, and, at least in this particular case, the property is not very useful.
I am running a countdown timer which updates the time on a label. I want to play a tick sound every second. I have the sound file, but it does not sync perfectly with the time. How do I sync the time with the audio? Also, if I use UIImagePicker the sound stops; how do I manage this? If someone has a tick sound like a clock makes, that would be great.
The best way to sync up your sound and time would be to actually play short (less than a second long) sound files, one tick per second, as the NSTimer fires. It won't sound as nice as a real clock or chronometer ticking, but it is easy to do, and if the sounds are that short you don't have to worry too much about latency. To be realistic, I think you need to play two ticks per second: the first and second ticks about 0.3 seconds apart, the next one starting on the next second with the fourth again only about 0.3 seconds later, and so on.
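In concrete terms (a sketch; tick.caf is a hypothetical bundled sound file, and System Sound Services keeps latency low for short clips like this):

#import <AudioToolbox/AudioToolbox.h>

static SystemSoundID tickSound;

// Once, at setup: load the short tick sound from the app bundle.
- (void)loadTickSound
{
    NSURL *tickURL = [[NSBundle mainBundle] URLForResource:@"tick"
                                             withExtension:@"caf"];
    AudioServicesCreateSystemSoundID((__bridge CFURLRef)tickURL, &tickSound);
}

// Every second, from the NSTimer's callback:
- (void)timerFired:(NSTimer *)timer
{
    AudioServicesPlaySystemSound(tickSound);  // short, low-latency click
    // ...update the countdown label here...
}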
For even tighter integration of sounds and GUI, you should read up on Audio Toolbox:
Use the Audio Toolbox framework to play audio with synchronization capabilities, access packets of incoming audio, parse audio streams, convert audio formats, and record audio with access to individual packets. For details, see Audio Toolbox Framework Reference and the SpeakHere sample code project.