How to continuously get updates for AudioQueue without a timer - objective-c

I know you can use the AudioQueueGetCurrentTime method to get the current time. Is there a callback or something that continuously gets called to update your timeline? Or do you just have to create a timer to continuously call AudioQueueGetCurrentTime?

I don't remember any other way than a timer, but I guess it is possible to implement periodic notifications without one, since you manually enqueue the buffers and know the bitrate, etc...
As a side note: why not give AVFoundation a try (available since iOS 4)? It's all implemented there, and as long as you're playing standard formats and don't have to support iOS 3.x, you don't really have to deal with the AudioQueue horror.
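A minimal sketch of the timer approach, assuming you hold the queue in an _audioQueue ivar, know the stream's sample rate (_sampleRate), and are updating a timelineLabel (all of those names are just for illustration):

    // Kick off a display-refresh timer (0.1 s is an arbitrary refresh interval)
    - (void)startTimelineUpdates {
        self.timelineTimer = [NSTimer scheduledTimerWithTimeInterval:0.1
                                                              target:self
                                                            selector:@selector(updateTimeline:)
                                                            userInfo:nil
                                                             repeats:YES];
    }

    - (void)updateTimeline:(NSTimer *)timer {
        AudioTimeStamp timeStamp;
        OSStatus status = AudioQueueGetCurrentTime(_audioQueue, NULL, &timeStamp, NULL);
        if (status == noErr) {
            // mSampleTime is in sample frames; divide by the sample rate to get seconds played
            self.timelineLabel.text = [NSString stringWithFormat:@"%.1f s",
                                       timeStamp.mSampleTime / _sampleRate];
        }
    }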

Related

Background task for simple operations?

I need to play several sounds, and if one sound is already playing, the subsequent sounds will be queued.
I'm using MCI API calls.
Until now, I'm using this sound player as a COM control with InterOp.
However, the COM control causes some COM Release errors, so I would like to solve it differently.
I would like to know whether I should solve this with a regular .NET class in my form using a background task, so that I'm completely independent of how heavy my application's load is and finished sounds can immediately trigger playing the next sound without any delay, or whether that would be overkill and background tasks should only be used for long, blocking operations.
My sound player basically only uses a timer to check with an API call if the MCI has stopped playing and then starts the next sound.
Also, it provides events when a sound was started, stopped or errored-out.
Or should I encapsulate the player in a separate .NET project and reference it? Then I would also be independent of the workload of the main application.
Seamless playing of the sound queue would be essential to me.

WASAPI Event driven capture naudio

Is it possible to capture a line-in signal with NAudio in exclusive, event-driven mode?
If yes, is there an example available?
The WasapiCapture class does not currently support event-driven mode. As it happens, I just did a check-in to support exclusive mode, which will be part of NAudio 1.7 when it is released shortly. If you need event-driven mode, you could take a copy of the WasapiCapture class and make the modifications yourself.

Techniques for keeping iOS NSTimers from being killed in background

Relatively new to obj-c and iOS but I've created a little app that is built around 4 simple, stopwatch-style timers. The user starts up a timer, it starts counting and they go on with their life. When they open up the app they can see how long it's been going. Individually, these timers would be identical to the one in the Apple Clock app.
This works "most" of the time. I've had timers running for days, started, stopped, reset, started up again. What I've noticed though is that if the app is pushed just a little too far down the multitasking drawer, the next time I open up the app all my timers will be at zero, and the app will be in it's freshly launched state.
To me this appears related to the OS thinking my app is not needed, killing its threads/processes/whatever to free up memory. For the apps intended audience, it will be a frequently checked app so this may not crop up as a problem, but it seems that there should be some technical approach to ensure that my stopwatches never fail. I'm just not sure where to look for this kind of functionality. Any thoughts appreciated.
Thanks!
You are doing it wrong.
If you build a stopwatch app with 10 timers you need exactly one NSTimer.
And this timer only refreshes the display; it is not needed when the app is in the background.
NSTimers have the problem that they can fire late, and they should not be used to schedule time-critical work (like incrementing a seconds counter).
Store the current NSDate when you start a timer and display the difference in the app.
Store those NSDates to NSUserDefaults and they will even survive a restart of the device.
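A minimal sketch of that idea (the key name, label, and method names are just for illustration):

    // Record the start date; the key name is a hypothetical choice
    - (void)startStopwatch {
        [[NSUserDefaults standardUserDefaults] setObject:[NSDate date]
                                                  forKey:@"stopwatch1StartDate"];
        [[NSUserDefaults standardUserDefaults] synchronize];
    }

    // Elapsed time is always derived from the stored date, never from a running counter
    - (NSTimeInterval)elapsedSeconds {
        NSDate *start = [[NSUserDefaults standardUserDefaults] objectForKey:@"stopwatch1StartDate"];
        return start ? [[NSDate date] timeIntervalSinceDate:start] : 0.0;
    }

    // A display-only NSTimer can call this every second while the app is in the foreground
    - (void)refreshDisplay:(NSTimer *)timer {
        self.elapsedLabel.text = [NSString stringWithFormat:@"%.0f s", [self elapsedSeconds]];
    }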
Try this approach: For your "timers" keep track of the start time only, and for display purposes calculate the number of seconds since the start time and display accordingly.
Store these start times so that even if the app is "evicted", you'll still know when a timer was started and can calculate how much time has elapsed.
The only reason to use an NSTimer would be if you want to show the timer's seconds ticking by.
Also have a look at the delegate methods applicationDidEnterBackground and applicationWillEnterForeground. In these methods you'd want to invalidate and re-establish an NSTimer (if you're using one), respectively.
There is a lot of Apple documentation about what you can and cannot do in the background here.
Unfortunately there is no way to ensure that your app will continue to run in the background the way you want it to.
You can use -beginBackgroundTaskWithExpirationHandler: to request up to ten minutes of additional execution time but in all likelihood your application will be terminated.
I've heard an Apple engineer refer to this as 'evicting'.
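For what it's worth, a rough sketch of requesting that extra time in the app delegate (the _bgTask ivar is an assumption):

    // Ask for additional background execution time when the app is backgrounded;
    // _bgTask is an assumed ivar of type UIBackgroundTaskIdentifier
    - (void)applicationDidEnterBackground:(UIApplication *)application {
        _bgTask = [application beginBackgroundTaskWithExpirationHandler:^{
            // Called when the grace period runs out; clean up and end the task
            [application endBackgroundTask:_bgTask];
            _bgTask = UIBackgroundTaskInvalid;
        }];
    }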

Form transitions in VB.NET without using Timers

For a couple of hobby projects of mine, I've been performing form transitions (fade-in/out, slide-left/right) using a timer control, and I know it's not the right way to do it. Using a timer has its own disadvantages: it is a CPU hog if the logic is complex, and the transitions are not smooth. So I'm wondering how I can perform form transitions without using any timers, just with the native Windows API or a third-party library. I came across FlowFX but found that it is limited to the .NET Compact Framework.
Thanks...
We don't know what your timer handler is doing without a code sample, but using a timer to do animations is an acceptable method.
Here is another SO question and answer that might show you a better way of coding your timer handler.
The "not smooth" part could perhaps be overcome using double-buffering.

How does UIGestureRecognizer work?

How does UIGestureRecognizer work internally? Is it possible to emulate it in iOS < 3.2?
If you want a detailed explanation on how they work, it is worth watching this video from last year's WWDC.
See the video Deepak mentions for details, but yes, it is something you can build yourself if you want to.
Be sure to ask yourself a couple questions first, though: do you want to recreate the entire recognizer "framework", or just be able to recognize, say, a swipe? If the latter, there should be tons of examples on the web from pre 3.2 days of detecting swipes using the normal touch event handlers.
If you really want to recreate the framework, you can, and it's actually kind of an interesting exercise. The UIKit objects do have some hooks into the event pipeline at earlier stages, but you can get a similar result by tracking the touches and building a pipeline of recognizer objects. If you read the docs on UIGestureRecognizer, you'll see that the state management they use is pretty clearly laid out. You could copy that, and then just build your own custom MyPanGestureRecognizer, MySwipeGestureRecognizer, etc., that derive from a MyGestureRecognizer base. You should have some UIView subclass (MyGestureView) that handles all the touches and runs through its list of MyGestureRecognizers, using the state machine that's implied in the docs.
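For the simpler "just detect a swipe" case, here is a rough pre-3.2 sketch in a UIView subclass (the class name, the _startPoint ivar, and the 50-point threshold are all just illustrative):

    // Rough horizontal swipe detection without UIGestureRecognizer (pre iOS 3.2)
    @implementation MySwipeView

    - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
        _startPoint = [[touches anyObject] locationInView:self];   // _startPoint is a CGPoint ivar
    }

    - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
        CGPoint end = [[touches anyObject] locationInView:self];
        CGFloat dx = end.x - _startPoint.x;
        CGFloat dy = end.y - _startPoint.y;
        // A mostly horizontal move of at least 50 points counts as a swipe (threshold is arbitrary)
        if (fabs(dx) > 50.0 && fabs(dy) < 30.0) {
            if (dx > 0) { /* swipe right */ } else { /* swipe left */ }
        }
    }

    @end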