Disabling or gaming Sonos Crossfade

For short pieces of content, the Sonos crossfade feature is not desirable, and ideally I'd like to be able to disable it for short tracks, but this does not appear to be possible via the API. If it is not possible, I'd like to attempt to game it. Is crossfade implemented with a fixed duration, or is there some kind of signal analysis going on?

There is currently no way to adjust crossfade through the APIs. It is done with a fixed timing of 12 seconds (6 seconds for the end of one track, 6 seconds for the beginning of the next track).

Related

repetitive calls to beginBackgroundTaskWithExpirationHandler

I am evaluating some code that is stacking calls to beginBackgroundTaskWithExpirationHandler in an effort to leave a timer running in the background. I have to admit that it is a pretty clever idea, but I'm not sure if this is best practice.
So the flow:
Call beginBackgroundTaskWithExpirationHandler with a callback handler
When it returns, do something, then call again
Rinse and repeat, checking for UIBackgroundTaskInvalid along the way
I know that 180 seconds is the max time, but that this can be shorter in some cases.
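For reference, here is a rough sketch of that flow in Objective-C (the runTaskLoop method name is hypothetical, and whether the system actually grants fresh background time on each iteration is exactly what the answer below addresses):

    // Sketch of the flow described above: request background time,
    // do some work, and re-register when the time is about to expire.
    - (void)runTaskLoop {
        UIApplication *app = [UIApplication sharedApplication];
        __block UIBackgroundTaskIdentifier taskID = UIBackgroundTaskInvalid;
        taskID = [app beginBackgroundTaskWithExpirationHandler:^{
            [app endBackgroundTask:taskID];
            taskID = UIBackgroundTaskInvalid;
            [self runTaskLoop]; // rinse and repeat
        }];
        if (taskID == UIBackgroundTaskInvalid) {
            return; // no background time granted; give up
        }
        // ... do work, checking app.backgroundTimeRemaining along the way ...
    }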
To the questions:
1: Is this legal?
2: Would you suspect that Apple would be OK with giving the app 3 minutes of background over and over, thus leaving the process in the background for, say, an hour?
3: Would you count on this?
Thanks in advance!
Would you suspect that Apple would be OK with giving the app 3 minutes of background over and over, thus leaving the process in the background for, say, an hour?
No, Apple would not be OK with it, even if you could do it. They specified a 3 minute limit for a reason: to ensure that we don't have apps running in the background without the user's knowledge, consuming CPU cycles and memory and draining the battery. (In fact, the "finite task" limit used to be 10 minutes, but several years ago Apple further restricted it to just 3 minutes.) Imagine a world where all app developers routinely circumvented this 3 minute limit: our devices would have their batteries drained quickly, and foreground apps would be less responsive and have less memory with which to operate.
Having said that, Apple has identified a very narrow set of operations that are acceptable to keep running in the background (VOIP, navigation apps, music apps, bluetooth operation, etc.), where the user has reasonable expectations about the battery and performance impact.
There are also classes of tasks that employ some limited background capabilities (e.g. requesting time to complete some finite-length task, opportunistic periodic background fetch, significant-change location services, giving the user a chance to respond to push or local notifications, etc.). The intent is to offer meaningful background capabilities while minimizing battery impact.
Bottom line, Apple otherwise discourages/curtails the use of indiscriminate background operation. In the App Store Guidelines, they explicitly say
2.16 Multitasking Apps may only use background services for their intended purposes: VoIP, audio playback, location, task completion, local notifications, etc.
...
4.5 Apps using background location services must provide a reason that clarifies the purpose of the use, using mechanisms described in the Human Interface Guidelines
Having said all of this, if you describe precisely what you need this background operation for, we might be able to point out which of the many background capabilities that Apple offers could achieve the desired effect. All of these interfaces are designed to solve specific problems while balancing functionality against the scarce resources on our devices. But if it's something like "hey, I want to ping my server every five minutes", then no, Apple will frown upon that.
For more information, this is discussed in some detail in the Background Execution chapter of the App Programming Guide for iOS.

iBeacon Monitoring with Unreliable Results (didEnterRegion & didExitRegion)

I'm currently working on an iOS app that ranges and monitors an iBeacon in order to be able to do some actions and receive notifications.
Ranging is working flawlessly, but I'm having trouble with the beacon monitoring and the notifications. I've researched the subject quite a bit, and I'm aware that the CoreLocation framework usually has problems like this, but I was wondering how other devs are fixing/approaching this.
Basically, I'm showing local notifications when the didEnterRegion and didExitRegion methods are fired. Unfortunately, these two methods are being fired quite often (in an unreliable fashion), even when the iBeacon is right next to the device, although sometimes it works perfectly, which makes it all the more annoying.
I've tried lowering the iBeacon advertising interval, and although it helped, it didn't fix the issue completely. Now I'm trying a logic filter where I ignore firing the notification if the enter or exit event happened in the last X minutes (I'm thinking of a 'magic' number between 5 and 15).
Is anyone having the same problems? Would adding a 2nd iBeacon to the situation help? (maybe monitor both of them, and filter logically the exit and enter events based on those two inputs?).
I was also thinking of adding another layer of data to decide when to show notifications, maybe based on GPS or Wi-Fi info. Has anyone tried this?
Any other idea? I'm open to any recommendation.
Just in case it matters, I'm using Estimote iBeacons and iOS 9 (Objective-C).
Thanks for your time!
Intermittent region exit/entry events are a common problem, and they are typically solved with a timer-based software filter exactly as you suggest. The specifics of how you set up the filter (the minimum time to wait for a re-entry after an exit before processing the exit logic) vary for each use case, so it is good to have it under your control.
Understand that region exits are caused by iOS not detecting any Bluetooth advertisements from a beacon in the CLBeaconRegion for 30 seconds. If two detected packets are 31 seconds apart, you will get a region exit and then a region entry one second later.
This commonly happens with low signal levels. If an iOS device is on the outer edge of the beacon's transmission range, only a small percentage of packets will be received. With a beacon transmitting at 1Hz, if 30 packets in a row are missed, the iOS device will get an exit event.
There are several things you can do to reduce this problem in a specific area where you want solid coverage:
Turn your beacon transmitter power up to the maximum. This will give stronger signal levels and fewer missed packets in the area you care about.
Turn the advertising rate to the maximum. Advertising at 10 Hz gives 10x as many packets to receive as 1 Hz.
If needed add additional beacons with the same identifier to increase coverage.
Of course, there are costs to the above, including reduced battery life at high advertising rates and transmitter power levels.
Even if you do all of the above, you still need the software filter, because there will always be a point where you are on the edge of the nearest beacon's transmission radius.
You can see an example of software filter code in my answer here.
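Since that answer isn't reproduced here, the following is a minimal sketch of such a filter (the pendingExitTimer property and the grace period value are assumptions for illustration, not the linked code):

    // Timer-based exit filter: an exit only counts if no re-entry
    // arrives within the grace period.
    static const NSTimeInterval kExitGracePeriod = 30.0; // tune per use case

    - (void)locationManager:(CLLocationManager *)manager
              didExitRegion:(CLRegion *)region {
        // Don't act immediately; wait to see whether we re-enter.
        self.pendingExitTimer =
            [NSTimer scheduledTimerWithTimeInterval:kExitGracePeriod
                                             target:self
                                           selector:@selector(processExit:)
                                           userInfo:region
                                            repeats:NO];
    }

    - (void)locationManager:(CLLocationManager *)manager
             didEnterRegion:(CLRegion *)region {
        if (self.pendingExitTimer) {
            // Re-entered before the grace period elapsed: spurious exit.
            [self.pendingExitTimer invalidate];
            self.pendingExitTimer = nil;
            return;
        }
        // ... genuine entry: show the notification ...
    }

    - (void)processExit:(NSTimer *)timer {
        self.pendingExitTimer = nil;
        // ... genuine exit: show the notification ...
    }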
Beacons emit a pulsing signal, and ranging also performs an intermittent scan (roughly every 100 ms). This means that it is possible for your device to miss the beacon for a few seconds in a row, which can cause the results you are experiencing. You can log the beacon's RSSI value in this method:
- (void)locationManager:(CLLocationManager *)manager didRangeBeacons:(NSArray *)iBeacons inRegion:(CLBeaconRegion *)region
I believe you will see a lot of zero values before seeing didExitRegion being called. This isn't a fault in your code or with the beacon; it is simply because neither the emitted signal nor the detection is constant. They are pulsing. These problems can occur while the beacon is just sitting on the same desk as your device, and can be exacerbated in a real-world setting when the signals are blocked by physical objects and people.
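For illustration, the logging might look like this (a sketch assuming the standard ranging delegate callback):

    // Log RSSI for each ranged beacon; expect runs of rssi=0
    // (no reading) shortly before didExitRegion fires.
    - (void)locationManager:(CLLocationManager *)manager
            didRangeBeacons:(NSArray *)beacons
                   inRegion:(CLBeaconRegion *)region {
        for (CLBeacon *beacon in beacons) {
            NSLog(@"major=%@ minor=%@ rssi=%ld",
                  beacon.major, beacon.minor, (long)beacon.rssi);
        }
    }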
I would use ranging to determine more accurately whether your beacon is around. Note that ranging, especially in the background, can cause a significant battery draw.

iOS timing every ms

I am trying to interface through the microphone jack on the iPhone.
I need to update 15 bits constantly and I'm wondering if the best way to do this would be as follows:
I have a 16ms 'frame'. The first 1ms is the START bit and it is 500mV. The next 15ms carry the 15 bits, each either 0V or 250mV. The frame then repeats with the START bit.
Can I accurately scan this quickly on iOS?
In a word: no. The best you can get is about every 5ms, but that's nowhere near stable enough to write an app around. A safe margin is 30ms or so (once per 'frame', akin to a video frame rate of 30fps).

execute functions with millisecond accuracy based on timer

I have an array of floats which represent time values based on when events were triggered over a period of time.
0: time stored: 1.68
1: time stored: 2.33
2: time stored: 2.47
3: time stored: 2.57
4: time stored: 2.68
5: time stored: 2.73
6: time stored: 2.83
7: time stored: 2.92
8: time stored: 2.98
9: time stored: 3.05
I would now like to start a timer and, when it hits 1 second and 687 milliseconds (the first position in the array), trigger an event/method execution. Then, when the timer hits 2 seconds and 337 milliseconds, a second method execution should be triggered, and so on, right up to the last element in the array at 3 seconds and 56 milliseconds, when the last event is triggered.
How can I mimic something like this? I need something with high accuracy.
I guess what I'm essentially asking is: how do I create a metronome with high-precision method calls to play the sound back on time?
…how to create a metronome with high precision method calls to play the sound back on time?
You would use the audio clock, which has all the accuracy you would typically want (the sample rate used for audio playback, e.g. 44.1 kHz), not an NSTimer.
Specifically, you can use a sampler (e.g. AudioUnit) and schedule MIDI events, or you can fill buffers with your (preloaded) click sounds' sample data in your audio streaming callback at the sample positions determined by the tempo.
To maintain 1 ms accuracy or better, you will need to base your timing on the audio clock. This is really quite easy, because your tempo simply dictates an interval in frames.
The tough part (for most people) is getting used to working in realtime contexts and using the audio frameworks, if you have not worked in that domain previously.
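For instance, the frame arithmetic is just this (sample rate and tempo values assumed for illustration):

    // Converting a tempo to an interval on the audio clock.
    const double sampleRate = 44100.0; // frames per second
    const double bpm = 120.0;          // beats per minute
    const double framesPerBeat = (60.0 / bpm) * sampleRate; // 22050 frames
    // In the render callback, write a click into the output buffer
    // whenever the running frame count crosses a multiple of framesPerBeat.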
Look into dispatch_after(). You'd create a target time for it using something like dispatch_time(DISPATCH_TIME_NOW, 1.687000 * NSEC_PER_SEC).
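As a sketch, scheduling every stored time from the array could look like this (GCD gives good, though not sample-accurate, precision):

    NSArray *storedTimes = @[@1.68, @2.33, @2.47 /* , ... */];
    for (NSNumber *t in storedTimes) {
        dispatch_time_t when =
            dispatch_time(DISPATCH_TIME_NOW,
                          (int64_t)(t.doubleValue * NSEC_PER_SEC));
        dispatch_after(when, dispatch_get_main_queue(), ^{
            // trigger the event for this time offset
        });
    }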
Update: if you only want to play sounds at specific times, rather than do arbitrary work, then you should use an audio API that allows you to schedule playback at specific times. I'm most familiar with the Audio Queue API. You would create a queue and create 2 or 3 buffers. (2 if the audio is always the same. 3 if you dynamically load or compute it.) Then, you'd use AudioQueueEnqueueBufferWithParameters() to queue each buffer with a specific start time. The audio queue will then take care of playing as close as possible to that requested start time. I doubt you're going to beat the precision of that by manually coding an alternative. As the queue returns processed buffers to you, you refill it if necessary and queue it at the next time.
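A condensed sketch of that scheduling call (ScheduleClick is a hypothetical helper; it assumes a started output queue and a buffer already filled with the click's PCM data):

    #include <AudioToolbox/AudioToolbox.h>

    void ScheduleClick(AudioQueueRef queue,
                       AudioQueueBufferRef clickBuffer,
                       double seconds, double sampleRate) {
        AudioTimeStamp startTime = {0};
        startTime.mFlags = kAudioTimeStampSampleTimeValid;
        startTime.mSampleTime = seconds * sampleRate; // queue-relative frames
        // The queue plays the buffer as close as possible to this time.
        AudioQueueEnqueueBufferWithParameters(queue, clickBuffer,
                                              0, NULL,  // no packet descriptions
                                              0, 0,     // no trimming
                                              0, NULL,  // no parameter events
                                              &startTime,
                                              NULL);    // actual start time (unused)
    }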
I'm sure that AVFoundation must have a similar facility for scheduling playback at specific times, but I'm not familiar with it.
To get high-precision timing you'd have to jump down a programming level or two and utilise something like the Core Audio Unit framework, which offers sample-accurate timing (at 44.1 kHz, samples occur around every 0.023 ms).
The drawback to this approach is that to get such timing performance, Core Audio Unit programming eschews Objective-C for a C/C++ approach, which is (in my opinion) tougher to code than Objective-C. The way Core Audio Units work is also quite confusing on top of that, especially if you don't have a background in audio DSP.
Staying in Objective-C, you probably know that NSTimers are not an option here. Why not check out the AVFoundation framework? It can be used for precise media sequencing, and with a bit of creative sideways thinking, and the AVURLAssetPreferPreciseDurationAndTimingKey option of AVURLAsset, you might be able to achieve what you want without using Core Audio Units.
Just to fill out more about AVFoundation, you can place instances of AVAsset into an AVMutableComposition (via AVMutableCompositionTrack objects), and then use AVPlayerItem objects with an AVPlayer instance to control the result. The AVPlayerItem notification AVPlayerItemDidPlayToEndTimeNotification (docs) can be used to determine when individual assets finish, and the AVPlayer methods addBoundaryTimeObserverForTimes:queue:usingBlock: and addPeriodicTimeObserverForInterval:queue:usingBlock: can provide notifications at arbitrary times.
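For example, the boundary-observer approach might be sketched like this (assuming player is an AVPlayer and storedTimes holds the NSNumber offsets from the question):

    NSMutableArray *boundaries = [NSMutableArray array];
    for (NSNumber *t in storedTimes) {
        CMTime time = CMTimeMakeWithSeconds(t.doubleValue, NSEC_PER_SEC);
        [boundaries addObject:[NSValue valueWithCMTime:time]];
    }
    id observer =
        [player addBoundaryTimeObserverForTimes:boundaries
                                          queue:dispatch_get_main_queue()
                                     usingBlock:^{
            // called when playback reaches each boundary time
        }];
    // Keep a reference to observer; remove it later with removeTimeObserver:.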
With iOS, if your app will be playing audio, you can also get all of this to run in the background, meaning you can keep time whilst your app is in the background (though a warning: if it does not actually play audio, Apple might not accept your app using this background mode). Check out the UIBackgroundModes docs for more info.

how to sync audio in iPhone SDK with NSTimer?

I am running a countdown timer which updates the time on a label. I want to play a tick sound every second. I have the sound file, but it is not synced perfectly with the time. How do I sync the time with the audio? Also, if I use UIImagePickerController the sound stops; how do I manage this? If someone has a tick sound like a clock's, that would be great.
The best way to sync up your sound and time would be to actually play short (less than a second long) sound files (tick sounds) once per second as the NSTimer fires. It won't sound as nice as a real clock or chronometer ticking, but it would be easy to do. And if the sounds are that small, then you don't have to worry too much about latency. I think to be realistic you need to play two ticks per second: the first and second ticks about 0.3 seconds apart, the third starting at the next second, and the fourth, again, only about 0.3 seconds later. And so on.
For even tighter integration of sounds and GUI, you should read up on Audio Toolbox:
Use the Audio Toolbox framework to play audio with synchronization capabilities, access packets of incoming audio, parse audio streams, convert audio formats, and record audio with access to individual packets. For details, see Audio Toolbox Framework Reference and the SpeakHere sample code project.
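Along those lines, a minimal sketch using System Sound Services from Audio Toolbox (the file name tick.caf is assumed):

    #import <AudioToolbox/AudioToolbox.h>

    // Preload the tick once (e.g. in viewDidLoad).
    SystemSoundID tickSound;
    NSURL *tickURL = [[NSBundle mainBundle] URLForResource:@"tick"
                                             withExtension:@"caf"];
    AudioServicesCreateSystemSoundID((__bridge CFURLRef)tickURL, &tickSound);

    // Then, each time the NSTimer fires, play it with minimal latency:
    AudioServicesPlaySystemSound(tickSound);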