iOS timing every ms - objective-c

I am trying to interface through the microphone jack on the iPhone.
I need to update 15 bits constantly and I'm wondering if the best way to do this would be as follows:
I have a 16ms 'frame'. The first 1ms is the START bit at 500mV. Each of the next 15ms carries one bit, either 0V or 250mV. The frame then repeats, starting with the next START bit.
Can I accurately scan this quickly on iOS?

In a word, no. The best you can get is about every 5ms, but that's nowhere near stable enough to write an app around it. A safe margin is 30ms or so (once per 'frame', akin to a video frame rate of 30fps).

Memory efficient way of using an NSTimer to update a MenuBar Mac application?

I am creating an application for Mac, in Objective C, which will run in the Menu-bar and do periodic Desktop operations (such as changing the wallpaper). I am creating the application so that it stays in the Menu bar at all times, allowing easy access to configuration options and other information. My main concern is how to schedule my app to run every X minutes to do the desktop operations.
The most common solution I have seen is using NSTimer; however, I am concerned that it will not be memory efficient (after reading the following page in the Apple Developer docs). Using an NSTimer will prevent the laptop from going to sleep, and will need an always-running thread to check for when the NSTimer has elapsed. Is there a more memory-efficient way of using NSTimer to schedule these operations?
Alternatively, is there a way to use launchd to initiate a call to my application (which is in the menu bar) so that it can handle the event and do the desktop operations? I think the second way is better, but I am not sure if it is possible.
First, excellent instincts on keeping this low-impact. But you're probably over-worried in this particular case.
When they say "waking the system from an idle state" they don't mean system-level "sleep" where the screen goes black. They mean idle state. The CPU can take little mini-naps for fractions of a second when there isn't work that immediately needs to be done. This can dramatically reduce power requirements, even while the system is technically "awake."
The problem with having lots of timers flying around isn't so much their frequencies as their tolerances. Say you have 10 timers with a 1 second frequency, but they're offset from each other by 100ms (just by chance of what time it was when they happened to start). That means the longest possible "gap" the CPU gets to idle in is 100ms. But if they were configured at 1 second with a 0.9 second tolerance (i.e. anywhere between 1s and 1.9s), then the system could schedule them all together, do a bunch of work, and spend most of the second idle. That's much better for power.
To be a good timer citizen, you should first set your timer at the interval at which you really want to do work. If it is common for your timer to fire but all you do is check some condition and reschedule the timer, then you're wasting power. (Sounds like you already have this in hand.) The second thing you should do is set a reasonable tolerance. The default is 0, which is a very small tolerance (it's not literally "zero tolerance," but it's very small compared to minutes). For your kind of problem, I'd probably use a tolerance of at least 1s.
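For example, here is a minimal sketch of a tolerant repeating timer (the 5-minute interval, 30-second tolerance, and changeWallpaper: selector are illustrative, not from the question):

    // Sketch: a repeating NSTimer with a generous tolerance so the system can
    // coalesce the wake-up with other pending work.
    NSTimer *timer = [NSTimer scheduledTimerWithTimeInterval:300.0   // every 5 minutes
                                                      target:self
                                                    selector:@selector(changeWallpaper:)
                                                    userInfo:nil
                                                     repeats:YES];
    timer.tolerance = 30.0;   // let it fire anywhere in a 30 s window after the interval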
I highly recommend the Energy Best Practices talk from WWDC 2013. You may also be interested in the later Writing Energy Efficient Code sessions from 2014 and Achieving All-day Battery Life from 2015.
It is of course possible to do this with launchd, but it adds a lot of complexity, especially at installation time. I don't recommend it for the problem you're describing.

Disabling or gaming Sonos Crossfade

For short pieces of content, the Sonos crossfade feature is not desirable, and ideally I'd like to be able to disable it for short tracks, but this does not appear to be possible via the API. If it is not possible, I'd like to attempt to game it. Is crossfade implemented with a fixed duration, or is there some kind of signal analysis going on?
There is currently no way to adjust crossfade through the API. It is done with a fixed timing of 12 seconds (6 seconds for the end of one track and 6 seconds for the beginning of the next track).

Game Maker framerate and other timing related things

What's the framerate in GM games, and how often is the game code executed per frame? I can't find the answer explained anywhere in a way I understand. Are there ways to change these? I'm used to a solid 60 fps with game code executed once per frame. This is important because I'm used to programming in frame timing, meaning that one frame is the smallest unit of time that can be used, and counters are incremented (or decremented) once per frame. This also means that drops in framerate will create slowdown instead of frames being skipped. The game I've been programming in another program basically runs the game code once and then waits for a VBlank to happen before running it again.
Let me explain how GM works. It's complicated.
When the game starts, the game start event from every object is called. I believe this happens AFTER the create event.
Once per (virtual) frame, the step event is called. You see, GM doesn't lock itself down to whatever the room_speed is; it will run as fast as it can (if not compiled). fps_real shows you how many frames per second the engine is ACTUALLY pumping through in any given second.
So every second, assuming the processor and GPU can keep up with the room_speed, room_speed amount of step events occur. Take this situation for example:
room_speed is 60
fps is 60
fps_real is 758
Let's assume there is an ObjPlayer with a step event.
In this situation, ObjPlayer's step event will be run 60 times in that second.
This creates a problem, however. Let's say that when the space key is held down, the player moves 3 pixels to the left. Assuming the game is running at full speed (room_speed), the player will move 3 * 60 = 180 pixels in any given second.
However, let's say that the CPU and/or GPU can't keep up with 60 FPS. Let's say it's holding steady at 30. That would mean that the player will move a mere 3 * 30, or 90 pixels a second, making the game look much slower than it should. But if you have played a AAA game like Hitman Absolution, you will notice that the game looks just as fast at 30 FPS as it does at 120 FPS. How? Delta time.
Using delta time, you set the room_speed to the max (9999), and every time you use a pixels-per-frame speed, you multiply it by a delta-time factor normalized to 60 FPS. I am explaining it terribly; it's a lot easier than I make it out to be. Check out this guide on the GMC to see how to do it.
When you have delta time, you don't need to worry (as much) about the FPS -- it looks the same in terms of speed no matter what.
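The general idea, sketched in C-style code rather than GML (the names and the 60 FPS target are illustrative, not GM built-ins):

    // Scale per-frame speeds by how long the last frame actually took,
    // so movement per second stays constant whatever the real frame rate is.
    double target_dt = 1.0 / 60.0;              // speeds below are tuned for 60 FPS
    double dt = seconds_since_last_frame();     // hypothetical: measured frame time
    double scale = dt / target_dt;              // 1.0 at 60 FPS, 2.0 at 30 FPS, ...

    player_x -= 3.0 * scale;                    // "3 pixels per frame" now means
                                                // ~180 pixels per second at any FPS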
As for refresh rate, I am not 100% sure, but I believe GM games run at 60 Hz; I could be wrong, but that's what I've heard. All GM games are also 32-bit.
The variables room_speed and fps may be what you are looking for. There is no point in increasing fps to anything higher than room_speed, which can be modified during program execution or statically through the room editor and is 30 by default.
You can change the room speed in the room settings; this is the number of steps per second. The property can also be accessed and changed mid-game through the room_speed global.
You can also read the actual screen update speed with the read-only fps global. Note: the screen fps is totally independent of the draw event, which happens at the end of each step.

execute functions with millisecond accuracy based on timer

I have an array of floats which represent time values based on when events were triggered over a period of time.
0: time stored: 1.68
1: time stored: 2.33
2: time stored: 2.47
3: time stored: 2.57
4: time stored: 2.68
5: time stored: 2.73
6: time stored: 2.83
7: time stored: 2.92
8: time stored: 2.98
9: time stored: 3.05
I would now like to start a timer and, when the timer hits 1 second and 687 milliseconds (the first position in the array), have an event triggered / a method executed.
Then, when the timer hits 2 seconds and 337 milliseconds, a second method execution should be triggered, and so on right through the last element in the array at 3 seconds and 56 milliseconds, when the last event is triggered.
How can I mimic something like this? I need something with high accuracy.
I guess what I'm essentially asking is how to create a metronome with high precision method calls to play the sound back on time?
…how to create a metronome with high precision method calls to play the sound back on time?
You would use the audio clock, which has all the accuracy you would typically want (the sample rate for audio playback -- e.g. 44.1kHz) - not an NSTimer.
Specifically, you can use a sampler (e.g. AudioUnit) and schedule MIDI events, or you can fill buffers with your (preloaded) click sounds' sample data in your audio streaming callback at the sample positions determined by the tempo.
To maintain 1ms accuracy or better, you will need to base your timing on the audio clock at all times. This is really very easy, because your tempo dictates an interval in frames.
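A minimal sketch of that frame arithmetic (the 44.1 kHz sample rate and 120 BPM tempo are assumptions for illustration):

    // At 44.1 kHz, one millisecond is about 44 frames, so working in frames
    // gives you sub-millisecond resolution for free.
    const double  kSampleRate   = 44100.0;
    const double  kBeatsPerMin  = 120.0;                                         // assumed tempo
    const int64_t framesPerBeat = (int64_t)((60.0 / kBeatsPerMin) * kSampleRate); // 22050 frames
    // In the render callback, keep a running frame counter; whenever it crosses
    // the next multiple of framesPerBeat, mix the click sample in at that offset.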
The tough part (for most people) is getting used to working in realtime contexts and using the audio frameworks, if you have not worked in that domain previously.
Look into dispatch_after(). You'd create a target time for it using something like dispatch_time(DISPATCH_TIME_NOW, (int64_t)(1.687 * NSEC_PER_SEC)).
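For the array in the question, that might look something like the sketch below (triggerEvent is a hypothetical method; dispatch timers can still drift by a few milliseconds, which is why the audio-level approaches are preferred when you need strict accuracy):

    // Sketch: schedule one block per stored offset, all measured from the same "now".
    NSArray<NSNumber *> *offsets = @[@1.68, @2.33, @2.47, @2.57, @2.68];   // etc.
    dispatch_time_t start = dispatch_time(DISPATCH_TIME_NOW, 0);           // fix "now" once
    for (NSNumber *offset in offsets) {
        dispatch_time_t when = dispatch_time(start, (int64_t)(offset.doubleValue * NSEC_PER_SEC));
        dispatch_after(when, dispatch_get_main_queue(), ^{
            [self triggerEvent];   // hypothetical method for the event at this offset
        });
    }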
Update: if you only want to play sounds at specific times, rather than do arbitrary work, then you should use an audio API that allows you to schedule playback at specific times. I'm most familiar with the Audio Queue API. You would create a queue and create 2 or 3 buffers. (2 if the audio is always the same. 3 if you dynamically load or compute it.) Then, you'd use AudioQueueEnqueueBufferWithParameters() to queue each buffer with a specific start time. The audio queue will then take care of playing as close as possible to that requested start time. I doubt you're going to beat the precision of that by manually coding an alternative. As the queue returns processed buffers to you, you refill it if necessary and queue it at the next time.
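Here is a fragmentary sketch of the scheduling call; it assumes queue and buffer were created elsewhere (AudioQueueNewOutput / AudioQueueAllocateBuffer), filled with LPCM click data, and that the queue has been started:

    // Ask the queue to start this buffer at an absolute sample time rather
    // than "as soon as possible".
    AudioTimeStamp startTime = {0};
    startTime.mFlags      = kAudioTimeStampSampleTimeValid;
    startTime.mSampleTime = 1.687 * 44100.0;   // 1.687 s into playback at 44.1 kHz

    OSStatus err = AudioQueueEnqueueBufferWithParameters(queue,
                                                         buffer,
                                                         0, NULL,    // no packet descriptions (LPCM)
                                                         0, 0,       // no trimming
                                                         0, NULL,    // no parameter events
                                                         &startTime,
                                                         NULL);
    // Check err against noErr in real code.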
I'm sure that AVFoundation must have a similar facility for scheduling playback at a specific time, but I'm not familiar with it.
To get high-precision timing you'd have to jump down a programming level or two and utilise something like the Core Audio Unit framework, which offers sample-accurate timing (at 44.1 kHz, samples occur roughly every 0.023 ms).
The drawback to this approach is that to get such timing performance, Core Audio Unit programming eschews Objective-C for a C/C++ approach, which is (in my opinion) tougher to code than Objective-C. The way Core Audio Units work is also quite confusing on top of that, especially if you don't have a background in audio DSP.
Staying in Objective-C, you probably know that NSTimers are not an option here. Why not check out the AVFoundation framework? It can be used for precise media sequencing, and with a bit of creative sideways thinking and the AVURLAssetPreferPreciseDurationAndTimingKey option of AVURLAsset, you might be able to achieve what you want without using Core Audio Units.
Just to fill out more about AVFoundation, you can place instances of AVAsset into an AVMutableComposition (via AVMutableCompositionTrack objects), and then use AVPlayerItem objects with an AVPlayer instance to control the result. The AVPlayerItem notification AVPlayerItemDidPlayToEndTimeNotification (docs) can be used to determine when individual assets finish, and the AVPlayer methods addBoundaryTimeObserverForTimes:queue:usingBlock: and addPeriodicTimeObserverForInterval:queue:usingBlock: can provide notifications at arbitrary times.
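As a sketch of the boundary-observer route (assuming player is an AVPlayer that is already set up and playing, with the two times taken from the question's array):

    // Fire a block when playback crosses each of the listed times.
    NSArray<NSValue *> *times = @[
        [NSValue valueWithCMTime:CMTimeMakeWithSeconds(1.68, 600)],
        [NSValue valueWithCMTime:CMTimeMakeWithSeconds(2.33, 600)],
    ];
    id observer = [player addBoundaryTimeObserverForTimes:times
                                                    queue:dispatch_get_main_queue()
                                               usingBlock:^{
        NSLog(@"boundary reached at %f s", CMTimeGetSeconds(player.currentTime));
    }];
    // Hold on to `observer` and call [player removeTimeObserver:observer] when done.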
With iOS, if your app will be playing audio, you can also keep all of this running in the background, meaning you can keep time whilst your app is in the background (though a warning: if it does not actually play audio, Apple might not accept your app's use of this background mode). Check out the UIBackgroundModes docs for more info.

mach_msg_trap, - (void) mouseDragged and timer performance

Leopard 10.5.8, Xcode 3.1.1; using runModalForWindow to implement (what is intended to be) a high-performance mouse tracking mechanism where I have to do real-time complex bitmap modifications.
The modal loop runs, the timer fires, the mouse tracks... but the performance is abysmal, and it gets worse and worse the longer the runloop goes on. Instead of catching mouse messages every pixel or so, I get them every 5... 10.... 20 seconds.
Instruments shows that the majority of the time during this growing response bottleneck is being spent in mach_msg_trap (and yes, I have the perspective set to the running app), so the impression I am under is that it "thinks" it doesn't have any work to do, despite the fact that I'm dragging the mouse around with the button held down like a crazy person. There are no memory leaks showing up, and on my 8-core 2.8 GHz machine, there's almost no CPU activity going on.
Again, the app is not spending much time in my code... so it's not a performance problem of mine. I've probably configured something wrong, or failed to configure it at all, or am simply approaching the whole idea wrong -- but I sure would appreciate some insight here. As it stands now, the dispatch of mouse messages and timer messages is absolutely unacceptable. You couldn't implement a crayon drawing program for someone immersed in cold molasses with the response times I'm getting.
EDIT: Some additional info: this doesn't happen on my 10.5.8 MacBook Pro, just the 8-core, 6-display Mac Pro. I tried taking the display code for the croprect in drawRect out and replacing it with an NSLog()... it still drags on issuing mouse updates. I also tried rebooting and running without the usual complement of apps running, and with mirrored displays. No difference.
Imagine dragging a brush across the screen; at first, it paints smoothly, then gaps appear between brush placements, then they get larger, and this goes on until you're only getting one brush placement every 10 seconds. That's how this acts. Using NSLog() and various other tracking methods, I've determined that, at least at the highest level, it is occurring because the mouseDragged events slow down to a trickle. The question in a nutshell is: why would that happen?
Anyone?
OK, isolated it -- the problem comes from my Wacom tablet mouse. Plug in a regular optical mouse and everything runs great. Same thing on my MacBook Pro using the trackpad: works fine.
The tablet is a Wacom Intuos 4 with the stock drivers as of January 2011. I'm going over to the Wacom site and reporting this next.
What a nightmare that was. I have spent over 100 hours on this, thinking I'd hosed some subtlety in the app handling, drawing, etc. Sheesh.