NSTimer as Alarm - objective-c

Is the best practice for setting an alarm on OS X to create an NSTimer scheduled for the number of seconds between the current time and the desired alarm time, or is there an alternative to that method?

That is the best practice, yes. Timers don't poll and don't otherwise consume resources until fired.
Timers aren't particularly accurate at fine granularity, but they should be "good enough" for anything in the range of hundreds of milliseconds or more.
One thing to consider, though, is that the system may go to sleep and this can interfere with your timer. If you need to prevent sleep, consider carefully the impact on battery life and read up on Power Management.
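As a minimal sketch of that approach (the method names here are placeholders, and this assumes the code runs on a thread with a running run loop, e.g. the main thread):

- (void)scheduleAlarmForDate:(NSDate *)alarmDate
{
    NSTimeInterval delay = [alarmDate timeIntervalSinceNow];
    if (delay < 0) delay = 0;  // the date is already in the past; fire immediately
    [NSTimer scheduledTimerWithTimeInterval:delay
                                     target:self
                                   selector:@selector(alarmFired:)
                                   userInfo:nil
                                    repeats:NO];
}

- (void)alarmFired:(NSTimer *)timer
{
    // Handle the alarm here.
}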

Related

Is it possible to set a Timer with condition in Mongoose OS?

I am familiar with using mgos_msleep(value) or mgos_usleep(value). However, using sleep is not good for the device.
Can someone suggest a better approach?
It depends on the architecture of the system and on your use case.
The sleep call in Mongoose OS does not cause a busy wait; it tells the OS not to schedule that particular process until the sleep duration is over. mgos_usleep is a sleep with finer resolution (microseconds) and should have very little impact, but again this depends on your requirements / use case.
As for timers, Mongoose OS supports both software timers and hardware timers. The decision to use a software timer or a hardware timer is, in turn, based on your application's requirements.
mgos_set_timer sets up a software timer with a millisecond timeout and the respective callback. The timer interval is specified in milliseconds and the number of software timers is not limited. The software timer callback is executed in the Mongoose task context. This type of timer has fairly low accuracy and high jitter.
mgos_set_hw_timer sets up a hardware timer with a microsecond timeout and the respective callback. The hardware timer callback is executed in ISR context, so what it can do there is limited. Which hardware timers or counters are available depends on the processor you use, so you may need to look at its datasheet. Accordingly, the number of hardware timers is limited, and the interval is specified in microseconds.
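For illustration, a minimal C sketch of the software-timer approach (my_condition_is_met is a hypothetical placeholder for your own check; the mgos_set_timer call and app-init boilerplate follow the standard Mongoose OS C API):

#include <stdbool.h>
#include "mgos.h"

static bool my_condition_is_met(void) {
  /* Hypothetical placeholder: replace with the condition you want to check. */
  return false;
}

static void my_timer_cb(void *arg) {
  if (my_condition_is_met()) {
    /* ... do the work ... */
  }
  (void) arg;
}

enum mgos_app_init_result mgos_app_init(void) {
  /* Repeating software timer, fires roughly every 1000 ms. */
  mgos_set_timer(1000, MGOS_TIMER_REPEAT, my_timer_cb, NULL);
  return MGOS_APP_INIT_SUCCESS;
}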

Memory efficient way of using an NSTimer to update a MenuBar Mac application?

I am creating a Mac application, in Objective-C, which will run in the menu bar and perform periodic desktop operations (such as changing the wallpaper). The application stays in the menu bar at all times, allowing easy access to configuration options and other information. My main concern is how to schedule my app to run every X minutes to do the desktop operations.
The most common solution I have seen is using NSTimer; however, I am concerned that it will not be memory efficient (after reading the following page in the Apple Developer docs). My understanding is that using an NSTimer will prevent the laptop from going to sleep and will need an always-running thread to check whether the NSTimer has elapsed. Is there a more memory-efficient way of using NSTimer to schedule these operations?
Alternatively, is there a way to use launchd to initiate a call to my application (which is in the menu bar) so that it can handle the event and do the desktop operations? I think the second way is better, but I am not sure whether it is possible.
First, excellent instincts on keeping this low-impact. But you're probably over-worried in this particular case.
When they say "waking the system from an idle state" they don't mean system-level "sleep" where the screen goes black. They mean idle state. The CPU can take little mini-naps for fractions of a second when there isn't work that immediately needs to be done. This can dramatically reduce power requirements, even while the system is technically "awake."
The problem with having lots of timers flying around isn't so much their frequencies as their tolerances. Say you have 10 timers with a 1-second frequency, but they're offset from each other by 100ms (just by chance of when they happened to start). That means the longest possible idle "gap" is 100ms. But if they were configured at 1 second with a 0.9-second tolerance (i.e. firing anywhere between 1s and 1.9s), then the system could schedule them all together, do a bunch of work, and spend most of the second idle. That's much better for power.
To be a good timer citizen, you should first set your timer at the interval at which you really want to do work. If it is common for your timer to fire but all you do is check some condition and reschedule the timer, then you're wasting power. (Sounds like you already have this in hand.) The second thing you should do is set a reasonable tolerance. The default is 0, which is a very small tolerance (it's not literally "zero tolerance," but it's very small compared to minutes). For your kind of problem, I'd probably use a tolerance of at least 1s.
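As a rough sketch (the 10-minute interval and -updateDesktop: selector are placeholders for your own values; the tolerance property requires OS X 10.9+):

NSTimer *timer = [NSTimer scheduledTimerWithTimeInterval:10 * 60  // do work every 10 minutes
                                                  target:self
                                                selector:@selector(updateDesktop:)
                                                userInfo:nil
                                                 repeats:YES];
// Allow the system to coalesce this firing with other pending work.
timer.tolerance = 60;  // up to a minute of slack is fine for a wallpaper change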
I highly recommend the Energy Best Practices talk from WWDC 2013. You may also be interested in the later Writing Energy Efficient Code sessions from 2014 and Achieving All-day Battery Life from 2015.
It is of course possible to do this with launchd, but it adds a lot of complexity, especially at installation time. I don't recommend it for the problem you're describing.

Using mach_absolute_time() to slow down a simulation. Is this the proper way?

I am writing an application for OS X (Obj-C/Cocoa) that runs a simulation and displays the results to the user. In one case, I want the simulation to run in "real time" so that the user can watch it go by at the same speed it would happen in real life. The simulation is run with a specific timestep, dt. Right now, I am using mach_absolute_time() to slow down the simulation. When I profile this code, I see that by far most of my CPU time is spent in mach_absolute_time() and my CPU is pegged at 100%. Am I doing this right? I figured that if I'm slowing the simulation down so that the program isn't simulating anything most of the time, then CPU usage should be low; but mach_absolute_time() obviously isn't a "free" call, so I feel like there might be a better way.
double nextT = mach_absolute_time();
while (runningSimulation)
{
    if (mach_absolute_time() >= nextT)
    {
        nextT += dt_ns;
        // Compute the next "frame" of the simulation
        // ....
    }
}
Do not spin at all.
That is the first rule of writing GUI apps where battery life and app responsiveness matter.
sleep() or nanosleep() can be made to work, but only if used on something other than the main thread.
A better solution is to use any of the time-based constructs in GCD, as that will make more efficient use of system resources.
If you want the simulation to appear smooth to the user, you'll really want to lock the slowed version to the refresh rate of the screen. On iOS, there is CADisplayLink. I don't know of a direct equivalent on the Mac.
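A rough sketch of the GCD approach (dt_ns is the timestep in nanoseconds from the question; -computeNextFrame is a placeholder, and you need to keep a strong reference to the timer source, e.g. in an ivar, so it isn't deallocated):

dispatch_queue_t queue = dispatch_queue_create("com.example.simulation", DISPATCH_QUEUE_SERIAL);
dispatch_source_t timer = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, queue);
dispatch_source_set_timer(timer,
                          dispatch_time(DISPATCH_TIME_NOW, dt_ns),  // first fire
                          dt_ns,       // repeat interval in nanoseconds
                          dt_ns / 10); // leeway: a little jitter buys efficiency
dispatch_source_set_event_handler(timer, ^{
    [self computeNextFrame];  // compute the next "frame" of the simulation
});
dispatch_resume(timer);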
You are busy-spinning. If there is a lot of time before you need to simulate again, consider sleeping instead.
However, no sleep call guarantees that it will sleep for exactly the duration specified. Depending on how accurate you need to be, you can sleep for a little less than the remaining time and then spin for the rest.

iOS: recreating countdown timers after clock change?

I'm working on an iOS app that uses an NSTimer for a countdown. This is prone to user tampering: if, for example, the user switches out of the app, closes the app manually, changes the device clock, and comes back in, the timer will have to be recreated. Another scenario: the user locks the device, it goes into low-power mode (which requires timers to be recreated), and the clock auto-sets before the game is opened again. If that happens, I won't have an accurate way of determining how much time has passed since the app was closed, since the device clock has changed.
Tl;dr: countdown timers sometimes have to be recreated after a device clock change. How is this problem usually handled?
Any time you're relying on the system clock for accurate timing you're going to have trouble, even if the user isn't deliberately tampering with the clock. Typically, clock drift is corrected by slightly increasing or decreasing the length of a second to let the clock drift back into alignment over a period of minutes. If you need accurate timing, you can either use something like mach_absolute_time(), which is tied to system uptime rather than the system clock, or you can use Grand Central Dispatch. The dispatch_after() function takes a dispatch_time_t, which can be expressed either using wall time (i.e. the system clock) or as an offset against DISPATCH_TIME_NOW (which ignores the wall clock).
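As a rough sketch of the difference (the 60-second delay is just an example value):

int64_t delay = 60 * NSEC_PER_SEC;  // e.g. a 60-second countdown

// Fires after ~60 s of elapsed time, unaffected by the user changing the clock:
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, delay), dispatch_get_main_queue(), ^{
    // countdown finished
});

// Fires when the wall clock reaches "now + 60 s" (shifts if the clock is set):
dispatch_after(dispatch_walltime(NULL, delay), dispatch_get_main_queue(), ^{
    // countdown finished (wall-clock based)
});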
For future reference, in regard to different systems of timekeeping in OSX (and consequently iOS):
One way to measure the speed of any operation, including launch times, is to use system routines to get the current time at the beginning and end of the operation. Once you have the two time values, you can take the difference and log the results.
The advantage of this technique is that it lets you measure the duration of specific blocks of code. Mac OS X includes several different ways to get the current time:
mach_absolute_time reads the CPU time base register and is the basis for other time measurement functions.
The Core Services UpTime function provides nanosecond resolution for time measurements.
The BSD gettimeofday function (declared in <sys/time.h>) provides microsecond resolution. (Note, this function incurs some overhead but is still accurate for most uses.)
In Cocoa, you can create an NSDate object with the current time at the beginning of the operation and then use the timeIntervalSinceDate: method to get the time difference.
Source: Launch Time Performance Guidelines, "Gathering Launch Time Metrics".

Calculating number of seconds between two points in time, in Cocoa, even when system clock has changed mid-way

I'm writing a Cocoa OS X (Leopard 10.5+) end-user program that's using timestamps to calculate statistics for how long something is being displayed on the screen. Time is calculated periodically while the program runs using a repeating NSTimer. [NSDate date] is used to capture timestamps, Start and Finish. Calculating the difference between the two dates in seconds is trivial.
A problem occurs if an end-user or ntp changes the system clock. [NSDate date] relies on the system clock, so if it's changed, the Finish variable will be skewed relative to the Start, messing up the time calculation significantly. My question:
1. How can I accurately calculate the time between Start and Finish, in seconds, even when the system clock is changed mid-way?
I'm thinking that I need a non-changing reference point in time so I can calculate how many seconds has passed since then. For example, system uptime. 10.6 has - (NSTimeInterval)systemUptime, part of NSProcessInfo, which provides system uptime. However, this won't work as my app must work in 10.5.
I've tried creating a time counter using NSTimer, but this isn't accurate. A run loop has several modes and runs in only one at a time, and a timer only fires in the mode(s) it is scheduled in; NSTimer is (by default) scheduled in the default run loop mode. If the user manipulates the UI for long enough, the run loop stays in NSEventTrackingRunLoopMode and skips the default mode, so NSTimer firings can be skipped, making it an inaccurate way of counting seconds.
I've also thought about creating a separate thread (with its own NSRunLoop) to run an NSTimer second-counter, keeping it away from UI interactions. But I'm very new to multi-threading and I'd like to stay away from that if possible. Also, I'm not sure this would work accurately if the CPU gets pegged by another application (Photoshop rendering a large image, etc...), causing my NSRunLoop to be put on hold long enough to mess up its NSTimer.
I appreciate any help. :)
Depending on what's driving this code, you have 2 choices:
For absolute precision, use mach_absolute_time(). It will give the time interval exactly between the points at which you called the function.
But in a GUI app, this is often actually undesirable. Instead, you want the time difference between the events that started and finished your duration. If so, compare the [[NSApp currentEvent] timestamp] values of those events.
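A minimal sketch of the mach_absolute_time() route (ElapsedSeconds is just an illustrative helper name; the mach_timebase_info() call is what converts ticks into nanoseconds):

#include <mach/mach_time.h>

static double ElapsedSeconds(uint64_t startTicks, uint64_t endTicks)
{
    mach_timebase_info_data_t timebase;
    mach_timebase_info(&timebase);
    uint64_t elapsed = endTicks - startTicks;
    return (double)elapsed * timebase.numer / timebase.denom / 1e9;  // ticks -> ns -> s
}

// Usage:
// uint64_t start = mach_absolute_time();
// ... later ...
// double seconds = ElapsedSeconds(start, mach_absolute_time());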
Okay so this is a long shot, but you could try implementing something sort of like NSSystemClockDidChangeNotification available in Snow Leopard.
So bear with me here, because this is a strange idea and is definitely non-deterministic. But what if you had a watchdog thread running for the duration of your program? This thread would, every n seconds, read the system time and store it. For the sake of argument, let's make it 5 seconds. So every 5 seconds, it compares the previous reading to the current system time. If there's a "big enough" difference ("big enough" would definitely need to be greater than 5, but not too much greater, to account for the non-determinism of process scheduling and thread prioritization), post a notification that there has been a significant time change. You would need to play around with fuzzing the value that constitutes "big enough" (or small enough, if the clock was reset to an earlier time) for your accuracy needs.
I know this is kind of hacky, but barring any other solution, what do you think? Might that, or something like that, solve your issue?
Edit
Okay so you modified your original question to say that you'd rather not use a watchdog thread because you are new to multithreading. I understand the fear of doing something a bit more advanced than you are comfortable with, but this might end up being the only solution. In that case, you might have a bit of reading to do. =)
And yeah, I know that something such as Photoshop pegging the crap out of the processor is a problem. Another (even more complicated) solution would be to, instead of having a watchdog thread, have a separate watchdog process that has top priority so it is a bit more immune to processor pegging. But again, this is getting really complicated.
Final Edit
I'm going to leave all my other ideas above for completeness' sake, but it seems that using the system's uptime will also be a valid way to deal with this. Since [[NSProcessInfo processInfo] systemUptime] only works in 10.6+, you can just call mach_absolute_time(). To get access to that function, just #include <mach/mach_time.h>. That should be the same value as returned by NSProcessInfo.
I figured out a way to do this using the UpTime() C function, provided in <CoreServices/CoreServices.h>. This returns Absolute Time (CPU-specific), which can easily be converted into Duration Time (milliseconds, or nanoseconds). Details here: http://www.meandmark.com/timingpart1.html (look under part 3 for UpTime)
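For reference, a rough sketch of that conversion (AbsoluteToNanoseconds and UnsignedWideToUInt64 come from the same old Core Services headers; see the article above for details):

#include <CoreServices/CoreServices.h>

AbsoluteTime start = UpTime();
// ... time passes, e.g. between NSTimer firings ...
AbsoluteTime end = UpTime();
uint64_t startNs = UnsignedWideToUInt64(AbsoluteToNanoseconds(start));
uint64_t endNs   = UnsignedWideToUInt64(AbsoluteToNanoseconds(end));
double seconds = (endNs - startNs) / 1e9;  // duration in seconds, clock-change safe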
I couldn't get mach_absolute_time() to work properly, likely due to my lack of knowledge on it, and not being able to find much documentation on the web about it. It appears to grab the same time as UpTime(), but converting it into a double left me dumbfounded.
[[NSApp currentEvent] timestamp] did work, but only if the application was receiving NSEvents. If the application went into the background, it wouldn't receive events, and [[NSApp currentEvent] timestamp] would simply continue to return the same old timestamp again and again in an NSTimer firing method, until the end-user decided to interact with the app again.
Thanks for all your help Marc and Mike! You both definitely sent me in the right direction leading to the answer. :)