iOS: recreating countdown timers after clock change? - objective-c

I'm working on an iOS app that uses an NSTimer for a countdown. This is prone to user tampering: if, for example, the user switches out of the app, closes the app manually, changes the device clock, and comes back in, the timer will have to be recreated. Another scenario: the user locks the device, it goes into low-power mode (which requires timers to be recreated), and the clock auto-sets before the game is opened again. If that happens, I won't have an accurate way of determining how much time has passed since the app was closed, since the device clock has changed.
Tl;dr: countdown timers sometimes have to be recreated after a device clock change. How is this problem usually handled?

Any time you're relying on the system clock for accurate timing you're going to have trouble, even if the user isn't deliberately tampering with the clock. Typically, clock drift is corrected by slightly increasing or decreasing the length of a second to let the clock drift back into alignment over a period of minutes. If you need accurate timing, you can either use something like mach_absolute_time(), which is tied to system uptime rather than the system clock, or you can use Grand Central Dispatch. The dispatch_after() function takes a dispatch_time_t, which can be expressed either as wall time (i.e. the system clock) or as an offset against DISPATCH_TIME_NOW (which ignores the wall clock).

For future reference, in regard to different systems of timekeeping in OSX (and consequently iOS):
One way to measure the speed of any operation, including launch times, is to use system routines to get the current time at the beginning and end of the operation. Once you have the two time values, you can take the difference and log the results.
The advantage of this technique is that it lets you measure the duration of specific blocks of code. Mac OS X includes several different ways to get the current time:
mach_absolute_time reads the CPU time base register and is the basis for other time measurement functions.
The Core Services UpTime function provides nanosecond resolution for time measurements.
The BSD gettimeofday function (declared in <sys/time.h>) provides microsecond resolution. (Note: this function incurs some overhead but is still accurate for most uses.)
In Cocoa, you can create an NSDate object with the current time at the beginning of the operation and then use the timeIntervalSinceDate: method to get the time difference.
Source: Launch Time Performance Guidelines, "Gathering Launch Time Metrics".

Related

Memory efficient way of using an NSTimer to update a MenuBar Mac application?

I am creating an application for Mac, in Objective C, which will run in the Menu-bar and do periodic Desktop operations (such as changing the wallpaper). I am creating the application so that it stays in the Menu bar at all times, allowing easy access to configuration options and other information. My main concern is how to schedule my app to run every X minutes to do the desktop operations.
The most common solution I have seen is using NSTimer; however, I am concerned that it will not be memory efficient (after reading the following page in the Apple Developer docs). Using an NSTimer will prevent the laptop from going to sleep, and will need an always-running thread to check for when the NSTimer has elapsed. Is there a more memory-efficient way of using NSTimer to schedule these operations?
Alternatively, is there a way to use launchd to initiate a call to my application (which is in the menu bar) so that it can handle the event and do the desktop operations? I think that the second way is better, but am not sure if it is possible.
First, excellent instincts on keeping this low-impact. But you're probably over-worried in this particular case.
When they say "waking the system from an idle state" they don't mean system-level "sleep" where the screen goes black. They mean idle state. The CPU can take little mini-naps for fractions of a second when there isn't work that immediately needs to be done. This can dramatically reduce power requirements, even while the system is technically "awake."
The problem with having lots of timers flying around isn't so much their frequencies as their tolerances. Say you have 10 timers with a 1 second frequency, but they're offset from each other by 100ms (just by chance of what time it was when they happened to start). That means the longest possible "gap" is 100ms. But if they were configured at 1 second with a 0.9 second tolerance (i.e. firing anywhere between 1s and 1.9s), then the system could schedule them all together, do a bunch of work, and spend most of the second idle. That's much better for power.
To be a good timer citizen, you should first set your timer at the interval at which you really want to do work. If it is common for your timer to fire but all you do is check some condition and reschedule the timer, then you're wasting power. (Sounds like you already have this in hand.) The second thing you should do is set a reasonable tolerance. The default is 0, which is a very small tolerance (it's not actually "0 tolerance," but it's very small compared to a minute). For your kind of problem, I'd probably use a tolerance of at least 1s.
I highly recommend the Energy Best Practices talk from WWDC 2013. You may also be interested in the later Writing Energy Efficient Code sessions from 2014 and Achieving All-day Battery Life from 2015.
It is possible, of course, to do this with launchd, but it adds a lot of complexity, especially on installation. I don't recommend it for the problem you're describing.

Why does post-layout simulation take a long time?

I am wondering why post-layout simulations for digital designs take a long time?
Why can't software just figure out a chip's timing and model the behavior with a program that creates delays with sleep() or something? My guess is that sleep() isn't accurate enough to model hardware, but I'm not sure.
So, what is it actually doing that takes so long?
Thanks!
Post-layout simulations (in fact, anything post-synthesis) will be simulating gates rather than RTL, and there are a lot of gates.
I think you've got your understanding of how a simulator works a little confused. I say that because a call like sleep() is related to waiting for time as measured by the clock on the wall, not the simulator time counter. Simulator time advances however quickly the simulator runs.
A simulator is a loop that evaluates the system state. Each iteration of the loop is a 'time slice' e.g. what the state of the system is at time 100ns. It only advances from one time slice to the next when all the signals in it have reached a steady state.
In an RTL or untimed gate simulation, most evaluation of signals happens in 'zero time', which is to say that the effect of evaluating an assignment happens in the same time slice. The one exception tends to be the clock, which is defined to change at a certain time and it causes registers to fire, which causes them to change their output, which causes processes, modules, assignments which have inputs from registers to re-evaluate, which causes other signals to change, which causes other processes to re-evaluate, etc, etc, etc.... until everything has settled down, and we can move to the next clock edge.
In a post-layout simulation, with back-annotated timing, every gate in the system has a delay from input to output associated with it. This means nothing happens in 'zero time' any more. The simulator now has to put the effect of every assignment on a list saying 'signal b will change to 1 at time 102.35ns'. Every gate has different timing. Every input on every gate will have different timing to the output. This means that a back-annotated simulation has to evaluate lots and lots of time slices, as signals are changing state at lots of different times - not just when the clock changes. There probably isn't much happening in each slice, but there are lots of them.
...and I've only talked about adding gate timing. Add wire timing and things get even more complex.
Basically there's a whole lot more to worry about, and so the sims get slower.

iPhone hardware & user timers getting out of sync over time

I'm trying to detect if the user advances their clock while an app is running. I'm currently doing this by comparing how two timers change: [NSDate timeIntervalSinceReferenceDate] and mach_absolute_time.
The basic algorithm is this:
At the start of the app, save startUserClock (timeIntervalSinceReferenceDate) and startSystemClock (mach_absolute_time converted to seconds)
Periodically diff the current values of the timers against their respective start values.
If the diffs differ (within some margin of error), we know that the timers are out of sync, which indicates a clock change - theoretically, the only time this should be possible is if the user has modified their clock.
However, it seems that mach_absolute_time is growing at a slightly faster rate than timeIntervalSinceReferenceDate. In the short term, this isn't a huge issue, but over time the difference adds up and we start seeing a lot of false positives.
This problem appears to be hardware dependent. I don't see it at all on the iPad 1 I have, but a colleague sees it on his iPad 2 and I see it in the simulator.
I've confirmed that there isn't a problem with my conversion of mach_absolute_time to seconds by replacing it with CACurrentMediaTime (which uses mach_absolute_time under the hood). I've tried changing timeIntervalSinceReferenceDate to other timing methods (ie: CFAbsoluteTimeGetCurrent).
Is there something wrong with my base assumption that the timers should grow at the same rate? My thinking was that unless something is fundamentally wrong they shouldn't get so out of sync - they're both determining time, just starting from different points.
Is there a better way to do this? I need a completely offline solution - we can't assume an internet connection.
You could try using gettimeofday to get the value of the current system time. This returns the time since the epoch.
In general, timers aren't guaranteed to be in synch. Perhaps they're derived from different sources. Perhaps one is synchronized against a time server.
You only care about sudden changes in the difference between the clocks, not large drifts over time. So compute the rate of change of the difference between the clocks:
T_Diff := initial difference between clocks
T_Time := time as determined by one clock
Repeat forever:
T_Diff' := new difference between clocks
T_Time' := new time as determined by one clock
# Check if time changed
if abs((T_Diff'-T_Diff)/(T_Time'-T_Time)) > threshold:
time_changed()
# Update state
T_Diff := T_Diff'
T_Time := T_Time'
wait_for_next_update()

Calculating number of seconds between two points in time, in Cocoa, even when system clock has changed mid-way

I'm writing a Cocoa OS X (Leopard 10.5+) end-user program that's using timestamps to calculate statistics for how long something is being displayed on the screen. Time is calculated periodically while the program runs using a repeating NSTimer. [NSDate date] is used to capture timestamps, Start and Finish. Calculating the difference between the two dates in seconds is trivial.
A problem occurs if an end-user or ntp changes the system clock. [NSDate date] relies on the system clock, so if it's changed, the Finish variable will be skewed relative to the Start, messing up the time calculation significantly. My question:
1. How can I accurately calculate the time between Start and Finish, in seconds, even when the system clock is changed mid-way?
I'm thinking that I need a non-changing reference point in time so I can calculate how many seconds have passed since then. For example, system uptime. 10.6 has - (NSTimeInterval)systemUptime, part of NSProcessInfo, which provides system uptime. However, this won't work, as my app must work on 10.5.
I've tried creating a time counter using NSTimer, but this isn't accurate. NSTimer has several different run modes and can only run one at a time. NSTimer (by default) is put into the default run mode. If a user starts manipulating the UI for a long enough time, this will enter NSEventTrackingRunLoopMode and skip over the default run mode, which can lead to NSTimer firings being skipped, making it an inaccurate way of counting seconds.
I've also thought about creating a separate thread (NSRunLoop) to run a NSTimer second-counter, keeping it away from UI interactions. But I'm very new to multi-threading and I'd like to stay away from that if possible. Also, I'm not sure if this would work accurately in the event the CPU gets pegged by another application (Photoshop rendering a large image, etc...), causing my NSRunLoop to be put on hold for long enough to mess up its NSTimer.
I appreciate any help. :)
Depending on what's driving this code, you have 2 choices:
For absolute precision, use mach_absolute_time(). It will give the time interval exactly between the points at which you called the function.
But in a GUI app, this is often actually undesirable. Instead, you want the time difference between the events that started and finished your duration. If so, compare [[NSApp currentEvent] timestamp] values instead.
Okay so this is a long shot, but you could try implementing something sort of like NSSystemClockDidChangeNotification available in Snow Leopard.
So bear with me here, because this is a strange idea and is definitely non-deterministic. But what if you had a watchdog thread running through the duration of your program? This thread would, every n seconds, read the system time and store it. For the sake of argument, let's just make it 5 seconds. So every 5 seconds, it compares the previous reading to the current system time. If there's a "big enough" difference ("big enough" would definitely need to be greater than 5, but not too much greater, to account for the non-determinism of process scheduling and thread prioritization), post a notification that there has been a significant time change. You would need to play around with fuzzing the value that constitutes "big enough" (or small enough, if the clock was reset to an earlier time) for your accuracy needs.
I know this is kind of hacky, but barring any other solution, what do you think? Might that, or something like that, solve your issue?
Edit
Okay so you modified your original question to say that you'd rather not use a watchdog thread because you are new to multithreading. I understand the fear of doing something a bit more advanced than you are comfortable with, but this might end up being the only solution. In that case, you might have a bit of reading to do. =)
And yeah, I know that something such as Photoshop pegging the crap out of the processor is a problem. Another (even more complicated) solution would be to, instead of having a watchdog thread, have a separate watchdog process that has top priority so it is a bit more immune to processor pegging. But again, this is getting really complicated.
Final Edit
I'm going to leave all my other ideas above for completeness' sake, but it seems that using the system's uptime will also be a valid way to deal with this. Since [[NSProcessInfo processInfo] systemUptime] only works in 10.6+, you can just call mach_absolute_time(). To get access to that function, just #include <mach/mach_time.h>. That should be the same value as returned by NSProcessInfo.
I figured out a way to do this using the UpTime() C function, provided in <CoreServices/CoreServices.h>. This returns Absolute Time (CPU-specific), which can easily be converted into Duration Time (milliseconds, or nanoseconds). Details here: http://www.meandmark.com/timingpart1.html (look under part 3 for UpTime)
I couldn't get mach_absolute_time() to work properly, likely due to my lack of knowledge on it, and not being able to find much documentation on the web about it. It appears to grab the same time as UpTime(), but converting it into a double left me dumbfounded.
[[NSApp currentEvent] timestamp] did work, but only if the application was receiving NSEvents. If the application went into the background, it wouldn't receive events, and [[NSApp currentEvent] timestamp] would simply continue to return the same old timestamp again and again in an NSTimer firing method, until the end-user decided to interact with the app again.
Thanks for all your help Marc and Mike! You both definitely sent me in the right direction leading to the answer. :)

How does the function time() tell the current time, even when the computer has been powered off earlier?

How can we work with timers that deal in milliseconds (0.001 s)? How can we divide a second into smaller units, and how do we deal with the second itself?
http://computer.howstuffworks.com/question319.htm
In your computer (as well as other gadgets), the battery powers a chip called the Real Time Clock (RTC) chip. The RTC is essentially a quartz watch that runs all the time, whether or not the computer has power. The battery powers this clock. When the computer boots up, part of the process is to query the RTC to get the correct time and date. A little quartz clock like this might run for five to seven years off of a small battery. Then it is time to replace the battery.
Your PC will have a hardware clock, powered by a battery so that it keeps ticking even while the computer is switched off. The PC knows how fast its clock runs, so it can determine when a second goes by.
Initially, the PC doesn't know what time it is (i.e. it just starts counting from zero), so it must be told what the current time is - this can be set in the BIOS settings and is stored in the CMOS, or can be obtained via the Internet (e.g. by synchronizing with the clocks at NIST).
Some recap, and some more info:
1) The computer reads the Real-Time Clock during boot-up, and uses that to set its internal clock.
2) From then on, the computer uses its CPU clock only - it does not re-read the RTC (normally).
3) The computer's internal clock is subject to drift - due to thermal instability, power fluctuations, inaccuracies in finding an exact divisor for seconds, interrupt latency, cosmic rays, and the phase of the moon.
4) The magnitude of the clock drift could be in the order of seconds per day (tens or hundreds of seconds per month).
5) Most computers are capable of connecting to a time server (over the internet) to periodically reset their clock.
6) Using a time server can increase the accuracy to within tens of milliseconds (normally). My computer updates every 15 minutes.
Computers know the time because, like you, they have a digital watch they look at from time to time.
When you get a new computer or move to a new country you can set that watch, or your computer can ask the internet what the time is, which helps to stop it from running slow, or fast.
As a user of the computer, you can ask the current time, or you can ask the computer to act as an alarm clock. Some computers can even turn themselves on at a particular time, to back themselves up, or wake you up with a favourite tune.
Internally, the computer is able to tell the time in milliseconds, microseconds or sometimes even nanoseconds. However, this is not entirely accurate, and two computers next to each other would have different ideas about the time in nanoseconds. But it can still be useful.
The computer can set an alarm for a few milliseconds in the future, and commonly does this so it knows when to stop thinking about your e-mail program and spend some time thinking about your web browser. Then it sets another alarm so it knows to go back to your e-mail a few milliseconds later.
As a programmer you can use this facility too. For example, you could set a time limit on a level in a game, using a 'timer'. Or you could use a timer to tell when you should put the next frame of the animation on the display - perhaps 25 times a second (i.e. every 40 milliseconds).
To answer the main question, the BIOS clock has a battery on your motherboard, like Jian's answer says. That keeps time when the machine is off.
To answer what I think your second question is, you can get the second from the millisecond value by doing an integer division by 1000, like so:
second = (int) (milliseconds / 1000);
If you're asking how we're able to get the time with that accuracy, look at Esteban's answer... the quartz crystal vibrates at a certain time period, say 0.00001 seconds. We just make a circuit that counts the vibrations. When we have reached 100000 vibrations, we declare that a second has passed and update the clock.
We can get any accuracy by counting the vibrations this way - any accuracy that's coarser than the period of vibration of the crystal we're using.
The motherboard has a clock that ticks. Every tick represents a unit of time.
To be more precise, the clock is usually a quartz crystal that oscillates at a given frequency; some common CPU clock frequencies are 33.33 and 40 MHz.
Absolute time is archaically measured using a 32-bit counter of seconds from 1970. This can cause the "2038 problem," where it simply overflows. Hence the 64-bit time APIs used on modern Windows and Unix platforms (this includes BSD-based MacOS).
Quite often a PC user is interested in time intervals rather than the absolute time since some profound event took place. A common computer implementation has things called timers that allow just that to happen. These timers might even run when the PC is powered off, with the purpose of polling hardware for wake-up status, switching sleep modes, or coming out of sleep. Intel's processor docs go into incredible detail about these.