How does reverse debugging work? - reverse-debugging

GDB has a new version out that supports reverse debugging (see http://www.gnu.org/software/gdb/news/reversible.html). I got to wondering how that works.
To make reverse debugging work, it seems to me that you would need to store the entire machine state, including memory, for each step. That would make performance incredibly slow, not to mention using a lot of memory. How are these problems solved?

I'm a gdb maintainer and one of the authors of the new reverse debugging. I'd be happy to talk about how it works. As several people have speculated, you need to save enough machine state that you can restore it later. There are a number of schemes, one of which is to simply save the registers or memory locations that are modified by each machine instruction. Then, to "undo" that instruction, you just revert the data in those registers or memory locations.
Yes, it is expensive, but modern CPUs are so fast that when you are working interactively anyway (doing stepping or breakpoints), you don't really notice it that much.
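To make this concrete, here is a minimal sketch in C of such an undo log, for an imaginary machine with a few registers and a small memory (the Machine and UndoRec types are my own illustration, not GDB's actual implementation):

#include <stdio.h>

/* Imaginary tiny machine: a few registers and a small memory. */
typedef struct { int regs[4]; int mem[16]; } Machine;

/* One undo record: where a value lived and what it was before. */
typedef struct { int is_reg; int index; int old_value; } UndoRec;

static UndoRec undo_log[1024];
static int undo_top = 0;

/* Forward-execute "reg[dst] = value", saving the clobbered value first. */
static void write_reg(Machine *m, int dst, int value)
{
    undo_log[undo_top].is_reg = 1;
    undo_log[undo_top].index = dst;
    undo_log[undo_top].old_value = m->regs[dst];
    undo_top++;
    m->regs[dst] = value;
}

/* Reverse-execute one step: restore whatever the last step clobbered. */
static void step_back(Machine *m)
{
    UndoRec r = undo_log[--undo_top];
    if (r.is_reg)
        m->regs[r.index] = r.old_value;
    else
        m->mem[r.index] = r.old_value;
}

int main(void)
{
    Machine m = {{0}};
    write_reg(&m, 0, 42);           /* forward step */
    step_back(&m);                  /* "undo" it */
    printf("r0 = %d\n", m.regs[0]); /* prints r0 = 0 again */
    return 0;
}

A real debugger records every register and memory effect of each instruction, but the revert logic is essentially this.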

Note also that simulators, virtual machines, and hardware recorders can be used to implement reverse execution.
One way to implement it is to trace execution on physical hardware, as Green Hills and Lauterbach do in their hardware-based debuggers. Based on this fixed trace of the action of each instruction, you can then move to any point in the trace by removing the effects of each instruction in turn. Note that this assumes you can trace everything that affects the state visible in the debugger.
Another way is to use a checkpoint + re-execution method, which is used by VMware Workstation 6.5 and Virtutech Simics 3.0 (and later), and which seems to be coming in Visual Studio 2010. Here, you use a virtual machine or a simulator to get a level of indirection on the execution of a system. You regularly dump the entire state to disk or memory, and then rely on the simulator being able to deterministically re-execute the exact same program path.
Simplified, it works like this: say that you are at time T in the execution of a system. To go to time T-1, you pick up some checkpoint from a point t < T, and then execute (T-t-1) cycles to end up one cycle before where you were. This can be made to work very well, and it applies even to workloads that do disk IO, consist of kernel-level code, and perform device-driver work. The key is to have a simulator that contains the entire target system, with all its processors, devices, memories, and IOs. See the discussion on the gdb mailing list for more details. I use this approach myself quite regularly to debug tricky code, especially in device drivers and early OS boots.
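As a toy illustration of the checkpoint + deterministic re-execution idea (my own sketch; a real simulator's state is of course vastly larger):

#include <stdio.h>

/* Toy simulator: the whole "system state" is a cycle count and one value. */
typedef struct { unsigned long cycle; unsigned long value; } SimState;

/* Deterministic step: the same input state always yields the same output. */
static void sim_step(SimState *s)
{
    s->value = s->value * 3 + 1;
    s->cycle++;
}

#define INTERVAL 1000UL
static SimState checkpoints[64]; /* full state dumped every INTERVAL cycles */

/* "Go back" to an earlier cycle: restore the nearest checkpoint t <= target,
 * then deterministically re-execute (target - t) cycles forward. */
static void goto_cycle(SimState *s, unsigned long target)
{
    *s = checkpoints[target / INTERVAL];
    while (s->cycle < target)
        sim_step(s);
}

int main(void)
{
    SimState s = { 0, 7 };
    unsigned long c;
    for (c = 0; c < 5000; c++) {
        if (c % INTERVAL == 0)
            checkpoints[c / INTERVAL] = s;
        sim_step(&s);
    }
    goto_cycle(&s, 4999); /* time T-1: one cycle before where we stopped */
    printf("cycle %lu, value %lu\n", s.cycle, s.value);
    return 0;
}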
Another source of information is a Virtutech white paper on checkpointing (which I wrote, in full disclosure).

During an EclipseCon session we also asked how they do this with the Chronon Debugger for Java. That one does not allow you to actually step back, but can play back a recorded program execution in such a way that it feels like reverse debugging. (The main difference is that you cannot change the running program in the Chronon debugger, while you can do that in most other Java debuggers.)
If I understood it correctly, it manipulates the byte code of the running program, such that every change of an internal state of the program is recorded. External states don't need to be recorded additionally. If they influence your program in some way, then you must have an internal variable matching that external state (and therefore that internal variable is enough).
During playback time they can then basically recreate every state of the running program from the recorded state changes.
Interestingly, the state changes are much smaller than one would expect at first glance. For example, with a conditional "if" statement, you would think you need at least one bit to record whether the program took the then-branch or the else-branch. In many cases you can avoid even that, such as when the different branches each return a distinct value. Then it is enough to record only the return value (which would be needed anyway) and to recompute which branch executed from the return value itself.
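A tiny illustration in C (my own example, not Chronon's actual mechanism): because each branch produces a distinct return value, the recorded return value alone reveals the path taken during playback.

/* Each branch returns a distinct value... */
int classify(int x)
{
    if (x > 0)
        return 1;   /* then-branch */
    else
        return -1;  /* else-branch */
}

/* ...so during playback, the recorded return value is enough to recompute
 * which branch executed; no separate branch bit had to be logged. */
const char *branch_taken(int recorded_return)
{
    return (recorded_return == 1) ? "then" : "else";
}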

Although this question is old, most of the answers are too, and since reverse debugging remains an interesting topic, I'm posting a 2015 answer. Chapters 1 and 2 of my MSc thesis, Combining reverse debugging and live programming towards visual thinking in computer programming, cover some of the historical approaches to reverse debugging (especially focused on the snapshot-(or checkpoint)-and-replay approach) and explain the difference between it and omniscient debugging:
The computer, having forward-executed the program up to some point, should really be able to provide us with information about it. Such an improvement is possible, and is found in what are called omniscient debuggers. They are usually classified as reverse debuggers, although they might more accurately be described as "history logging" debuggers, as they merely record information during execution to view or query later, rather than allow the programmer to actually step backwards in time in an executing program. "Omniscient" comes from the fact that the entire state history of the program, having been recorded, is available to the debugger after execution. There is then no need to rerun the program, and no need for manual code instrumentation.
Software-based omniscient debugging started with the 1969 EXDAMS system where it was called "debug-time history-playback". The GNU debugger, GDB, has supported omniscient debugging since 2009, with its 'process record and replay' feature. TotalView, UndoDB and Chronon appear to be the best omniscient debuggers currently available, but are commercial systems. TOD, for Java, appears to be the best open-source alternative, which makes use of partial deterministic replay, as well as partial trace capturing and a distributed database to enable the recording of the large volumes of information involved.
Debuggers that do not merely allow navigation of a recording, but are actually able to step backwards in execution time, also exist. They can more accurately be described as back-in-time, time-travel, bidirectional or reverse debuggers.
The first such system was the 1981 COPE prototype ...

Mozilla rr is a more robust alternative to GDB reverse debugging.
https://github.com/mozilla/rr
GDB's built-in record and replay has severe limitations, e.g. no support for AVX instructions: gdb reverse debugging fails with "Process record does not support instruction 0xf0d at address"
Upsides of rr:
much more reliable currently; I have tested it on relatively long runs of several complex programs
also offers a GDB interface via the gdbserver protocol, making it a great replacement
small performance drop for most programs; I haven't noticed it myself without doing measurements
the generated traces are small on disk, because only the very few non-deterministic events are recorded; I've never had to worry about their size so far
rr achieves this by first running the program in a way that records what happened on every single non-deterministic event such as a thread switch.
Then during the second replay run, it uses that trace file, which is surprisingly small, to reconstruct exactly what happened on the original non-deterministic run but in a deterministic way, either forwards or backwards.
rr was originally developed by Mozilla to help them reproduce timing bugs that showed up on their nightly testing the following day. But the reverse debugging aspect is also fundamental for when you have a bug that only happens hours inside execution, since you often want to step back to examine what previous state led to the later failure.
The following example showcases some of its features, notably the reverse-next, reverse-step and reverse-continue commands.
Install on Ubuntu 18.04:
sudo apt-get install rr linux-tools-common linux-tools-generic linux-cloud-tools-generic
sudo cpupower frequency-set -g performance
# Overcome "rr needs /proc/sys/kernel/perf_event_paranoid <= 1, but it is 3."
echo 'kernel.perf_event_paranoid=1' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
Test program:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int f() {
    int i;
    i = 0;
    i = 1;
    i = 2;
    return i;
}

int main(void) {
    int i;
    i = 0;
    i = 1;
    i = 2;
    /* Local call. */
    f();
    printf("i = %d\n", i);
    /* Is randomness completely removed?
     * Recently fixed: https://github.com/mozilla/rr/issues/2088 */
    i = time(NULL);
    printf("time(NULL) = %d\n", i);
    return EXIT_SUCCESS;
}
Compile and run:
gcc -O0 -ggdb3 -o reverse.out -std=c89 -Wextra reverse.c
rr record ./reverse.out
rr replay
Now you are left inside a GDB session, and you can properly reverse debug:
(rr) break main
Breakpoint 1 at 0x55da250e96b0: file a.c, line 16.
(rr) continue
Continuing.
Breakpoint 1, main () at a.c:16
16 i = 0;
(rr) next
17 i = 1;
(rr) print i
$1 = 0
(rr) next
18 i = 2;
(rr) print i
$2 = 1
(rr) reverse-next
17 i = 1;
(rr) print i
$3 = 0
(rr) next
18 i = 2;
(rr) print i
$4 = 1
(rr) next
21 f();
(rr) step
f () at a.c:7
7 i = 0;
(rr) reverse-step
main () at a.c:21
21 f();
(rr) next
23 printf("i = %d\n", i);
(rr) next
i = 2
27 i = time(NULL);
(rr) reverse-next
23 printf("i = %d\n", i);
(rr) next
i = 2
27 i = time(NULL);
(rr) next
28 printf("time(NULL) = %d\n", i);
(rr) print i
$5 = 1509245372
(rr) reverse-next
27 i = time(NULL);
(rr) next
28 printf("time(NULL) = %d\n", i);
(rr) print i
$6 = 1509245372
(rr) reverse-continue
Continuing.
Breakpoint 1, main () at a.c:16
16 i = 0;
When debugging complex software, you will likely run up to a crash point and end up deep in the call stack. In that case, don't forget that to reverse-next in higher frames, you must first:
reverse-finish
up to that frame; just doing the usual up is not enough.
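For instance, an illustrative sketch of the pattern (the comments are annotations, not typed input):

(rr) bt              # see how many frames deep you are
(rr) reverse-finish  # run backwards out of the current frame, to its call site
(rr) reverse-finish  # repeat once per frame until you reach the frame of interest
(rr) reverse-next    # now this steps backwards at that frame's level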
The most serious limitations of rr in my opinion are:
https://github.com/mozilla/rr/issues/2089 you have to do a second replay from scratch, which can be costly if the crash you are trying to debug happens, say, hours into execution
https://github.com/mozilla/rr/issues/1373 x86 only
UndoDB is a commercial alternative to rr: https://undo.io. Both are trace/replay based, but I'm not sure how they compare in terms of features and performance.

Nathan Fellman wrote:
But does reverse debugging only allow you to roll back next and step commands that you typed, or does it allow you to undo any number of instructions?
You can undo any number of instructions. You're not restricted to, for instance, only stopping at the points where you stopped when you were going forward. You can set a new breakpoint and run backwards to it.
For instance, if I set a breakpoint on an instruction and let it run until then, can I then roll back to the previous instruction, even though I skipped over it?
Yes. So long as you turned on recording mode before you ran to the breakpoint.

Here is how another reverse-debugger called ODB works. Extract:
Omniscient Debugging is the idea of collecting "time stamps" at each "point of interest" (setting a value, making a method call, throwing/catching an exception) in a program and then allowing the programmer to use those time stamps to explore the history of that program run.
The ODB ... inserts code into the program's classes as they are loaded, and when the program runs, the events are recorded.
I'm guessing the gdb one works in the same kind of way.

Reverse debugging means you can run the program backwards, which is very useful to track down the cause of a problem.
You don't need to store the complete machine state for each step, only the changes. It is probably still quite expensive.

Related

STM32G0B1CE Can the Boot Option bits be used to jump to system bootloader?

I have seen that there are quite a few questions about jumping from an app to the ST system bootloader, for example this one. These use the method of setting the MSP and PC then doing the jump with a function pointer.
This seems to cause an issue with the system bootloader dual-bank management whereby the first jump fails and a second jump needs to be done.
My question is - would it be possible/better to use the user option bytes to jump to the bootloader instead?
Since the OB register is read during boot in the OBL phase, if we set both the nBOOT1 and nBOOT_SEL bits, clear the nBOOT0 bit, and then do a soft reset, would this avoid the empty-check weirdness and let us jump to the bootloader in one go?
(Just for context - this would be the first step of doing updates via CAN as the MCU in question has a CAN bootloader built in)
Thanks in advance!
After some time tinkering with a dev board and with some help from Tilen Majerle I found that this is indeed possible and does work well.
I added the following to my main() while(1) loop so that when the blue button is pressed, the user option bits are modified and a reset is performed.
I found that we don't have to do the soft reset ourselves as the HAL_FLASH_OB_Launch() function triggers the reset for us, after which we should boot into system memory according to the reference manual page 67.
Also, I found that the flash and option bytes must be unlocked before setting the option bytes, but must not be locked again afterwards, or the reset won't occur.
Here is the code to do it:
if (HAL_GPIO_ReadPin(BUTTON_GPIO_Port, BUTTON_Pin) == GPIO_PIN_RESET)
{
    // Basic de-bounce for testing
    HAL_Delay(100);
    while (HAL_GPIO_ReadPin(BUTTON_GPIO_Port, BUTTON_Pin) == GPIO_PIN_RESET)
    {
        __NOP();
    }

    // Read, modify & write user option bits
    // nBOOT1 = 1, nBOOT_SEL = 1, nBOOT0 = 0; will select system memory as boot area
    uint32_t optBits = FLASH->OPTR;
    optBits = (optBits | FLASH_OPTR_nBOOT1 | FLASH_OPTR_nBOOT_SEL);
    optBits &= ~(FLASH_OPTR_nBOOT0);

    // Unlock flash
    HAL_FLASH_Unlock();
    // Clear OPTLOCK
    HAL_FLASH_OB_Unlock();

    // Set up struct with desired bits
    FLASH_OBProgramInitTypeDef optionBytesSetting = {0};
    optionBytesSetting.OptionType = OPTIONBYTE_USER;
    optionBytesSetting.USERConfig = optBits;
    optionBytesSetting.USERType = OB_USER_nBOOT0;

    // Write option bytes
    HAL_FLASHEx_OBProgram(&optionBytesSetting);
    HAL_Delay(10);

    // Soft reset
    HAL_FLASH_OB_Launch();
    NVIC_SystemReset(); // is not reached
}
I verified that the flash OPTR register is modified correctly (it goes from 0xFFFFFEAA to 0xFBFFFEAA, essentially just the nBOOT0 bit is cleared as the other two bits were already set). The MCU does reset at HAL_FLASH_OB_Launch() as expected and pausing the program reveals that after reset it is running the system bootloader based on the PC address.
I also verified it using STM32CubeProgrammer which allows me to view the PC and option bytes, plus lets me set nBOOT0 back to 1 and boot the board to my app.
As for reverting the OB settings programmatically, you could either use the Write Memory command before jumping to the app, or you could use the Go command to jump to the app then modify the option bytes first thing in your app.
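As an untested sketch of that second option, the application could flip nBOOT0 back using the same HAL calls as above (this simply mirrors the code above; I have not verified it on hardware):

void restore_boot_from_flash(void)
{
    // Set nBOOT0 = 1 again so the next reset boots from main flash
    uint32_t optBits = FLASH->OPTR | FLASH_OPTR_nBOOT0;

    HAL_FLASH_Unlock();
    HAL_FLASH_OB_Unlock();

    FLASH_OBProgramInitTypeDef ob = {0};
    ob.OptionType = OPTIONBYTE_USER;
    ob.USERType   = OB_USER_nBOOT0;
    ob.USERConfig = optBits;

    HAL_FLASHEx_OBProgram(&ob);
    HAL_FLASH_OB_Launch(); // reloads the option bytes and resets the MCU
}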

How to break the gem5 executable in GDB at the nth instruction?

Using --debug-flags ExecAll tracing, I found that there is a bug at the Nth instruction, which happens at the Nth line of the log.
Is there an easy way to break specifically at that instruction to debug it in GDB and view gem5's internal state?
The simplest approach is to use --debug-break as shown at: schedBreak(<tick>) gdb debugging function not working
That makes gem5 raise a signal at a given simulation tick, which GDB stops at by default. You can determine which simulation time corresponds to your instruction by looking at an --debug-flags ExecAll trace beforehand.
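For example, an invocation might look like this (build path, config script, and tick value are illustrative):

gdb --args build/X86/gem5.opt --debug-break=1000000 configs/example/se.py -c ./a.out

GDB then stops when gem5 raises the signal at that tick.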
You will want to break on the tick much more often than on the Nth instruction, particularly since gem5 simulates the instruction pipeline, and therefore there can be multiple instructions in flight at the same time.
Alternatively, if your point of interest in GDB sees the ExecutionContext object, which is often called xc, you can just add a conditional breakpoint like:
b MyClass::myFunction if xc->numInsts.data()->value() == <n> - 2
The -2 is needed because the index is zero-based and because the instruction counter increments after the instruction executes.
You can also find the tick time rather than instruction count with:
p xc->cpu->tick
or from the other commonly available ThreadContext object with:
p tc->baseCpu->tick
You generally want to do this from the ::tick() function of your CPU model of interest.
For AtomicSimpleCPU::tick() you could also break just before the second instruction with:
b AtomicSimpleCPU::tick if (*threadInfo[curThread]).numInst == 1
Or to break at a given tick, say 1000 (500 is the one before it):
b AtomicSimpleCPU::tick if tick == 500
Two other important break locations are at the main event loop when an event is executed:
b EventQueue::serviceOne() if head->when() == 1000
and the event scheduling target point:
b EventQueue::schedule if when == <target-time>
b EventQueue::reschedule if when == <target-time>
or for the time of schedule itself:
b EventQueue::schedule if _curTick == 1000
b EventQueue::reschedule if _curTick == 1000
Together with reverse debugging and:
--debug-flags Event
these event breakpoints will actually allow you to understand what gem5 is doing.
Note however that conditional breakpoints significantly slow down simulation unfortunately... arghh.
Another useful technique to have in mind is that you can do a run that stops shortly after the point of interest with:
-m <tick>
and then reverse debug back to the exact point of interest, possibly conditionally, since now you will be close to the point of interest, so the performance loss will not be a huge problem. You can then just continue going back to the root cause.
Tested in gem5 9f247403e558977738b5911a45e5776afff87b1a.

PLC Object Oriented Programming - Using methods

I'm writing a program for a Schneider PLC using structured text, and I'm trying to do it using object oriented programming.
Being a newbie in PLC programming, I wrote a simple test program such as this:
okFlag := myObject.aMethod();
IF okFlag THEN
    // it's ok, go on
ELSE
    // error handling
END_IF
aMethod must perform some operations, wait for the result (there is a "time-out" check to avoid deadlocks) and return TRUE or FALSE
This is what I expected during program execution
1) when the okFlag:=myObject.aMethod(); is reached, the code inside aMethod is executed until a result is returned. When I say "executed" I mean that in the next scan cycle the execution of aMethod continues from the point it had reached before.
2) the result of method calling is checked and the main flow of the program is executed
and this is what happens:
1) aMethod is executed but the program flow continues. That is, when it reaches the end of aMethod a value is returned, even if the events that aMethod should wait for are still executing.
2) on the next cycle, aMethod is called again and restarts from the beginning
This is the first solution I found:
VAR_STATIC
    imBusy: BOOL;
END_VAR

METHOD aMethod: INT;
IF NOT(imBusy) THEN
    imBusy := FALSE;
    aMethod := -1; // result of method while in progress
ELSE
    aMethod := -1;
    <rest of code. If everything is ok, the result is 0, otherwise is 1>
END_IF
imBusy := aMethod < 0;
and the main program:
CASE (myObject.aMethod()) OF
    0: // it's ok, go on
    1: // error handling
ELSE
    // still executing...
END_CASE
and this seems to work, but I don't know if it's the right approach.
There are some libraries from Schneider which use methods that return booleans and seem to work as I expected in my program. That is: when the scan reaches the method call for the first time, the program flow is "diverted" somehow so that on the next cycle it re-enters the method until it has finished. Is there a way to get this behaviour?
Generally, OOP isn't the approach people take when using IEC 61131 languages. Your best bet is probably to implement your code as a state machine. I've used this approach in the past as a way of simplifying a complex sequence so that it is easier for plant maintainers to interpret.
Typically, what I would recommend if you are going to take this approach is to segregate the state machine itself from your working code; you can implement a state machine of X steps, and then have your working code reference the state machine step.
A simple example might look like:
(* NB: initialise stepNo once (e.g. in a declaration or first-scan block);
   resetting it every scan would restart the sequence *)
stepNo := 0;

IF (start AND stepNo = 0) THEN
    stepNo := 1;
END_IF;

(* there's a shortcut unity operation for resetting this array to zeroes which is faster, but I can't remember it off the top of my head... *)
ActiveStepArray := BlankStepArray;

IF stepNo > 0 THEN
    IF StepComplete[stepNo] THEN
        stepNo := stepNo + 1;
    END_IF;
    ActiveStepArray[stepNo] := true;
END_IF;
Then in other code sections you can put...
IF ActiveStepArray[1] THEN
    (* Do something *)
    StepComplete[1] := true;
END_IF;

IF ActiveStepArray[2] THEN
    (* Do something *)
    StepComplete[2] := true;
END_IF;
(* etc *)
The nice thing about this approach is that you can actually put all of the state machine code (including jumps, resets etc) into a DFB, test it and then shelve it, and then just use the active step, step complete, and any other inputs you require.
Your code is still always going to execute an entire section of logic, but if you really want to avoid that then you'll have to use a lot of IF statements, which will impede readability.
Hope that helps.
Why not use SFC? It makes your life easier in many cases, since it is a state machine language itself. Do a subprogram, wait for a condition, do another... rinse and repeat. :)
Don't cling to ST alone; the other IEC languages are better at some tasks, and keep things as clear as possible. There should not be as much "this is my cake" mentality in industrial PLC programming circles as there is in many other programming fields, since an application's lifetime can be 40 years, you may have left the firm for a better job 20 years earlier, and programs are almost always location-, customer-, or at least hardware-specific.
http://www.automation.com/pdf_articles/IEC_Programming_Thayer_L.pdf

Creating robust real-time monitors for variables

We can create a real-time monitor for a variable like this:
CreatePalette@Panel@Row[{"x = ", Dynamic[x]}]
(This is more interesting and useful if x happens to be something like $Assumptions. It's so easy to set a value and then forget about it.)
Unfortunately this stops working if the kernel is re-launched (Quit[], then evaluate something). The palette won't show changes in the value of x any more.
Is there a way to do this so it keeps working even across kernel sessions? I find myself restarting the kernel quite often. (If the resulting palette causes the kernel to be automatically started after Quit that's fine.)
Update: As mentioned in the comments, it turns out that the palette ceases working only if we quit by evaluating Quit[]. When using Evaluation -> Quit Kernel -> Local, it will keep working.
Link to same question on MathGroup.
I can only guess, because on my Ubuntu the situation seems buggy. The trick with Quit from the menu that Leonid suggested did not work here. Another observation: in a fresh Mathematica session with only one notebook open,
Dynamic[x]
x = 1
Dynamic[x]
x = 2
gives as expected
2
1
2
2
Typing Quit on the next line, evaluating it, and then typing x=3 updates only the first of the Dynamic[x].
Nevertheless, have you checked the command
Internal`GetTrackedSymbols[]
This gives not only the tracked symbols but also some kind of ID indicating which dynamic content they belong to. If you can find out what exactly these numbers are, and investigate the other functions in the Internal` context, you may be able to re-add your palette's Dynamic content manually after restarting the kernel.
I thought I had something like that with
Internal`SetValueTrackExtra
but I'm currently not able to reproduce the behavior.
@halirutan's answer jarred my memory...
Have you ever come across: Experimental/ref/ValueFunction? (documentation address)
Although the documentation contains no examples, the 'more information' section provides the following tidbit:
The assignment ValueFunction[symb] = f specifies that whenever symb gets a new value val, the expression f[symb, val] should be evaluated.
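Based purely on that description, a minimal untested sketch might look like this (the exact calling convention may differ):

ClearAll[x];
Experimental`ValueFunction[x] = Function[{sym, val}, Print["x is now ", val]];
x = 5  (* should evaluate the value function, printing "x is now 5" *)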

Keeping time using timer interrupts on an embedded microcontroller

This question is about programming small microcontrollers without an OS. In particular, I'm interested in PICs at the moment, but the question is general.
I've seen several times the following pattern for keeping time:
Timer interrupt code (say the timer fires every second):
...
if (sec_counter > 0)
    sec_counter--;
...
Mainline code (non-interrupt):
sec_counter = 500; // 500 seconds
while (sec_counter)
{
    // .. do stuff
}
The mainline code may repeat, set the counter to various values (not just seconds) and so on.
It seems to me there's a race condition here, because the assignment to sec_counter in the mainline code isn't atomic. For example, on the PIC18 the assignment is translated to 4 asm statements (loading one byte at a time, selecting the right memory bank before each). If the interrupt comes in the middle of this, the final value may be corrupted.
Curiously, if the value assigned is less than 256, the assignment is atomic, so there's no problem.
Am I right about this problem?
What patterns do you use to implement such behavior correctly? I see several options:
Disable interrupts before each assignment to sec_counter and enable after - this isn't pretty
Don't use an interrupt, but a separate timer which is started and then polled. This is clean, but uses up a whole timer (in the previous case the 1-sec firing timer can be used for other purposes as well).
Any other ideas?
The PIC architecture is as atomic as it gets. It ensures that all read-modify-write operations to a memory file are "atomic". Although it takes 4 clocks to perform the entire read-modify-write, all 4 clocks are consumed by a single instruction, and the next instruction uses the next 4-clock cycle. That is how the pipeline works: in 8 clocks, two instructions are in the pipeline.
If the value is larger than 8 bits, it becomes an issue, as the PIC is an 8-bit machine and larger operands are handled in multiple instructions. That introduces atomicity issues.
You definitely need to disable the interrupt before setting the counter. Ugly as it may be, it is necessary. It is good practice to ALWAYS disable interrupts before configuring hardware registers or software variables that affect the ISR. If you are writing in C, you should consider all operations to be non-atomic. If you find that you have to look at the generated assembly too many times, it may be better to abandon C and program in assembly. In my experience, this is rarely the case.
Regarding the issue discussed, this is what I suggest:
ISR:
if (countDownFlag)
{
    sec_counter--;
}
and setting the counter:
countDownFlag = false; // make sure the countdown isn't running
sec_counter = 500;
countDownFlag = true;
...
// Countdown finished
countDownFlag = false;
You need an extra variable, and it is better to wrap everything in a function:
void startCountDown(int startValue)
{
    countDownFlag = false;     // make sure the ISR isn't decrementing mid-write
    sec_counter = startValue;
    countDownFlag = true;
}
This way you abstract the starting method (and hide ugliness if needed). For example you can easily change it to start a hardware timer without affecting the callers of the method.
Writing the value and then checking that it is the value required would seem to be the simplest alternative.
do {
    sec_counter = value;
} while (sec_counter != value);
BTW you should make the variable volatile if using C.
If you need to read the value then you can read it twice.
do {
    value = sec_counter;
} while (value != sec_counter);
Because accesses to the sec_counter variable are not atomic, there's really no way to avoid disabling interrupts before accessing this variable in your mainline code and restoring interrupt state after the access if you want deterministic behavior. This would probably be a better choice than dedicating a HW timer for this task (unless you have a surplus of timers, in which case you might as well use one).
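A sketch of that save/disable/restore pattern (GIE is the PIC18 global interrupt enable; the exact register names vary by device and compiler):

void set_sec_counter(unsigned int value)
{
    unsigned char saved_gie = INTCONbits.GIE; // remember current interrupt state
    INTCONbits.GIE = 0;                       // disable interrupts
    sec_counter = value;                      // multi-byte write is now safe
    INTCONbits.GIE = saved_gie;               // restore rather than blindly enable
}

Restoring the saved state, rather than unconditionally re-enabling, keeps the function safe to call from contexts where interrupts were already disabled.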
If you download Microchip's free TCP/IP Stack there are routines in there that use a timer overflow to keep track of elapsed time. Specifically "tick.c" and "tick.h". Just copy those files over to your project.
Inside those files you can see how they do it.
It's not so curious that assignments of values under 256 are atomic: moving an 8-bit value is a single opcode, so that's as atomic as you get.
The best solution on such a microcontroller as the PIC is to disable interrupts before you change the timer value. You can even check the value of the interrupt flag when you change the variable in the main loop and handle it if you want. Make it a function that changes the value of the variable and you could even call it from the ISR as well.
Well, what does the comparison assembly code look like?
Taking into account that it counts downwards, and that it's just a zero compare, it should be safe if it first checks the MSB, then the LSB. There could be corruption, but it doesn't really matter if it comes in the middle of the transition from 0x100 to 0xFF and the corrupted compare value is 0x1FF.
The way you are using your timer now, it won't count whole seconds anyway, because you might change it in the middle of a cycle.
So, if you don't care about that, the best way, in my opinion, would be to read the value and then just compare the difference. It takes a couple more ops, but has no multi-threading problems (since the timer has priority).
If you are stricter about the time value, I would automatically disable the timer once it counts down to 0, clear the timer's internal counter, and reactivate it when you need it.
Move the code portion that would be in main() to a proper function, and have it conditionally called by the ISR.
Also, to avoid delaying or missing ticks, make this timer ISR a high-priority interrupt (the PIC18 has two levels).
One approach is to have an interrupt keep a byte variable, and have something else which gets called at least once every 256 times the counter is hit; do something like:
// ub==unsigned char; ui==unsigned int; ul==unsigned long
typedef unsigned char ub;
typedef unsigned int  ui;
typedef unsigned long ul;

volatile ub now_ctr; // This one is hit by the interrupt
ub prev_ctr;
ul big_ctr;

void poll_counter(void)
{
    ub delta_ctr;
    delta_ctr = (ub)(now_ctr - prev_ctr);
    big_ctr += delta_ctr;
    prev_ctr += delta_ctr;
}
A slight variation, if you don't mind forcing the interrupt's counter to stay in sync with the LSB of your big counter:
ul big_ctr;

void poll_counter(void)
{
    big_ctr += (ub)(now_ctr - big_ctr);
}
No one has addressed the issue of reading multi-byte hardware registers (for example, a timer).
The timer could roll over and increment its second byte while you're reading it.
Say it's 0x0001FFFF and you read it: you might get 0x0001FFFF, or 0x00010000.
The 16-bit peripheral register is volatile to your code.
For any volatile "variables", I use the double read technique.
do {
    t = timer;
} while (t != timer);