Canonical way(s) of determining system time in a microcontroller - embedded

Every so often I start a bare-metal microcontroller project and end up implementing system time measurement with whatever timer unit happens to be free.
I have been working with ARM Cortex-M devices for a (albeit short) while now and have typically used the SysTick ("System Tick") interrupt to create a 1 ms resolution timer. I recently stumbled over a post that suggested chaining two Programmable Interrupt Timers (on a Kinetis KL25Z device) to create an interrupt-less 32-bit millisecond timer, at the cost of two PIT channels that may come in handy later on.
So I was wondering whether there are some (sort of) canonical ways to determine the system time on a microcontroller - preferably for Kinetis KL2xZ devices, as I currently work with these, but not necessarily so.

The canonical method, as you put it, is exactly what you have done - use SysTick. That is the single timer device defined by the Cortex-M architecture; any other timer hardware is external to the core and vendor-specific.
Some parts (the STM32F2, for example) include 32-bit timer/counter hardware, so you would not need to chain two.
The best approach is to abstract timer services by defining a generic timer API that you implement for every part you need, so that the application layer is identical across parts. For example, in this case you might simply implement the standard library clock() function and define CLOCKS_PER_SEC.
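A minimal sketch of that approach for a Cortex-M part, assuming a 1 ms SysTick and that CLOCKS_PER_SEC resolves to 1000 on the toolchain in use (SysTick_Config() and the SysTick_Handler name are the usual CMSIS conventions; the device header shown is the KL25Z one and would change per part):
#include <stdint.h>
#include <time.h>
#include "MKL25Z4.h"                /* CMSIS device header - adjust for your part */

static volatile uint32_t ms_ticks;  /* wraps after roughly 49.7 days */

void SysTick_Handler( void )
{
    ms_ticks++;
}

void timebase_init( uint32_t core_clock_hz )
{
    /* CMSIS helper: SysTick interrupt every 1 ms */
    SysTick_Config( core_clock_hz / 1000u );
}

clock_t clock( void )
{
    /* valid only if CLOCKS_PER_SEC is 1000 for this C library */
    return (clock_t)ms_ticks;
}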
If instead you are using two free-running cascaded timers, you must ensure high/low word consistency when combining the two counter values:
#include <stdint.h>
#include <time.h>

uint16_t readTimerH( void ) ;   /* read the high-word counter */
uint16_t readTimerL( void ) ;   /* read the low-word counter */

clock_t clock( void )
{
    uint16_t hi_word = 0 ;
    uint16_t lo_word = 0 ;

    /* re-read the high word until it is stable across the low-word read */
    do
    {
        hi_word = readTimerH() ;
        lo_word = readTimerL() ;
    } while( hi_word != readTimerH() ) ;

    return (clock_t)(((uint32_t)hi_word << 16) | lo_word) ;
}

I just looked into the KL25 Sub-Family Reference Manual, Chapter 34 Real Time Clock (RTC), section 34.3.2 Time counter (section numbers may differ between document versions).
I found that the RTC time counter consists of two registers:
a 32-bit seconds counter
a 16-bit prescaler register that increments once every 32.768 kHz clock cycle
The Reference Manual says:
Always write to the prescaler register before writing to the seconds register, because the seconds register increments on the falling edge of bit 14 of the prescaler register.
Which means that to calculate the system time you read rtc_sec_counter and add the sub-second count held in prescalar_reg.
You can even create macros that give you the system time in µs and ms from the combination of rtc_sec_counter and prescalar_reg (or in seconds, obviously, from rtc_sec_counter alone).
The prescaler register is clocked at 32.768 kHz, so we can create macros to get the time in µs and ms:
#define PRESCALAR_TICK 32768
#define KHZ 1000
#define MHZ 1000000
/// Extract the sub-second bits of prescalar_reg and scale them to microseconds;
/// the multiplication is done in 64 bits so that (prescalar_reg * MHZ) cannot overflow
#define GET_SYS_US ((uint32_t)(((uint64_t)(prescalar_reg & 0x3FFF) * MHZ) / PRESCALAR_TICK))
#define GET_SYS_MS (GET_SYS_US / KHZ)
If you need the time as a full 32-bit value in microseconds or milliseconds, use the macros below (note that the microsecond version wraps after roughly 71 minutes):
#define GET_SYS_US_32bit ((rtc_sec_counter * MHZ) + GET_SYS_US)
#define GET_SYS_MS_32bit ((rtc_sec_counter * KHZ) + GET_SYS_MS)
But to use this information you must, of course, initialise the RTC of your micro first.
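For completeness, a hedged sketch of reading the two RTC registers coherently in C. RTC_TSR and RTC_TPR follow the reference manual's register names but are declared here as hypothetical accessors; the sketch also assumes the prescaler counts 32768 ticks per second, as the bit-14 note above suggests, so verify that against your part:
#include <stdint.h>

/* Hypothetical register accessors - substitute the vendor's RTC->TSR / RTC->TPR. */
extern volatile uint32_t RTC_TSR;   /* 32-bit seconds counter */
extern volatile uint32_t RTC_TPR;   /* 16-bit prescaler clocked at 32.768 kHz */

/* Milliseconds since the RTC was started (result wraps at 32 bits). */
uint32_t rtc_millis( void )
{
    uint32_t sec, frac;

    /* Re-read the seconds register so that a rollover between the two reads
       cannot pair an old seconds value with a new prescaler value. */
    do
    {
        sec  = RTC_TSR;
        frac = RTC_TPR & 0x7FFFu;   /* sub-second tick count within the current second */
    } while( sec != RTC_TSR );

    return sec * 1000u + (uint32_t)(((uint64_t)frac * 1000u) / 32768u);
}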

Related

Can't get the analogue watchdog to trigger an interrupt on the DFSDM peripheral of a STM32L475

I have an AMC1306 current-shunt modulator feeding 1-bit PDM data at 10 MHz into an STM32L475. Filter0 takes the bit stream from Channel0 and applies a sinc3 filter with Fosr=125 and Iosr=4. This provides 24-bit data at 20 kHz and is working fine. The DMA transfers the data into a 1-word circular buffer in main memory to keep the data fresh.
I want to be able to call an interrupt function if the 24-bit value leaves a certain window. This would occur in an over-voltage situation and needs to disengage the MOSFET driver. It would seem this functionality is offered by the analogue watchdog within the peripheral.
I am using STM32CubeIDE and the graphical interface within the IDE to configure the peripherals. Filter0 global interrupts are enabled. I have added this code:
/* USER CODE BEGIN 2 */
HAL_DFSDM_FilterRegularStart_DMA(&hdfsdm1_filter0, Vbus_DMA, 1);
// Set up the watchdog
DFSDM_Filter_AwdParamTypeDef awdParamFilter0;
awdParamFilter0.DataSource = DFSDM_FILTER_AWD_FILTER_DATA;
awdParamFilter0.Channel = DFSDM_CHANNEL_0;
awdParamFilter0.HighBreakSignal = DFSDM_NO_BREAK_SIGNAL;
awdParamFilter0.HighThreshold = 250;
awdParamFilter0.LowBreakSignal = DFSDM_NO_BREAK_SIGNAL;
awdParamFilter0.LowThreshold = -250;
HAL_DFSDM_FilterAwdStart_IT(&hdfsdm1_filter0, &awdParamFilter0);
/* USER CODE END 2 */
I have also used the HAL callback function
/* USER CODE BEGIN 4 */
void HAL_DFSDM_FilterAwdCallback(DFSDM_Filter_HandleTypeDef *hdfsdm_filter, uint32_t Channel, uint32_t Threshold)
{
HAL_GPIO_WritePin(GPIOA, LED_Pin, GPIO_PIN_SET);
}
/* USER CODE END 4 */
But the callback function never runs! I have experimented with the thresholds (I even made them zero).
In the debugger I can see that AWDIE = 0x1 (so the AWD interrupt is enabled) and AWDF = 0x1 (so the threshold has been crossed and the peripheral should be requesting an interrupt...). The code doesn't even trigger a breakpoint in the stm32l4xx_it.c Filter0 interrupt handler, so it would seem no DFSDM1_FLT0 interrupts are happening.
I'd be enormously appreciative of any help, any example code, any resources to read. Thanks in advance.
I know the DMA conversion-complete callbacks work.
I have played around with various thresholds and noted that AWDF gets set when the threshold is crossed.
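(Not a confirmed fix, but one thing worth double-checking given that AWDIE and AWDF are both set while no ISR runs is the NVIC line itself. A hedged sketch, using the usual STM32L4 HAL/CMSIS names:)
/* USER CODE BEGIN 2 */
/* Sketch of what to verify: the AWDF flag can be set while DFSDM1_FLT0_IRQn is
   still masked in the NVIC, in which case DFSDM1_FLT0_IRQHandler() - and hence
   HAL_DFSDM_FilterAwdCallback() - never runs. CubeMX only generates these two
   calls when the filter's global interrupt is ticked in the NVIC tab. */
HAL_NVIC_SetPriority(DFSDM1_FLT0_IRQn, 5, 0);
HAL_NVIC_EnableIRQ(DFSDM1_FLT0_IRQn);
/* The generated stm32l4xx_it.c handler should also be calling
   HAL_DFSDM_IRQHandler(&hdfsdm1_filter0), otherwise the flag is never serviced. */
/* USER CODE END 2 */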

How to make Timer1 more accurate as a real time clock?

I have a PIC18F87J11 with an 8 MHz oscillator and I am using Timer1 as a real-time clock. At the moment I have it toggle an LED every minute. I noticed it works perfectly fine the first few times, but slowly it starts toggling the LED every 59 seconds, and every few minutes it keeps drifting down to 58, 57, and so on. I don't know whether it is impossible to get an accurate clock using the internal oscillator or whether I need an external oscillator. My Timer1 settings look right; I just hope I can resolve this issue with the current hardware.
Prescaler 1:8, TMR1 Preload = 15536, Actual interrupt time: 200 ms
// Timer 1 settings
RCONbits.IPEN = 1;        // Enable interrupt priority levels
INTCONbits.GIEL = 1;      // Enable low priority interrupts
// 1:8 prescaler
T1CONbits.T1CKPS1 = 1;
T1CONbits.T1CKPS0 = 1;
// Use internal clock (Fosc/4)
T1CONbits.TMR1CS = 0;
// Timer1 overflow interrupt: low priority, enabled
IPR1bits.TMR1IP = 0;      // Timer 1 -> low priority interrupt group
PIE1bits.TMR1IE = 1;      // Enable Timer1 interrupt
// TMR1 preload = 15536
TMR1H = 0x3C;
TMR1L = 0xB0;
Interrupt Routine
void interrupt low_priority lowISR(void)
{
    if (PIR1bits.TMR1IF == 1)
    {
        oneSecond++;
        if (oneSecond == 5)
        {
            minute_Counter++;
            if (minute_Counter >= 60)
            {
                // One minute passed
                Printf("\r\n One minute Passed");
                ToggleLed();
                minute_Counter = 0;
            }
            oneSecond = 0;
        }
        // TMR1 preload = 15536
        TMR1H = 0x3C;
        TMR1L = 0xB0;
        PIR1bits.TMR1IF = 0;
    }
}
The internal oscillator is a simple RC oscillator (a resistor/capacitor time constant determines its frequency). This kind of circuit may be accurate to only +/-10% over the operating temperature range of the device, and the device will be self-heating due to normal operating power dissipation.
An external crystal or other accurate external clock source is required to get accurate timing. Alternatively, if you have some other stable and accurate but low-frequency clock source, such as the output from an RTC with a 32.768 kHz crystal, you can use that to calibrate the internal RC oscillator and dynamically adjust it with the OSCTUNE register: by using a timer gated by the low-frequency source, you can determine the actual frequency of INTOSC and adjust accordingly (a rough sketch follows at the end of this answer). It will not be perfect, but it will be better - though no better than the precision of the calibrating source, of course.
Some devices have a die temperature sensor that can also be used to compensate, but that is not available on your device.
The RC error can cause serial communications mistiming to the extent that you cannot communicate with a device using asynchronous (UART) serial comms.
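As a rough illustration of the OSCTUNE idea above (everything here is a sketch: measure_intosc_ticks_per_ref_period() and TARGET_TICKS are placeholders for a timer gated by the external reference, and the width of the TUN field varies between PIC18 parts, so check the OSCTUNE description for yours):
#include <xc.h>   /* XC8; provides OSCTUNE for the selected PIC18 */

/* Hypothetical helper: counts INTOSC-driven timer ticks over one period of the
   accurate low-frequency reference. */
extern unsigned int measure_intosc_ticks_per_ref_period(void);
#define TARGET_TICKS 25000u          /* placeholder nominal count for an on-frequency INTOSC */

static signed char trim = 0;         /* software copy of the signed TUN value */

void trim_intosc(void)
{
    unsigned int ticks = measure_intosc_ticks_per_ref_period();

    if (ticks > TARGET_TICKS && trim > -32)
        trim--;                      /* INTOSC running fast: tune it down */
    else if (ticks < TARGET_TICKS && trim < 31)
        trim++;                      /* INTOSC running slow: tune it up */

    /* Write only the TUN field, leaving the other OSCTUNE bits untouched;
       adjust the mask and limits to the TUN width of your part. */
    OSCTUNE = (OSCTUNE & 0xC0) | ((unsigned char)trim & 0x3F);
}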
There is some relevant material in the datasheet you linked: "2.5.3 INTERNAL OSCILLATOR OUTPUT FREQUENCY AND TUNING", on p. 38.
The datasheet says that
"The INTOSC frequency may drift as VDD or temperature changes."
Are VDD and temperature stable?
It notes three ways to deal with this by tuning the OSCTUNE register. All three of them need an external "oscillator" of some sort:
dealing with the errors of the EUSART - this reference signal has to come from somewhere;
a peripheral clock;
the CCP module in capture mode - you may use any stable AC signal as input.
Good luck!
Reload the timer as soon as it expires; the delay between the timer overflow and the rearm is adding to the total time. So this will solve your problem.
void interrupt low_priority lowISR(void)
{
    if (PIR1bits.TMR1IF)
    {
        PIR1bits.TMR1IF = 0;
        TMR1H = 0x3C;
        TMR1L = 0xAF;
        /* rest of the code here */
        . . . .
    }
}
One more recommendation: don't load up the ISR; keep it simple.
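A slightly more robust variant of that reload (a sketch, using the register names from the question): add the preload to whatever is already in TMR1, so the interrupt latency that has already elapsed is kept rather than thrown away each period. The timer is paused for a few instruction cycles to get a coherent 16-bit read-modify-write; that small constant loss can be folded into the preload if it matters.
#include <xc.h>                             /* T1CONbits, TMR1H, TMR1L */

#define TMR1_PRELOAD 15536u                 /* 65536 - 50000 -> 200 ms at 2 MHz, 1:8 prescale */

void reload_tmr1(void)
{
    unsigned int t;

    T1CONbits.TMR1ON = 0;                   /* pause briefly for a coherent access */
    t  = ((unsigned int)TMR1H << 8) | TMR1L;
    t += TMR1_PRELOAD;                      /* keep the ticks already elapsed since overflow */
    TMR1H = (unsigned char)(t >> 8);
    TMR1L = (unsigned char)t;
    T1CONbits.TMR1ON = 1;
}

/* ...and in the ISR, call reload_tmr1() instead of writing 0x3C/0xB0 directly. */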
For all timing, time and frequency applications the first and most important thing to do is to CALIBRATE THE CRYSTAL OSCILLATOR! The oscillator itself and its crystal MUST run at (better than 1 part per million = 1 ppm of) its nominal frequency. Crystals straight out of the factory (except some very specialized and expensive ones, costing hundreds of dollars) do not run exactly at their nominal frequency. If this calibration is not done, all time- and frequency-related functions will be off, because the oscillator frequency is used as the reference for all of the PIC's internal functions. The calibration must be done against an accurate frequency counter by adjusting one of the capacitors from the crystal pins to ground. Processor routines for frequency (and time) calibration are not accurate enough.

Why does use of timer interrupts in Arduino stop the Serial library functions?

I am working on an Arduino project. I am using timer interrupts and serial communication, but as soon as the timer interrupts are enabled the Arduino Serial library functions stop working. I am stuck with this problem. Is there any way to do this? I want to use both serial communication and timer interrupts. Use of the following function stops the serial communication:
void initialize()
{
    // timer0
    TIMSK0 = 2;
    OCR0A = 125;
    TCCR0A = 0b00000010;    // commenting out TCCR0A = 0b00000010; and TIMSK1 = 1; enables
    TCCR0B = 0b00000011;    // the serial communications

    // timer1
    TCCR1B = 1;
    TIMSK1 = 1;

    // timer2
    TCCR2A = _BV(COM2A0) | _BV(WGM21) | _BV(WGM20);
    TCCR2B = _BV(WGM22) | _BV(CS20);
    OCR2A = B11000111;

    EICRA = 63;
    EIMSK = (1 << INT0) | (1 << INT1);
}
I would avoid using Timer0 directly, as it will mess with the Arduino core libraries, as you are seeing.
On first glance I would suggest using a proven library such as SimpleTimer(). It will set up and manage multiple events, where its "run" basically polls millis(), which is driven by Timer0. But read further down.
I recall that Timer0 is set up by the core library to overflow at roughly 1 kHz, creating an interrupt, and that the micros() function reads the value within Timer0 between millisecond interrupts.
For using Timer1 you can try the TimerOne() library. There are also TimerTwo, 3 and so on out there.
You may want to read through Ken Shirriff's Arduino-IRremote library, as it does much of what you want in discrete methods (such as the 40 kHz PWM) rather than depending on other libraries. His original library uses a
USECPERTICK 50 // microseconds per clock interrupt tick
to read and sample the receive input from an IR demodulator, so as to decode the frames.
I would also point out microtherion's fork of the library, as it uses pin change interrupts to get more accurate pin changes, and his library, again, manages these interrupts discretely.
Alternatively, one could use the PinChangeInt library to set up your implementation, where the individual pin change ISRs can capture the time stamp almost immediately, minus a latency which in this case is much less than the desired 50 ms resolution.
And if you really need more resolution you can use the input capture function, as demonstrated in InputCapture.ino, which captures the time of the transition in real time and generates an ISR for latent handling.
From these examples you should be able to implement your ultrasonic sensor.
I had the same problem, so I suggest:
Use the TimerOne() library.
Use flags in the timer ISR, so you can tell when the time you programmed has passed.
In the loop function, only use Serial.available(), so the timing stays as close as possible to what you want.
Don't write too much code in the loop function, and control the sensor reading with a switch or if statement.
It's not the best solution, but it works. You have to be careful with the programmed time: it should be longer than the time spent reading the data.

Monotonic clock on OSX

CLOCK_MONOTONIC does not seem to be available, so clock_gettime is out.
I've read in some places that mach_absolute_time() might be the right way to go, but after reading that it is a 'CPU dependent value', it instantly made me wonder whether it is using rdtsc underneath. Thus, the value could drift over time even if it is monotonic. Also, issues with thread affinity could result in meaningfully different results from calling the function (making it not monotonic across all cores).
Of course, that is just speculation. Does anyone know how mach_absolute_time actually works? I'm actually looking for a replacement for clock_gettime(CLOCK_MONOTONIC... or something like it for OSX. No matter what the clock source is, I expect at least millisecond precision and millisecond accuracy.
I'd just like to understand what clocks are available, which clocks are monotonic, if certain clocks drift, have thread affinity issues, aren't supported on all Mac hardware, or take a 'super high' number of cpu cycles to execute.
Here are the links I was able to find about this topic (some are already dead links and not findable on archive.org):
https://developer.apple.com/library/mac/#qa/qa1398/_index.html
http://www.wand.net.nz/~smr26/wordpress/2009/01/19/monotonic-time-in-mac-os-x/
http://www.meandmark.com/timing.pdf
Thanks!
Brett
The Mach kernel provides access to system clocks, of which at least one (SYSTEM_CLOCK) is advertised by the documentation as being monotonically incrementing.
#include <mach/clock.h>
#include <mach/mach.h>
clock_serv_t cclock;
mach_timespec_t mts;
host_get_clock_service(mach_host_self(), SYSTEM_CLOCK, &cclock);
clock_get_time(cclock, &mts);
mach_port_deallocate(mach_task_self(), cclock);
mach_timespec_t has nanosecond precision. I'm not sure about the accuracy, though.
Mac OS X supports three clocks:
SYSTEM_CLOCK returns the time since boot time;
CALENDAR_CLOCK returns the UTC time since 1970-01-01;
REALTIME_CLOCK is deprecated and is the same as SYSTEM_CLOCK in its current implementation.
The documentation for clock_get_time says the clocks are monotonically incrementing unless someone calls clock_set_time. Calls to clock_set_time are discouraged as it could break the monotonic property of the clocks, and in fact, the current implementation returns KERN_FAILURE without doing anything.
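A self-contained example of using SYSTEM_CLOCK this way to measure elapsed time (a sketch; checking the kern_return_t results is omitted for brevity):
#include <stdio.h>
#include <mach/clock.h>
#include <mach/mach.h>

/* Read the monotonic SYSTEM_CLOCK as seconds-since-boot. */
static double system_clock_seconds(void)
{
    clock_serv_t    cclock;
    mach_timespec_t mts;

    host_get_clock_service(mach_host_self(), SYSTEM_CLOCK, &cclock);
    clock_get_time(cclock, &mts);
    mach_port_deallocate(mach_task_self(), cclock);

    return mts.tv_sec + mts.tv_nsec / 1e9;
}

int main(void)
{
    double start = system_clock_seconds();
    /* ... work to be timed ... */
    double end = system_clock_seconds();

    printf("elapsed: %.6f s\n", end - start);
    return 0;
}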
After looking up a few different answers for this I ended up defining a header which emulates clock_gettime on mach:
#include <sys/types.h>
#include <sys/_types/_timespec.h>
#include <mach/mach.h>
#include <mach/clock.h>
#ifndef mach_time_h
#define mach_time_h
/* The opengroup spec isn't clear on the mapping from REALTIME to CALENDAR
being appropriate or not.
http://pubs.opengroup.org/onlinepubs/009695299/basedefs/time.h.html */
// XXX only supports a single timer
#define TIMER_ABSTIME -1
#define CLOCK_REALTIME CALENDAR_CLOCK
#define CLOCK_MONOTONIC SYSTEM_CLOCK
typedef int clockid_t;
/* the mach kernel uses struct mach_timespec, so struct timespec
   is loaded from <sys/_types/_timespec.h> for compatibility */
// struct timespec { time_t tv_sec; long tv_nsec; };
int clock_gettime(clockid_t clk_id, struct timespec *tp);
#endif
and in mach_gettime.c
#include "mach_gettime.h"
#include <mach/mach_time.h>
#define MT_NANO (+1.0E-9)
#define MT_GIGA UINT64_C(1000000000)
// TODO create a list of timers,
static double mt_timebase = 0.0;
static uint64_t mt_timestart = 0;
// TODO be more careful in a multithreaded environement
int clock_gettime(clockid_t clk_id, struct timespec *tp)
{
kern_return_t retval = KERN_SUCCESS;
if( clk_id == TIMER_ABSTIME)
{
if (!mt_timestart) { // only one timer, initilized on the first call to the TIMER
mach_timebase_info_data_t tb = { 0 };
mach_timebase_info(&tb);
mt_timebase = tb.numer;
mt_timebase /= tb.denom;
mt_timestart = mach_absolute_time();
}
double diff = (mach_absolute_time() - mt_timestart) * mt_timebase;
tp->tv_sec = diff * MT_NANO;
tp->tv_nsec = diff - (tp->tv_sec * MT_GIGA);
}
else // other clk_ids are mapped to the coresponding mach clock_service
{
clock_serv_t cclock;
mach_timespec_t mts;
host_get_clock_service(mach_host_self(), clk_id, &cclock);
retval = clock_get_time(cclock, &mts);
mach_port_deallocate(mach_task_self(), cclock);
tp->tv_sec = mts.tv_sec;
tp->tv_nsec = mts.tv_nsec;
}
return retval;
}
Just use Mach Time.
It is public API, it works on macOS, iOS, and tvOS and it works from within the sandbox.
Mach Time returns an abstract time unit that I usually call "clock ticks". The length of a clock tick is system specific and depends on the CPU. On current Intel systems a clock tick is in fact exactly one nanosecond but you cannot rely on that (may be different for ARM and it certainly was different for PowerPC CPUs). The system can also tell you the conversion factor to convert clock ticks to nanoseconds and nanoseconds to clock ticks (this factor is static, it won't ever change at runtime). When your system boots, the clock starts at 0 and then monotonically increases with every clock tick thereafter, so you can also use Mach Time to get the uptime of your system (and, of course, uptime is monotonic!).
Here's some code:
#include <stdio.h>
#include <inttypes.h>
#include <mach/mach_time.h>

int main ( ) {
    uint64_t clockTicksSinceSystemBoot = mach_absolute_time();
    printf("Clock ticks since system boot: %"PRIu64"\n",
        clockTicksSinceSystemBoot
    );

    static mach_timebase_info_data_t timebase;
    mach_timebase_info(&timebase);

    // Cast to double is required to make this a floating point division,
    // otherwise it would be an integer division and only the result would
    // be converted to floating point!
    double clockTicksToNanosecons = (double)timebase.numer / timebase.denom;

    uint64_t systemUptimeNanoseconds = (uint64_t)(
        clockTicksToNanosecons * clockTicksSinceSystemBoot
    );
    uint64_t systemUptimeSeconds = systemUptimeNanoseconds / (1000 * 1000 * 1000);
    printf("System uptime: %"PRIu64" seconds\n", systemUptimeSeconds);
}
You can also put a thread to sleep until a certain Mach Time has been reached. Here's some code for that:
// Sleep for 750 ns
uint64_t machTimeNow = mach_absolute_time();
uint64_t clockTicksToSleep = (uint64_t)(750 / clockTicksToNanosecons);
uint64_t machTimeIn750ns = machTimeNow + clockTicksToSleep;
mach_wait_until(machTimeIn750ns);
As Mach Time has no relation to any wallclock time, you can play around with your system date and time setting as you like, that won't have any effect on Mach Time.
There's one special consideration, though, that may make Mach Time unsuitable for certain use cases: The CPU clock is not running while your system is asleep! So if you make a thread wait for 5 minutes and after 1 minute the system goes to sleep and stays asleep for 30 minutes, the thread is still waiting another 4 minutes after the system has woken up as the 30 minutes sleep time don't count! The CPU clock was resting as well during that time. Yet in other cases this is exactly what you want to happen.
Mach Time is also a very precise way to measure time spent. Here's some code showing that task:
// Measure time
uint64_t machTimeBegin = mach_absolute_time();
sleep(1);
uint64_t machTimeEnd = mach_absolute_time();
uint64_t machTimePassed = machTimeEnd - machTimeBegin;
uint64_t timePassedNS = (uint64_t)(
machTimePassed * clockTicksToNanosecons
);
printf("Thread slept for: %"PRIu64" ns\n", timePassedNS);
You will see that the thread doesn't sleep for exactly one second, that's because it takes some time to put a thread to sleep, to wake it back up again and even when awake, it won't get CPU time immediately if all cores are already busy running a thread at that moment.
Update (2018-09-26)
Since macOS 10.12 (Sierra) there also exists mach_continuous_time. The only difference between mach_continuous_time and mach_absolute_time is that continuous time also advances when the system is asleep. So in case this was a problem so far and a reason for not using Mach Time, 10.12 and up offer a solution to it. The usage is exactly the same as described above.
Also, starting with macOS 10.9 (Mavericks), there is mach_approximate_time, and in 10.12 there is also mach_continuous_approximate_time. These two are identical to mach_absolute_time and mach_continuous_time, with the only difference that they are faster yet less accurate. The standard functions require a call into the kernel, as the kernel takes care of Mach Time. Such a call is somewhat expensive, especially on systems that already have a Meltdown fix. The approximate versions won't always have to call into the kernel. They use a clock in user space that is only synchronized with the kernel clock from time to time, to prevent it from running too far out of sync, yet a small deviation is always possible and thus it is only the "approximate" Mach Time.
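A minimal sketch of the continuous variant (requires macOS 10.12 or later; the timebase conversion is the same as shown above for mach_absolute_time):
#include <stdio.h>
#include <inttypes.h>
#include <mach/mach_time.h>

int main(void)
{
    mach_timebase_info_data_t tb;
    mach_timebase_info(&tb);

    /* Unlike mach_absolute_time(), this count keeps advancing while the
       machine is asleep (available since macOS 10.12). */
    uint64_t ticks = mach_continuous_time();
    uint64_t ns    = ticks * tb.numer / tb.denom;

    printf("Time since boot, including sleep: %" PRIu64 " ns\n", ns);
    return 0;
}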

Precisely time a function call

I am using a microcontroller with a C51 core. I have a fairly time-consuming and large subroutine that needs to be called every 500 ms. An RTOS is not being used.
The way I am doing it right now is that I have an existing timer interrupt of 10 ms. I set a flag after every 50 interrupts, and that flag is checked in the main program loop; if the flag is true, the subroutine is called. The issue is that by the time the program loop comes round to servicing the flag, more than 500 ms have already passed - sometimes even more than 515 ms on certain code paths. The time taken is not accurately predictable.
Obviously, the subroutine cannot be called from inside the timer interrupt because of the large time it takes to execute. The subroutine takes 50 ms to 89 ms depending on various conditions.
Is there a way to ensure that the subroutine is called every 500 ms exactly?
I think you have some conflicting/not-thought-through requirements here. You say that you can't call this code from the timer ISR because it takes too long to run (implying that it is lower priority than something else which would be delayed), but then you are being hit by the fact that something else, which should have been lower priority, is delaying it when you run it from the foreground path (the 'program loop').
If this work must happen at exactly 500 ms, then run it from the timer routine and deal with the fall-out from that. This is effectively what a pre-emptive RTOS would be doing anyway.
If you want it to run from the 'program loop', then you will have to make sure that nothing else which runs from that loop ever takes more than the maximum delay you can tolerate - often that means breaking your other long-running work into state machines which can do a little bit of work per pass through the loop (see the sketch below).
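For instance, a sketch of that last idea (all names here are illustrative): each pass through the loop does one bounded step of the long-running work, so the 500 ms check is never starved for long.
/* Break the long-running foreground work into small bounded steps. */
typedef enum { STEP_READ, STEP_FILTER, STEP_REPORT } work_state_t;

static work_state_t work_state = STEP_READ;

static void long_work_step(void)
{
    switch (work_state)
    {
    case STEP_READ:   /* ...a few ms of work at most... */ work_state = STEP_FILTER; break;
    case STEP_FILTER: /* ...                          */   work_state = STEP_REPORT; break;
    case STEP_REPORT: /* ...                          */   work_state = STEP_READ;   break;
    }
}

extern volatile unsigned char flag_500ms;   /* set by the 10 ms timer ISR every 50 ticks */
extern void every_500ms_routine(void);

void main_loop(void)
{
    for (;;)
    {
        if (flag_500ms)
        {
            flag_500ms = 0;
            every_500ms_routine();
        }
        long_work_step();                   /* one short step, then back to the check */
    }
}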
I don't think there's a way to guarantee it, but this solution may provide an acceptable alternative.
Might I suggest not setting a flag but instead modifying a value?
Here's how it could work.
1/ Start a value at zero.
2/ Every 10 ms interrupt, increase this value by 10 in the ISR (interrupt service routine).
3/ In the main loop, if the value is >= 500, subtract 500 from the value and do your 500 ms activities.
You will have to be careful to watch for race conditions between the timer and the main program when modifying the value.
This has the advantage that the function runs as close as possible to the 500 ms boundaries regardless of latency or duration.
If, for some reason, your function starts 20 ms late in one iteration, the value will already be 520, so your function will then set it to 20, meaning it will only wait 480 ms before the next iteration.
That seems to me to be the best way to achieve what you want.
I haven't touched the 8051 for many years (assuming that's what C51 is targeting, which seems a safe bet given your description) but it may have an instruction which can subtract 500 without an interrupt being possible. However, I seem to remember the architecture was pretty simple, so you may have to disable or delay interrupts while it does the load/modify/store operation.
volatile int xtime = 0;

void isr_10ms(void) {
    xtime += 10;
}

void loop(void) {
    while (1) {
        /* Do all your regular main stuff here. */

        if (xtime >= 500) {
            xtime -= 500;
            /* Do your 500ms activity here */
        }
    }
}
You can also use two flags - a "pre-action" flag, and a "trigger" flag (using Mike F's as a starting point):
#define PREACTION_HOLD_TICKS (2)
#define TOTAL_WAIT_TICKS (10)

volatile unsigned char preaction_flag;
volatile unsigned char trigger_flag;

static unsigned char isr_ticks;

interrupt void timer0_isr (void) {
    isr_ticks--;
    if (!isr_ticks) {
        isr_ticks = TOTAL_WAIT_TICKS;
        trigger_flag = 1;
    } else {
        if (isr_ticks == PREACTION_HOLD_TICKS)
            preaction_flag = 1;
    }
}

// ...

int main(...) {
    isr_ticks = TOTAL_WAIT_TICKS;
    preaction_flag = 0;
    trigger_flag = 0;
    // ...
    while (1) {
        if (preaction_flag) {
            preaction_flag = 0;
            while (!trigger_flag)
                ;
            trigger_flag = 0;
            service_routine();
        } else {
            main_processing_routines();
        }
    }
}
A good option is to use an RTOS or write your own simple RTOS.
An RTOS to meet your needs would only have to do the following:
schedule periodic tasks
schedule round-robin tasks
perform context switching
Your requirements are the following:
execute a periodic task every 500 ms
in the extra time between, execute round-robin tasks (doing non-time-critical operations)
An RTOS like this will guarantee a 99.9% chance that your code will execute on time. I can't say 100% because whatever operations you do in your ISRs may interfere with the RTOS. This is a problem with 8-bit microcontrollers that can only execute one instruction at a time.
Writing an RTOS is tricky, but doable. Here is an example of a small (900-line) RTOS targeted at ATMEL's 8-bit AVR platform.
The following is the report and code created for the class CSC 460: Real Time Operating Systems (at the University of Victoria).
Would this do what you need?
#define FUDGE_MARGIN 2 // In 10 ms increments

volatile unsigned int ticks = 0;

void timer_10ms_interrupt( void ) { ticks++; }

void mainloop( void )
{
    unsigned int next_time = ticks+50;
    while( 1 )
    {
        do_mainloopy_stuff();
        if( ticks >= next_time-FUDGE_MARGIN )
        {
            while( ticks < next_time );
            do_500ms_thingy();
            next_time += 50;
        }
    }
}
NB: If you got behind with servicing your every-500ms task then this would queue them up, which may not be what you want.
One straightforward solution is to have a timer interrupt that fires off at 500ms...
If you have some flexibility in your hardware design, you can cascade the output of one timer into a second-stage counter to get a long time base. I forget, but I vaguely recall being able to cascade timers on the x51.
Ah, one more alternative for consideration - the x51 architecture allows two levels of interrupt priority. If you have some hardware flexibility, you can have one of the external interrupt pins raised by the timer ISR at 500 ms intervals, and then let the lower-priority interrupt processing of your every-500ms code occur (a sketch of this follows after the link below).
Depending on your particular x51, you might be able to also generate a lower priority interrupt completely internal to your device.
See part 11.2 in this document I found on the web: http://www.esacademy.com/automation/docs/c51primer/c11.htm
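A hedged sketch of that external-interrupt trick in Keil C51 style (pin choice, the register names from <reg51.h>, and the long_500ms_routine() name are illustrative; the pin must be wired externally to /INT0, and the pulse has to be at least one machine cycle wide for the edge to be latched):
#include <reg51.h>                 /* standard 8051 SFR/bit declarations (Keil C51) */

extern void long_500ms_routine(void);   /* the heavy 50-89 ms subroutine */

sbit TRIGGER_PIN = P1^0;           /* assumed wired back to /INT0 (P3.2) */

static volatile unsigned char tick_10ms;

/* High-priority timer ISR: count ticks, pulse the trigger pin every 500 ms. */
void timer0_isr(void) interrupt 1
{
    /* ...timer reload omitted... */
    if (++tick_10ms >= 50)
    {
        tick_10ms = 0;
        TRIGGER_PIN = 0;           /* falling edge requests the low-priority interrupt */
        TRIGGER_PIN = 1;
    }
}

/* Low-priority external interrupt: a safe place for the long routine, because
   the 10 ms timer interrupt can still pre-empt it. */
void int0_isr(void) interrupt 0
{
    long_500ms_routine();
}

void setup_priorities(void)
{
    IT0 = 1;                       /* /INT0 edge-triggered */
    PT0 = 1;                       /* Timer 0 -> high priority */
    PX0 = 0;                       /* /INT0   -> low priority */
    EX0 = 1;                       /* enable /INT0 */
    ET0 = 1;                       /* enable Timer 0 interrupt */
    EA  = 1;                       /* global interrupt enable */
}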
Why do you have a time-critical routine that takes so long to run?
I agree with some of the others that there may be an architectural issue here.
If the purpose of having precise 500 ms (or whatever) intervals is to have signal changes occurring at specific times, you may be better off with a fast ISR that outputs the new signals based on a previous calculation, and then sets a flag causing the new calculation to run outside of the ISR.
Can you better describe what this long-running routine is doing, and what the need for the specific interval is for?
Addition based on the comments:
If you can ensure that the time in the service routine is of a predictable duration, you might get away with missing some of the timer interrupt postings...
To take your example, if your timer interrupt is set for 10 ms periods and you know your service routine will take 89 ms, just go ahead and count up 41 timer interrupts, then do your 89 ms activity and miss eight timer interrupts (the 42nd to 49th).
Then, when your ISR exits (and clears the pending interrupt), the "first" interrupt of the next round of 500 ms will occur about a millisecond later.
Given that you're "resource maxed" suggests that you have your other timer and interrupt sources also in use -- which means that relying on the main loop to be timed accurately isn't going to work, because those other interrupt sources could fire at the wrong moment.
If I'm interpreting your question correctly, you have:
a main loop
some high priority operation that needs to be run every 500ms, for a duration of up to 89ms
a 10ms timer that also performs a small number of operations.
There are three options as I see it.
The first is to use a second timer of a lower priority for your 500ms operations. You can still process your 10ms interrupt, and once complete continue servicing your 500ms timer interrupt.
Second option - do you actually need to service your 10 ms interrupt every 10 ms? Is it doing anything other than timekeeping? If not, and if your hardware allows you to determine the number of 10 ms ticks that have passed while processing your 500 ms ops (i.e. by not using the interrupts themselves), then you can start your 500 ms ops within the 10 ms interrupt and process the missed 10 ms ticks when you're done.
Third option: To follow on from Justin Tanner's answer, it sounds like you could produce your own preemptive multitasking kernel to fill your requirements without too much trouble.
It sounds like all you need is two tasks - one for the main super loop and one for your 500ms task.
The code to swap between two contexts (ie. two copies of all of your registers, using different stack pointers) is very simple, and usually consists of a series of register pushes (to save the current context), a series of register pops (to restore your new context) and a return from interrupt instruction. Once your 500ms op's are complete, you restore the original context.
(I guess that strictly this is a hybrid of preemptive and cooperative multitasking, but that's not important right now)
edit:
There is a simple fourth option: liberally pepper your main super loop with checks for whether the 500 ms has elapsed, both before and after any lengthy operations.
It won't be exactly 500 ms, but you may be able to reduce the latency to a tolerable level.
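A sketch of that fourth option (the tick counter, the 500 ms routine and the lengthy operations are all hypothetical names): a small wrap-safe check is called before and after each long job.
extern volatile unsigned int ms_ticks;    /* incremented by 10 in the 10 ms timer ISR */
extern void every_500ms_routine(void);
extern void lengthy_operation_a(void);
extern void lengthy_operation_b(void);

/* Run the 500 ms task if it is due; cheap enough to sprinkle through the loop.
   On an 8-bit part, reading the multi-byte tick counter may need interrupts
   briefly disabled to avoid a torn read. */
static void run_500ms_if_due(void)
{
    static unsigned int next_due = 500;

    if ((int)(ms_ticks - next_due) >= 0)  /* wrap-safe "deadline has passed" test */
    {
        next_due += 500;
        every_500ms_routine();
    }
}

void super_loop(void)
{
    for (;;)
    {
        run_500ms_if_due();
        lengthy_operation_a();
        run_500ms_if_due();
        lengthy_operation_b();
        run_500ms_if_due();
    }
}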