How can I write a pulse per second (PPS) signal that comes every 100 ms in C? - gps

I am working on an MTS board with FreeRTOS. Right now I am focused on the GPS module, which has 2 signals: one for waking it up, and another one called PPS, which is a signal normally at 1 that goes to 0 every 100 ms. I need to write the code in C for this second part.
I have done it this way, but I don't think it's correct. I'm not sure if it's super simple and I am overcomplicating it, or vice versa. Any help would be appreciated.
while(1)
{
    if((vGPIO1->vFIOPIN &= (1<<18))) // pps is 1
    {
        vTaskDelay(100/portTick_PERIOD_MS); // wait for 100ms
        vGPIO1->vFIOCLR &= ~(1<<18); // after 100 ms goes to 0
    }
}
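For reference, here is a minimal sketch of one way this is often done; it is an assumption, not a verified fix for this exact board. It assumes LPC17xx-style GPIO, where FIOSET/FIOCLR are write-only set/clear registers (so you write the mask rather than read-modify-writing them, and the `&=` in the condition above would normally be `&`), reuses the question's vGPIO1 pointer plus an assumed vFIOSET member, and picks an illustrative 10 ms low time since the question doesn't state the pulse width. vTaskDelayUntil() keeps the 100 ms period from drifting, and note the FreeRTOS macro is spelled portTICK_PERIOD_MS (pdMS_TO_TICKS() is the usual conversion):
#include "FreeRTOS.h"
#include "task.h"

#define PPS_PIN (1u << 18)

void vPpsTask(void *pvParameters)
{
    TickType_t xLastWake = xTaskGetTickCount();
    (void)pvParameters;
    for (;;)
    {
        vGPIO1->vFIOSET = PPS_PIN;                      /* line high (its idle state) */
        vTaskDelayUntil(&xLastWake, pdMS_TO_TICKS(90)); /* high for 90 ms */
        vGPIO1->vFIOCLR = PPS_PIN;                      /* pulse low */
        vTaskDelayUntil(&xLastWake, pdMS_TO_TICKS(10)); /* low for 10 ms, then repeat */
    }
}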

Related

Why should I use & in this syntax? Problem with SPI register

I'm writing a program for SPI communication between an LPC2109/2 and an MCP4921. This is a course assignment. My tutor asked me why the "&" is necessary in this line, where we wait for the end of the SPI transmission. Which answer is right?
#define SPI_SPIF_bm (1<<7)
...
while((S0SPSR & SPI_SPIF_bm) == 0){}
We use "&" as logic AND, for instance: (0000 & 1000) gives us 0000 instead of (0000 | 1000) gives us 1000.
Can I use only this line of code: while((S0SPSR) == 0){}? In my opinion - no. We need to compare value in register S0SPSR with bit SPIF SPI_SPIF_bm.
Is there maybe different solution?
Attachment
User Manual for LPC2129/01: https://www.nxp.com/docs/en/user-guide/UM10114.pdf
The SPI peripheral of the LPC2109/2 sets different bits of S0SPSR depending on the actual event, which may depend on external circumstances. For example, if there's a write collision on the SPI line, it sets the WCOL bit instead of SPIF.
If you use while((S0SPSR) == 0){}, it will wait until either a successful transaction or an error happens, because it exits the loop if any bit of S0SPSR is set.
while((S0SPSR & SPI_SPIF_bm) == 0){} only checks whether the transaction has completed successfully. It is good practice to check the error bits too, because in case of an error you would be stuck in this loop forever, as SPIF is never going to be set.
For a robust solution I would go with something like this:
while (S0SPSR == 0) {}
if (S0SPSR & SPI_SPIF_bm) { /* SPIF remains set until the data register is accessed */
    /* Success, read the data register, return data, etc. */
} else {
    /* Handle error */
}
If you are interested in the particular type of the error, you need to store S0SPSR in a variable in each cycle, as those bits are cleared on reading S0SPSR. You should also add a counter, or a more sophisticated timeout solution, to the loop, so it exits if none of the flags are set within a reasonable period.
You might think these errors would never happen because you have a simple circuit, but they do happen in real life, and it's worth doing proper error handling.
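As a hedged sketch of that advice (the loop limit is an arbitrary illustration, and S0SPDR is the LPC21xx SPI data register):
#include <stdint.h>

#define SPI_SPIF_bm (1 << 7)

/* Returns the received byte on success, -1 on error or timeout. */
int spi_wait_and_read(void)
{
    uint8_t  status = 0;
    uint32_t tries  = 100000;                /* crude timeout; tune to your clock */

    while (tries-- && (status = S0SPSR) == 0)
        ;                                    /* latch the status: reading clears it */

    if (status & SPI_SPIF_bm)
        return S0SPDR;                       /* success; reading data also clears SPIF */

    return -1;                               /* an error bit (WCOL, ROVR, ...) or timeout,
                                                inspect the latched `status` for details */
}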

Embedded System - Polling

I have about 6 sensors (GPS, IMU, etc.) that I need to constantly collect data from. For my purposes, I need a reading from each (within a small time frame) to have a complete data packet. Right now I am using interrupts, but this results in more data from certain sensors than others, and, as mentioned, I need to have the data matched up.
Would it be better to move to a polling-based system in which I could poll each sensor in a set order? This way I could have data from each sensor every 'cycle'.
I am, however, worried about the speed of polling because this system needs to operate close to real time.
Polling combined with a "master timer interrupt" could be your friend here. Let's say that your "slowest" sensor can provide data on 20ms intervals, and that the others can be read faster. That's 50 updates per second. If that's close enough to real-time (probably is close for an IMU), perhaps you proceed like this:
Set up a 20ms timer.
When the timer goes off, set a flag inside an interrupt service routine:
volatile uint8_t timerFlag = 0;

ISR(TIMER_ISR_whatever)
{
    timerFlag = 1; // nothing but a semaphore for later...
}
Then, in your main loop act when timerFlag says it's time:
while(1)
{
    if(timerFlag == 1)
    {
        <read first device>
        <read second device>
        <you get the idea ;) >
        timerFlag = 0;
    }
}
In this way you can read each device and keep their readings synched up. This is a typical way to solve this problem in the embedded space. Now, if you need data faster than 20ms, then you shorten the timer, etc. The big question, as it always is in situations like this, is "how fast can you poll" vs. "how fast do you need to poll." Only experimentation and knowing the characteristics and timing of your various devices can tell you that. But what I propose is a general solution when all the timings "fit."
EDIT, A DIFFERENT APPROACH
A more interrupt-based example:
volatile uint8_t device1Read = 0;
volatile uint8_t device2Read = 0;
etc...

ISR(device 1)
{
    <read device>
    device1Read = 1;
}

ISR(device 2)
{
    <read device>
    device2Read = 1;
}
etc...

// main loop
while(1)
{
    if(device1Read == 1 && device2Read == 1 && etc...)
    {
        //<do something with your "packet" of data>
        device1Read = 0;
        device2Read = 0;
        etc...
    }
}
In this example, all your devices can be interrupt-driven but the main-loop processing is still governed, paced, by the cadence of the slowest interrupt. The latest complete reading from each device, regardless of speed or latency, can be used. Is this pattern closer to what you had in mind?
Polling is a pretty good and easy-to-implement idea in case your sensors can provide data practically instantly (in comparison to your desired output frequency). It turns into a nightmare when you have data sources that need a significant (or even variable) time to provide a reading, or that require an asynchronous "initiate/collect" cycle. You'd have to sort your polling cycles to accommodate the "slowest" data source.
What might be a solution, in case you know the average conversion time of each of your sources, is to set up a number of timers (one per data source) that trigger at (poll time - conversion time) and kick off the measurement from those timer ISRs. Then have one last timer that triggers at (poll time + some safety margin) and collects all the conversion results.
On the other hand, your apparent problem of "having too many measurements" from the "fast" data sources wouldn't bother me too much, as long as you don't have anything better to do with that wasted CPU/sensor load.
A last and easier approach, in case you have some cycles to waste: simply sort the data sources from "slowest" to "fastest", initiate a measurement in that order, then wait for the results and poll in the same order, as sketched below.
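A hedged sketch of that last approach; sensor_start()/sensor_ready()/sensor_read() are hypothetical driver calls, and the sensor set and ordering are illustrative:
#include <stdint.h>

enum { SENSOR_GPS, SENSOR_IMU, SENSOR_BARO, SENSOR_COUNT };

extern void    sensor_start(int id);    /* hypothetical: begin a conversion */
extern int     sensor_ready(int id);    /* hypothetical: conversion done? */
extern int32_t sensor_read(int id);     /* hypothetical: fetch the result */

/* indices sorted from slowest conversion to fastest */
static const int order[SENSOR_COUNT] = { SENSOR_GPS, SENSOR_BARO, SENSOR_IMU };

void collect_packet(int32_t packet[SENSOR_COUNT])
{
    int i;

    /* kick off every conversion, slowest first */
    for (i = 0; i < SENSOR_COUNT; i++)
        sensor_start(order[i]);

    /* collect in the same order; the faster sensors finish while you
       are still waiting on the slower ones, so the waits overlap */
    for (i = 0; i < SENSOR_COUNT; i++) {
        while (!sensor_ready(order[i]))
            ;                            /* busy-wait; add a timeout in real code */
        packet[order[i]] = sensor_read(order[i]);
    }
}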

Why does the use of timer interrupts on Arduino stop the Serial library functions?

I am working on an Arduino project. I am using timer interrupts and Serial communication, but as soon as the timer interrupts are enabled, the Arduino Serial library functions stop working. I am stuck with this problem. Is there any way to do this? I want to use both Serial communication and timer interrupts. Use of the following function stops the Serial communication:
void initialize()
{
    //timer0
    TIMSK0 = 2;
    OCR0A = 125;
    TCCR0A = 0b00000010; // commenting TCCR0A = 0b00000010; and TIMSK1 = 1; enables
    TCCR0B = 0b00000011; // the serial communications

    //timer1
    TCCR1B = 1;
    TIMSK1 = 1;

    //timer2
    TCCR2A = _BV(COM2A0) | _BV(WGM21) | _BV(WGM20);
    TCCR2B = _BV(WGM22) | _BV(CS20);
    OCR2A = B11000111;

    EICRA = 63;
    EIMSK = (1 << INT0) | (1 << INT1);
}
I would avoid using Timer0 directly, as it will mess with the Arduino core libraries, as you are seeing.
On initial glance I would suggest using a proven library such as SimpleTimer(). It will set up and manage multiple events, where its "run" basically pulls millis() from Timer0. But read farther down.
I recall that Timer0 is set up by the core library to overflow at 1 kHz, creating an interrupt, where the micros() function reads the value within Timer0 between millisecond interrupts.
And for using Timer1 you can try the TimerOne() library. There are also TimerTwo, 3, etc. out there.
You may want to read through Ken Shirriff's Arduino-IRremote library, as it does much of what you want in discrete methods, such as the 40 kHz PWM, rather than depending upon other libraries. His original library uses a
USECPERTICK 50 // microseconds per clock interrupt tick
to read and sample the receive input from an IR demodulator, so as to decode the frames.
I would also point out microtherion's fork of the library, as it uses pin change interrupts to get more accurate pin changes. His library, again, discretely manages these interrupts.
One could likewise use the PinChangeInt library for your implementation, where the individual pin changes' ISRs could capture the time stamp almost immediately, minus latency, which in this case is much less than the desired 50 ms resolution.
And if you really need more resolution, you can use the Input Capture Function, as demonstrated in InputCapture.ino, which captures the time of a transition in real time and generates an ISR for latent handling.
From these examples you should be able to implement your ultrasonic sensor.
I had the same problem, so I suggest the following:
Use the TimerOne() library.
Use flags in the timer, so you can tell when the time you programmed has passed.
In the loop function, you should use only Serial.available(), so the timing stays as close as possible to what you want.
Don't write too much code in the loop function, and control the sensor reading with a switch or if statement.
It's not the best solution, but it works. You have to be careful with the programmed time; it should be longer than the time spent on the data reading.
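For illustration, a minimal sketch of that flag approach using the TimerOne library; the 100 ms period and the analogRead() are placeholders for your actual sensor work:
#include <TimerOne.h>

volatile bool tick = false;

void onTimer() { tick = true; }        // keep the ISR tiny: just set the flag

void setup() {
  Serial.begin(9600);
  Timer1.initialize(100000UL);         // period in microseconds: 100 ms
  Timer1.attachInterrupt(onTimer);
}

void loop() {
  if (tick) {
    tick = false;
    Serial.println(analogRead(A0));    // read sensors and print here
  }
}
Because this uses Timer1 and leaves Timer0 alone, millis() and the Serial library keep working.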

Simple Debounce Routine

Do you have a simple debounce routine handy to deal with a single switch input?
This is a simple bare metal system without any OS.
I would like to avoid a looping construct with a specific count, as the processor speed might fluctuate.
I think you could learn a lot about this here: http://www.ganssle.com/debouncing.pdf
Your best bet is always to do this in hardware if possible, but there are some thoughts on software in there as well.
Simple example code from TFA:
#define CHECK_MSEC 5     // Read hardware every 5 msec
#define PRESS_MSEC 10    // Stable time before registering pressed
#define RELEASE_MSEC 100 // Stable time before registering released

// This function reads the key state from the hardware.
extern bool_t RawKeyPressed();

// This holds the debounced state of the key.
bool_t DebouncedKeyPress = false;

// Service routine called every CHECK_MSEC to
// debounce both edges
void DebounceSwitch1(bool_t *Key_changed, bool_t *Key_pressed)
{
    static uint8_t Count = RELEASE_MSEC / CHECK_MSEC;
    bool_t RawState;
    *Key_changed = false;
    *Key_pressed = DebouncedKeyPress;
    RawState = RawKeyPressed();
    if (RawState == DebouncedKeyPress) {
        // Set the timer which allows a change from current state.
        if (DebouncedKeyPress) Count = RELEASE_MSEC / CHECK_MSEC;
        else Count = PRESS_MSEC / CHECK_MSEC;
    } else {
        // Key has changed - wait for new state to become stable.
        if (--Count == 0) {
            // Timer expired - accept the change.
            DebouncedKeyPress = RawState;
            *Key_changed = true;
            *Key_pressed = DebouncedKeyPress;
            // And reset the timer.
            if (DebouncedKeyPress) Count = RELEASE_MSEC / CHECK_MSEC;
            else Count = PRESS_MSEC / CHECK_MSEC;
        }
    }
}
Simplest solutions are often the best, and I've found that simply reading the switch state only every N milliseconds (between 10 and 50, depending on the switches) has always worked for me.
I've stripped out broken and complex debounce routines and replaced them with a simple slow poll, and the results have always been good enough that way.
To implement it, you'll need a simple periodic timer interrupt on your system (assuming no RTOS support), but if you're used to programming it at the bare metal, that shouldn't be difficult to arrange.
Note that this simple approach adds a delay to detection of the change in state. If a switch takes T ms to reach a new steady state, and it's polled every X ms, then the worst case delay for detecting the press is T+X ms. Your polling interval X must be larger than the worst-case bounce time T.
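A minimal sketch of that slow-poll idea, assuming a 1 ms timer tick; read_raw_switch() is a hypothetical stand-in for the actual pin read, and POLL_MS must exceed the worst-case bounce time:
#include <stdbool.h>
#include <stdint.h>

#define POLL_MS 25                      /* poll interval, tuned per switch */

extern bool read_raw_switch(void);      /* hypothetical raw pin read */
volatile bool debounced_state = false;

void debounce_1ms_tick(void)            /* call from the 1 ms timer ISR */
{
    static uint16_t elapsed = 0;
    if (++elapsed >= POLL_MS) {
        elapsed = 0;
        debounced_state = read_raw_switch();  /* the sample becomes the state */
    }
}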
There's no single simple solution that works for all types of buttons. No matter what someone here tells you to use, you'll have to try it with your hardware, and see how well it works. And look at the signals on a scope, to make sure you really know what's going on. Rich B's link to the pdf looks like a good place to start.
I have used a majority-vote method to debounce an input. I set up a simple three-stage shift-register type of data structure, shift each sample in, and take the best two out of three as the "correct" value. This is obviously a function of either your interrupt handler or a poller, depending on which method is used to actually read the hardware.
But, the best advice is to ask your friendly hardware designer to "latch" the value and allow you to clear this value when you get to it.
To debounce, you want to ignore any switch up that lasts under a certain threshold. You can set a hardware timer on switch up, or use a flag set via periodic interrupt.
If you can get away with it, the best solution in hardware is to have the switch have two distinct states with no state between. That is, use a SPDT switch, with each pole feeding either the R or S lines of a flip/flop. Wired that way, the output of the flip/flop should be debounced.
The algorithm from ganssle.com could have a bug in it. I have the impression the following line
static uint8_t Count = RELEASE_MSEC / CHECK_MSEC;
should read
static uint8_t Count = PRESS_MSEC / CHECK_MSEC;
in order to correctly debounce the initial press.
At the hardware level the basic debouncing routine has to take into account the following segments of a physical key's (or switch's) behavior:
Key sitting quietly->finger touches key and begins pushing down->key reaches bottom of travel and finger holds it there->finger begins releasing key and spring pushes key back up->finger releases key and key vibrates a bit until it quiesces
All of these stages involve 2 pieces of metal scraping and rubbing and bumping against each other, jiggling the voltage up and down from 0 to maximum over periods of milliseconds, so there is electrical noise every step of the way:
(1) Noise while the key is not being touched, caused by environmental issues like humidity, vibration, temperature changes, etc. causing voltage changes in the key contacts
(2) Noise caused as the key is being pressed down
(3) Noise as the key is being held down
(4) Noise as the key is being released
(5) Noise as the key vibrates after being released
Here's the algorithm by which we basically guess that the key is being pressed by a person:
read the state of the key, which can be "might be pressed", "definitely is pressed", "definitely is not pressed", "might not be pressed" (we're never really sure)
loop while key "might be" pressed (if dealing with hardware, this is a voltage sample greater than some threshold value), until it is "definitely not" pressed (lower than the threshold voltage)
(this is initialization, waiting for noise to quiesce, definition of "might be" and "definitely not" is dependent on specific application)
loop while key is "definitely not" pressed, until key "might be" pressed
when key "might be" pressed, begin looping and sampling the state of the key, and keep track of how long the key "might be" pressed
- if the key goes back to "might not be" or "definitely is not" pressed state before a certain amount of time, restart the procedure
- at a certain time (number of milliseconds) that you have chosen (usually through experimenting with different values) you decide that the sample value is no longer caused by noise, but is very likely caused by the key actually being held down by a human finger and you return the value "pressed"
while (keyvalue == maybepressed) {
    // loop - wait for transition to notpressed
    sample keyvalue here;
    // maybe require it to be "notpressed" a number of times before
    // you assume it's really notpressed
}
while (keyvalue == notpressed) {
    // loop - wait for transition to maybepressed
    sample keyvalue;
    // again, maybe require a "maybepressed" value a number of times
    // before you transition
}
while (keyvalue == maybepressed) {
    presstime += 1;
    if (presstime > required_presstime) return pressed_affirmative;
}
return pressed_negative;
What I usually do is have three or so variables the width of the input register. Every poll, usually from an interrupt, shift the values up one to make way for the new sample. Then I form a debounced variable by setting the bits that are set in all three samples (logical AND) and clearing the bits that are clear in all three (the inverse logical OR). i.e. (untested, from memory):
input3 = input2;
input2 = input1;
input1 = (*PORTA);
debounced |= input1 & input2 & input3;
debounced &= (input1 | input2 | input3);
Here's an example:
debounced has xxxx (where 'x' is "whatever")
input1 = 0110,
input2 = 1100,
input3 = 0100
With the information above,
We need to switch only bit 2 to 1, and bit 0 to 0. The rest are still "bouncing".
debounced |= (0100); //set only bit 2
debounced &= (1110); //clear only bit 0
The result is that now debounced = x1x0
Use integration and you'll be a happy camper; it works well for all switches.
Just increment a counter when the input reads high and decrement it when it reads low, and when the integrator reaches a limit (upper or lower), call the state (high or low).
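A compact sketch of that integrator; MAX_COUNT and the sampling period are illustrative and must be tuned to the switch's bounce time, and read_raw_switch() is a hypothetical pin read:
#include <stdbool.h>
#include <stdint.h>

#define MAX_COUNT 10                    /* samples needed to change state */

extern bool read_raw_switch(void);      /* hypothetical raw pin read */
static uint8_t integrator = 0;
bool debounced_high = false;

void integrate_sample(void)             /* call once per periodic sample */
{
    if (read_raw_switch()) {
        if (integrator < MAX_COUNT) integrator++;
    } else {
        if (integrator > 0) integrator--;
    }
    /* commit the state only at the rails, so bounces in the
       middle of the range never flip the output */
    if (integrator == MAX_COUNT) debounced_high = true;
    else if (integrator == 0)    debounced_high = false;
}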
The whole concept is described well by Jack Ganssle. His solution posted as an answer to the original question is very good, but I find it not entirely clear how part of it works.
There are three main ways to deal with switch bounce:
- using polling
- using interrupts
- a combination of interrupts and polling.
As I deal mostly with embedded systems that are low-power, or tend to be low-power, the answer from Keith, to integrate, is very reasonable to me.
If you work with an SPST push-button type switch with one mechanically stable position, then I would prefer the solution which uses a combination of interrupts and polling.
Like this: use a GPIO input interrupt to detect the first edge (falling or rising, the opposite of the un-actuated switch state). In the GPIO input ISR, set a flag recording the detection.
Use another interrupt for measuring time (i.e. a general-purpose timer or SysTick) to count milliseconds.
On every SysTick increment (1 ms): if buttonFlag is true, call a function to poll the state of the push button (polling). Do this for N consecutive SysTick increments, then clear the flag.
When you poll the button state, use whatever logic you wish to decide the button state: M consecutive identical readings, an average above Z, the last X readings the same, etc.
I think this approach benefits from responsiveness to the interrupt and lower power usage, as there is no button polling after the N SysTick increments. There are no complicated interrupt modifications between the various interrupts, so the program code should be fairly simple and readable.
Take into consideration things like: do you need to "release" the button, do you need to detect a long press, and do you need an action on button release. I don't like button action on button release, but some solutions work that way.
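A sketch of that combination; the handler names follow Cortex-M conventions, and the N = 50 ms window, sample_button() and decide_state() are illustrative assumptions:
#include <stdbool.h>
#include <stdint.h>

#define POLL_WINDOW_MS 50               /* N: how long to poll after the edge */

volatile bool     buttonFlag = false;
volatile uint32_t pollTicks  = 0;

extern void sample_button(void);        /* hypothetical: record one reading */
extern void decide_state(void);         /* hypothetical: M-of-N style vote */

void Button_EXTI_IRQHandler(void)       /* first edge on the button pin */
{
    buttonFlag = true;                  /* hand off to the SysTick poller */
    pollTicks  = 0;
}

void SysTick_Handler(void)              /* fires every 1 ms */
{
    if (buttonFlag) {
        sample_button();
        if (++pollTicks >= POLL_WINDOW_MS) {
            buttonFlag = false;         /* stop polling: saves power */
            decide_state();
        }
    }
}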

Precisely time a function call

I am using a microcontroller with a C51 core. I have a fairly time-consuming and large subroutine that needs to be called every 500 ms. An RTOS is not being used.
The way I am doing it right now: I have an existing timer interrupt of 10 ms, and I set a flag after every 50 interrupts that is checked in the main program loop. If the flag is true, the subroutine is called. The issue is that by the time the program loop comes round to servicing the flag, more than 500 ms have already passed, sometimes even more than 515 ms on certain code paths. The time taken is not accurately predictable.
Obviously, the subroutine cannot be called from inside the timer interrupt, due to the large time it takes to execute. The subroutine takes 50 ms to 89 ms, depending on various conditions.
Is there a way to ensure that the subroutine is called at exactly 500 ms intervals each time?
I think you have some conflicting/not-thought-through requirements here. You say that you can't call this code from the timer ISR because it takes too long to run (implying that it is lower priority than something else which would be delayed), but then you are being hit by the fact that something else, which should have been lower priority, is delaying it when you run it from the foreground path (the 'program loop').
If this work must happen at exactly 500 ms intervals, then run it from the timer routine and deal with the fall-out from that. This is effectively what a pre-emptive RTOS would be doing anyway.
If you want it to run from the 'program loop', then you will have to make sure that nothing else which runs from that loop ever takes more than the maximum delay you can tolerate; often that means breaking your other long-running work into state machines which can do a little bit of work per pass through the loop, as sketched below.
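A tiny sketch of that state-machine slicing; the chunk functions and the start flag are hypothetical placeholders for pieces of the long-running work:
typedef enum { JOB_IDLE, JOB_STEP_A, JOB_STEP_B } job_state_t;

extern volatile unsigned char job_requested;  /* set elsewhere to kick off the job */
extern void do_chunk_a(void);                 /* placeholder work pieces */
extern void do_chunk_b(void);

void long_job_slice(void)                     /* called once per main-loop pass */
{
    static job_state_t state = JOB_IDLE;

    switch (state) {
    case JOB_IDLE:
        if (job_requested) { job_requested = 0; state = JOB_STEP_A; }
        break;
    case JOB_STEP_A: do_chunk_a(); state = JOB_STEP_B; break;
    case JOB_STEP_B: do_chunk_b(); state = JOB_IDLE;   break;
    }
}
Each pass through the loop now costs at most one chunk, so the 500 ms flag is never left waiting behind the whole job.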
I don't think there's a way to guarantee it but this solution may provide an acceptable alternative.
Might I suggest not setting a flag but instead modifying a value?
Here's how it could work.
1/ Start a value at zero.
2/ Every 10ms interrupt, increase this value by 10 in the ISR (interrupt service routine).
3/ In the main loop, if the value is >= 500, subtract 500 from the value and do your 500ms activities.
You will have to be careful to watch for race conditions between the timer and main program in modifying the value.
This has the advantage that the function runs as close as possible to the 500ms boundaries regardless of latency or duration.
If, for some reason, your function starts 20ms late in one iteration, the value will already be 520 so your function will then set it to 20, meaning it will only wait 480ms before the next iteration.
That seems to me to be the best way to achieve what you want.
I haven't touched the 8051 for many years (assuming that's what C51 is targeting, which seems a safe bet given your description), but it may have an instruction which will subtract 500 without an interrupt being possible. However, I seem to remember the architecture was pretty simple, so you may have to disable or delay interrupts while it does the load/modify/store operation.
volatile int xtime = 0;

void isr_10ms(void) {
    xtime += 10;
}

void loop(void) {
    while (1) {
        /* Do all your regular main stuff here. */
        if (xtime >= 500) {
            xtime -= 500;
            /* Do your 500ms activity here */
        }
    }
}
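On the race-condition point above, here is a sketch of one way to guard the shared counter inside the main loop on a C51 target, where EA is the 8051 global interrupt enable bit (Keil-style sbit access is assumed); a 16-bit read is not atomic on an 8-bit core, so the test-and-subtract is bracketed:
unsigned char due = 0;

EA = 0;                     /* briefly mask interrupts */
if (xtime >= 500) {
    xtime -= 500;
    due = 1;
}
EA = 1;

if (due) {
    /* do the 500ms activity here, with interrupts running again */
}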
You can also use two flags - a "pre-action" flag, and a "trigger" flag (using Mike F's as a starting point):
#define PREACTION_HOLD_TICKS (2)
#define TOTAL_WAIT_TICKS (10)

volatile unsigned char preaction_flag;
volatile unsigned char trigger_flag;
static unsigned char isr_ticks;

interrupt void timer0_isr (void) {
    isr_ticks--;
    if (!isr_ticks) {
        isr_ticks = TOTAL_WAIT_TICKS;
        trigger_flag = 1;
    } else {
        if (isr_ticks == PREACTION_HOLD_TICKS)
            preaction_flag = 1;
    }
}

// ...

int main(...) {
    isr_ticks = TOTAL_WAIT_TICKS;
    preaction_flag = 0;
    trigger_flag = 0;
    // ...
    while (1) {
        if (preaction_flag) {
            preaction_flag = 0;
            while (!trigger_flag)
                ;
            trigger_flag = 0;
            service_routine();
        } else {
            main_processing_routines();
        }
    }
}
A good option is to use an RTOS, or to write your own simple RTOS.
An RTOS to meet your needs only has to do the following:
schedule periodic tasks
schedule round-robin tasks
perform context switching
Your requirements are the following:
execute a periodic task every 500ms
in the extra time between, execute round-robin tasks (doing non-time-critical operations)
An RTOS like this will guarantee a 99.9% chance that your code will execute on time. I can't say 100%, because whatever operations you do in your ISRs may interfere with the RTOS. This is a problem with 8-bit microcontrollers that can only execute one instruction at a time.
Writing an RTOS is tricky, but doable. Here is an example of a small (900-line) RTOS targeted at ATMEL's 8-bit AVR platform.
The following is the report and code created for the class CSC 460: Real Time Operating Systems (at the University of Victoria).
Would this do what you need?
#define FUDGE_MARGIN 2 //In 10ms increments

volatile unsigned int ticks = 0;

void timer_10ms_interrupt( void ) { ticks++; }

void mainloop( void )
{
    unsigned int next_time = ticks+50;
    while( 1 )
    {
        do_mainloopy_stuff();
        if( ticks >= next_time-FUDGE_MARGIN )
        {
            while( ticks < next_time );
            do_500ms_thingy();
            next_time += 50;
        }
    }
}
NB: If you got behind with servicing your every-500ms task then this would queue them up, which may not be what you want.
One straightforward solution is to have a timer interrupt that fires off at 500ms...
If you have some flexibility in your hardware design, you can cascade the output of one timer to a second stage counter to get you a long time base. I forget, but I vaguely recall being able to cascade timers on the x51.
Ah, one more alternative for consideration: the x51 architecture allows two levels of interrupt priority. If you have some hardware flexibility, you can cause one of the external interrupt pins to be raised by the timer ISR at 500 ms intervals, and then let the lower-priority interrupt processing of your every-500ms code occur.
Depending on your particular x51, you might be able to also generate a lower priority interrupt completely internal to your device.
See part 11.2 in this document I found on the web: http://www.esacademy.com/automation/docs/c51primer/c11.htm
Why do you have a time-critical routine that takes so long to run?
I agree with some of the others that there may be an architectural issue here.
If the purpose of having precise 500 ms (or whatever) intervals is to have signal changes occurring at specific time intervals, you may be better off with a fast ISR that outputs the new signals based on a previous calculation, and then sets a flag that causes the new calculation to run outside of the ISR.
Can you better describe what this long-running routine is doing, and what the need for the specific interval is for?
Addition based on the comments:
If you can insure that the time in the service routine is of a predictable duration, you might get away with missing the timer interrupt postings...
To take your example, if your timer interrupt is set for 10 ms periods, and you know your service routine will take 89ms, just go ahead and count up 41 timer interrupts, then do your 89 ms activity and miss eight timer interrupts (42nd to 49th).
Then, when your ISR exits (and clears the pending interrupt), the "first" interrupt of the next round of 500ms will occur about a ms later.
Given that you're "resource maxed" suggests that you have your other timer and interrupt sources also in use -- which means that relying on the main loop to be timed accurately isn't going to work, because those other interrupt sources could fire at the wrong moment.
If I'm interpreting your question correctly, you have:
a main loop
some high priority operation that needs to be run every 500ms, for a duration of up to 89ms
a 10ms timer that also performs a small number of operations.
There are three options as I see it.
The first is to use a second timer of a lower priority for your 500ms operations. You can still process your 10ms interrupt, and once complete continue servicing your 500ms timer interrupt.
Second option - do you actually need to service your 10 ms interrupt every 10 ms? Is it doing anything other than timekeeping? If not, and if your hardware will allow you to determine the number of 10 ms ticks that have passed while processing your 500 ms op's (i.e. by not using the interrupts themselves), then you can start your 500 ms op's within the 10 ms interrupt and process the 10 ms ticks that you missed when you're done.
Third option: To follow on from Justin Tanner's answer, it sounds like you could produce your own preemptive multitasking kernel to fill your requirements without too much trouble.
It sounds like all you need is two tasks - one for the main super loop and one for your 500ms task.
The code to swap between two contexts (ie. two copies of all of your registers, using different stack pointers) is very simple, and usually consists of a series of register pushes (to save the current context), a series of register pops (to restore your new context) and a return from interrupt instruction. Once your 500ms op's are complete, you restore the original context.
(I guess that strictly this is a hybrid of preemptive and cooperative multitasking, but that's not important right now)
edit:
There is a simple fourth option. Liberally pepper your main super loop with checks for whether the 500ms has elapsed, both before and after any lengthy operations.
Not exactly 500ms, but you may be able to reduce the latency to a tolerable level.
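A tiny sketch of this fourth option, reusing the volatile xtime counter from the earlier answer: wrap the check in a helper and call it before and after every lengthy operation in the super loop (long_subroutine() is an assumed name for the every-500ms work):
extern volatile int xtime;          /* incremented by 10 in the 10ms ISR */
extern void long_subroutine(void);  /* assumed name for the 500ms work */

static void check_500ms(void)
{
    if (xtime >= 500) {
        xtime -= 500;
        long_subroutine();
    }
}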