Runtime Error With Interrupt Timer on ATmega2560

I'm trying to make a loop execute regularly every 50 milliseconds on an ATmega2560. Using a simple delay function won't work, because the total loop time ends up being the time it took to execute the other functions in the loop, plus your delay time. This works even less well if your function calls take variable time, which they usually will.
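In sketch form (do_work standing in for the rest of the loop body), the naive version is:
#include <util/delay.h> // AVR-libc busy-wait delay; assumes F_CPU is defined

void do_work(void); // stand-in for the other functions in the loop

// The actual period is do_work()'s variable run time PLUS 50 ms,
// so the loop rate drifts with the workload.
void loop_naive(void)
{
    for (;;) {
        do_work();
        _delay_ms(50);
    }
}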
To solve this, I implemented a simple timer class:
#include <avr/io.h>
#include <avr/interrupt.h>

volatile unsigned long timer0_ms_tick;

timer::timer()
{
    // Set timer0 registers
    TCCR0A = 0b00000000; // Nothing here
    TCCR0B = 0b00000000; // Timer stopped; start() later sets the last three bits to 011 for a prescaler of 64
    TIMSK0 = 0b00000001; // Last bit to 1 to enable the timer0 overflow (OVF) interrupt
    sei();               // Enable global interrupts
}

void timer::start()
{
    timer0_ms_tick = 0;
    // Set timer value for a 1 ms tick: (250000 ticks/sec) * (1 OVF/250 ticks) = 1000 OVF/sec
    // 256 ticks - 250 ticks = 6 ticks, but starting the count at 0 means setting TCNT0 to 5
    TCNT0 = 5;
    // Set prescaler and start timer
    TCCR0B = 0b00000011;
}

unsigned long timer::now_ms()
{
    return timer0_ms_tick;
}

ISR(TIMER0_OVF_vect)
{
    timer0_ms_tick += 1;
    TCNT0 = 5;
}
The main loop uses this like so:
unsigned long startTime, now;
char time_string[12]; // buffer used by ltoa()

while (true)
{
    startTime = startup_timer.now_ms();

    /* Loop Functions */

    // Wait out the rest of the 50 ms time step
    now = startup_timer.now_ms();
    while (now - startTime < 50)
    {
        now = startup_timer.now_ms();
    }

    Serial0.print(ltoa(now, time_string, 10));
    Serial0.writeChar('-');
    Serial0.print(ltoa(startTime, time_string, 10));
    Serial0.writeChar('=');
    Serial0.println(ltoa(now - startTime, time_string, 10));
}
My output looks like this:
11600-11550=50
11652-11602=50
11704-11654=50
11756-11706=50
12031-11758=273
11828-11778=50
11880-11830=50
11932-11882=50
11984-11934=50
12036-11986=50
12088-12038=50
12140-12090=50
12192-12142=50
12244-12194=50
12296-12246=50
12348-12298=50
12400-12350=50
12452-12402=50
12504-12454=50
12556-12506=50
12608-12558=50
12660-12610=50
12712-12662=50
12764-12714=50
12816-12766=50
12868-12818=50
12920-12870=50
12972-12922=50
13024-12974=50
13076-13026=50
13128-13078=50
13180-13130=50
13232-13182=50
13284-13234=50
13336-13286=50
13388-13338=50
13440-13390=50
13492-13442=50
13544-13494=50
13823-13546=277
13620-13570=50
It seems to work well most of the time, but every once in a while something odd will happen with the timing values. I think it has something to do with the interrupt, but I'm not sure what. Any help would be greatly appreciated.

Related

dispatch_apply leaves one thread "hanging"

I am experimenting with multithreading following Apple's Concurrency Programming Guide. The multithreaded function (dispatch_apply) replacing the for-loop seems straightforward and works fine with a simple printf statement. However, if the block calls a more CPU-intensive calculation, the program never ends or executes past dispatch_apply, and one thread (the main thread?) seems stuck at 100%.
#import <Foundation/Foundation.h>

#define THREAD_COUNT 16

unsigned long long longest = 0;
unsigned long long highest = 0;

void three_n_plus_one(unsigned long step);

int main(int argc, const char * argv[]) {
    @autoreleasepool {
        dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
        dispatch_apply(THREAD_COUNT, queue, ^(size_t i) {
            three_n_plus_one(i);
        });
    }
    return 0;
}

void three_n_plus_one(unsigned long step) {
    unsigned long long start = step;
    unsigned long long end = 1000000;
    unsigned long long identifier = 0;
    while (start <= end) {
        unsigned long long current = start;
        unsigned long long sequence = 1;
        while (current != 1) {
            sequence += 1;
            if (current % 2 == 0)
                current = current / 2;
            else {
                current = (current * 3) + 1;
            }
            if (current > highest) highest = current;
        }
        if (sequence > longest) {
            longest = sequence;
            identifier = start;
            printf("thread %lu, number %llu with %llu steps to one\n", step, identifier, longest);
        }
        start += (unsigned long long)THREAD_COUNT;
    }
}
Still, the loop seems to be finished. From what I understand this should be fairly straightforward, yet I'm left clueless as to what I'm doing wrong here.
What you're calling step is the index of the loop. It goes from 0 to THREAD_COUNT-1 in your code. Since you assign start to be step, that means your first iteration tries to compute the Collatz sequence starting at zero. That computes 0/2 == 0 and so is an infinite loop.
What you meant to write is:
unsigned long long start = step + 1;
Calling your block size "THREAD_COUNT" is misleading. The question is not how many threads are created (no threads may be created; that's up to the system). The question is how many chunks to divide the work into.
Note that reading and writing to longest and highest on multiple threads without synchronization is undefined behavior, so this code may do surprising things (particularly when optimized). Don't assume it's limited to getting the wrong values in longest and highest. The optimizer is allowed to assume no other thread touches those values while it runs, and can rearrange code dramatically based on that. But that's not the cause of this particular issue.
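As an aside, one lock-free way to maintain a running maximum like highest is a compare-and-swap loop. A sketch using C11 atomics, with highest_atomic as a hypothetical stand-in for the global:
#include <stdatomic.h>

static _Atomic unsigned long long highest_atomic;

static void update_highest(unsigned long long current)
{
    unsigned long long prev = atomic_load(&highest_atomic);
    // Retry until a larger value is already stored or our CAS wins; a failed
    // CAS reloads prev with whatever another thread just stored.
    while (current > prev &&
           !atomic_compare_exchange_weak(&highest_atomic, &prev, current)) {
    }
}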
As Rob Napier said (+1), the reason one thread is “hanging” is that you have the endless loop when supplying zero, not any problem with dispatch_apply (called concurrentPerform in Swift).
But, the more subtle issue (and what makes concurrent code a little less “straightforward”) is that this code is not thread-safe. There are “data races”. You are accessing and mutating highest and longest concurrently from multiple threads. I would encourage using Thread Sanitizer (TSAN) when testing concurrent code, which is pretty good at identifying these data races.
E.g., edit your scheme and temporarily turn on the Thread Sanitizer; then, when you run, it will warn you about the data races.
You can fix these races by synchronizing your access to these variables. A lock is one simple mechanism. I would also avoid synchronizing within the inner while loop if you can; in this case you can even keep it out of the outer while loop, too. I might suggest local variables to keep track of the current “longest” sequence, the “highest” value, and the identifier for that highest value, and then only compare against and update the shared variables when you are done, outside of both loops.
E.g. perhaps:
NSLock *lock;                      // assumed created elsewhere, e.g. lock = [NSLock new];
unsigned long long identifier = 0; // shared, like longest and highest

- (void)three_n_plus_one:(unsigned long)step {
    unsigned long long start = step + 1;
    unsigned long long end = 1000000;
    unsigned long long tempHighest = start;
    unsigned long long tempLongest = 1;
    unsigned long long tempIdentifier = start;
    while (start <= end) {
        unsigned long long current = start;
        unsigned long long sequence = 1;
        while (current != 1) {
            sequence += 1;
            if (current % 2 == 0)
                current = current / 2;
            else {
                current = (current * 3) + 1;
            }
            if (current > tempHighest) tempHighest = current;
        }
        if (sequence > tempLongest) {
            tempLongest = sequence;
            tempIdentifier = start;
        }
        start += (unsigned long long)THREAD_COUNT;
    }

    [lock lock]; // synchronize updating of shared memory
    if (tempHighest > highest) {
        highest = tempHighest;
    }
    if (tempLongest > longest) {
        longest = tempLongest;
        identifier = tempIdentifier;
    }
    [lock unlock];
}
I used an NSLock, but use whatever synchronization mechanism you want. The idea is (a) to make sure to synchronize all interaction with shared memory and (b) to reduce the necessary number of synchronizations to a bare minimum. (In this case, a naïve synchronization approach was 200× slower than the above, which cuts the number of synchronizations to the bare minimum.)
When you are done fixing the data races, you can then turn TSAN off.

MSP430 timer clock divider doesn't work

I am trying to do simple PWM with an MSP430. Working with the timer, I am facing one issue: the clock divider doesn't seem to make any difference, whether I set ID_3, which is supposed to divide the clock by 8, or ID_1 or ID_2. The output frequency I am seeing with the scope is 130 Hz. Are there any mistakes?
#include "msp430g2553.h"
volatile unsigned long i;
volatile unsigned int D1=50;
void main(void)
{
i=0;
WDTCTL = WDTPW + WDTHOLD; // Stop WDT
CCTL0 = CCIE; // CCR0 interrupt enabled
TACTL = TASSEL_2 + MC_1 + ID_1; // SMCLK, upmode MC1
CCR0 = 5; // Timer should count up to CCR) and reset
P1OUT &= 0x00; // Shut down everything
P1DIR &= 0x00;
P1DIR |= BIT0; // P1.0 pin output
_BIS_SR(CPUOFF + GIE); // Enter LPM0 w/ interrupt
while(1) //Loop forever, we work with interrupts!
{}
}
// Timer A0 interrupt service routine
#pragma vector=TIMER0_A0_VECTOR
__interrupt void Timer_A (void)
{
i=i+1;
if (i>=100) {i=0;}
if (i<=D1) {P1OUT = BIT0;}
if (i>D1) {P1OUT &= 0x00;}
}
By default, SMCLK and the CPU run at the same frequency (about 1.1 MHz).
The interrupt handler needs far longer to run than the six timer counts (CCR0 + 1) between interrupts, so the output speed is determined not by how you configure the timer but by how fast the code in Timer_A() can run. (A 130 Hz output with 100 interrupts per PWM cycle works out to about 13,000 interrupts per second, i.e. roughly 85 CPU cycles per interrupt at 1.1 MHz, which is plausible for the handler plus interrupt entry/exit overhead.)
You could try to optimize the interrupt handler (i does not need to be 32 bits, etc.) and to use a longer timer interval.
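For instance, a sketch keeping the question's D1 duty value, assuming CCR0 is raised so that one interrupt corresponds to one PWM step rather than six SMCLK cycles:
#pragma vector=TIMER0_A0_VECTOR
__interrupt void Timer_A(void)
{
    static unsigned char i = 0;    // 8 bits suffice for a 0..99 count
    i = (i + 1) % 100;
    P1OUT = (i <= D1) ? BIT0 : 0;  // same duty logic, far less work per interrupt
}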
But it might be a better idea to configure the timer for hardware PWM.
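For example, a minimal hardware-PWM sketch for the MSP430G2553, assuming the LED is moved to P1.2 (which can be driven by the TA0.1 compare output; P1.0 has no timer output):
#include "msp430g2553.h"

void main(void)
{
    WDTCTL = WDTPW + WDTHOLD;  // Stop WDT
    P1DIR |= BIT2;             // P1.2 output
    P1SEL |= BIT2;             // P1.2 routed to Timer_A output TA0.1
    CCR0 = 1000 - 1;           // PWM period: ~1.1 kHz from the ~1.1 MHz SMCLK
    CCTL1 = OUTMOD_7;          // Reset/set: output high until CCR1, then low
    CCR1 = 500;                // 50% duty cycle
    TACTL = TASSEL_2 + MC_1;   // SMCLK, up mode
    _BIS_SR(CPUOFF);           // Sleep; the PWM runs with no CPU involvement
}
Changing the duty cycle is then just a write to CCR1, with no interrupts at all.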

control led brightness of microcontroller rtos/bios

I'm trying to control my LED with 256 (0-255) different levels of brightness. My controller is set to 80 MHz and running an RTOS. I'm setting the clock module to interrupt every 5 microseconds and the brightness e.g. to 150. The LED is dimming, but I'm not sure the code really gives 256 distinct levels.
int counter = 1;
int brightness = 0;

void SetUp(void)
{
    SysCtlClockSet(SYSCTL_SYSDIV_2_5 | SYSCTL_USE_PLL | SYSCTL_OSC_MAIN | SYSCTL_XTAL_16MHZ);
    GPIOPinTypeGPIOOutput(PORT_4, PIN_1);

    Clock_Params clockParams;
    Clock_Handle myClock;
    Error_Block eb;
    Error_init(&eb);
    Clock_Params_init(&clockParams);
    clockParams.period = 400; // every 5 microseconds
    clockParams.startFlag = TRUE;
    myClock = Clock_create(myHandler1, 400, &clockParams, &eb);
    if (myClock == NULL) {
        System_abort("Clock create failed");
    }
}

void myHandler1()
{
    brightness = 150;
    while (1) {
        counter = (counter + 1) % 256;
        if (counter < brightness) {
            GPIOPinWrite(PORT_4, PIN_1, PIN_1);
        } else {
            GPIOPinWrite(PORT_4, PIN_1, 0);
        }
    }
}
A 5 microsecond interrupt is a tall ask for an 80 MHz processor and will leave little time for other work. If you are not doing other work, you need not use interrupts at all - you could simply poll the clock counter; even then it would still be a lot of processor to throw at a rather trivial task - and the RTOS is overkill too.
A better way to perform your task is to use the timer's PWM (Pulse Width Modulation) feature. You will then be able to accurately control the brightness with zero software overhead; leaving your processor to do more interesting things.
Using a PWM you could manage with a far lower performance processor if LED control is all it will do.
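A sketch of that idea, assuming a TivaWare-based TM4C part (the SysCtlClockSet call in the question suggests one) with the LED moved to a PWM-capable pin; the PB6/M0PWM0 routing and step size here are illustrative, not the question's PORT_4/PIN_1:
#include <stdint.h>
#include <stdbool.h>
#include "inc/hw_memmap.h"
#include "driverlib/sysctl.h"
#include "driverlib/gpio.h"
#include "driverlib/pin_map.h"
#include "driverlib/pwm.h"

#define PWM_STEP 200 // 256 steps of 200 clocks (2.5 us at 80 MHz) gives a ~1.5 kHz PWM

void led_pwm_init( uint8_t brightness ) // 0..255
{
    SysCtlPeripheralEnable(SYSCTL_PERIPH_PWM0);
    SysCtlPeripheralEnable(SYSCTL_PERIPH_GPIOB);
    SysCtlPWMClockSet(SYSCTL_PWMDIV_1);             // PWM clock = system clock
    GPIOPinConfigure(GPIO_PB6_M0PWM0);              // illustrative pin choice
    GPIOPinTypePWM(GPIO_PORTB_BASE, GPIO_PIN_6);
    PWMGenConfigure(PWM0_BASE, PWM_GEN_0, PWM_GEN_MODE_DOWN);
    PWMGenPeriodSet(PWM0_BASE, PWM_GEN_0, 256 * PWM_STEP);
    PWMPulseWidthSet(PWM0_BASE, PWM_OUT_0, brightness * PWM_STEP);
    PWMOutputState(PWM0_BASE, PWM_OUT_0_BIT, true);
    PWMGenEnable(PWM0_BASE, PWM_GEN_0);             // runs with zero CPU load
}
The brightness level maps directly onto the pulse width, and the generator reloads it every period without CPU involvement.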
If you must use an interrupt/GPIO approach (for example, your timer does not support PWM generation or the LED is not connected to a PWM-capable pin), then it would be more efficient to set the timer incrementally. So for example, for a mark:space ratio of 150:105, you would set the timer for 150*5us (750us), toggle the GPIO on the interrupt, then set the timer to 105*5us (525us).
A major problem with your solution is that the interrupt handler does not return - interrupts must run to completion and be as short as possible and preferably deterministic in execution time.
Without using hardware PWM, the following, based on your code fragment, is probably closer to what you need:
#define PWM_QUANTA 400 // 5 us at 80 MHz

static volatile uint8_t brightness = 150 ;
static Clock_Handle myClock ;

void setBrightness( uint8_t br )
{
    brightness = br ;
}

void SetUp(void)
{
    SysCtlClockSet(SYSCTL_SYSDIV_2_5|SYSCTL_USE_PLL|SYSCTL_OSC_MAIN|SYSCTL_XTAL_16MHZ);
    GPIOPinTypeGPIOOutput(PORT_4, PIN_1);

    Clock_Params clockParams;
    Error_Block eb;
    Error_init(&eb);
    Clock_Params_init(&clockParams);
    clockParams.period = brightness * PWM_QUANTA ;
    clockParams.startFlag = TRUE;
    myClock = Clock_create(myHandler1, 400, &clockParams, &eb);
    if (myClock == NULL)
    {
        System_abort("Clock create failed");
    }
}

void myHandler1(void)
{
    static int pin_state = 1 ;

    // Toggle pin state and timer period
    if( pin_state == 0 )
    {
        pin_state = 1 ;
        Clock_setPeriod( myClock, brightness * PWM_QUANTA ) ;
    }
    else
    {
        pin_state = 0 ;
        Clock_setPeriod( myClock, (255 - brightness) * PWM_QUANTA ) ;
    }

    // Set pin state
    GPIOPinWrite( PORT_4, PIN_1, pin_state ? PIN_1 : 0 ) ;
}
On the urging of Clifford I am elaborating on an alternate strategy for reducing the load of software dimming, as servicing interrupts every 400 clock cycles may prove difficult. The preferred solution should of course be to use hardware pulse-width modulation whenever available.
One option is to fire interrupts only at the PWM edges. Unfortunately this strategy tends to introduce races and drift as time elapses while adjustments are taking place, and it scales poorly to multiple channels.
Alternatively, we may switch from pulse-width to delta-sigma modulation. There is a fair bit of theory behind the concept, but in this context it boils down to toggling the pin on and off as quickly as possible while maintaining an average on-time proportional to the dimming level. As a consequence, the interrupt frequency may be reduced without bringing the overall switching frequency down to visible levels.
Below is an example implementation:
// Brightness to display. More than 8 bits are required to handle the full
// 257-step (0-256) range. The resolution could of course be increased if desired.
volatile unsigned int brightness = 150;

void Interrupt(void) {
    // Increment the accumulator by the desired brightness
    static uint8_t accum;
    unsigned int addend = brightness;
    accum += addend;

    // Light the LED pin on overflow, relying on native integer wraparound.
    // Consequently, higher brightness values translate to keeping the LED lit more often.
    GPIOPinWrite(PORT_4, PIN_1, (accum < addend) ? PIN_1 : 0);
}
A limitation is that the switching frequency decreases with the distance from 50% brightness. Thus the final N steps at either end may need to be clamped to 0 or 256 to prevent visible flicker.
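For instance, a hypothetical setter that clamps the extremes, with the threshold N chosen empirically:
// Clamp near-extreme levels so the switching frequency stays above the
// visible range; N is a hypothetical, empirically chosen threshold.
void setBrightness(unsigned int level) // 0..256
{
    const unsigned int N = 8;
    if (level < N)            level = 0;
    else if (level > 256 - N) level = 256;
    brightness = level;
}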
Oh, and take care if switching losses are a concern in your application.

How can I change background color in a while loop - Processing

I'm new to Processing and trying to make a very simple program where I have an Arduino that produces a serial input (according to an analogue read value). The idea is a Processing window will open with a block color shown for 30 seconds. In this time all the readings from the Arduino will be summed and averaged - creating an average for that color.
After 30 seconds the colour will change and a new average (for the next color) will start being calculated. This is the code I have started to write (for now focusing on just one 30-second period of green).
I realise there are likely problems with the reading/summing and averaging (I haven't researched these yet, so I'll put that to one side) - but my main question is: why isn't the background green? When I run this program I expect the background to be green for 30 seconds - whereas what happens is it is white for 30 seconds and then changes to green. Can't figure out why! Thanks for any help!
import processing.serial.*;

Serial myPort;
float gsrAverage;
float greenAverage;
int gsrValue;
int greenTotal = 0;
int greenCount = 1;
int timeSinceStart = 0;
int timeAtStart;
int count = 0;
color green = color(118, 236, 0);

void setup () {
  size(900, 450);
  // List all the available serial ports
  //println(Serial.list());
  myPort = new Serial(this, Serial.list()[0], 9600);
}

void draw () {
  while (timeSinceStart < 30000) {
    background(green);
    greenTotal = greenTotal + gsrValue;
    greenCount = greenCount + 1;
    delay(500);
    timeSinceStart = millis() - timeAtStart;
    //println(timeSinceStart); // for debugging
  }
  greenAverage = greenTotal / greenCount;
  //println(greenAverage); // for debugging
}

void serialEvent (Serial myPort) {
  int inByte = myPort.read();
  // 0-255
  gsrValue = inByte;
}
What I like to do for timers is use if statements with millis(), or a constantly updated variable m, right inside the condition:
int timeSinceStart;
int m;

void setup() {
  timeSinceStart = millis(); // initialize here so it only happens once
}

void draw() {
  m = millis(); // constantly update the variable
  if (timeSinceStart + 30000 < m) {
    greenAverage = greenTotal / greenCount; // or whatever was inside the while loop
    timeSinceStart = millis();
  }
  // Anything that went inside the while loop can go here, or above the if
}
This makes it so that around every 30 seconds the background changes once, and you just re-update the timeSinceStart variable in there too. This way it only updates when you want it to, rather than constantly updating and breaking the code.
I tend not to use while loops in Processing as they usually cause headaches. Hope my example helps.
May have found a way round this using an if statement. I perhaps overlooked the fact that the draw() function is itself a loop, so I was able to use a variation of
if (timeSinceStart < 5000) {
  background(green);
}
within draw.
When dealing with timed events in Processing you should not use while loops inside the draw() function. The draw() function is itself a loop, which redraws the screen each frame.
So what you should do is create a timer and let it do the switch for you inside the draw() function. In your case, since you want to start with a green screen, you do that in the setup() function, and then write the timed alternation in your draw() function.
This is a suggestion on how you could solve your particular problem. Just change the cycle variable according to your need. In your case it would be 30000.
boolean isGreen = true;
int startTime = 0;
int lastTime = 0;
int cycle = 1000; // the cycle you need

void setup() {
  size(200, 200);
  background(0, 255, 0); // green
}

void draw() {
  startTime = millis();
  if (startTime > lastTime + cycle) {
    if (isGreen) {
      background(255); // white
      isGreen = !isGreen;
    } else {
      background(0, 255, 0); // green
      isGreen = !isGreen;
    }
    lastTime = millis();
  }
}

how to suspend for 200 ticks while delay 400 ticks in vxworks

I'm trying to code a program in VxWorks. A task has a total delay of 400 ticks; it should be suspended at the 100th tick, stay suspended for 20 ticks, and then resume its delay.
My main code is like the following:
void DelaySuspend (int level)
{
    int tid, suspend_start, suspend_end, i;

    suspend_start = vxTicks + 100;
    suspend_end   = vxTicks + 120;
    i = vxTicks;

    /* myfunction has taskDelay(400) */
    tid = taskSpawn("tMytask", 200, 0, 2000, (FUNCPTR)myfunction, 0,0,0,0,0,0,0,0,0,0);

    /* tick between vxTicks+100 and vxTicks+120, suspend tMytask */
    while (i < suspend_start)
    {
        i = tickGet();
    }
    while (i <= suspend_end && i >= suspend_start)
    {
        i = tickGet();
        taskSuspend(tid);
    }
}
What I want is to verify that the total delay time (or tick count) doesn't change even if I suspend the task for some time. I know the answer but am just trying to program it to show how VxWorks does it.
I am still not 100% clear on what you are trying to do, but calling taskSuspend in a loop like that isn't going to suspend the task any more than a single call does. I am guessing you want something like this:
void DelaySuspend (int level)
{
    int tid, suspend_start, suspend_end, i;

    suspend_start = vxTicks + 100;
    suspend_end   = vxTicks + 120;
    i = vxTicks;

    /* myfunction has taskDelay(400) */
    tid = taskSpawn("tMytask", 200, 0, 2000, (FUNCPTR)myfunction, 0,0,0,0,0,0,0,0,0,0);

    /* tick between vxTicks+100 and vxTicks+120, suspend tMytask */
    while (i < suspend_start)
    {
        i = tickGet();
    }

    taskSuspend(tid);

    while (i <= suspend_end && i >= suspend_start)
    {
        i = tickGet();
    }
}
I just pulled the taskSuspend out of the loop; maybe you also want a taskResume in there after the loop or something? I am not sure what you are attempting to accomplish.
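For instance (a sketch, reusing suspend_start and suspend_end from above):
/* Busy-wait to the 100th tick, suspend once, hold for 20 ticks, then resume */
while (tickGet() < suspend_start)
    ;
taskSuspend(tid);
while (tickGet() < suspend_end)
    ;
taskResume(tid);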
Whatever the case, there are probably better ways to do whatever you want. In general, using taskSuspend is a bad idea because you have no idea what the task is doing when you suspend it. For example, if the suspended task is doing file I/O when you suspend it and it holds the file system mutex, then you cannot do any file I/O until you resume that task...
In general it is much better to block on a taskDelay/semaphore/mutex/message queue than use taskSuspend. I understand that this is just a test, and as such doing this may be ok, but if this test becomes production code, then you are asking for problems.
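To illustrate that last point, a binary semaphore can act as a pause gate at points where the task knows it is safe to stop. A sketch, with hypothetical names:
#include <vxWorks.h>
#include <semLib.h>
#include <taskLib.h>

SEM_ID pauseGate; /* created once at startup: pauseGate = semBCreate(SEM_Q_FIFO, SEM_FULL); */

/* In the controlled task, wherever pausing is safe: */
void checkpoint(void)
{
    semTake(pauseGate, WAIT_FOREVER); /* blocks only while the gate is held */
    semGive(pauseGate);
}

/* In the controlling task: */
void pauseWindow(void)
{
    semTake(pauseGate, WAIT_FOREVER); /* close the gate */
    taskDelay(20);                    /* keep it closed for 20 ticks */
    semGive(pauseGate);               /* reopen */
}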