How can I model this code in Promela/SPIN? - spin

The following algorithm attempts to enforce mutual exclusion
between two processes P1 and P2 each of which runs the code below.
You can assume that initially sema = 0.
while true do {
    atomic { if sema = 0
             then sema := 1
             else go to line 2 }
    critical section;
    sema := 0;
}
How can I model this code in Promela/SPIN?
Thank you.

This should be quite straightforward:
bit sema = 0;    /* must be global so both processes share it */

active [2] proctype P() {
    do
    :: atomic {
           sema == 0 -> sema = 1
       }
       /* critical section */
       sema = 0
    od
}
Note that sema must be declared globally so that both process instances share it (declaring it inside the proctype would give each process its own copy, defeating the mutual exclusion), and active [2] proctype instantiates both processes at once. Possibly you do not need the do loop at all, if your code only used it for active waiting: the atomic block is executable only when sema is 0, and then it executes in one step. Spin has built-in passive waiting semantics.
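For comparison, the same test-and-set protocol can be sketched in C with C11 atomics. This is a host-side illustration only, not part of the SPIN model, and the names (`worker`, `counter`) are illustrative:

```c
#include <stdatomic.h>
#include <pthread.h>

atomic_int sema = 0;   /* shared flag, as in the Promela model */
long counter = 0;      /* touched only inside the critical section */

void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        int expected = 0;
        /* atomic { sema == 0 -> sema = 1 }: retry until we win the exchange */
        while (!atomic_compare_exchange_weak(&sema, &expected, 1))
            expected = 0;
        counter++;                  /* critical section */
        atomic_store(&sema, 0);     /* sema = 0 */
    }
    return NULL;
}
```

If the flag really enforces mutual exclusion, two such workers leave `counter` at exactly 200000; without the flag, lost updates would show up as a smaller total.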

Related

Use UART events with Simplelink in Contiki-ng

I'm trying to receive serial-line messages using the UART0 of a CC1310 with Contiki-NG.
I have implemented a UART callback function which collects the received characters into a single variable and stops collecting when it receives a "\n" character.
int uart_handler(unsigned char c)
{
    if (c == '\n')
    {
        end_of_string = 1;
        index = 0;
    }
    else
    {
        received_message_from_uart[index] = c;
        index++;
    }
    return 0;
}
UART0 is initialised in the main (and only) process of the system, which then waits in an infinite while loop until the end_of_string flag is set:
PROCESS_THREAD(udp_client_process, ev, data)
{
    PROCESS_BEGIN();
    uart0_init();
    uart0_set_callback(uart_handler);
    while (1)
    {
        // wait for uart message
        if (end_of_string == 1 && received_message_from_uart[0] != 0)
        {
            LOG_INFO("Received mensaje\n");
            index = 0;
            end_of_string = 0;
            // Delete received message from uart
            memset(received_message_from_uart, 0, sizeof received_message_from_uart);
        }
        etimer_set(&timer, 0.1 * CLOCK_SECOND);
        PROCESS_WAIT_EVENT_UNTIL(etimer_expired(&timer));
    }
    PROCESS_END();
}
As you can see, the while loop pauses at each iteration on a minimal timer event. Although this method works, I think it is a bad way to do it: I have read that there are UART events that can be used, but I have not been able to find anything for the CC1310.
Is it possible to use the CC1310 (or any other simplelink platform) UART with events and stop doing unnecessary iterations until a message has reached the device?
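As an aside, the line-assembly logic in the callback above can be modeled on the host, with a bounds check added that the original omits (the buffer size and the `index_` name are assumptions; `index` is renamed because it clashes with a libc identifier):

```c
#include <stdint.h>

#define UART_LINE_MAX 64

static char received_message_from_uart[UART_LINE_MAX];
static int index_ = 0;
static volatile int end_of_string = 0;

/* Collect characters until '\n', then mark the line as complete. */
int uart_handler(unsigned char c)
{
    if (c == '\n') {
        received_message_from_uart[index_] = '\0';   /* terminate the line */
        end_of_string = 1;
        index_ = 0;
    } else if (index_ < UART_LINE_MAX - 1) {         /* guard against overrun */
        received_message_from_uart[index_++] = (char)c;
    }
    return 0;
}
```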

dispatch_apply leaves one thread "hanging"

I am experimenting with multithreading following Apple's Concurrency Programming Guide. The multithreaded function (dispatch_apply) replacing the for-loop seems straightforward and works fine with a simple printf statement. However, if the block calls a more CPU-intensive calculation, the program never ends or executes past dispatch_apply, and one thread (the main thread?) seems stuck at 100%.
#import <Foundation/Foundation.h>

#define THREAD_COUNT 16

unsigned long long longest = 0;
unsigned long long highest = 0;

void three_n_plus_one(unsigned long step);

int main(int argc, const char * argv[]) {
    @autoreleasepool {
        dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
        dispatch_apply(THREAD_COUNT, queue, ^(size_t i) {
            three_n_plus_one(i);
        });
    }
    return 0;
}
void three_n_plus_one(unsigned long step) {
    unsigned long long start = step;
    unsigned long long end = 1000000;
    unsigned long long identifier = 0;
    while (start <= end) {
        unsigned long long current = start;
        unsigned long long sequence = 1;
        while (current != 1) {
            sequence += 1;
            if (current % 2 == 0)
                current = current / 2;
            else {
                current = (current * 3) + 1;
            }
            if (current > highest) highest = current;
        }
        if (sequence > longest) {
            longest = sequence;
            identifier = start;
            printf("thread %lu, number %llu with %llu steps to one\n", step, identifier, longest);
        }
        start += (unsigned long long)THREAD_COUNT;
    }
}
Still, the loop seems to have finished. From what I understand this should be fairly straightforward, yet I'm left clueless as to what I'm doing wrong here.
What you're calling step is the index of the loop. It goes from 0 to THREAD_COUNT-1 in your code. Since you assign start to be step, your first iteration tries to compute the Collatz sequence starting at zero. That computes 0/2 == 0 and so is an infinite loop.
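A one-line model of the update makes the hang obvious: from zero, the even branch yields zero again, so the inner `while (current != 1)` never exits:

```c
/* One Collatz step; from 0 the even branch yields 0 forever. */
unsigned long long collatz_step(unsigned long long n) {
    return (n % 2 == 0) ? n / 2 : 3 * n + 1;
}
```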
What you meant to write is:
unsigned long long start = step + 1;
Calling your block size "THREAD_COUNT" is misleading. The question is not how many threads are created (no threads may be created; that's up to the system). The question is how many chunks to divide the work into.
Note that reading and writing to longest and highest on multiple threads without synchronization is undefined behavior, so this code may do surprising things (particularly when optimized). Don't assume it's limited to getting the wrong values in longest and highest. The optimizer is allowed to assume no other thread touches those values while it runs, and can rearrange code dramatically based on that. But that's not the cause of this particular issue.
As Rob Napier said (+1), the reason one thread is “hanging” is because you have the endless loop when supplying zero, not because of any problem with dispatch_apply (called concurrentPerform in Swift).
But, the more subtle issue (and what makes concurrent code a little less “straightforward”) is that this code is not thread-safe. There are “data races”. You are accessing and mutating highest and longest concurrently from multiple threads. I would encourage using Thread Sanitizer (TSAN) when testing concurrent code, which is pretty good at identifying these data races.
E.g., edit your scheme and temporarily turn on the thread-sanitizer:
Then, when you run, it will warn you about the data races:
You can fix these races by synchronizing your access to these variables. A lock is one simple mechanism. I would also avoid synchronizing within the inner while loop if you can; in this case, you can even remove it from the outer while loop, too. I might suggest local variables to keep track of the current “longest” sequence, the “highest” value, and the identifier for that highest value, and then compare to and update the shared variables only once you are done, outside of both loops.
E.g. perhaps:
- (void)three_n_plus_one:(unsigned long)step {
    unsigned long long start = step + 1;
    unsigned long long end = 1000000;
    unsigned long long tempHighest = start;
    unsigned long long tempLongest = 1;
    unsigned long long tempIdentifier = start;
    while (start <= end) {
        unsigned long long current = start;
        unsigned long long sequence = 1;
        while (current != 1) {
            sequence += 1;
            if (current % 2 == 0)
                current = current / 2;
            else {
                current = (current * 3) + 1;
            }
            if (current > tempHighest) tempHighest = current;
        }
        if (sequence > tempLongest) {
            tempLongest = sequence;
            tempIdentifier = start;
        }
        start += (unsigned long long)THREAD_COUNT;
    }
    [lock lock]; // synchronize updating of shared memory
    if (tempHighest > highest) {
        highest = tempHighest;
    }
    if (tempLongest > longest) {
        longest = tempLongest;
        identifier = tempIdentifier;
    }
    [lock unlock];
}
I used an NSLock, but use whatever synchronization mechanism you want. The idea is (a) to make sure to synchronize all interaction with shared memory and (b) to keep the number of synchronizations to a bare minimum. (In this case, a naïve synchronization approach was 200× slower than the above.)
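The same reduce-then-merge pattern can be sketched with a plain pthread mutex (a simplified stand-in: the per-number work is replaced by a trivial maximum so the merging is the only point):

```c
#include <pthread.h>

static pthread_mutex_t merge_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned long long longest_shared = 0;

/* Each worker scans its own half-open range [slice[0], slice[1]),
   keeps a thread-local maximum, and merges it under the lock once. */
void *scan_slice(void *arg) {
    const unsigned long long *slice = arg;
    unsigned long long local_max = 0;
    for (unsigned long long n = slice[0]; n < slice[1]; n++)
        if (n > local_max)
            local_max = n;   /* stand-in for the per-number sequence length */
    pthread_mutex_lock(&merge_lock);
    if (local_max > longest_shared)
        longest_shared = local_max;
    pthread_mutex_unlock(&merge_lock);
    return NULL;
}
```

The lock is taken once per worker, not once per number, which is exactly the "bare minimum" of synchronization the answer above argues for.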
When you are done fixing the data races, you can then turn TSAN off.

UART Transmit failing after UART Receive thread starts in STM32 HAL Library

I am using STM32F1 (STM32F103C8T6) in order to develop a project using FreeRTOS.
The following is my GPIO and USART1 interface configuration:
__GPIOA_CLK_ENABLE();
__USART1_CLK_ENABLE();
GPIO_InitTypeDef GPIO_InitStruct;
GPIO_InitStruct.Pin = GPIO_PIN_9;
GPIO_InitStruct.Mode = GPIO_MODE_AF_PP;
GPIO_InitStruct.Speed = GPIO_SPEED_HIGH;
HAL_GPIO_Init(GPIOA, &GPIO_InitStruct);
GPIO_InitStruct.Pin = GPIO_PIN_10;
GPIO_InitStruct.Mode = GPIO_MODE_INPUT;
GPIO_InitStruct.Pull = GPIO_NOPULL;
HAL_GPIO_Init(GPIOA, &GPIO_InitStruct);
huart1.Instance = USART1;
huart1.Init.BaudRate = 9600;//115200;
huart1.Init.WordLength = UART_WORDLENGTH_8B;
huart1.Init.StopBits = UART_STOPBITS_1;
huart1.Init.Parity = UART_PARITY_NONE;
huart1.Init.Mode = UART_MODE_TX_RX;
huart1.Init.HwFlowCtl = UART_HWCONTROL_NONE;
HAL_UART_Init(&huart1);
HAL_NVIC_SetPriority(USART1_IRQn, 0, 0);
HAL_NVIC_EnableIRQ(USART1_IRQn);
The question is: why does UART transmit work before the threads start, but not after they have started or from within a thread? I want to transmit data from threads, i.e.:
int main(void)
{
    Initializations();
    // THIS WORKS!!
    uart_transmit_buffer[0] = 'H';
    uart_transmit_buffer[1] = 'R';
    uart_transmit_buffer[2] = '#';
    uint8_t nums_in_tr_buf = 0;
    nums_in_tr_buf = sizeof(uart_transmit_buffer)/sizeof(uint8_t);
    state = HAL_UART_Transmit(&huart1, uart_transmit_buffer, nums_in_tr_buf, 5000);
    StartAllThreads();
    osKernelStart();
    for (;;);
}
static void A_Random_Thread(void const *argument)
{
    for (;;)
    {
        if (conditionsMet()) // Executed once when a proper response received.
        {
            // BUT NOT THIS :(!!
            uart_transmit_buffer[0] = 'H';
            uart_transmit_buffer[1] = 'R';
            uart_transmit_buffer[2] = '#';
            uint8_t nums_in_tr_buf = 0;
            nums_in_tr_buf = sizeof(uart_transmit_buffer)/sizeof(uint8_t);
            state = HAL_UART_Transmit(&huart1, uart_transmit_buffer, nums_in_tr_buf, 5000);
        }
    }
}
I have made sure no thread is deadlocked. The problem is that HAL_UART_Transmit returns the HAL_BUSY state.
Furthermore, I have dedicated one thread to receiving and parsing information from UART RX and I suspect this might be a cause of the problem. The following is the code:
static void UART_Receive_Thread(void const *argument)
{
    uint32_t count;
    (void) argument;
    int j = 0, word_length = 0;
    for (;;)
    {
        if (uart_line_ready == 0)
        {
            HAL_UART_Receive(&huart1, uart_receive_buffer, UART_RX_BUFFER_SIZE, 0xFFFF);
            if (uart_receive_buffer[0] != 0)
            {
                if (uart_receive_buffer[0] != END_OF_WORD_CHAR)
                {
                    uart_line_buffer[k] = uart_receive_buffer[0];
                    uart_receive_buffer[0] = 0;
                    k++;
                }
                else
                {
                    uart_receive_buffer[0] = 0;
                    uart_line_ready = 1;
                    word_length = k;
                    k = 0;
                }
            }
        }
        if (uart_line_ready == 1)
        {
            //osThreadSuspend(OLEDThreadHandle);
            for (j = 0; j <= word_length; j++)
            {
                UART_RECEIVED_COMMAND[j] = uart_line_buffer[j];
            }
            for (j = 0; j <= word_length; j++)
            {
                uart_line_buffer[j] = 0;
            }
            uart_line_ready = 0;
            RECEIVED_COMMAND = ParseReceivedCommand(UART_RECEIVED_COMMAND);
            if (RECEIVED_COMMAND != _ID_)
            {
                AssignReceivedData(word_length); // Results in uint8_t * RECEIVED_DATA
            }
            //osThreadResume(OLEDThreadHandle);
        }
        // Should be no delay in order not to miss any data..
    }
}
Another cause I suspect could be related to the system's interrupts (also notice the NVIC configuration in the initialization part above):
void USART1_IRQHandler(void)
{
    HAL_UART_IRQHandler(&huart1);
}
Any help or guidance to this issue would be highly appreciated. Thanks in advance.
From what I can see HAL_UART_Transmit would've worked with the F4 HAL (v1.4.2) if it weren't for __HAL_LOCK(huart). The RX thread would lock the handle and then the TX thread would try to lock as well and return HAL_BUSY. HAL_UART_Transmit_IT and HAL_UART_Receive_IT don't lock the handle for the duration of the transmit/receive.
This may cause problems with the State member, as it is updated non-atomically by the helper functions UART_Receive_IT and UART_Transmit_IT, though I don't think it would affect operation here.
You could modify the function to allow simultaneous RX and TX. You'd have to update this every time they release a new version of the HAL.
The problem is that the ST HAL isn't meant to be used with an RTOS. It says so in the definition of the macro __HAL_LOCK. Redefining it to use the RTOS's mutexes might be worth trying, and the same goes for HAL_Delay(), which could use the RTOS's thread-sleep function.
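The effect of __HAL_LOCK can be modeled on the host: an unguarded transmit fails with a busy status when the handle is already taken, while serializing the callers with a mutex makes that failure impossible. This is a toy model, not the HAL's API; all names are illustrative:

```c
#include <pthread.h>
#include <stdatomic.h>

typedef enum { STATUS_OK = 0, STATUS_BUSY = 1 } status_t;

static atomic_flag handle_lock = ATOMIC_FLAG_INIT;  /* models __HAL_LOCK */
static pthread_mutex_t uart_mutex = PTHREAD_MUTEX_INITIALIZER;
static int busy_errors = 0;

/* Fails with STATUS_BUSY if another caller already holds the handle,
   just as HAL_UART_Transmit returns HAL_BUSY when the handle is locked. */
status_t transmit_sim(void) {
    if (atomic_flag_test_and_set(&handle_lock))
        return STATUS_BUSY;
    /* ... shift bytes out ... */
    atomic_flag_clear(&handle_lock);
    return STATUS_OK;
}

/* Wrapping every call in one RTOS-style mutex guarantees no caller
   ever observes STATUS_BUSY. */
void *guarded_sender(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000; i++) {
        pthread_mutex_lock(&uart_mutex);
        if (transmit_sim() == STATUS_BUSY)
            busy_errors++;
        pthread_mutex_unlock(&uart_mutex);
    }
    return NULL;
}
```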
In general though, sending via a blocking function in a thread should be fine, but I would not receive data using a blocking function in a thread. You are bound to experience overrun errors that way.
Likewise, if you put too much processing in the receive interrupt you might also run into overrun errors. I prefer using DMA for reception, followed by interrupts if I've run out of DMA streams. The interrupt only copies the data to a buffer, similarly to the DMA. A processRxData thread is then used to process the actual data.
When using FreeRTOS, interrupt handlers that call the RTOS API must have a numerical priority of 5 or above, because priorities below 5 are reserved for the OS (this is the default configLIBRARY_MAX_SYSCALL_INTERRUPT_PRIORITY; check your FreeRTOSConfig.h).
So change your code to set the priority with:
HAL_NVIC_SetPriority(USART1_IRQn, 5, 0);
The problem turned out to be something to do with blocking statements.
Since UART_Receive_Thread calls HAL_UART_Receive, which blocks until something is received, the HAL handle stays locked (hence the HAL_BUSY state).
The solution was using non-blocking statements without changing anything else.
i.e., using HAL_UART_Receive_IT and HAL_UART_Transmit_IT and dropping the blocking calls worked.
Thanks for all suggestions that lead to this solution.

Runtime Error With Interrupt Timer on Atmega2560

I'm trying to make a loop execute regularly every 50 milliseconds on an ATmega2560. Using a simple delay function won't work, because the total loop time ends up being the time it took to execute the other functions in the loop plus the delay time. This works even less well if your function calls take variable time, which they usually will.
To solve this, I implemented a simple timer class:
volatile unsigned long timer0_ms_tick;

timer::timer()
{
    // Set timer0 registers
    TCCR0A = 0b00000000; // Nothing here
    TCCR0B = 0b00000000; // Timer stopped; start() sets the last three bits to 011 for a prescaler of 64
    TIMSK0 = 0b00000001; // Last bit to 1 to enable the timer0 OVF interrupt
    sei();               // Enable global interrupts
}

void timer::start()
{
    timer0_ms_tick = 0;
    // Set timer value for a 1 ms tick: (250000 ticks/sec) * (1 OVF/250 ticks) = 1000 OVF/sec
    // 256 ticks - 250 ticks = 6 ticks, but starting at 0 means setting to 5
    TCNT0 = 5;
    // Set prescaler and start timer
    TCCR0B = 0b00000011;
}

unsigned long timer::now_ms()
{
    return timer0_ms_tick;
}

ISR(TIMER0_OVF_vect)
{
    timer0_ms_tick += 1;
    TCNT0 = 5;
}
The main loop uses this like so:
unsigned long startTime, now;
while (true)
{
    startTime = startup_timer.now_ms();
    /* Loop Functions */
    // Wait time step
    now = startup_timer.now_ms();
    while (now - startTime < 50)
    {
        now = startup_timer.now_ms();
    }
    Serial0.print(ltoa(now, time_string, 10));
    Serial0.writeChar('-');
    Serial0.print(ltoa(startTime, time_string, 10));
    Serial0.writeChar('=');
    Serial0.println(ltoa(now - startTime, time_string, 10));
}
My output looks like this:
11600-11550=50
11652-11602=50
11704-11654=50
11756-11706=50
12031-11758=273
11828-11778=50
11880-11830=50
11932-11882=50
11984-11934=50
12036-11986=50
12088-12038=50
12140-12090=50
12192-12142=50
12244-12194=50
12296-12246=50
12348-12298=50
12400-12350=50
12452-12402=50
12504-12454=50
12556-12506=50
12608-12558=50
12660-12610=50
12712-12662=50
12764-12714=50
12816-12766=50
12868-12818=50
12920-12870=50
12972-12922=50
13024-12974=50
13076-13026=50
13128-13078=50
13180-13130=50
13232-13182=50
13284-13234=50
13336-13286=50
13388-13338=50
13440-13390=50
13492-13442=50
13544-13494=50
13823-13546=277
13620-13570=50
It seems to work well most of the time, but every once in a while something odd will happen with the timing values. I think it has something to do with the interrupt, but I'm not sure what. Any help would be greatly appreciated.
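One plausible explanation, offered as an assumption rather than something stated in the question: on an 8-bit AVR, reading the 4-byte volatile timer0_ms_tick takes several instructions, and if TIMER0_OVF fires in the middle of now_ms() the result mixes bytes of the old and new counter values. Modeling a read torn at a low-byte rollover reproduces the exact outliers in the output above:

```c
#include <stdint.h>

/* Model of a torn 32-bit read on an 8-bit MCU: the low byte is read
   before the ISR increments the counter, the upper bytes after. */
uint32_t torn_read(uint32_t before, uint32_t after) {
    return (after & 0xFFFFFF00u) | (before & 0xFFu);
}
```

If this is the cause, the fix is to read the counter with interrupts briefly disabled, e.g. wrapping the return inside now_ms() in ATOMIC_BLOCK(ATOMIC_RESTORESTATE) from avr-libc's <util/atomic.h>.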

pic Interrupt on change doesn't trigger first time

I'm trying to communicate between a pic18f24k50 and an Arduino 101. I'm using two lines to establish a synchronized communication: on every change from low to high of the first line, I read the value of the second line.
I have no problems with the Arduino code. My problem is that in the pic, the interrupt-on-change triggers on the second change from low to high instead of on the first. This only happens the first time I send data; after that it works perfectly (it triggers on the first change and I receive the byte properly). Sorry for my English, I'll try to explain myself better with this image:
Channel 1 is the clock signal and channel 2 is the data (I'm sending one byte for the moment, with bit values 10101010). Channel 4 is an output I toggle every time I process a bit (as you can see, it begins on the second rise of the clock signal instead of the first one). This is captured on the first byte sent; the next ones work fine.
I post the relevant code in the pic here:
This is where I initialize things:
TRISCbits.TRISC6 = 0;
TRISCbits.TRISC1 = 1;
TRISCbits.TRISC2 = 1;
IOCC1 = 1;
ANSELCbits.ANSC2=0;
IOCC2 = 0;
INTCONbits.IOCIE = 1;
INTCONbits.IOCIF = 0;
And this is on the interrupt code:
void interrupt SYS_InterruptHigh(void)
{
    if (INTCONbits.IOCIE == 1 && INTCONbits.IOCIF == 1)
    {
        readByte();
    }
}
void readByte(void)
{
    while (contaBits < 8)
    {
        INTCONbits.IOCIE = 0;
        INTCONbits.IOCIF = 0;
        while (PORTCbits.RC1 != HIGH)
        {
        }
        if (PORTCbits.RC1 == HIGH)
        {
            LATCbits.LATC6 = !LATCbits.LATC6;
            //LATCbits.LATC6 = ~LATCbits.LATC6;
            switch (contaBits)
            {
                case 0:
                    if (PORTCbits.RC2 == HIGH)
                        varByte.b0 = 1;
                    else
                        varByte.b0 = 0;
                    break;
                case 1:
                    if (PORTCbits.RC2 == HIGH)
                        varByte.b1 = 1;
                    else
                        varByte.b1 = 0;
                    break;
                case 2:
                    if (PORTCbits.RC2 == HIGH)
                        varByte.b2 = 1;
                    else
                        varByte.b2 = 0;
                    break;
                case 3:
                    if (PORTCbits.RC2 == HIGH)
                        varByte.b3 = 1;
                    else
                        varByte.b3 = 0;
                    break;
                case 4:
                    if (PORTCbits.RC2 == HIGH)
                        varByte.b4 = 1;
                    else
                        varByte.b4 = 0;
                    break;
                case 5:
                    if (PORTCbits.RC2 == HIGH)
                        varByte.b5 = 1;
                    else
                        varByte.b5 = 0;
                    break;
                case 6:
                    if (PORTCbits.RC2 == HIGH)
                        varByte.b6 = 1;
                    else
                        varByte.b6 = 0;
                    break;
                case 7:
                    if (PORTCbits.RC2 == HIGH)
                        varByte.b7 = 1;
                    else
                        varByte.b7 = 0;
                    break;
            }
            contaBits++;
        }
    } //while(contaBits<8)
    INTCONbits.IOCIE = 1;
    contaBits = 0;
}
LATCbits.LATC6 = !LATCbits.LATC6; <-- this is the line corresponding to channel 4.
RC1 is channel 1
and RC2 is channel 2
My question is: what am I doing wrong? Why, for the first byte sent, doesn't the interrupt trigger on the first change of line 1?
Thank you.
What are you trying to achieve?
The communication protocol you're describing is commonly referred to as SPI (Serial Peripheral Interface). You should use the hardware implementation provided by the PIC, if possible, for best performance.
Keep your code documented, formatted, and logical
I noticed your code is a little weird, and weird code gives weird errors.
First:
You're using interrupts, yet you have this blocking code inside your interrupt handler, which isn't nice at all.
Second:
Your contaBits and varByte variables come out of nowhere; they're probably globals, which might not be the best practice.
That said, when using interrupts it can make sense to hand data to your main program through globals; such globals should be declared volatile.
Third:
That switch is the same code eight times over, each case only slightly different.
Also, add some comments to your code.
To be honest
I didn't spot your actual error; it has been a long time since I worked with PICs. But you should try hardware SPI.
Below is my attempt at software SPI as you described. It might be a little more logical, and it takes the annoyance of the interrupts out of the picture.
I would recommend adding a timeout to the busy-waits, e.g. for (unsigned int timeout = 64000; timeout > 0 && CLOCK_PIN_MACRO != 1; timeout--){}, so that you can be sure your program won't hang waiting for an SPI signal that never comes. (Make the timeout something that fits your needs.)
#define CLOCK_PIN_MACRO PORTCbits.RC1
#define DATA_PIN_MACRO  PORTCbits.RC2

/*
    Test, without interrupts.
    I would strongly advise using hardware SPI or non-blocking code.
*/
unsigned char readByte(void){
    unsigned char returnByte = 0;
    for(int i = 0; i < 8; i++){         // Do this for bit 0 to 7, to get a complete byte.
        while(CLOCK_PIN_MACRO != 1){};  // Wait until the clock pin goes high.
        if(DATA_PIN_MACRO == 1)         // Check the data pin.
            returnByte |= (1 << i);     // OR the bit into our byte; & would always give 0 here. (https://www.cs.umd.edu/class/sum2003/cmsc311/Notes/BitOp/bitshift.html)
        while(CLOCK_PIN_MACRO == 1){};  // Wait until the clock pin goes low again.
    }
    return returnByte;
}
void setup(){
    TRISCbits.TRISC6 = 0;
    TRISCbits.TRISC1 = 1;
    TRISCbits.TRISC2 = 1;
    //IOCC1 = 1;
    ANSELCbits.ANSC2 = 0;
    //IOCC2 = 0;
    INTCONbits.IOCIE = 1;
    INTCONbits.IOCIF = 0;
}

void run(){
    unsigned char theByte = readByte();
}

int main(){
    setup();
    while(1){
        run();
    }
}
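One detail in the bit assembly deserves emphasis: each received bit must be ORed into the byte (returnByte |= 1 << i); masking with & instead would leave the accumulator at zero. A host-side check of the LSB-first assembly (names are illustrative):

```c
#include <stdint.h>

/* Assemble a byte LSB-first from eight sampled bit values. */
uint8_t assemble_byte(const int bits[8]) {
    uint8_t b = 0;
    for (int i = 0; i < 8; i++)
        if (bits[i])
            b |= (uint8_t)(1u << i);   /* OR sets bit i without clearing others */
    return b;
}
```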
You can find the following from the datasheet.
The pins are compared with the old value latched on the last read of
PORTB. The "mismatch" outputs of RB7:RB4 are ORed together to generate
the RB Port Change Interrupt with Flag bit,RBIF (INTCON<0>).
This means that when you read the port's IOC pins, the value of each pin is stored in an internal latch; an interrupt is generated only when a pin's input differs from the previously latched value.
So what is the default initial value in the latch, i.e. before we have read anything from the port?
Whatever it is, in your case the default latch value differs from the logic state of your input signal. At the next transition of the signal, the latch value and the signal level become the same, so no interrupt is generated the first time.
What you can do is set the latch to the initial state of your signal by reading the port (in your case PORTC, since your IOC pins are RC1/RC2). This should be done before enabling the interrupt.
The code below is enough to simply read the register.
unsigned char ch;
ch = PORTC;
Hope this will help.