What are some of the factors involved in handling multiple near-simultaneous interrupts?

I know that you can either disable interrupts while an interrupt is being processed, or you can use a priority scheme. But what happens when the two arrive too close to one another?

Even if, theoretically, multiple interrupts occur at physically the same time, interrupt lines are sampled at the processor's clock rate, and the multiple pending interrupt requests are then handled according to predefined or preconfigured interrupt priorities.
This is usually done by selecting the request with the highest priority and keeping the interrupt service routine itself from being interrupted, but some processors take it further: they store context for each interrupt and make lower-priority ISRs themselves preemptible by higher-priority interrupt requests.


Disable interrupt to let FreeRTOS run on STM32

I'm working on a project where I acquire digital samples continuously through DMA on an STM32F4. The DMA generates a transfer-complete callback interrupt after every sample, in which I do some DSP. My plan was to let FreeRTOS work on other tasks while the DMA waits for the next sample, but the DMA generates the callback so frequently that FreeRTOS never gets to run. I want the FreeRTOS tasks to be allowed to run for 6 ms after every DMA complete callback. I thought of calling __disable_irq() from the complete callback and __enable_irq() from one of the tasks, but that would not guarantee 6 ms, and I also have a high-priority button interrupt. I also tried disabling just the DMA interrupt by calling __set_BASEPRI(priority<<(8-__NVIC_PRIO_BITS)) and then starting a 6 ms timer; in the timer's period-elapsed callback I call __set_BASEPRI(0) to re-enable the DMA interrupt. But for some reason this did not allow FreeRTOS to run at all; execution just goes back and forth between the DMA complete callback and the timer period-elapsed callback.
I am new to embedded programming, so any comment on this will help. Thank you.
You should not think of the DSP process as separate from the RTOS tasks; do the DSP in an RTOS task. The signal processing is the most time-critical aspect of your system: you have to process the data as fast as it arrives, with no loss.
If the DSP is being done in an interrupt context and starving your tasks, then clearly you are doing too much work in the interrupt context, and have too high an interrupt rate. You need to fix your design for something more schedulable.
If your DMA transfers are single samples, you will get one interrupt per sample - the ADC will do that on its own; so using DMA in that manner offers no advantage over direct ADC interrupt processing.
Instead you should use block processing: DMA a block of, say, 80 samples cyclically, for which you get a half-transfer interrupt at 40 samples and a full-transfer interrupt at 80 samples. In each interrupt you might then trigger a task event or semaphore to defer the DSP processing to a high-priority RTOS task. This achieves two things:
For the entirety of the n sample block acquisition time, the RTOS is free to:
be performing the DSP processing for the previous block,
use any remaining time to process the lower priority tasks.
Any interrupt overhead spent on context switching etc. is incurred once per half-block rather than once per sample, allowing more time for core signal processing and background tasks.
Apart from reducing the number of interrupts and the software overhead, the signal processing algorithms themselves can be optimised more readily when performing block processing.
A variation on the above is that, rather than triggering a task event or semaphore from the DMA interrupt handler, you could place each new sample block in a message queue, which then provides some buffering. This is useful if the DSP processing is less deterministic, so that it cannot always be guaranteed to complete one block before the next is ready. Overall, however, it remains necessary that on average you complete block processing in the time it takes to acquire a block, with time to spare for other tasks.
If your lower priority tasks are still starved, then the clear indication is that your DSP process is simply too much for your processor. There may be scope for optimisation, but that would be a different question.
Using the suggested block-processing strategy, I have in the past migrated an application from a TI C2000 DSP running at 200 MHz and 98% CPU load to a 72 MHz STM32F1xx at 60% CPU load. The performance improvement is potentially very significant if you get it right.
With respect to your "high-priority" button interrupt, I would question your priority assignment. Buttons are operated manually, with human response and perception times measured in tens or even hundreds of milliseconds. That is hardly your time-critical task, whereas missing one of the ADC samples arriving every few microseconds would cause your signal processing to go seriously awry.
You may be making the mistake of confusing "high priority" with "important". In the context of a real-time system, they are not the same thing. You could simply poll the button in a low-priority task, or, if you use an interrupt, the interrupt should do no more than signal a task (or, more realistically, trigger a de-bounce timer; see Rising edge interrupt triggering multiple times on STM32 Nucleo for an example).

Handle interrupts with same priority

If I have a set of peripherals with equal priority in an AVR microcontroller, does the microcontroller use round-robin arbitration to decide which one gets to interrupt the CPU?
If not, how does it manage interrupts with the same priority that happen at the same time?
It depends.
For example, "classical" AVR microcontrollers have a simple one-level interrupt controller. That means that when an interrupt handler is running, the global interrupt flag in SREG is cleared, blocking any other interrupt from running. The RETI instruction sets this flag again, and after one instruction of the main code has executed, the next interrupt is ready to be serviced.
When several interrupt requests are asserted simultaneously, only the one with the lowest interrupt vector address is chosen.
For example, refer to the ATMega328P datasheet (section 6.7 Reset and Interrupt Handling, page 15):
The lower the address the higher is the priority level.
Thus, if an interrupt request flag is not cleared, or is reasserted before the interrupt handler returns, the same interrupt will run again, and interrupt handlers with higher vector addresses might never be executed.
But the newest versions of the architecture have a more advanced interrupt controller, which makes it possible to enable round-robin scheduling and to assign one of the interrupts a higher level (allowing it to execute even while another interrupt handler is running).
For example in ATmega3208 (refer to the datasheet, section 12. CPU Interrupt Controller):
All interrupt vectors other than NMI are assigned to priority level 0 (normal) by default. The user may override this by assigning one of these vectors as a high priority vector. The device will have many normal priority vectors, and some of these may be pending at the same time. Two different scheduling schemes are available to choose which of the pending normal priority interrupts to service first: Static and round robin
So, the answer is: carefully read the datasheet on the part you're working with.
Section 9 of the ATmega328PB datasheet is entitled "AVR CPU Core" and it says:
All interrupts have a separate interrupt vector in the interrupt vector table.
The interrupts have priority in accordance with their interrupt vector position. The lower the interrupt vector address, the higher the priority.

How would an ISR know which pin caused the interrupt?

Interrupts can be enabled for a specific pin(s) on a digital I/O port, correct? How would the ISR determine which pin caused the interrupt?
Because the vector table has only one slot for the Port1 ISR, the same ISR function gets called no matter which input pin on Port1 needs attention... unless I'm wrong.
As other people have suggested in comments, this can be MCU-dependent, but for ARM (the core behind the MSP432) the answer is generally that it doesn't know; it has to look for it.
ARM has a vectored interrupt system, which means that every source has its own interrupt vector, so the CPU can easily find out which source is triggering the interrupt. So far so good.
But a single device can trigger multiple interrupts, like the GPIO port you mention. In this case the CPU knows which port triggered the interrupt, so it fires that port's ISR, but then it is the ISR's responsibility to poll the device registers to figure out the exact interrupt source. There are many peripherals with multiple interrupt sources: timers and DMA controllers, to name a few.
This is exactly why peripherals normally have an interrupt enable bit that lets them trigger interrupts at all, but also bit masks that control what exactly can trigger that interrupt internally.
Also have a look at this link for an in-action example, especially its ISR, which does exactly what is described above.
In a typical MCU, there are hundreds, or at a stretch even thousands of potential interrupt sources. Depending on the application, only some will be important, and even fewer will be genuinely timing critical.
For a GPIO port, you typically enable only the pins which are interesting to generate an interrupt. If you can arrange only one pin of a port to be generating the interrupt, the job is done, your handler for that port can do the work, safely knowing that it will only be called when the right pin is active.
When you care about the cause within a single peripheral, and don't have the luxury of individually vectored handlers, you need to fall back on the 'non vectored' approach, and check the status registers before working out which eventual handler function needs to be called.
Interestingly, you can't really work out which pin caused the interrupt: all you can see is which pins are still active once you get round to polling the status register. If you care about the phasing between two pulses, you may not be able to achieve this discrimination within a single GPIO port unless there is dedicated hardware support. Even multiple exception vectors wouldn't help, unless you can be sure that the first exception is always taken before the second pin could become set.

PIC32, Differences between Interrupts

What is the difference between INTDisableInterrupts(), INTEnableSystemMultiVectoredInt(), and asm volatile ("di")?
On PIC32, there are "normal" interrupts and "vectored" interrupts. If you aren't familiar with the PIC32, "vectored" means that each interrupt has its own interrupt handler function: you can have one function for one UART's interrupt and another function for a different UART's interrupt, and so on.
You do not have to put everything in a 'high priority' and a 'low priority' interrupt anymore.
So :
INTDisableInterrupts() simply disables the interrupts; it calls "di".
"di" simply disables the interrupts, in assembler.
INTEnableSystemMultiVectoredInt() tells your PIC32 to use a different function for each of your interrupts. If you did not provide interrupt handler functions for each of your interrupts, it will seem as if they are disabled. Your interrupts are NOT disabled, however, and if you write a handler for a vectored interrupt, your PIC will use it.
UPDATE:
@newb7777, to answer your question:
If you have only one interrupt (not vectored), then you have one big function that must check all the interrupt flag registers to work out what caused the interrupt and run the right code.
If you have vectored interrupts, then the PIC behaves like most processors (almost all of them have vectored interrupts). When something happens that should generate an interrupt, a flag register changes value, for instance one that might be called "UART_1_Rx_Received". Before executing the next instruction, the processor sees that this flag is on, and if the corresponding interrupt enable register and the global interrupt enable register are both ON, the interrupt function is called. Note that all interrupts also have a priority: a running high-priority interrupt will never be interrupted by an interrupt of lower or equal priority, while a running low-priority interrupt can be preempted by a higher-priority one.
You should not lose interrupts, however: if a byte arrives from the UART and would generate a low-priority interrupt while a higher-priority interrupt is running, the flag will still be set, and when the higher-priority interrupt ends, the lower-priority one is executed.
Why do we disable interrupts, then? The main reasons are:
- The interrupt changes the value of a variable. If the code loops:
for (i = 0; i != BufferSize; i++)
and your interrupt changes the value of BufferSize while this loop executes, then the loop could run far longer than intended (if BufferSize changes from 100 to 2 while i has the value 99, then i will not get back to 2 until it wraps around). You may want to disable interrupts before running such a loop.
Another reason could be that you want to execute something where timing is important.
Yet another is that sometimes the MCU needs you to execute a few instructions in a specific order to unlock something that would be dangerous to trigger by mistake, and you don't want an interrupt in the middle of that sequence.
If you have a circular buffer that receives bytes from an interrupt and you poll that buffer from the main code, then you want to disable interrupts before removing an item from the buffer, to make sure the values don't change while you read them.
There are many reasons to disable interrupts; also keep in mind that global variables shared between interrupt and non-interrupt code should be declared "volatile".
One last thing to answer your question: if you get an interrupt for every byte that arrives on your UART at 115,200 baud, and your interrupt function takes a long time to execute, then it is possible to miss a byte or two. If you are lucky there is a hardware buffer that lets you catch up, but it is also possible that there isn't, and you would lose bytes on your communication port. Interrupts must always be as short as possible: when you can, set a flag in the interrupt and do the processing in your main loop outside the interrupt. When you have multiple interrupt levels, use high priority for interrupts that can trigger often, and low priority for interrupts that process for a long time.

Passing parameters between interrupt handlers on a Cortex-M3

I'm building a light kernel for a Cortex-M3.
From a high priority interrupt I'd like to invoke some code to run in a lower priority interrupt and pass some parameters along.
I don't want to use a queue to post work to the lower priority interrupt.
I just have a buffer and size to pass to it.
In the programming manual it says that the SVC interrupt handler is synchronous, which presumably means that if you invoke it from an interrupt of lower priority than SVC's handler, it gets called immediately (the upshot being that you can pass parameters to it as though it were a function call, a little like the BIOS calls in MS-DOS).
I'd like to do it the other way: passing parameters from a high priority interrupt to a lower priority one (at the moment I'm doing it by leaving the parameters in a fixed location in memory).
What's the best way to do this (if at all possible)?
Thanks,
I'm not familiar with the Cortex-M3 architecture, but I'm sure what you need is a locking mechanism on the shared memory.
The higher-priority interrupt can interrupt the lower-priority processing at any time (unless you are somehow synchronizing this with the hardware and are guaranteed it won't happen, which is probably not the case).
The locking mechanism may be as simple as a one-bit flag accessed within a critical section (disabling interrupts around the read-modify-write on the flag) to guarantee an atomic exchange: if the lower-priority interrupt is accessing or updating the locking flag, the higher-priority interrupt cannot come in and change it underneath it. The flag is then the synchronization mechanism for reading and writing the shared memory, allowing each side to lock out the other while it is accessing the shared resource, without disabling interrupts for an extended time. (If the shared memory access is quick enough, you could instead just disable interrupts while you access the shared memory directly.)