Is it recommended to change interrupt priority settings? I know that, for example, the Texas Instruments MSP430 has a hard-wired vector table, so it is not possible to change them. Some architectures support static or dynamic priority selection, but as far as I know it is not recommended. What are the disadvantages of changing the priorities?
Interrupt priority affects the scheduling and pre-emption of interrupts; this can be critical in some hard real-time systems with deadlines on the order of microseconds or less. Since Linux is generally unsuited to such applications in any case, if interrupt priority matters to the correct operation of your application you probably would not use Linux.
I have a program that should run periodically on an MCU (e.g. an STM32), say every 1 ms. If I program a 1 ms ISR and call my complete program from it, assuming it will not exceed 1 ms, is that a good approach? Are there any problems I could be facing? Will it be precise?
The usual answer would probably be that ISRs should generally be kept to a minimum and that most of the work should be performed in the "background loop" (since you are apparently using the traditional "foreground-background" architecture, a.k.a. "main+ISRs").
But ARM Cortex-M (e.g., STM32) has been specifically designed so that ISRs can be written as plain C functions. In that case, working with ISRs is no different from working with any other C code. This includes ease of debugging.
Moreover, ARM Cortex-M comes with the NVIC (Nested Vectored Interrupt Controller). The NVIC allows you to prioritize interrupts and they can preempt each other. This means that you can quite easily build sets of periodic "tasks" (ISRs), which run under the preemptive, priority-based scheduler. Interestingly, this scheduler (implemented in the NVIC hardware) meets all requirements of RMA/RMS (Rate Monotonic Analysis/Scheduling), so you could prove the schedulability of your system. Of course, the ISR "tasks" cannot block internally, but this is not required for RMA/RMS. Also, if you have any shared resources between ISRs running at different priorities (or ISR and the background loop), you need to properly protect the resources by disabling interrupts.
So, your idea of using ISRs as "tasks" makes a lot of sense to me. Your system will be optimal, meaning that any other approach would be less efficient. (This includes the use of any kind of RTOS.) Also, this design can be low-power, because you can use the "background loop" in main() to put your CPU and peripherals into a low-power sleep mode (WFI instruction, etc.). In fact, you can view the "background loop" as the idle "task" in this "hardware-RTOS".
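As a minimal sketch of this structure, assuming a CMSIS-style environment on an STM32 (SysTick_Config, SystemCoreClock and __WFI come from the CMSIS headers; the device header name and run_1ms_task are placeholders):

```c
#include "stm32f4xx.h"        /* placeholder: use your part's CMSIS device header */

static void run_1ms_task(void)
{
    /* your complete periodic program goes here; it must finish in < 1 ms */
}

void SysTick_Handler(void)    /* the periodic "task" runs as an ISR */
{
    run_1ms_task();
}

int main(void)
{
    SysTick_Config(SystemCoreClock / 1000u);  /* 1 kHz -> a tick every 1 ms */

    for (;;) {
        __WFI();   /* background loop as the idle "task": sleep until an IRQ */
    }
}
```

Because the tick is generated in hardware, the period stays precise as long as the handler always completes before the next tick.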
If a semaphore implementation disables interrupts, will this cause other operations, such as receiving data over SPI, to become corrupted?
Disabling interrupts cannot corrupt the data on the hardware interface.
The problem is that if data is received by the hardware peripheral, which then raises an interrupt so the processor collects the data, that collection will be delayed. If it is delayed for too long, more data may arrive in the meantime. Depending on the peripheral, either the new data or the old data will have to be discarded. Either way, the stream of data will be incomplete.
In most cases it is difficult to predict or test how long it is safe to disable interrupts for, so if possible it is best to avoid turning interrupts off.
If the peripheral includes a FIFO buffer, then the length of time that it is safe to disable interrupts for may be increased (although still difficult to predict).
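When a short critical section is unavoidable, the usual mitigation is to save and restore the interrupt state and keep the window as small as possible. A minimal Cortex-M sketch, assuming the CMSIS intrinsics __get_PRIMASK, __disable_irq and __set_PRIMASK are available:

```c
#include <stdint.h>
#include "stm32f4xx.h"      /* placeholder: any CMSIS device header */

static volatile uint32_t shared_counter;

void increment_shared(void)
{
    uint32_t primask = __get_PRIMASK();  /* remember the current interrupt state */
    __disable_irq();                     /* enter the critical section */
    shared_counter++;                    /* only a few cycles with interrupts off */
    __set_PRIMASK(primask);              /* restore: re-enables only if they were on */
}
```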
Most modern microcontrollers have many ways to avoid disabling interrupts:
A better approach is to have the peripheral transfer the data to memory with DMA, so no interrupt is required at all.
Most modern processor cores provide ways to implement a semaphore that do not need to disable interrupts at all (see the sketch below).
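For instance, a counting semaphore can be sketched with C11 atomics, which on cores with exclusive load/store instructions (e.g. LDREX/STREX on Cortex-M3 and later) compile to lock-free code instead of disabling interrupts; the type and function names here are illustrative only:

```c
#include <stdatomic.h>
#include <stdbool.h>

typedef struct { atomic_int count; } lockfree_sem_t;   /* illustrative type */

static bool sem_try_take(lockfree_sem_t *s)
{
    int c = atomic_load(&s->count);
    while (c > 0) {
        /* CAS: succeeds only if count is still c; on failure c is reloaded */
        if (atomic_compare_exchange_weak(&s->count, &c, c - 1))
            return true;
    }
    return false;    /* no token available */
}

static void sem_give(lockfree_sem_t *s)
{
    atomic_fetch_add(&s->count, 1);
}
```

Note that on cores without exclusive access instructions (e.g. Cortex-M0) the compiler may fall back to briefly disabling interrupts anyway.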
There's no standard way of implementing a semaphore. Disabling all interrupts on the MCU is one way to do it, but it's a very poor, amateurish way, because in more complex applications with multiple interrupts it makes all real-time considerations and calculations a nightmare.
It creates subtle but severe bugs, particularly when some quack has done so from deep inside some driver code. You import the driver into your project and suddenly previously working code breaks. In particular, be very careful about using the various libs provided by silicon vendors - they are often of very poor quality.
There are better ways to do it, including:
Ensuring atomic access to shared variables, which can only be done with inline assembler or C11 _Atomic, if supported.
Disabling one specific interrupt for a specific hardware peripheral, if it is possible to do so given the real-time considerations. This should then be handled by the driver for that hardware peripheral, in the form of setter/getter functions.
Use a "poor man's semaphore" in the form of a plain flag variable, by relying on the interrupt mechanism of the MCU blocking all other interrupts while the ISR is executing. Example.
Can software interrupts do some of what hardware interrupts do?
Can the system detect things like a power failure and then rely only on software interrupts?
If so, then we wouldn't need special hardware like interrupt controllers.
While this may be technically possible, I doubt you'll end up with a system that's stable or even reliable. Interrupts are especially important as hardware because they, well, interrupt the processing of other tasks asynchronously. This allows physical components, at their lowest level, to respond quickly and correctly to events.
Let's play out the scenario you mention and imagine a component on the motherboard detecting a power failure. Without an interrupt, the best it can do is write to a register or cache. It must then rely on another piece of hardware or even the operating system to check that value. This basically means periodic polling, which is not as efficient. Furthermore, if a long-running task is currently hogging the resources needed to check the value, you have no deterministic way of knowing when that check might occur. It could be near-instant, or it could be a second from now. If it's the latter, your computer loses power and shuts down before it can react.
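A sketch of what that polling fallback looks like, with a hypothetical status register address and flag bit; the detection latency depends entirely on how long do_regular_work() runs between checks:

```c
#include <stdint.h>

#define PWR_FAIL_FLAG  (1u << 3)    /* hypothetical power-failure bit */

static volatile uint32_t * const PWR_STATUS =
    (volatile uint32_t *)0x40007000u;    /* hypothetical register address */

void do_regular_work(void);       /* hypothetical: the system's normal workload */
void handle_power_failure(void);  /* hypothetical: emergency shutdown path */

void main_loop(void)
{
    for (;;) {
        do_regular_work();                  /* may run arbitrarily long */
        if (*PWR_STATUS & PWR_FAIL_FLAG)    /* the only place we ever check */
            handle_power_failure();         /* may already be too late */
    }
}
```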
Does the RTOS or the processor play the major role in determining the context switch time? What is the share between these two factors?
Can anyone comment with respect to the uC/OS-II RTOS?
I would say both are significant, but it is not really as simple as that:
The actual context switch time is simply a matter of the number of instruction cycles required to perform the switch; like anything in software, it may or may not be coded efficiently. On the other hand, all other things being equal, a processor with a large register set will require more instruction cycles to save the context; but having a large register set may make other code far more efficient.
A processor may also have an architecture that directly supports fast context switching. For example, the lowly 8-bit 8051 has four duplicate register banks, so a context switch is little more than a register bank switch (so long as you have no more than four threads), and given that Silicon Labs produce 8051-based devices running at 100 MIPS, that could be very fast indeed!
More sophisticated processors and operating systems may use an MMU to provide thread memory protection; this adds context switch overhead, but with benefits that may outweigh it. Also, of course, such processors generally have high clock rates, which helps.
So all in all, the processor speed, the processor architecture, the quality of the RTOS implementation, and the functionality provided by the RTOS may all affect context switch time. But in the end the easiest way to improve switch time is almost certainly to increase the clock rate.
Although it is nice to have more headroom, if context switch time is a make or break issue for your project on any reputable RTOS you should consider the suitability of either your hardware or your design. You should aim toward a design that minimises context switches. For example, if an ADC conversion takes 6us and a context switch takes 20us, then you would do better to busy-wait than to use a conversion-complete interrupt; better yet use DMA transfers to avoid context switches on single data items where possible.
The uC/OS-II RTOS is written in C, with some very specific sections (typically in assembly) for processor-specific handling. The context switching is part of the sections that are specific to the processor.
So the context switch time will be very dependent on the processor selected and on the specific sections used to port uC/OS-II to that processor. I believe all the source code is available, so you should be able to see how much code is involved in a context switch. I also think uC/OS-II has callbacks (hooks) that may allow you to add some performance-measuring code.
Just to expand on what Clifford was saying, context switching time also depends on the conditions that trigger the context switch, so mainly it depends on the benchmark.
Depending on the RTOS implementation, in some cases it's possible to switch directly to the first waiting process bypassing the scheduler altogether.
This of course gives a huge boost in some benchmarks.
For example, we ran a benchmark that measures the overhead (in µs) required to deliver a signal and switch to the high-priority process, varying the kernel configuration and the target architecture:
http://www.bertos.org/discover/context-switch-overhead
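As a sketch of how such a signal-and-switch measurement can be set up on uC/OS-II, using the Cortex-M DWT cycle counter: the uC/OS-II calls (OSSemPend, OSSemPost, OSTimeDly) are the standard API, while record() and the device header are placeholders:

```c
#include "ucos_ii.h"              /* uC/OS-II API */
#include "stm32f4xx.h"            /* placeholder CMSIS device header */

void record(uint32_t cycles);     /* placeholder result logger */

static OS_EVENT *wake_sem;        /* created elsewhere with OSSemCreate(0) */
static volatile uint32_t t_post;  /* cycle count at the moment of the post */

static void cyccnt_init(void)     /* run once before the measurement */
{
    CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk;   /* enable the DWT unit */
    DWT->CYCCNT = 0u;
    DWT->CTRL  |= DWT_CTRL_CYCCNTENA_Msk;             /* start the cycle counter */
}

/* High-priority task: wakes on the semaphore and measures the latency
 * from the post to the moment it is running again. */
static void hi_task(void *p_arg)
{
    INT8U err;
    (void)p_arg;
    for (;;) {
        OSSemPend(wake_sem, 0, &err);
        record(DWT->CYCCNT - t_post);
    }
}

/* Low-priority task: timestamps, then posts; because hi_task has higher
 * priority, the post triggers an immediate preemptive switch. */
static void lo_task(void *p_arg)
{
    (void)p_arg;
    for (;;) {
        OSTimeDly(10);            /* pace the measurements */
        t_post = DWT->CYCCNT;
        OSSemPost(wake_sem);
    }
}
```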
I have an idea of how interrupts are handled by a dual core CPU. I was wondering about how interrupt handling is implemented on a board with more than one physical processor.
Is any of the interrupt responsibility determined by the physical board's configuration? Each processor must be able to handle some types of interrupt, such as disk I/O. Or is there some circuitry to manage and dispatch interrupts to the appropriate processor? My guess is that the scheme must be processor-neutral, so that any processor or core can run the interrupt handler.
If a core is waiting on a disk read, will that core be the one to run the interrupt handler when the disk is ready?
On x86 systems each CPU gets its own local APIC (Advanced Programmable Interrupt Controller) which are also wired to each other and to an I/O APIC that handles routing device interrupts to the local APICs.
The OS can program the APICs to determine which interrupts get routed to which CPUs (or to let the APICs make that decision).
I imagine that a multi-core CPU would have a local APIC for each core, but I'm honestly not certain about that.
See these links for more details:
http://osdev.berlios.de/pic.html
http://www.microsoft.com/whdc/archive/io-apic.mspx
http://en.wikipedia.org/wiki/Intel_APIC_Architecture
What you're interested in is SMP processor affinity. Here is an excellent article about how it is handled in Linux. The Advanced Programmable Interrupt Controller (APIC) is how you manage this in a modern system. Basically, the default would be for all interrupts to go to processor 0 unless the OS uses this interface to set things up properly. Also, you don't necessarily want the core that issued a command to be the one waiting on a particular interrupt; you want a less-loaded core to receive it.
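On Linux, this routing is exposed through /proc/irq/&lt;n&gt;/smp_affinity, which holds a CPU bitmask per interrupt; a small sketch that pins a hypothetical IRQ 19 to CPU 1 (requires root):

```c
#include <stdio.h>

int main(void)
{
    /* IRQ 19 is only an example; check /proc/interrupts for real numbers */
    FILE *f = fopen("/proc/irq/19/smp_affinity", "w");
    if (f == NULL) {
        perror("fopen");          /* typically fails without root privileges */
        return 1;
    }
    fputs("2", f);                /* bitmask 0x2 = route to CPU 1 only */
    fclose(f);
    return 0;
}
```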
I already asked this question a while back. Maybe it can offer you some insight :)
how do interrupts in multicore/multicpu machines work
I would say that it would depend on the hardware manufacturer...
However, this link makes me believe most are probably handled by the primary processor and/or the first core.