What does "IRQn" mean in STM32 systems? - embedded

I have a question about EXTI0_IRQn.
What does "IRQn" mean?
How do I spell it out word by word?
I've tried to find it on Google but I couldn't.

All microcontroller manufacturers invent some kind of abbreviation system; unfortunately, there is no real standard. But you will get used to it with growing experience, and with the expectations you develop you will quickly interpret such constants in almost the correct sense.
EXTI0_IRQn is made up of multiple parts:
EXT is for "external": The source of this interrupt is an external circuit, which signals its request to interrupt the processor via a GPIO pin.
I is for "interrupt", just to signal that we are talking about interrupts here.
0 is the number of the interrupt line, line 0. It has its own interrupt vector, if I found the correct reference manual.
I is again "interrupt", and in combination with:
RQ for "request" means the signal that requests the interrupt. This abbreviation "IRQ" for "interrupt request" is really common across many microcontrollers.
n stands most probably for "number". As said above, the STM32 has multiple interrupt vectors, collected in an array of jump target addresses. In C you would call these "function pointers". The symbol EXTI0_IRQn is defined as the index into this array.
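In practice you mostly meet EXTI0_IRQn as the argument to the CMSIS NVIC functions. A minimal sketch, assuming a CMSIS device header for your particular STM32 part (stm32f4xx.h here is only an example, substitute yours):

#include "stm32f4xx.h"   /* assumption: the CMSIS device header for your part */

void setup_exti0(void)
{
    /* EXTI0_IRQn is an enum constant (type IRQn_Type) from the device header;
       CMSIS uses it as the index into the NVIC registers and, by extension,
       into the vector table. */
    NVIC_SetPriority(EXTI0_IRQn, 5u);  /* example priority */
    NVIC_EnableIRQ(EXTI0_IRQn);        /* enable external interrupt line 0 in the NVIC */
}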

Related

Are uint16_t and uint32_t interrupt safe in the Cortex-M architecture?

I am working on some embedded stuff. I have multiple interrupts possibly working on the same data, so I was wondering whether the uint16_t and uint32_t data types are interrupt safe.
If an interrupt is working on uint16/32_t data and is halfway through when it is interrupted by another interrupt that reads this data, will that interrupt see corrupted data? Is this a possible scenario?
Thanks
To expand on the answer from @DinhQC, all single-result instructions on 16- and 32-bit data types are 'atomic' with respect to interrupts on the Cortex-M, as long as the data is properly aligned (and you have to try quite hard to get the C compiler to give you unaligned data, because unaligned accesses are slow and need special treatment). Multiple-result operations like LDM and STM can be interrupted and resumed on most implementations, but the integrity of each individual 32-bit transfer within the LDM or STM is guaranteed.
The important thing is to understand whether the operations you're performing are single instructions at the machine level or not. If you're incrementing a shared variable, for example, this will take three instructions: a read, a modify, and a write. If an interrupt occurs in between the read and the write, and the interrupt service routine modifies the same variable, this modification will be overwritten when the ISR returns.
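A minimal sketch of that hazard (the instruction sequence in the comment is approximate and compiler-dependent):

#include <stdint.h>

volatile uint32_t shared_counter;

void increment(void)
{
    /* Compiles to roughly:
         LDR  r1, =shared_counter
         LDR  r0, [r1]      ; read
         ADDS r0, r0, #1    ; modify
         STR  r0, [r1]      ; write
       An ISR that changes shared_counter between the LDR and the STR
       has its update overwritten by the final STR. */
    shared_counter++;
}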
The safe way to go is to use some kind of hardware-supported mechanism to enforce atomicity or mutual exclusion over your shared data. There are more powerful, more flexible and faster approaches to mutual exclusion on the Cortex-M than disabling and re-enabling interrupts, though, notably the STREX and LDREX instructions (which are available in C too). Take a look at my answer to this other question for more information.
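For illustration, a lock-free increment using the CMSIS exclusive-access intrinsics. A sketch, assuming a CMSIS device header; note that LDREX/STREX exist on ARMv7-M (M3/M4/M7) but not on the Cortex-M0:

#include <stdint.h>
#include "stm32f4xx.h"  /* assumption: substitute your device's CMSIS header */

volatile uint32_t shared_counter;

void atomic_increment(void)
{
    uint32_t v;
    do {
        v = __LDREXW(&shared_counter);                 /* load-exclusive sets the monitor */
    } while (__STREXW(v + 1u, &shared_counter) != 0u); /* store-exclusive returns nonzero
                                                          and stores nothing if the monitor
                                                          was lost (e.g. by an interrupt),
                                                          so we simply retry */
}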
Cortex-M processors do not corrupt your data or give it an undefined value; the value will always be deterministic. However, there are many conditions that affect the value of the data when interrupts occur. The uint16/32_t data can be located in memory, or only inside the processor registers. If in memory, it can be 16/32-bit aligned or not. The processor, e.g. M0 or M4, and the operation performed on the data, e.g. add or multiply, also matter. All of those determine whether the instruction used to process the data is atomic or not.
You can find more details in this discussion and this answer by Joseph Yiu.
Generally speaking, if the instruction is atomic (a single execution cycle), an interrupt cannot disturb the data operation. However, at the C code level, a uint16/32_t data operation may take more than one instruction. Therefore, it is hard to guarantee that the program runs as expected. This also applies to uint8_t data. You may wish to disable interrupts before working on the shared data and enable interrupts afterwards. The technique is covered well in this answer (look at point 2).
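A common pattern for that, sketched with the CMSIS PRIMASK intrinsics (assuming a CMSIS device header that provides them):

#include <stdint.h>
#include "stm32f4xx.h"  /* assumption: substitute your device's CMSIS header */

volatile uint16_t shared_value;

uint16_t read_shared(void)
{
    uint32_t primask = __get_PRIMASK();  /* remember whether interrupts were already masked */
    __disable_irq();                     /* set PRIMASK: the global interrupt gate */
    uint16_t copy = shared_value;        /* access the shared data without interference */
    if (primask == 0u)
        __enable_irq();                  /* re-enable only if interrupts were enabled before */
    return copy;
}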

Interrupt vector table: why do some architectures employ a "jump table" vs an "array of pointers"?

On some architectures (e.g. x86) the Interrupt Vector Table (IVT) is indeed what it says on the tin: a table of vectors, aka pointers. Each vector holds the address of an Interrupt Service Routine (ISR). When an Interrupt Request (IRQ) occurs, the CPU saves some context and loads the vector into the PC register, thus jumping to the ISR. So far, so good.
But on some other architectures (e.g. ARM) the IVT contains executable code, not pointers. When an IRQ occurs, the CPU saves some context and executes the vector. But there is no space in between these "vectors", so there is no room for storing the ISR there. Thus each "vector instruction" typically just jumps to the proper ISR somewhere else in memory.
My question is: what are the advantages of the latter approach?
I would kinda understand if the ISRs themselves had fixed, well-known addresses, and were spaced out so that reasonable ISRs would fit in place. Then we would save one level of indirection, though at the expense of some fragmentation. But this "compact jump table" approach seems to have no advantage at all. What did I miss?
Some of the reasons, but probably not all of them (I'm pretty much self educated in these matters):
You can have fall-through (a vector whose exception needs no handling can simply hold no branch and fall through to the next entry in the table).
The FIQ interrupt (Fast Interrupt Request) is the last in the table, and as the name suggests, it's used for devices that need immediate, low-latency processing.
Being last means you can just put the FIQ handler right there (no jumping) and process it as fast as possible. Also, the way FIQ was designed, with its dedicated banked registers, allows for optimal implementation of FIQ handlers. See https://en.wikipedia.org/wiki/Fast_interrupt_request
I think it has to do with simplifying the processor's hardware.
If you have machine instructions (jump instructions) in the vector interrupt table, the only extra thing the processor has to do when it has to jump to an interrupt handler is to load the address of the corresponding interrupt vector in the PC.
Whereas, if you have addresses in the interrupt vector table, the processor must first read the interrupt handler's start address from memory, and then jump to it.
The extra hardware required to read from memory and then write to a register is more complex than the hardware required to just write to a register.
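For contrast, on a Cortex-M (which uses the array-of-pointers scheme) the table can be written directly in C. A minimal sketch, assuming a GNU toolchain and a linker script that defines _estack and places the .isr_vector section at the right address:

#include <stdint.h>

void Reset_Handler(void);
void NMI_Handler(void);
void HardFault_Handler(void);

extern uint32_t _estack;  /* assumption: top-of-stack symbol from the linker script */

/* Entry 0 is the initial stack pointer; every other entry is a plain
   function pointer that the CPU loads straight into the PC. */
__attribute__((section(".isr_vector"), used))
void (* const vector_table[])(void) = {
    (void (*)(void))&_estack,
    Reset_Handler,
    NMI_Handler,
    HardFault_Handler,
    /* ... the remaining exception and device vectors ... */
};

The classic ARM7 table from the question would instead be a row of branch instructions written in assembly, one per exception.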

How to code ARM interrupt functions in C

I am using arm-none-eabi-gcc toolchain, v 4.8.2, on LinuxMint 17.2 64b.
I am, at hobbyist level, trying to play with a TM4C123G board and its usual features (coding various blinkies, uart things...) while always trying to remain as close to the metal as possible, without using other libraries (e.g. CMSIS...) whenever possible. Also no IDE (CCS, Keil...), just Linux terminal windows, the board and I... All that mostly for educational purposes.
The issue : I am stuck trying to implement the usual interrupt functions like :
EnableInt (clearing bit 0, the I bit, of the special register PRIMASK):
CPSIE I
WaitForInt:
WFI
DisableInt:
CPSID I
E.g., I added this function to my .c file for EnableInt:
void EnableInt(void)
{
    __asm(" cpsie i\n");
}
... this compiles but the execution does not seem to work properly (in the simplest blinky.c version, I cannot get any LED action once I have called EnableInt() in the C code). The blinky.c code can be found here.
What would be the proper way to write these interrupt routines in a .c file (ideally without using other libraries, but just setting/clearing bits of the appropriate registers...)?
EDIT: removed the bx lr instructions - but EnableInt() does not seem to work any better - still looking for a solution.
EDIT2: Actually the function EnableInt(), defined as above, is now working. My SysTick_Handler was mapped incorrectly in the interrupt vector table in the startup file (while my original problem was the bx lr instructions, which I removed in EDIT 1).
The ARM Cortex-M4 CPU which your Tiva MCU incorporates basically does not require the software environment to take special action on entry to or exit from an interrupt handler. The only requirement is to use the AAPCS calling standard, which should be the default with gcc when compiling for this CPU.
The CPU is supported by some tightly coupled "core" peripherals provided by ARM. These are standard for most (if not all) Cortex-M3/4 MCUs. MCU vendors can configure some features, but the basic operation is always the same.
To simplify software development, ARM has introduced the CMSIS software standard. This at least consists of some header files which unify access to the core peripherals and the use of special CPU instructions. Among those are intrinsics to manipulate the special CPU registers like PRIMASK, BASEPRI, and CONTROL. Another header provides definitions of the core peripherals and functions to manipulate some of them where a simple access is not sufficient.
So, one of these peripherals supports the CPU in interrupt handling: the NVIC (Nested Vectored Interrupt Controller). This prioritises interrupts against each other and provides the interrupt vector to the CPU, which uses this vector to fetch the address of the interrupt handler.
The NVIC also includes enable bits for all interrupt sources. So, to have an interrupt processed by the CPU, for a typical MCU you have to enable the interrupt in two or three locations (a code sketch follows the list):
PRIMASK/BASEPRI in the CPU: the last line of defense. These are the global interrupt gates. PRIMASK is similar to the interrupt-enable bit in the status register of smaller CPUs; BASEPRI is part of interrupt-priority resolution (just ignore it for the beginning).
The NVIC interrupt-enable bit for each peripheral interrupt source, e.g. timer, UART, SPI, etc. Many peripherals have multiple internal sources tied to one NVIC line (e.g. the UART rx and tx interrupts).
The interrupt-enable bits in the peripheral itself, e.g. the UART rx interrupt, tx interrupt, rx-error interrupt, etc.
Some peripherals might not have internal bits, so the last one might be missing.
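A sketch of those levels for a UART receive interrupt, assuming a CMSIS-style device header (the USART2 names below are examples from an STM32 header; the TM4C equivalents differ):

#include "stm32f4xx.h"  /* assumption: substitute your device's CMSIS header */

void enable_uart_rx_irq(void)
{
    USART2->CR1 |= USART_CR1_RXNEIE;  /* 3) rx-interrupt enable inside the peripheral */
    NVIC_EnableIRQ(USART2_IRQn);      /* 2) enable the interrupt line in the NVIC */
    __enable_irq();                   /* 1) clear PRIMASK, the global gate */
}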
To get things working, you should read the Reference Manual (Family Guide, or similar); then there is often some "programming the Cortex-M4" how-to (e.g. ST has one for the STM32 series). You should also get the documents from ARM (they are available for free download).
Finally you need the CMSIS headers from your MCU vendor (TI here). These should be tailored for your MCU. You might have to provide some #defines.
And, yes, this is quite some stuff to read. But imo it is worth the effort. Alternatively you might start with a book. There are some out there which might be helpful for getting the whole picture first (which is really hard to get from the individual documents - yet possible).
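On the original question of how to write the helpers in plain C with gcc, a minimal sketch using GCC extended asm:

/* The volatile qualifier keeps gcc from removing or reordering the
   instruction; the "memory" clobber additionally stops it from moving
   accesses to shared data across the enable/disable point. */
static inline void EnableInt(void)  { __asm volatile ("cpsie i" ::: "memory"); }
static inline void DisableInt(void) { __asm volatile ("cpsid i" ::: "memory"); }
static inline void WaitForInt(void) { __asm volatile ("wfi"); }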

Z80 Multibyte Commands in IM0

I'm trying, just for fun, to design a more complex Z80 CP/M system with a lot of peripheral devices. While reading the documentation I stumbled over an (undocumented?) behaviour of the Z80 CPU when accepting an interrupt in IM0.
When an interrupt occurs, the Z80 activates M1 and IORQ to signal the external device: "Hey, give me an opcode". All is well if the opcode is rst 00 or something like that. But the documentation says that ANY opcode of any instruction can be given to the CPU, for instance a CALL.
But now comes the undocumented part: "The first byte of a multi-byte instruction is read during the interrupt acknowledge cycle. Subsequent bytes are read in by a normal memory read sequence."
A "normal memory read sequence". How can I determine, if the CPU wants to get a byte from memory or instead the next byte from the device?
EDIT: I think, I found a (good?) solution: I can dectect the start of the interrupt acknowlegde cycle by analyzing IORQ and M1. Also I can detect the next "normal" opcode fetch by analyzing MREQ and M1. This way I can install a flip-flop triggered by these two ANDed signals, i.e. the flip-flop is 1 as long as the CPU reads data from the io-device. This 1 I can use to inhibit the bus drivers to and from the memory.
My intentions? I'm designing an interrupt controller with 8 prioritized inputs in a CPLD. It's registers hold a 16 bit address for each interrupt pin. Just for the fun :-)
My understanding is that the peripheral device is required:
to know how many bytes it needs to feed;
to respond to normal read cycles following the IORQ cycle; and
to arrange that whatever would normally respond to memory read cycles does not do so for the duration.
Also the behaviour was documented by Zilog in an application note, from which your quote originates (presumably uncredited).
In practice I guess 99.99% of IM0 users just use an RST and 99.99% of the rest use a known-size instruction like CALL xxxx.
(I'm also aware of a few micros that effectively guaranteed not to put anything onto the bus during an interrupt acknowledge cycle; with open-collector outputs the bus then floats high, the CPU reads 0xFF, i.e. RST 38h, and IM0 effectively becomes a synonym for IM1.)
The interrupt behavior is reasonably documented in the Z80 manual:
Interrupt modes: IM 2 lets the device supply an 8-bit value which, combined with the I register as the high byte, forms a 16-bit pointer into a table from which the CPU fetches the 16-bit handler address (e.g. with I = 0x80 and the device supplying 0x10, the handler address is read from 0x8010/0x8011). That gets you at least halfway to a full 16-bit direct address.
How to set the interrupt modes
My understanding is that the M1 + IORQ combination is used because there was no pin left for a dedicated interrupt response. A fun detail is also that the Zilog I/O chips like the PIO, SIO and CTC watch for the RETI instruction (as the CPU fetches it) to learn that the CPU is ready to accept another interrupt.

How is the Vectored Interrupt Controller (VIC) used to handle external interrupts efficiently?

I want to know how the VIC can handle external interrupts efficiently.
A little background (you tagged "arm7", so presumably this question isn't about the Cortex NVIC, etc.)
Initially, ARM processors supported 2 types of interrupts: normal interrupts (IRQ) and Fast Interrupts (FIQ). Each peripheral which could interrupt the CPU would either trigger an IRQ or a FIQ. IRQ has a single vector, FIQ has a single vector.
Sometimes the mapping from peripheral to IRQ/FIQ is done in hardware, sometimes it's configurable. But the point is that as soon as you have >2 peripheral interrupts, they have to share an interrupt vector. In other words, if you have 3 interrupt sources, you are guaranteed at least one of IRQ or FIQ will be used by multiple devices. This implies that when you take the interrupt, you have to "poll" (usually hardware registers) to find out "why am I here? who interrupted me?"
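A sketch of what that polling dispatch looks like in software (all register addresses and bit names below are hypothetical, purely for illustration):

#include <stdint.h>

#define INT_STATUS_REG  (*(volatile uint32_t *)0x40000000u)  /* made-up address */
#define UART_PENDING    (1u << 0)
#define TIMER_PENDING   (1u << 1)

static void uart_isr(void)  { /* handle the UART */ }
static void timer_isr(void) { /* handle the timer */ }

/* The single shared IRQ handler: read the status register once,
   then dispatch to the real handlers in software. */
void shared_irq_handler(void)
{
    uint32_t pending = INT_STATUS_REG;
    if (pending & UART_PENDING)  uart_isr();
    if (pending & TIMER_PENDING) timer_isr();
}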
The whole idea of the VIC is that each interrupt has its own unique vector, so that when you vector to that interrupt slot, you know exactly who is interrupting you. No polling "OK, who interrupted me?"
There is a lot more information about the ARM VIC (and its many variants) at ARM's site, including configuration info, register definitions, nested/prioritized interrupts, etc. but your question asked specifically about how the VIC handles interrupts efficiently. Describing every detail of its features is way outside the scope of this question.
(I interpreted "efficiently" as "with as little polling/interrogating as possible". Note that prioritized interrupts, also supported by the VIC, reduce the latency of high-priority interrupts, and that might also be considered "more efficient", although I don't really put it in the same category as not having to poll "who interrupted me?")
More info on the PrimeCell VIC can be found here at ARM's site.