How to code ARM interrupt functions in C - embedded

I am using arm-none-eabi-gcc toolchain, v 4.8.2, on LinuxMint 17.2 64b.
I am, at hobbyist level, trying to play with a TM4C123G board and its usual features (coding various blinkies, UART things...), while staying as close to the metal as possible and avoiding other libraries (e.g. CMSIS...) whenever possible. Also no IDE (CCS, Keil...), just Linux terminal windows, the board and I... All that mostly for education purposes.
The issue: I am stuck trying to implement the usual interrupt functions like:
EnableInt (clearing bit 0, the I bit, of the special register PRIMASK):
CPSIE I
WaitForInt:
WFI
DisableInt:
CPSID I
E.g., I added this function to my .c file for EnableInt:
void EnableInt(void)
{
    __asm(" cpsie i\n");
}
... this compiles but the execution does not seem to work properly (in the simplest blinky.c version, I cannot get any LED action once I have called EnableInt() in the C code). The blinky.c code can be found here.
What would be the proper way to write these interrupt routines in a .c file (ideally without using other libraries, but just setting/clearing bits of the appropriate registers...)?
EDIT: removed the bx lr instructions, but EnableInt() does not seem to work any better; still looking for a solution.
EDIT2: The function EnableInt(), defined as above, is now working. My SysTick_Handler was mapped incorrectly in the interrupt vector table in the startup file (my original problem was the bx lr instructions, which I removed in EDIT 1).
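For reference, a minimal working set of these wrappers for arm-none-eabi-gcc can be sketched as below; the volatile qualifier keeps the optimizer from removing the instructions, and as inline functions they need no explicit bx lr:

static inline void EnableInt(void)  { __asm volatile ("cpsie i"); }
static inline void DisableInt(void) { __asm volatile ("cpsid i"); }
static inline void WaitForInt(void) { __asm volatile ("wfi"); }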

The ARM Cortex-M4 CPU which your Tiva MCU incorporates basically does not require the software environment to take special action on entry to or exit from an interrupt handler. The only requirement is to use the AAPCS calling standard, which should be the default with gcc when compiling for this CPU.
The CPU is supported by some tightly coupled "core" peripherals provided by ARM. These are standard for most (if not all) Cortex-M3/4 MCUs. MCU vendors can configure some features, but the basic operation is always the same.
To simplify software development, ARM has introduced the CMSIS software standard. At minimum this consists of some header files which unify access to the core peripherals and to special CPU instructions. Among those are intrinsics to manipulate the special CPU registers like PRIMASK, BASEPRI, FAULTMASK, and CONTROL. Another header provides definitions of the core peripherals and functions to manipulate some of them where a simple access is not sufficient.
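For example, with the standard CMSIS-Core headers the three routines from the question reduce to existing intrinsics. A sketch (the device header file name is vendor-specific; TM4C123GH6PM.h is an assumption here):

#include "TM4C123GH6PM.h"   /* CMSIS device header; exact name is vendor-specific */

void critical_section_demo(void)
{
    __disable_irq();    /* CPSID I - set PRIMASK   */
    /* ... code that must not be interrupted ... */
    __enable_irq();     /* CPSIE I - clear PRIMASK */
    __WFI();            /* wait for the next interrupt */
}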
So, one of these peripherals supports the CPU for interrupt handling: the NVIC (Nested Vectored Interrupt Controller). It prioritises interrupts against each other and provides the interrupt vector to the CPU, which uses this vector to fetch the address of the interrupt handler.
The NVIC also includes enable bits for all interrupt sources. So, to have an interrupt processed by the CPU, for a typical MCU you have to enable the interrupt in two or three locations (a bare-metal sketch of all three follows this list):
PRIMASK/BASEPRI in the CPU: the last line of defense. These are the global interrupt gates. PRIMASK is similar to the interrupt-enable bit in the status register of smaller CPUs; BASEPRI is part of interrupt-priority resolution (just ignore it for the beginning).
The NVIC interrupt-enable bit for each peripheral interrupt source, e.g. timer, UART, SPI, etc. Many peripherals have multiple internal sources tied to one NVIC line (e.g. the UART rx and tx interrupts).
The interrupt-enable bits in the peripheral itself, e.g. the UART rx interrupt, tx interrupt, rx-error interrupt, etc.
Some peripherals might not have internal enable bits, so the last location might be missing.
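As an illustration, here is a bare-metal sketch of all three levels for a GPIO Port F pin interrupt on the TM4C123 (the register addresses and the interrupt number 30 are taken from the TM4C123GH6PM datasheet; verify them for your exact part):

#include <stdint.h>

#define GPIOF_IM  (*(volatile uint32_t *)0x40025410u)  /* Port F interrupt mask      */
#define NVIC_EN0  (*(volatile uint32_t *)0xE000E100u)  /* NVIC set-enable, IRQs 0-31 */

void enable_portf_pin4_interrupt(void)
{
    GPIOF_IM |= (1u << 4);       /* 1. peripheral: unmask the pin-4 source          */
    NVIC_EN0  = (1u << 30);      /* 2. NVIC: GPIO Port F is interrupt number 30     */
    __asm volatile ("cpsie i");  /* 3. CPU: clear PRIMASK (global interrupt enable) */
}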
To get things working, you should read the reference manual (family guide, or similar); then there is often some "Programming the Cortex-M4" howto (e.g. ST has one for the STM32 series). You should also get the documents from ARM (they are available for free download).
Finally you need the CMSIS headers from your MCU vendor (TI here). These should be tailored for your MCU. You might have to provide some #defines.
And, yes, this is quite some stuff to read. But imo it is worth the effort. Alternatively you might start with a book. There are some out there which might be helpful for getting the whole picture first (which is really hard to get from the individual documents - yet possible).

Related

QP Port to STM32 and STM32CubeIDE

I need a very simple example of porting the QP framework to an STM32 microcontroller, based on C and using STM32CubeIDE; I'm very new to QP and want to learn more.
I've started with the blinky example of the QM tool, changed bsp.c to work with the HAL functions, and ported it to my blinky STM32F4 project. I get no errors and can program it into the micro, but it does not work correctly.
When I run it in the debugger, I find that the TimeEvt (Time Event) doesn't occur and the state transition doesn't happen.
The complete STM32Cube software is too big to fit in the QP framework installation. Therefore, the examples for STM32 that ship with the QP framework (for several STM32 NUCLEO and Discovery boards), use only parts of STM32Cube.
So, if you wish to use the complete STM32Cube software, you most likely need to adjust the initialization and interactions with the hardware through the STM32Cube API. This is really confined to the BSP (Board Support Package) in the application and has little to do with QP, which takes over after you initialize the board.
So in your case you must ensure that the QF_TICK_X() macro of QP is called periodically at the desired frequency. If this does not happen, the Time Events aren't serviced and they will not be posted to your active objects.
You can easily check if QF_TICK_X() is being called in a debugger by placing a breakpoint at this call. If the breakpoint is never hit, you need to configure the interrupt correctly.
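For instance, a minimal SysTick ISR that drives QP time events could be sketched as below (the macro name and arguments vary between QP versions, and the preemptive QK kernel additionally requires QK_ISR_ENTRY()/QK_ISR_EXIT() around the body, so check your port):

#include "qpc.h"   /* QP/C framework header */

void SysTick_Handler(void)
{
    /* service QP time events at tick rate 0; the second argument is a
       "sender" pointer used only for software tracing */
    QF_TICK_X(0U, (void *)0);
}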
One word of caution about STM32Cube and QP is to ensure that the ARM Cortex-M interrupt priorities are configured correctly. This is because QP disables interrupts only selectively and leaves the "kernel unaware" interrupts completely undisturbed. This is common practice also used in FreeRTOS, for example. Please read the pertinent section in the QP Manual:
https://www.state-machine.com/qpc/arm-cm.html#arm-cm_kernel-aware
STM32Cube is known to mess with interrupt priorities (e.g., that of SysTick), so you need to make sure that the SysTick priority is not changed to a "kernel-unaware" level. If STM32Cube sets it to zero, for instance, you need to change it back to some larger priority number (a lower priority, in the inverted priority scheme used by Cortex-M).
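For example, after the STM32Cube initialization has run, the priority can be corrected with the CMSIS call shown in this sketch (QF_AWARE_ISR_CMSIS_PRI is the lowest kernel-aware priority defined by QP's ARM Cortex-M ports; the helper name and device header are assumptions):

#include "qpc.h"          /* the ARM-CM port defines QF_AWARE_ISR_CMSIS_PRI */
#include "stm32f4xx.h"    /* CMSIS device header; file name depends on your part */

void BSP_fixSysTickPriority(void)   /* hypothetical helper */
{
    /* move SysTick out of the kernel-unaware range (priority 0) */
    NVIC_SetPriority(SysTick_IRQn, QF_AWARE_ISR_CMSIS_PRI);
}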

How to implement hardware interrupts in uCOS II and TM4C123G (ARM M4) MCU?

Background:
I am using uCOS II, Keil uVision 5, and a TIVA board with the TM4C123GH6PM MCU on it. I was given the port for uCOS II as well as a blank project file to get started. I wrote the tasks needed and the program works correctly, but now I am interested in implementing interrupts and trying to understand how they can coexist with an RTOS. This is all done in C.
Issue:
Interrupts do not work; they simply don't fire. There are instances where the other tasks won't execute either. The core issue is that I don't really understand how interrupts can coexist with the RTOS. I've written code (in both assembly and C) on bare metal where interrupts work perfectly, and I fully understand how they work when there is no layer between the code and the CPU.
What I've Tried:
I read the book and reference manual that came with uCOS-II and searched for ways to implement interrupts. No mention whatsoever; the only thing said about interrupts is how they interact with the scheduler, so interrupts are only covered in the theoretical domain.
I asked on the Micrium (original vendor) forum and got no reply; it seems like a dead forum.
I looked at the libraries included with the uCOS port and found something useful:
bsp_int is the library that deals with the interrupts. BSP stands for Board Support Package and is intended to facilitate the interaction between the software and the hardware.
The library has functions to register an interrupt and enable it. The RTOS uses its own table of ISR handlers mapped to the NVIC of the CPU. All handlers are filtered through a generic handler. The two useful functions from this library are (see the usage sketch after this list):
bsp_intVectSet, which takes the interrupt trigger ID (e.g. bsp_int_id_gpiof) and a pointer to the interrupt handler, and registers it
bsp_intEn, which takes the interrupt ID and enables it
The bsp_int library is included in bsp.c, which calls the initialization function (from bsp_int) for interrupts (bsp_IntInit())
The bsp.h file is included in the main application file (app.c)
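A registration sequence through that BSP layer would then look roughly like this sketch (the function and ID spellings follow the description above, and GPIOPortF_Handler is a hypothetical handler name; check bsp_int.h for the exact identifiers):

#include "bsp.h"   /* pulls in the bsp_int API described above */

static void GPIOPortF_Handler(void);   /* hypothetical ISR, written per the uCOS-II rules */

void register_switch_interrupt(void)
{
    /* route the Port F interrupt through the RTOS's own handler table... */
    bsp_intVectSet(bsp_int_id_gpiof, GPIOPortF_Handler);
    /* ...and enable that line via the BSP */
    bsp_intEn(bsp_int_id_gpiof);
}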
app.c's main() is the entry point of the program. main() disables interrupts, initializes uCOS (i.e. the kernel), creates the first/starting task called AppTaskStart, and starts multitasking (i.e. gives control to the RTOS; the function never returns). I'm assuming the kernel re-enables interrupts, since it needs those to run.
So the way the RTOS works (to my understanding) is that it hijacks the SysTick timer, so at every clock tick the kernel is called and is able to schedule the tasks.
AppTaskStart, which is the very first task to execute within the kernel domain, calls bsp_init (in which bsp_IntInit is called to initialize the interrupt table and more) and performs other initialization tasks.
The way I've set up interrupts without a kernel before was using the Tivaware library (in C) provided by TI. It has functions for creating interrupts, specifying the trigger (e.g. rising/falling edge, timer overflow, etc.), and enabling them. This method works, and I thought it is what I should be using to set up the interrupt I want.
So I used the Tivaware library to set up interrupts on one of the GPIO ports (to which mechanical switches are connected) on the rising edge. The code for this, as well as other code to start the Port F peripheral, set the switch pins to input, and enable pull-ups, is included in bsp_init (bsp.c), which is called from AppTaskStart, which is called from main. So far everything works perfectly: the RTOS initializes and all its tasks execute accordingly. But when I try to move the code directly into main and flash the program onto the board, the RTOS initializes (LEDs blink) but then the tasks don't execute. Any ideas why that might be?
If I add the code to also enable and register the interrupt for when the switch is closed in the same function, using code from the tivaware library, the rtos does not initialize.
Do I need to set up/register/enable interrupts using the Tivaware library as well as register and enable them using the board support package (BSP) library? The way I understand it so far is that the BSP is registering/enabling interrupts for the kernel only, whereas the Tivaware code is enabling them by directly writing to the registers, so the latter is needed to set up the CPU portion of the interrupts and the former is needed to set up the OS portion. But I don't know. I really don't understand how they've designed incorporating interrupts under uCOS II. They do specify how the interrupt handler should be written and what macros to use, but nothing else.
What should I try next? Does anybody have any experience with working with these two components (the RTOS and the board)?
I am just stuck at this point and I've been playing with the code, moving stuff around, trying to find a clue/lead to solving this issue. I can't even debug the RTOS because uVision does not support uCOS, and I can't use step debugging because interrupts are firing at every clock tick and the PC changes constantly, so the IDE can't follow it.
I know IAR Embedded Workbench has support for uCOS-II and I have the app on my laptop and I tried setting up a project but I was only given a port/starter project for Keil and I don't know how to set one up for IAR EW. The only ports on Micrium's website are for the TM4C129 series and I tried using that to start an IAR EW project but I couldn't get it to work (libraries not being linked/missing files).
Thank You!
Does anybody have any experience with working with these two components (the RTOS and the board)?
I'm afraid I haven't worked with uCos (but with other OSes, mainly SysBios and FreeRTOS), and I haven't worked with Tiva (but with the Sitara AM335x) yet. Still, I think some hints below may be helpful for you (and apply in spite of the different implementations you are using).
What should I try next?
These are the steps I recommend you consider. You can put them in any order you find most helpful.
Interrupt priorities of ISRs that call RTOS library APIs must not be higher than the level that is taken into account by the RTOS; otherwise the RTOS-internal state may get corrupted, and anything can happen. Please check your OS documentation.
Please verify the position of the interrupt vector table and its contents:
Does every vector table entry point to one of the ISR wrapper handlers provided by the RTOS, or do you also find "independent" ISR implementations? If so, what do the latter do?
If you find pointers into third-party libraries you don't have the code for, don't give up. These can be as important...
Even more important than including the right header for the bsp_Int... APIs is that interrupt management of all software components runs through one unique API, e.g., the bsp_Int... one.
Your assumptions about app.c/main() sound reasonable. Please make sure that you also know about every component that accesses interrupts indirectly.
AppTaskStart, which is the very first task to execute within the kernel domain, calls bsp_init (in which bsp_IntInit is called to initialize the interrupt table and more) and performs other initialization tasks.
Please check what happens if you place a breakpoint at the top of every task function. Then you should be able to watch all tasks start and run into their breakpoints once.
The way I've set up interrupts without a kernel before was using the Tivaware library (in C) provided by TI. It has functions for creating interrupts, specifying the trigger (e.g. rising/falling edge, timer overflow, etc.), and enabling them. This method works, and I thought it is what I should be using to set up the interrupt I want.
You should make sure that the Tivaware library only uses interrupts in a way that is compatible with your RTOS. You can verify this by reading the manual or the sources.
So I used the Tivaware library to set up interrupts on one of the GPIO ports (to which mechanical switches are connected) on the rising edge. The code for this, as well as other code to start the Port F peripheral, set the switch pins to input, and enable pull-ups, is included in bsp_init (bsp.c), which is called from AppTaskStart, which is called from main. So far everything works perfectly: the RTOS initializes and all its tasks execute accordingly. But when I try to move the code directly into main and flash the program onto the board, the RTOS initializes (LEDs blink) but then the tasks don't execute. Any ideas why that might be?
Could it be that an electronic problem at one of the controller pins connected to the interrupt starts triggering that interrupt all the time?
If I add the code to [...]
Have you tried creating a minimal reproducible example? When you do this, you can enhance the effect by simultaneously performing rubber duck debugging.
Do I need to set up/register/enable interrupts using the Tivaware library as well as register and enable them using the board support package (BSP) library? The way I understand it so far is that the BSP is registering/enabling interrupts for the kernel only, whereas the Tivaware code is enabling them by directly writing to the registers, so the latter is needed to set up the CPU portion of the interrupts and the former is needed to set up the OS portion. But I don't know. I really don't understand how they've designed incorporating interrupts under uCOS II. They do specify how the interrupt handler should be written and what macros to use, but nothing else.
This sounds dangerous. I haven't worked with Tiva yet, but I have with another TI chip (the AM335x). There we had a similar situation: different libraries accessing different/overlapping parts of the same system resource by means of different abstraction layers. The situation only started improving when we tidied up the mess of abstraction layers ignoring each other and ported some code to a common abstraction layering scheme.
And some PS:
You can write your ISRs in C or assembler, as you like. Depending on the quality of your toolchain and optimisation settings, assembler may yield better performance (or none at all), and by calling C APIs from assembler, some programmers tend to make new mistakes. I'd recommend staying within C until you know in detail what is happening around your OS and IRQs.
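As an illustration of staying in C, the canonical shape of a uC/OS-II ISR uses the kernel's OSIntEnter()/OSIntExit() pair so the scheduler knows an interrupt is in progress. A sketch (the handler name and the interrupt-clearing step are placeholders for your port):

#include "ucos_ii.h"   /* uC/OS-II kernel API */

void GPIOPortF_Handler(void)   /* hypothetical handler, registered via the BSP */
{
    OSIntEnter();              /* tell the kernel an ISR has started */

    /* acknowledge/clear the interrupt source in the peripheral here,
       then do minimal work, e.g. post a semaphore to a waiting task */

    OSIntExit();               /* the kernel may switch tasks on the way out */
}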

How does machine code communicate with processor?

Let's take Python as an example. If I am not mistaken, when you program in it, the computer first "translates" the code to C. Then again, from C to assembly. And assembly is written in machine code. (This is just a vague idea that I have about this, so correct me if I am wrong.) But what's machine code written in, or, more exactly, how does the processor process its instructions, how does it "find out" what to do?
If I am not mistaken, when you program in it, the computer first "translates" the code to C.
No, it doesn't. C is nothing special except that it's the most widespread programming language used for system programming.
The Python interpreter translates the Python code into so-called P-code that's executed by a virtual machine. This virtual machine is the actual interpreter: it reads P-code, and every blip of P-code makes the interpreter execute a predefined code path. This is not unlike how native binary machine code controls a CPU. A more modern approach is to translate the P-code into native machine code (just-in-time compilation).
The CPython interpreter itself is written in C and has been compiled into a native binary. Basically, a native binary is just a long series of numbers (opcodes), where each number designates a certain operation. Some opcodes tell the machine that a defined count of numbers following it are not opcodes but parameters.
The CPU itself contains a so-called instruction decoder, which reads the native binary number by number, and for each opcode it reads, it gives power to the circuit of the CPU that implements this particular opcode. There are opcodes that address memory, opcodes that load data from memory into registers, and so on.
how does the processor process its instructions, how does it "find out" what to do?
For every opcode, which is just a binary pattern, there is its own circuit on the CPU. If the pattern of the opcode matches the "switch" that enables this opcode, its circuit processes it.
Here's a WikiBook about it:
http://en.wikibooks.org/wiki/Microprocessor_Design
A few years ago some guy built a whole, working computer from simple logic and memory ICs, i.e. with no microcontroller or similar involved. The whole project, called "Big Mess o' Wires", was more or less a CPU built from scratch. The only thing nerdier would have been building that thing from single transistors (which actually wouldn't have been that much more difficult). He also provides a simulator which allows you to see how the CPU works internally, decoding each instruction and executing it: Big Mess o' Wires Simulator
EDIT: Since I originally wrote this answer, building a fully fledged CPU from modern, discrete components has been done. For your consideration: a MOS 6502 (the CPU that powered the Apple II, Commodore 64, NES, BBC Micro and many more) built from discretes: https://monster6502.com/
Machine code does not "communicate with the processor".
Rather, the processor "knows how to evaluate" machine code. In the [widespread] von Neumann architecture this machine code (program) can be thought of as an indexable array where each cell contains a machine-code instruction (or data, but let's ignore that for now).
The CPU "looks" at the current instruction (often identified by the PC or Program Counter) and decides what to do (this can either be done directly with transistors/"bare-metal", or it be translated to even lower-level code): this is known as the fetch-decode-execute cycle.
When the instructions are executed, side effects occur, such as setting a control flag, putting a value in a register, or jumping to a different index (changing the PC) in the program, etc. See this simple overview of a CPU, which covers the above a little bit better.
It is the evaluation of each instruction -- as it is encountered -- and the interaction of side-effects that results in the operation of a traditional processor.
(Of course, modern CPUs are very complex and do lots of neat tricky things!)
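To make the fetch-decode-execute idea concrete, here is a toy sketch in C of a made-up accumulator machine; the opcodes and encoding are invented purely for illustration:

#include <stdint.h>
#include <stdio.h>

enum { OP_LOAD = 0, OP_ADD = 1, OP_HALT = 2 };  /* invented opcodes */

int main(void)
{
    uint8_t program[] = { OP_LOAD, 5, OP_ADD, 3, OP_HALT };
    uint8_t pc = 0, acc = 0;
    int running = 1;

    while (running) {
        uint8_t op = program[pc++];                 /* fetch */
        switch (op) {                               /* decode */
        case OP_LOAD: acc = program[pc++];  break;  /* execute: load immediate */
        case OP_ADD:  acc += program[pc++]; break;  /* execute: add immediate  */
        case OP_HALT: running = 0;          break;
        }
    }
    printf("acc = %d\n", acc);   /* prints: acc = 8 */
    return 0;
}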
That's called microcode. It's the code in the CPU that reads machine-code instructions and translates them into low-level data flow.
When the CPU, for example, encounters the add instruction, the microcode describes how it should get the two values, feed them to the ALU to do the calculation, and where to put the result.
Electricity. Circuits, memory, and logic gates.
Also, I believe Python is usually interpreted, not compiled through C → assembly → machine code.

On reset, what happens in an embedded system?

I have a question regarding reset due to power-up:
1. As I know, the microcontroller is hardwired to start at some particular memory location, say 0000H, on power-up. At 0000H, is the interrupt service routine for reset written there (initialization of stack pointer, program counter, etc.), or is a reset address stored at 0000H (say, 7000H) so that the microcontroller jumps to address 7000H, where the initialization of stack and PC is written?
2. Who writes this reset service routine? Is it the manufacturer of the microcontroller chip (Intel, Microchip, etc.), or can any programmer change this reset service routine (for example, a programmer changes the PC to 4000H from 7000H on power-up reset, resulting in the first instruction being fetched from 4000H instead of 7000H)?
3. How are the stack pointer and program counter initialized to their respective initial addresses, given that on power-up the microcontroller is not in a state to put addresses into the stack pointer and program counter registers (there is no initialization done until the reset service routine runs)?
4. What should be the steps in the reset service routine, considering all possibilities?
With reference to your numbering:
The hardware reset process is processor dependent and will be fully described in the data sheet or reference manual for the part, but your description is generally the case - different architectures may have subtle variations.
While some microcontrollers include a ROM based boot-loader that may contain start-up code, typically such bootloaders are only used to load code over a communications port, either to program flash memory directly or to load and execute a secondary bootloader to RAM that then programs flash memory. As far as C runtime start-up goes, this is either provided with the compiler/toolchain, or you write it yourself in assembler. Normally even when start-up code is provided by the compiler vendor, it is supplied as source to be assembled and linked with your application. The compiler vendor cannot always know things like memory map, SDRAM mapping and timing, or processor clock speed or what oscillator crystal is used in your hardware, so the start-up code will generally need customisation or extension through initialisation stubs that you must implement for your hardware.
On ARM Cortex-M devices the initial PC and stack pointer are in fact loaded by hardware: they are stored at the reset address and loaded on power-up. However, in the general case you are right: the reset address either contains the start-up code or a vector to the start-up code; on pre-Cortex ARM architectures, the reset address actually contains a jump instruction rather than a true vector address. Either way, the start-up code for a C/C++ runtime must at least initialise the stack pointer, initialise static data, perform any necessary C library initialisation and jump to main(). In the case of C++ it must also execute the constructors of any global static objects before calling main().
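To illustrate, a minimal Cortex-M start-up written in C might look like the sketch below (symbol names such as _estack and _sidata follow common GNU linker-script conventions and must match your own script; C library and C++ constructor initialisation are omitted):

#include <stdint.h>

extern uint32_t _estack, _sidata, _sdata, _edata, _sbss, _ebss;
extern int main(void);

void Reset_Handler(void)
{
    uint32_t *src = &_sidata, *dst = &_sdata;
    while (dst < &_edata)             /* copy initialised data from flash to RAM */
        *dst++ = *src++;
    for (dst = &_sbss; dst < &_ebss; )
        *dst++ = 0u;                  /* zero the .bss section */
    (void)main();
    for (;;) ;                        /* trap here if main() ever returns */
}

__attribute__((section(".isr_vector")))
void (* const vector_table[])(void) = {
    (void (*)(void))&_estack,         /* word 0: initial SP, loaded by hardware */
    Reset_Handler,                    /* word 1: initial PC, loaded by hardware */
    /* exception and peripheral vectors would follow here */
};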
The processor cores normally have, as you say, a starting address of some sort: either a table of addresses or, like ARM, a place where instructions are executed. What is wrapped around that core, but within the chip, can vary. Cores that are not specific to the chip vendor, like the 8051, MIPS, ARM, XScale, etc., are going to have a much wider range of different answers. Some microcontroller vendors, for example, will look at strap pins, and if a strap is wired a certain way when reset is released, the part executes from a special boot flash inside the chip - a bootloader that you can, for example, use to program the user boot flash with. If the strap is not tied that certain way, then sometimes it boots your user code. One vendor I know of always boots its bootloader flash; if the vector table has a valid checksum, it jumps to the reset vector in your vector table, otherwise it sits in bootloader mode waiting for you to talk to it.
When you get into the bigger processors (non-microcontrollers), software lives outside the processor, either in a boot flash (a separate chip from the processor) or in some RAM that is managed somehow before reset, etc. Those usually follow the rule for the core: start at address 0xFFFFFFF0, or start at address 0x00000000. If there is garbage there, oh well, fire off the undefined-instruction vector; if that is garbage, just hang there or sit in an infinite loop calling the undefined-instruction vector. This works well for an ARM, for example: you can build a board with a boot flash that is erased from the factory (all 0xFFs), then use JTAG to stop the ARM and program the flash the first time, and you don't have to unsolder or socket or pre-program anything. So long as your bootloader doesn't hang the ARM, you can have an unbrickable design. (Actually, you can often hold the ARM in reset and still get at it with the JTAG debugger, and not worry about bad code messing with the JTAG pins or hanging the ARM core.)
The short answer: how many different processor chip vendors have there been? There are many different solutions - as many as you can think of, and more, have been deployed. Placing a reset-handler address in a known place in memory is the most common, though.
EDIT:
Questions 2 and 3: if you are buying a chip, some of the microcontrollers have this protected bootloader, but even with that, normally you write the boot code that will be used by the product. And part of that boot code is to initialize the stack pointers, prepare memory, bring up parts of the chip, and all those good things. Sometimes chip vendors will provide examples. If you are buying a board-level product, then often you will find a board support package (BSP) which has working example code to bring up the board and perhaps do a few things. Take the BeagleBoard, for example, or the OpenRD or embeddedarm.com boards: they come with a bootloader (U-Boot or other) and some already have Linux pre-installed. With boards like that, the user usually just writes some Linux apps/drivers and adds them to the BSP, but you are not limited to that; you are often welcome to completely rewrite and replace the bootloader. And whoever writes the bootloader has to set up the stacks and bring up the hardware, etc.
Systems like the Game Boy Advance or the NDS or the like: the vendor has some startup code that calls your startup code, so the stack and such may be set up for you, but they are handing off to you. Much of the system may be up; you just get to decide how to slice up the memories - where you want your stack, data, program, etc.
Some vendors want to keep this stuff controlled or secret; others do not. In some cases you may end up with a board or chip with no example code, just some data sheets and reference manuals.
If you want to get into this business, though, you need to be prepared to write this startup code (in assembler), which may call some C code to bring up the rest of the system, which then might start up the main operating system or application or whatever. Microcontrollers sound like what you are playing with; the answers to your questions are in the chip vendors' user's guides, and some vendors are better than others. Search for the word "reset" or "boot" in the document to try to figure out what their boot schemes are. I recommend you use "dollar votes" to choose the better vendors: a vendor with bad docs, secret docs, or bad support - don't give them your money. Spend your money on vendors with freely downloadable, well-written docs, with well-written examples, and/or user forums with full-time employees trolling around answering questions. There are times when the docs are not available except to serious, paying customers; it depends on the market. Most general-purpose embedded systems, though, are openly documented. The quality varies widely, but the docs, etc., are there.
This depends completely on the controller/embedded system you use. The ones I've used in game development have the IP point at a starting address in RAM. The bootstrap code supplied by the compiler initializes static/const memory, sets the stack pointer, and then jumps execution to a main() routine of some sort. Older systems also started at a fixed address, but you manually had to set the stack, starting vector table, and other stuff in assembler. A common name for the starting assembler file is crt0.s in the stuff I've done.
So 1. You are correct. The microprocessor has to start at some fixed address.
2. The ISR can be supplied by the manufacturer or compiler creator, or you can write one yourself, depending on the complexity of the system in question.
3. The stack and initial program counter are usually handled via some sort of bootstrap routine that quite often can be overridden with your own code. See above.
Last: the steps will depend on the chip. If there is a power interruption of any sort, RAM may be scrambled, so all ISR vector tables and startup code should be rewritten, and the app should run as if it had just powered up. But read your documentation! I'm sure there is platform-specific stuff there that will answer these questions for your specific case.

How are external interrupts vectored on a powerpc processor?

Maybe the question should be: are external interrupts even vectored on the PowerPC at all? I've been looking at http://www.ibm.com/developerworks/eserver/library/es-archguide-v2.html, 'book 3', trying to figure out how the processor locates the appropriate interrupt service routine in response to an external interrupt. It seems to suggest that when the PPC recognizes an external interrupt, it just jumps execution to 0x0000_0500.
I may be laboring under a misconception about how the PPC works. With x86, the processor responds to interrupt requests with an interrupt acknowledge cycle, and obtains a 'vector' directly from the device. The vector (really an index) then allows the cpu to pick an appropriate handler routine from its interrupt vector table. Most importantly, this acknowledge/vector fetch is a hardware, bus-protocol thing, nobody has to write any code to make it happen. The only code that needs writing (read, software) is the ISRs themselves.
Does the PPC do something similar? Would there be a table of vectors at 0x500? Or does it do something radically different and offload the job of getting the device's vector to an external interrupt controller? I suppose it could just jump to code at 0x500, where actual software would then interrogate the (hypothetical?) interrupt controller to get the vector... and then use it in a jump table/what-have-you, but I can't find documentation to verify whether this is the case, one way or the other.
The PowerPC CPU has no concept of an interrupt vector table, and provides only a single interrupt pin and interrupt vector.
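So the scheme the question hypothesises is essentially what such systems do in practice: the handler reached via the 0x500 offset asks an external interrupt controller which source fired and dispatches in software. A rough sketch (the controller's acknowledge-register address is hypothetical; a real board would use its documented controller, e.g. an MPIC/OpenPIC):

#include <stdint.h>

#define PIC_IACK     (*(volatile uint32_t *)0xF00000A0u)  /* hypothetical address */
#define NUM_SOURCES  32u

static void (*isr_table[NUM_SOURCES])(void);   /* software "vector table" */

void external_interrupt_handler(void)   /* reached via the 0x500 offset */
{
    uint32_t src = PIC_IACK;            /* ask the controller which source fired */
    if (src < NUM_SOURCES && isr_table[src] != 0)
        isr_table[src]();               /* dispatch to the registered ISR */
}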