I'm developing a BSP for VxWorks and got stuck with the following problem:
The UART3 TX interrupt is enabled and the IIR shows a pending THR interrupt, but the Cortex-A9 GIC doesn't recognize it. As a result, the VxWorks I/O buffer fills up while the UART FIFO stays empty.
When I manually set the corresponding interrupt to pending in the GIC, it propagates the interrupt and the ISR executes, which results in 64 bytes being sent to the UART FIFO and printed correctly over RS232. This proves that the ISR is set up correctly and functioning, and that the interrupt is enabled in the GIC distributor.
The OMAP4430 features a 16550-compatible UART. The baud rate configuration is fine, because I can use the port successfully in polled mode.
What are the possible reasons why the UART TX interrupt doesn't get asserted?
EDIT: Found the cause of my problem; it looks like the interrupt numbers are offset by 32:
The OMAP4430 TRM states that UART3 is on interrupt line 74, but when I looked at the pending registers of the GIC, I saw that an interrupt was pending on line 106. Since 106 = 74 + 32, I tried configuring interrupt line 106 as the UART3 interrupt number, and sure enough, it worked!
The serial driver is the vxbNs16550 from VxWorks, the GIC driver is also from VxWorks, and the interrupt lines are physical connections from the UART to the GIC defined by TI. So why is there this discrepancy in the numbering of the physical interrupt lines?
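For reference, this offset is consistent with the ARM GIC architecture itself: interrupt IDs 0-15 are software-generated interrupts (SGIs) and IDs 16-31 are private peripheral interrupts (PPIs), so shared peripheral interrupts (SPIs), which is what the TRM's peripheral lines are, start at ID 32. A minimal sketch of the mapping (the macro names are illustrative, not from the VxWorks BSP):

#define GIC_SPI_BASE      32   /* GIC IDs 0-15: SGIs, 16-31: PPIs, 32+: SPIs */
#define OMAP4_UART3_LINE  74   /* UART3 interrupt line per the OMAP4430 TRM  */
#define UART3_GIC_INT_ID  (GIC_SPI_BASE + OMAP4_UART3_LINE)   /* = 106 */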
For UART reception, it's pretty obvious to me what can go wrong in the case of a 'blocking receive' over UART. Even in FreeRTOS with a dedicated task to read from the UART, context/task switching could result in missing bytes that were received by the UART peripheral.
But for transmission, I am not really sure there is a need for an interrupt-based approach. I transmit from a task, and in my design it's no problem if that task is blocked for a short while (it also blocks/sleeps on mutexes, for example).
Is there another strong argument to use UART transmit in interrupt mode? I am not risking any loss of data, right?
In my case I use an STM32, but I guess the type of MCU is not really relevant here.
Let's focus on TX only and assume that we don't use interrupts and handle all the transmission with the tools provided by the RTOS.
µC UART hardware generally has a transmit shift register (TSR) and some kind of data register (DR). The software loads the DR, and if the TSR is empty, the DR is instantly transferred into the TSR and TX begins. The software is then free to load another byte into the DR, and the hardware moves the new byte from the DR into the TSR whenever the TX (shift-out) of the previous byte finishes. The hardware provides status bits for querying the state of the DR and TSR. This way, the software can use polling and still achieve continuous transmission with no gaps between the bytes.
I'm not sure if the hardware configuration I described above holds for every µC. I have experience with 8- and 16-bit PICs and the STM32 F0, F1, and F4 series. They are all similar. The UART hardware doesn't provide additional hardware buffers.
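As a rough illustration of that DR/TSR polling scheme (the register addresses and flag name below are made up, not from any particular part):

#include <stddef.h>
#include <stdint.h>

/* Hypothetical memory-mapped registers; every part names these differently. */
#define UART_SR      (*(volatile uint32_t *)0x40004400u)  /* status register */
#define UART_DR      (*(volatile uint32_t *)0x40004404u)  /* data register   */
#define UART_SR_TXE  (1u << 7)   /* "DR empty, ready to accept next byte"    */

void uart_poll_send(const uint8_t *data, size_t len)
{
    while (len--) {
        while (!(UART_SR & UART_SR_TXE)) { }  /* spin until DR is free      */
        UART_DR = *data++;                    /* hardware moves DR into TSR */
    }
}

The busy-wait inner loop is where the polling time goes, which is what the next paragraph quantifies.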
Now, back to RTOS... Obviously, your TX task needs to poll the UART status bits. If we assume the UART baud rate is 115200 (a common value), you waste ~90 µs of polling for each byte. The general rule of an RTOS is that if you are waiting for something to happen, your task should block so that other tasks can run. But block on what? What will tell you when to unblock? For this you need interrupts. Your task blocks on a task notification (ulTaskNotifyTake()), and the interrupt gives the notification using xTaskNotifyGive().
So, I can't imagine any way of doing this without interrupts. But the method mentioned above isn't good either: it makes no sense to block and unblock for every single byte.
There are two possible solutions:
Move TX handling completely into the interrupt handler (ISR), and notify the task when TX has completed (see the sketch after this list).
Use DMA instead! Almost all modern 32-bit µCs have DMA support. The DMA generates a single interrupt when the TX is completed, and you can notify the task from the DMA transfer-complete interrupt.
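Here is a minimal sketch of the first option, assuming a hypothetical UART_DR register and uart_txe_irq_enable() helper; ulTaskNotifyTake(), vTaskNotifyGiveFromISR() and friends are the real FreeRTOS API:

#include <stddef.h>
#include <stdint.h>
#include "FreeRTOS.h"
#include "task.h"

/* Placeholders for part-specific register access and IRQ control. */
#define UART_DR  (*(volatile uint32_t *)0x40004404u)
extern void uart_txe_irq_enable(int enable);

static const uint8_t   *tx_buf;   /* current transmit buffer     */
static volatile size_t  tx_len;   /* bytes remaining to transmit */
static TaskHandle_t     tx_task;  /* task waiting for completion */

void uart_send(const uint8_t *data, size_t len)
{
    tx_task = xTaskGetCurrentTaskHandle();
    tx_buf  = data;
    tx_len  = len;
    uart_txe_irq_enable(1);                   /* ISR now feeds the DR          */
    ulTaskNotifyTake(pdTRUE, portMAX_DELAY);  /* block until the ISR notifies  */
}

void uart_tx_isr(void)   /* "DR empty" interrupt handler */
{
    BaseType_t woken = pdFALSE;
    if (tx_len > 0) {
        UART_DR = *tx_buf++;                  /* next byte into the DR    */
        tx_len--;
    } else {
        uart_txe_irq_enable(0);               /* done: stop TX interrupts */
        vTaskNotifyGiveFromISR(tx_task, &woken);
        portYIELD_FROM_ISR(woken);
    }
}

Note that the task blocks once per buffer instead of once per byte, which is the whole point of doing it this way.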
In this answer I've focused on TX, but using DMA is the proper way of handling reception (RX) too.
If I have a set of peripherals in an AVR microcontroller with equal priority, does the microcontroller use round-robin as the arbitration mechanism to decide which interrupt to service?
Or else, how does it manage interrupts of the same priority that happen at the same time?
It depends.
For example "classical" AVR microcontrollers have simple one-level interrupt controller. That means, when interrupt is running, interrupt flag in SREG is cleared, thus blocking any other interrupt from running. IRET instruction enables this flag back again, and
after one instruction from the main code is executed, next interrupt is ready to be executed.
When several interrupt requests are asserted simultaneously, then only the one with the lowest interrupt vector address is chosen.
For example, refer to the ATMega328P datasheet (section 6.7 Reset and Interrupt Handling, page 15):
The lower the address the higher is the priority level.
Thus, if an interrupt request flag is not cleared, or is reasserted before the interrupt handler returns, the same interrupt will run again, and handlers with higher interrupt vector addresses might never execute.
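As a small avr-gcc illustration of that ordering (the vector numbers cited in the comment are for the ATmega328P):

#include <avr/interrupt.h>

/* INT0 is vector 2 and TIMER0 overflow is vector 17 on the ATmega328P,
   so when both are pending, INT0 is always serviced first. */
ISR(INT0_vect)       { /* lower vector address: wins the tie            */ }
ISR(TIMER0_OVF_vect) { /* serviced only once nothing above it is pending */ }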
But the newest versions of the architecture have a more advanced interrupt controller, which allows round-robin scheduling to be enabled and lets one of the interrupts be assigned a higher priority level (allowing it to execute even while another interrupt handler is running).
For example, in the ATmega3208 (refer to the datasheet, section 12, CPU Interrupt Controller):
All interrupt vectors other than NMI are assigned to priority level 0 (normal) by default. The user may override this by assigning one of these vectors as a high priority vector. The device will have many normal priority vectors, and some of these may be pending at the same time. Two different scheduling schemes are available to choose which of the pending normal priority interrupts to service first: Static and round robin
So, the answer is: carefully read the datasheet on the part you're working with.
Section 9 of the ATmega328PB datasheet is entitled "AVR CPU Core" and it says:
All interrupts have a separate interrupt vector in the interrupt vector table.
The interrupts have priority in accordance with their interrupt vector position. The lower the interrupt vector address, the higher the priority.
I want to understand what exactly an interrupt is for my 6502 work-alike processor project in Logisim.
I know that an interrupt does the following steps:
Stops the current program from processing
Saves all unfinished data onto the stack
Does "SOMETHING"
Loads back the unfinished data and lets the program keep running normally.
My question is: what happens during that "SOMETHING" step? Does the program counter get redirected to a special program to be executed? Something like reading the pressed button's ASCII code and saving it into a register or some memory location? If so, where is that special program usually stored in memory? And can you make such a CPU handle different kinds of interrupts? Maybe if you press the button "a" its ASCII code is stored in the A register, but if you press the button "b" it is stored in the X register?
Any help is greatly appreciated.
Edit: Thanks to everybody for answers. I learned a lot and now can proceed with my project.
My question is: what happens during that "SOMETHING" step? Does the program counter get redirected to a special program to be executed?
What happens with a 6502 maskable interrupt is this:
the interrupt is raised (by this I mean the interrupt pin on the chip is forced low).
when it's time to execute a new instruction, the 6502 checks whether the interrupt pin is low and the interrupt mask in the status register is not set. If either condition fails, i.e. the interrupt pin is high or the interrupt mask is set, the CPU just carries on.
Assuming an interrupt is required, the CPU saves the PC on the stack.
The CPU then saves the status register on the stack, but with the B bit set to 0. The B bit is the "break" bit. It would be set to 1 for a BRK instruction, and that is the only way to tell the difference between a hardware interrupt and a BRK instruction.
The CPU then fetches the address at locations $FFFE and $FFFF and stuffs it into the PC, so execution begins again at that address.
That's all it does. Everything else is up to the programmer, until the programmer executes an RTI; then the status word and the return address are pulled off the stack and restored into their respective registers. It is the programmer's responsibility to save any registers and other data.
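Since the question is about building a 6502 work-alike, here is a rough C sketch of that entry sequence as an emulator might model it (the struct layout and helper are mine, not from any particular emulator):

#include <stdint.h>

#define FLAG_I 0x04   /* interrupt mask          */
#define FLAG_B 0x10   /* "break" bit             */
#define FLAG_U 0x20   /* unused bit, pushed as 1 */

typedef struct {
    uint16_t pc;
    uint8_t  sp, status;
    uint8_t  mem[0x10000];
} Cpu6502;

static void push(Cpu6502 *c, uint8_t v) { c->mem[0x0100 + c->sp--] = v; }

void irq(Cpu6502 *c)
{
    if (c->status & FLAG_I) return;                 /* masked: carry on    */
    push(c, (uint8_t)(c->pc >> 8));                 /* PC high byte        */
    push(c, (uint8_t)(c->pc & 0xFF));               /* PC low byte         */
    push(c, (c->status & ~FLAG_B) | FLAG_U);        /* B = 0: hardware IRQ */
    c->status |= FLAG_I;                            /* block nested IRQs   */
    c->pc = c->mem[0xFFFE] | (c->mem[0xFFFF] << 8); /* load vector into PC */
}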
Does the program counter get redirected to a special program to be executed? Something like reading the pressed button's ASCII code and saving that into a register or some memory location?
That is correct. In 6502 based computer systems, there are three vectors at the top of memory:
$FFFA - $FFFB : Non-maskable interrupt vector (as above, except the I bit in the status register is ignored).
$FFFC - $FFFD : Reset vector, used when the CPU detects a reset.
$FFFE - $FFFF : Normal interrupt vector.
The above are usually in ROM because the reset vector (at least) has to be there when the CPU powers up. Each address will point to a routine in the machine's operating system for handling interrupts.
Typically, the interrupt routine will first do an indirect jump through a vector stored in RAM. This allows the interrupt routine to be changed when the machine is running.
Then the interrupt routine has to determine the source of the interrupt. For example, on the Commodore PET the interrupt might originate from the VIA chip or either of the PIA chips, and each of those may raise an interrupt for various reasons, e.g. one of the PIA chips raises an interrupt when the monitor does a vertical blank, i.e. when it finishes scanning the screen and goes back to the top line. During this interrupt, the PET executes a routine to scan the keyboard and another routine to invert the cursor. Another interrupt might occur when the VIA timer hits zero, and the programmer can insert an interrupt routine to, for example, toggle an output line to generate a square wave for sound.
Some answers to questions in the comments.
the program counter goes to address $FFFE to get relocated to the address
No, the program counter is set to whatever is at that address. If you have:
FFFE: 00
FFFF: 10
the program counter will be set to $1000 (6502 is little endian) and that's where the interrupt routine must start. Also, the vector for NMI is at $FFFA. The normal interrupt shares $FFFE with the BRK instruction, not the NMI.
What exactly does the reset vector do? Does it reset the CPU?
The reset vector contains the location of the code that runs after the processor has been powered on or when a reset occurs.
What's the difference between NMI and IRQ? I also would like to know what's up with masking. Is it a way to set the "I" flag in the processor status register high or low?
The 6502 status register contains seven flags. Mostly they reflect the results of arithmetic instructions, e.g. Z is set if the result of an operation is zero, and C is set when an operation overflows eight bits and by shifts. The I flag enables and disables the normal interrupt (IRQ). If it's zero, interrupts on IRQ will be respected. If it's 1, interrupts are disabled. You can set and clear it manually with the SEI and CLI instructions, and it is set automatically when an interrupt occurs (this is to prevent an interrupt from interrupting an interrupt).
NMI is a non maskable interrupt. The difference is that it ignores the state of the I flag and uses a different vector.
And finally, what are vectors? Are they synonymous for indirect addresses?
Yes.
Oh, and if you do know: how are the interrupt vectors starting at $FFFA stored in ROM instead of RAM on a real 6502?
You have to arrange for the address decoding logic to point those addresses at ROM instead of RAM. In fact, in Commodore systems the whole block from $F000 up is ROM containing part of the operating system. The same probably applies to most other 6502-based systems.
There are four types of interrupt on the 6502: RESET, NMI, IRQ and BRK. The first three are hardware interrupts and the last is a software interrupt. The hardware interrupts have physical input voltages on pins on the microprocessor itself. The software interrupt is caused by a BRK instruction.
All interrupts are 'vectored'. That means when they occur the program counter (PC) is immediately loaded from an address stored in memory, and instruction execution continues from that address.
The addresses are stored as two bytes each, in little-endian format, at the end of the 64K memory space. They are (in hex):
NMI $FFFA/$FFFB
RESET $FFFC/$FFFD
IRQ $FFFE/$FFFF
BRK $FFFE/$FFFF
In the case of NMI, IRQ and BRK, the current PC address is pushed on to the stack, before loading the interrupt address. The processor status register is also pushed on to the stack.
Pushing these registers onto the stack is enough information to resume execution after the interrupt has been serviced (processed). The A, X and Y registers, however, are not pushed automatically onto the stack. Instead, the interrupt service routine should do this if necessary, and pull them back off the stack at the end of the service routine.
Notice that the IRQ and BRK vectors have the same address. In order to distinguish what happened in your service code, you need to examine the Break Bit of the pushed processor status register. The Break Bit is set if the interrupt came from a BRK instruction.
The currently executing instruction will always be completed before servicing the interrupt.
There are many subtleties to interrupt processing. One is which type of interrupt wins when more than one is asserted at the same time. Another is the point at which an interrupt occurs during the instruction cycle: if the interrupt occurs before the penultimate cycle of the instruction, it will be serviced on the next instruction; if on or after the penultimate cycle, it will be delayed until one instruction after that.
IRQ interrupts can be 'switched off' or ignored by setting a bit in the processor status register, using the SEI instruction.
Typically the interrupt service routine needs to determine the cause of the interrupt (disc drive, keyboard etc.) and to make sure the interrupt condition is cleared and perform any processing (e.g. putting key presses into a buffer). It can normally do this by reading/writing specific memory locations which are mapped to hardware.
There is more information at this link: https://www.pagetable.com/?p=410
Some more information on how interrupts work in a real 8 bit machine (pages 59, 86, 295): BBC Microcomputer Advanced User Guide
And more information on the physical chip package where you can see the NMI, RES(ET) and IRQ pins on the chip package itself (pages 2,3): 6502 Datasheet
I guess you are asking about hardware interrupts (IRQ or NMI). At your step 2, the program counter and flags register are stored on the stack (not in the stack register). Later you call RTI to resume program execution. The program counter is loaded with the start address of the "something", which is the interrupt subroutine or program that processes the interrupt. It has to store the A, X and Y registers if it needs to modify their values, and restore them before RTI. The IRQ interrupt can be masked (delayed) with the I flag, while NMI is non-maskable, i.e. it is always processed. They have different vector addresses for their subroutines.
An interrupt is a signal to the running processor, from hardware or software, telling the processor to give attention to some event and act on it according to the interrupt message.
There are three kinds of interrupt:
Internal interrupt: includes clock/timer interrupts, where the CPU must perform a certain action by a particular time and then move on to another operation.
Software interrupt: occurs when a problem or error arises in the software itself, for example when the user tries to divide something by zero. An error occurs, and with it an interrupt.
External interrupt: caused by I/O devices, for example a mouse or keyboard.
The CPU is designed to handle such interrupts and then resume the process that was running before the interrupt occurred.
I am trying to use DMA for my UART RX and TX. Until now I had the FreeRTOS version of the serial demo working fine, and it still works fine. However, now I have incorporated the UART DMA example from the example projects.
The code is conditionally compiled, so that only when the switch _HAS_DMA == 1 is the DMA engine configured, the RAM buffers are set up, and the default UART ISRs required by the FreeRTOS demo are removed.
At this point, whenever I send a serial byte stream, the running project simply resets.
I am using MPLAB IDE 8.92, XC16 v1.20, Explorer-16 platform, dspic33fj256gp710 part.
The DMA code included does not use any FreeRTOS API calls.
I have set up the project so that stack overflow is detected using the FreeRTOS configuration option, but the code never reaches the stack overflow hook function. I have also included the U2ErrInterrupt ISR to see if incoming bytes are arriving correctly, but even that interrupt is never reached.
Has anyone faced this before?
Interestingly, the UART DMA loopback example from the Microchip website, which uses the MPLAB C30 compiler, works fine on my board.
Any pointers on this one? I could not locate any code examples in the FreeRTOS forum on how to use DMA for UART, but it is suggested that this method be used in production code for efficiency.
Need help here.
Thanks and best regards,
Vishal
OK, I found the culprit. It's me. :))
When setting up the DMA to service UART interrupts, one should not enable the UART interrupt separately in software, which is what I was doing. In addition, I had conditionally compiled the UART ISRs out of my code! So in effect, whenever a byte was received by the UART engine, the processor got confused about who should serve the interrupt, the DMA or the application code. I think the PC would jump to the designated UART RX ISR vector location, where the processor would not find anything, and this caused the reset. Or maybe a race condition was set up between the DMA and the processor to serve the interrupt, and that caused the reset.
Now that I have set up the UART so that its interrupts are not enabled separately by the application when the DMA is going to serve UART RX, my code works fine. I have yet to integrate the whole thing with FreeRTOS deferred interrupt processing using binary semaphores, but I hope I will not see any trouble there.
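For reference, a rough sketch of that deferred-processing pattern (the xSemaphore* calls are the real FreeRTOS API; the XC16 ISR naming and the IFS0bits.DMA0IF flag are my assumptions for this part, so check the data sheet):

#include <xc.h>
#include "FreeRTOS.h"
#include "semphr.h"
#include "task.h"

static SemaphoreHandle_t rx_done;   /* created with xSemaphoreCreateBinary() */

/* DMA0 transfer-complete ISR; ISR name and flag assumed for dsPIC33F. */
void __attribute__((interrupt, auto_psv)) _DMA0Interrupt(void)
{
    BaseType_t woken = pdFALSE;
    IFS0bits.DMA0IF = 0;                        /* clear the interrupt flag */
    xSemaphoreGiveFromISR(rx_done, &woken);     /* wake the processing task */
    portEND_SWITCHING_ISR(woken);
}

void rx_task(void *params)
{
    (void)params;
    for (;;) {
        xSemaphoreTake(rx_done, portMAX_DELAY); /* block until DMA completes */
        /* process the DMA RX buffer here */
    }
}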
There is not much documented about this, though, neither in the Microchip manuals nor in the FreeRTOS examples.
Also, I found that when using DMA with the UART, as per the manual, the DMA receives WORDS from the UART RX engine, with the lower byte holding the data and the upper byte holding status. If the DMA is also used for UART TX and is set to transfer WORDS to UART TXREG, the two intelligently manage to send only the lower data byte out, so the receiving party still gets the expected bytes. This is also not documented well.
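A tiny sketch of unpacking the RX words under that layout (the buffer name and length are illustrative):

#include <stdint.h>

#define RX_WORDS 32
volatile uint16_t rx_dma_buf[RX_WORDS];   /* filled by the DMA channel */

void unpack_rx(uint8_t *out)
{
    for (int i = 0; i < RX_WORDS; i++) {
        uint8_t status = (uint8_t)(rx_dma_buf[i] >> 8); /* status bits      */
        out[i] = (uint8_t)(rx_dma_buf[i] & 0xFFu);      /* actual data byte */
        (void)status;   /* check for framing/parity errors here if needed   */
    }
}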
I will try to post my code here for future generations though :)).
How do interrupts work on the Intel 8080? I have searched Google and Intel's official documentation (197X), and I've found only a little description of this. I need a detailed explanation of it, in order to emulate this CPU.
The 8080 has an Interrupt line (pin 14). All peripherals are wired to this pin, usually in a "wire-OR" configuration (meaning interrupt request outputs are open-collector and the interrupt pin is pulled high with a resistor). Internally, the processor has an Interrupt Enable bit. Two instructions, EI and DI, set and clear this bit. The entire interrupt system is thus turned on or off, individual interrupts cannot be masked on the "bare" 8080. When a device issues an interrupt, the processor responds with an "Interrupt Acknowledge" (~INTA) signal. This signal has the same timing as the "Memory Read" (~MEMR) signal and it is intended to trigger the peripheral device to place a "Restart" instruction on the data bus. The Interrupt Acknowledge signal is basically an instruction fetch cycle, it occurs only in response to an interrupt.
There are eight Restart instructions, RST 0 - RST 7. RST 7 is opcode "0xFF". The Restart instructions cause the processor to push the program counter on the stack and commence execution at a restart vector location. RST 0 vectors to 0x0000, RST 1 vectors to 0x0008, RST 2 vectors to 0x0010 and so on. Restart 7 vectors to 0x0038. These vector addresses are intended to contain executable code, generally a jump instruction to an interrupt service routine. The Interrupt Service Routine will stack all of the registers it uses, perform the necessary I/O functions, unstack all the registers and return to the main program via the same return instruction that ends subroutines (RET, opcode 0xC9).
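For emulation purposes, the RST encoding and its vector follow a simple pattern, sketched here:

#include <stdint.h>

/* RST n is opcode 0xC7 | (n << 3) and vectors to n * 8.
   E.g. RST 0 = 0xC7 -> 0x0000, RST 7 = 0xFF -> 0x0038. */
uint8_t  rst_opcode(unsigned n) { return (uint8_t)(0xC7u | ((n & 7u) << 3)); }
uint16_t rst_vector(unsigned n) { return (uint16_t)((n & 7u) * 8u); }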
Restart instructions are actual opcodes, meaning they will do the same thing if they are fetched from memory during program execution. It was convenient to use Restart 7 as "warm restart" for a keyboard monitor / debugger program because early EPROMs generally contained 0xFF in each blank location. If you were executing blank EPROM, that meant something had gone awry and you probably wanted to go back to the monitor anyway.
Note that RST 0 vectors to the same memory location as RESET, both start executing at 0x0000. But RST 0 leaves a return address on the stack. In a way, RESET can be thought of as the only non-maskable interrupt the 8080 had.
An interrupt signal will also clear the Interrupt Enable bit, so an Interrupt Service Routine will need to execute an EI instruction, generally immediately before the RET. Otherwise, the system will respond to one and only one interrupt event.
CP/M reserved the first 256 bytes of memory for system use -- and that interrupt vector map used the first 64 bytes (8 bytes per Restart instruction). On CP/M systems, RAM started at 0x0000 and any ROM lived at the top end of memory. These systems used some form of clever bank switching to switch in an EPROM or something immediately after a RESET to provide a JUMP instruction to the system ROM so it could begin the boot sequence. Systems that had ROM at the low end of the memory map programmed JUMP instructions to vectors located in RAM into the first 64 bytes. These systems had to initialize those RAM vectors at startup.
The 8080 was dependent on external hardware to control its handling of interrupts, so it is impossible to generalize. Look for information on the Intel 8214 or 8259 interrupt controllers.
I finally found it!
I created a variable called bus where the interruption opcode goes.
Then I call a function to handle the interruption:
void i8080::interruption()
{
    // Only RST opcodes are expected on the bus here.
    cycles -= cycles_table[bus];  // account for the cycles the RST consumes
    instruction[bus]();           // execute the opcode latched on the bus
    INT = false;                  // the pending request has been serviced
}
INT is true when an interruption is needed.
The EI and DI instructions handle INTE.
When INT and INTE are both true, the interruption is executed.
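For what it's worth, a minimal sketch of where that check might sit in the emulator's main loop (the names mirror the snippet above):

// One possible shape of the fetch/execute loop around the snippet above.
while (running) {
    step();              // fetch, decode, and execute one instruction
    if (INT && INTE) {
        INTE = false;    // the 8080 disables interrupts on acceptance
        interruption();  // executes the RST opcode latched on 'bus'
    }
}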
Function pointers to the interrupt handlers are stored in low memory.
The first 32 or so addresses are hardware interrupts: hardware-triggered.
The next 32 or so addresses are user-triggerable; these are called software interrupts. They are triggered by an INT instruction.
The parameter to INT is the software interrupt vector number, which selects the interrupt to be called.
You will need to use the IRET instruction to return from interrupts.
It's likely you should also disable interrupts as the first thing you do when entering an interrupt.
For further detail, you should refer to the documentation for your specific processor model, it tends to vary widely.
An interrupt is a way of interrupting the CPU with a notification to handle something else. I am not certain about the Intel 8080 chip, but from my experience, the best way to describe an interrupt is this:
The CS:IP (Code Segment:Instruction Pointer) is on the instruction at memory address 0x0000:0020. This example is written with Intel 8086 instructions for the sake of explanation; the assembly is gibberish and has no real meaning, the instructions are imaginative.
0x0000:001C MOV AH, 07
0x0000:001D CMP AH, 0
0x0000:001E JNZ 0x0020
0x0000:001F MOV BX, 20
0x0000:0020 MOV CH, 10 ; CS:IP is pointing here
0x0000:0021 INT 0x15
When the CS:IP points to the next line and an INTerrupt 15 hexadecimal is issued, this is what happens: the CPU pushes the FLAGS register and the CS:IP return address onto the stack, then executes the code at 0x1000:0100, which services the INT 15 as an example (the general registers are saved by the handler itself, as shown below):
0x1000:0100 PUSH AX
0x1000:0101 PUSH BX
0x1000:0102 PUSH CX
0x1000:0103 PUSH DX
0x1000:0104 PUSHF
0x1000:0105 MOV ES, CS
0x1000:0106 INC AX
0x1000:0107 ....
0x1000:014B IRET
Then, when the CS:IP reaches the instruction at 0x1000:014B, an IRET (Interrupt RETurn) is executed, which pops the saved CS:IP and FLAGS off the stack and restores that state (the handler must first pop any registers it pushed). Execution then resumes here, after the instruction at 0x0000:0021.
0x0000:0022 CMP AX, 0
0x0000:0023 ....
How the CPU knows where to jump for a particular interrupt is based on the Interrupt Vector Table. This table is set up by the BIOS at the very bottom of memory (starting at 0000:0000); conceptually it looks like this:
INT   LOCATION OF HANDLER (CS:IP)
---   ---------------------------
0 0x3000
1 0x2000
.. ....
15 0x1000 <--- THIS IS HOW THE CPU KNOWS WHERE TO JUMP TO
That table is consulted when the INT 15 is executed: the CPU reroutes CS:IP through entry 15 of the table to the service code that handles the interrupt.
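A sketch of the lookup itself: each real-mode vector is a four-byte far pointer (offset first, then segment) stored at vector number * 4 from physical address zero:

#include <stdint.h>

/* Physical address of the IVT entry for a given vector number. */
uint32_t ivt_entry_addr(uint8_t vector) { return (uint32_t)vector * 4u; }

/* INT n, conceptually:
     push FLAGS; push CS; push IP;
     IP = word at vector*4;  CS = word at vector*4 + 2;
   IRET pops IP, CS and FLAGS back in reverse order. */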
Back in the old days, under Turbo C, there was a means to override the interrupt vector table routines with your own interrupt-handling functions, using setvect and getvect, so that the actual interrupt handlers were rerouted to your own code.
I hope I have explained it well enough. OK, it is not the Intel 8080, but that is my understanding, and I am fairly sure the concept is similar for that chip, since the Intel x86 family descends from it.