I am working on a project that uses a Microchip PIC24FJ256GA702. For several days I have been looking for an intermittent bug with some UART code. It uses an interrupt service routine (ISR) to transmit a buffer named char txBuff[] and its length int txLen.
In the main (i.e. not ISR) code I occasionally access the buffer to add bytes or to see how many bytes are left to transmit. To ensure the main code sees a clean copy of the buffer it has a critical section in the form:
Disable interrupts
Read and write 'txBuff' and its length 'txLen'
Enable interrupts
There are several ways to disable an ISR on this PIC, including using the DISI instruction, clearing the GIE bit, or altering interrupt priority levels.
But I have chosen to clear the interrupt enable bit for the specific interrupt in IECx, in this case IEC1bits.U2TXIE because I am using UART2. I do this because it is easy, it does not disable unrelated interrupts, and it has always worked well in my other projects. So in C, the critical section is:
IEC1bits.U2TXIE = 0;
copyOfTxLen = txLen;
...
IEC1bits.U2TXIE = 1;
And the disassembly listing starts:
BCLR 0x9B, #7 // disable interrupt
MOV txLen, W1 // read txLen
...
The problem: I am now fairly certain that the ISR occasionally gets called during the read of txLen, such that copyOfTxLen ends up being the value of txLen before the ISR was called, while the remainder of the critical section sees the variables in the state they are in after the ISR has run.
The question: Is my code faulty? If so how can the fault be avoided?
Some background:
UART is running at a high speed (250 kBaud) with a PIC24 clock of 8 MHz, so there is a byte every 160 instructions; the ISR is therefore quite busy, and I think this increases the chances of hitting a race condition compared to my previous projects.
I see the dsPIC33/PIC24 Family Reference Manual / Interrupts, section 2.3.1 Note 2 says "There is one cycle delay between clearing the GIE bit and the interrupts being disabled." Now this refers to GIE, not U2TXIE, and I am not sure whether it would cause a problem like the one I am seeing, because the MOV instruction is a 2 cycle instruction. But nevertheless, it looks like a cunning way to trap unwary programmers.
If I try to debug it or add any debug code the already very intermittent problem goes away (feels like a race condition). I don't want to just add some NOPs without knowing exactly why I should because race conditions always bite you back unless you actually fix them.
The code in the question is faulty.
To fix it, wrap expressions intended to disable interrupts such as IEC1bits.U2TXIE = 0 in a macro like this:
__write_to_IEC(IEC1bits.U2TXIE = 0);
The XC16 compiler release notes say:
__write_to_IEC(X) - a macro that wraps the expression X with an appropriate number of nop instructions to ensure that the write to the IEC register has taken effect before the program executes. For example, __write_to_IEC(IEC0bits.T1IE = 0); will not progress until the device has disabled the interrupt enable bit for T1 (Timer1).
In p24FJ256GA702.h, which is provided with the XC16 compiler, the macro is defined as:
#define __write_to_IEC(X) \
( (void)(X), \
__builtin_nop() \
)
So despite not being mentioned anywhere in the PIC24FJ256GA705 FAMILY datasheet or in the separate Interrupts document, it seems clear that a single NOP is required here on this PIC.
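For illustration, with that macro the critical section from the question would become something like this (just a sketch, reusing the variable names from the question):
__write_to_IEC(IEC1bits.U2TXIE = 0);  // disable the UART2 TX interrupt, padded with the required NOP
copyOfTxLen = txLen;                  // now guaranteed to run with the ISR disabled
...                                   // read and write txBuff and txLen
IEC1bits.U2TXIE = 1;                  // re-enable the interrupt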
Update: I find exactly the same issue on the dsPIC33EP256MU806 (dsPIC33E family) which requires two NOP instructions.
I am evaluating the ATtiny806 running at 20MHz to build a cycle-accurate Intel 4004 microprocessor emulator. (I know it will be a bit too slow, but AVRs have a huge community.)
I need to synchronize to the external, two-phase non-overlapping clocks. These are not fast clocks (the original 4004 ran at 750kHz)
but if I spin-wait for every clock edge, I risk wasting most of my time budget.
The TinyAVR 0-series has a very nice pin-change interrupt facility that can be configured to trigger only on rising edges.
But, an interrupt routine round-trip is 8 cycles (3 in, 5 out).
My question is:
Can I leverage the pin-change sensing mechanism while never visiting an ISR?
(Other processor families let you poll for interruptible conditions without enabling interrupts from that peripheral). Can polling be done with a tight skip-on-bit/jump-back loop, followed by a set-bit instruction?
Straightforward way
You can always just poll the level of the GPIO pin using the single-cycle skip-if-bit-set/clear instruction on the appropriate PORT register and bit.
But as you mention, polling does burn cycles so I'm not sure exactly what you want here - either a poll (that burns cycles but has low latency) or an interrupt (that has higher latency but allows processing to continue until the condition is true).
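For example, a plain level poll in C can compile down to that single-cycle test (the port and pin are placeholders; on the tinyAVR 0-series the VPORT registers sit in I/O memory, so the compiler can emit SBIS/SBIC):
while (!(VPORTA.IN & PIN2_bm)) { }  // spin until PA2 reads high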
Note that if things get really tight and you are looking for, say, power savings by sleeping between clock signal transitions, then you can do tricks like having an ISR that never returns (saving the RETI cycles), but that requires some careful coding, probably with something like a state machine.
INTFLAG way
Alternatively, if you want to use the internal pin state-machine logic and you can live without interrupts, then you can use the PORTx.INTFLAGS register to check for the pin change configured in the ISC bits of the PINxCTRL register. As long as global interrupts are not enabled in SREG, you can spin-poll on the appropriate INTFLAGS bit to check/wait for the desired condition, and then write a 1 to that bit to clear the flag.
Note that if you want to make this fast, you will probably want to map the appropriate PORT to a VPORT, since the VPORT registers are in I/O memory. This lets you use SBIS to test the INTFLAGS bit in a single cycle and SBI to clear it in a single cycle (these instructions only work on I/O memory, and the normal PORT registers are not in I/O memory).
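As a rough sketch of that spin-poll (assuming pin PA2, avr-gcc device headers, and rising-edge sensing; the register and bit names come from the 0-series headers):
PORTA.PIN2CTRL = PORT_ISC_RISING_gc;      // sense rising edges on PA2
while (!(VPORTA.INTFLAGS & PIN2_bm)) { }  // single-cycle SBIS-style test of the flag
VPORTA.INTFLAGS = PIN2_bm;                // write a 1 to clear the flag for the next edge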
Finally, one more complication: if you need to leave interrupts on while doing this, it is probably possible by hacking the interrupt priority registers. You'd set the pin change to be level 0, make sure the interrupts you care about are level 1 or higher, and then trick the interrupt controller into thinking that a level 0 interrupt is already running so that these interrupts do not actually fire. There are other restrictions to this strategy too, so avoid it if at all possible.
Programmable logic way
If you want to get really esoteric, you could likely route the input value of a pin to a configurable custom logic (CCL) LUT in the chip and then route the output of that module to a bit that you can test with a 1-cycle bit-test instruction (maybe an unused IO pin). To do this, you'd feed the output of the LUT back into one of its inputs and use the LUT to create a strobe on the edge you are looking for. This is very complex, and since the strobe has no acknowledgement, if the signal changes while you are not looking (between spin checks) the edge will be lost and you will have to wait for the next one (probably fatal in your application).
I want to understand what exactly an interrupt is for my 6502 work-alike processor project in Logisim.
I know that an interrupt does the following steps:
Stops the current program from processing
Saves all unfinished data into the stack
Does "SOMETHING"
Loads back the unfinished data and lets the program keep running normally.
My question is: what happens during that "SOMETHING" step? Does the program counter get redirected to a special program to be executed? Something like reading the pressed button's ASCII code and saving that into a register or some memory location? If so, where is that special program usually stored in memory? And can you make such a CPU that will handle different kinds of interrupts? Maybe if you press the button "a" then its ASCII code will be stored in the A register, but if you press the button "b" then it will be stored in the X register?
Any help is greatly appreciated.
Edit: Thanks to everybody for answers. I learned a lot and now can proceed with my project.
My question is: what happens during that "SOMETHING" step? Does the program counter get redirected to a special program to be executed?
What happens with a 6502 maskable interrupt is this:
the interrupt is raised (by this I mean the interrupt pin on the chip is forced low).
when it's time to execute a new instruction, the 6502 checks if the interrupt pin is low and the interrupt mask in the status register is not set. If either is not the case, i.e. if the interrupt pin is high or the interrupt mask is set, the CPU just carries on.
Assuming an interrupt is required, the CPU saves the PC on the stack
The CPU then saves the status register on the stack but with the B bit set to 0. The B bit is the "break" bit. It would be set to 1 for a BRK instruction and that is the only way to tell the difference between a hardware interrupt and a BRK instruction.
The CPU then fetches the address at locations $FFFE and $FFFF and stuffs it into the PC, so execution begins again at that address.
That's all it does. Everything else is up to the programmer until the programmer executes an RTI, then the status word and the return address are pulled off the stack and restored into their respective registers. It is the programmer's responsibility to save any registers and other data.
Does the program counter get redirected to a special program to be executed? Something like reading the pressed button's ASCII code and saving that into a register or some memory location?
That is correct. In 6502 based computer systems, there are three vectors at the top of memory:
$FFFA - $FFFB : Non maskable interrupt (as above except the I bit in the status register is ignored).
$FFFC - $FFFD : Reset vector used when the CPU detects a reset
$FFFE - $FFFF : Normal interrupt vector.
The above are usually in ROM because the reset vector (at least) has to be there when the CPU powers up. Each address will point to a routine in the machine's operating system for handling interrupts.
Typically, the interrupt routine will first do an indirect jump through a vector stored in RAM. This allows the interrupt routine to be changed when the machine is running.
Then the interrupt routine has to determine the source of the interrupt. For example, on the Commodore PET the interrupt might originate from the VIA chip or either of the PIA chips, and each of those may raise an interrupt for various reasons, e.g. one of the PIA chips raises an interrupt when the monitor does a vertical blank, i.e. when it finishes scanning the screen and goes back to the top line. During this interrupt, the PET executes a routine to scan the keyboard and another routine to invert the cursor. Another interrupt might occur when the VIA timer hits zero, and the programmer can insert an interrupt routine to, for example, toggle an output line to generate a square wave for sound.
Some answers to questions in the comments.
program counter goes to address $FFFE to get relocate to the address
No, the program counter is set to whatever is at that address. If you have:
FFFE: 00
FFFF: 10
the program counter will be set to $1000 (6502 is little endian) and that's where the interrupt routine must start. Also, the vector for NMI is at $FFFA. The normal interrupt shares $FFFE with the BRK instruction, not the NMI.
What exactly the reset vector does? Does it reset the cpu?
The reset vector contains the location of the code that runs after the processor has been powered on or when a reset occurs.
What's the difference between NMI and IRQ? Then I also would like to know what's up with masking? Is it the way to set the "I" flag in Processor Status Register high or low?
The 6502 status register contains seven flags. Mostly they are to do with the results of arithmetic instructions, e.g. Z is set if the result of an operation is zero, and C is set when an operation overflows eight bits and for shifts. The I flag enables and disables the normal interrupt (IRQ). If it's zero, interrupts on IRQ will be respected. If it's 1, interrupts are disabled. You can set and clear it manually with the SEI and CLI instructions, and it is set automatically when an interrupt occurs (this is to prevent an interrupt from interrupting an interrupt).
NMI is a non maskable interrupt. The difference is that it ignores the state of the I flag and uses a different vector.
And finally, what are vectors? Are they synonymous for indirect addresses?
Yes.
Oh, and if you do know, how are interrupt addresses starting from $FFFA stored in ROM instead of RAM in real 6502?
You have to arrange for the address decoding logic to point those address at ROM instead of RAM. In fact, in Commodore systems the whole block from $F000 is ROM containing part of the operating system. The same probably applies to most other 6502 based systems.
There are four types of interrupt on the 6502: RESET, NMI, IRQ and BRK. The first three are hardware interrupts and the last is a software interrupt. The hardware interrupts have physical input voltages on pins on the microprocessor itself. The software interrupt is caused by a BRK instruction.
All interrupts are 'vectored'. That means when they occur the program counter (PC) is immediately loaded from an address stored in memory, and instruction execution continues from that address.
The addresses are stored as two bytes each, in little-endian format, at the end of the 64K memory space. They are (in hex):
NMI $FFFA/$FFFB
RESET $FFFC/$FFFD
IRQ $FFFE/$FFFF
BRK $FFFE/$FFFF
In the case of NMI, IRQ and BRK, the current PC address is pushed on to the stack, before loading the interrupt address. The processor status register is also pushed on to the stack.
Pushing these registers onto the stack is enough information to resume execution after the interrupt has been serviced (processed). The A, X and Y registers, however, are not pushed onto the stack automatically. Instead, the interrupt service routine should do this if necessary, and pull them back off the stack at the end of the service.
Notice that the IRQ and BRK vectors have the same address. In order to distinguish what happened in your service code, you need to examine the Break Bit of the pushed processor status register. The Break Bit is set if the interrupt came from a BRK instruction.
The currently executing instruction will always be completed before servicing the interrupt.
There are many subtleties to interrupt processing. One is which type of interrupt wins if they happen (are asserted) at the same time. Another is the point at which an interrupt occurs during the instruction cycle. If the interrupt occurs before the penultimate cycle of the instruction, then it will be serviced on the next instruction. If on or after the penultimate cycle, it will be delayed until one instruction later.
IRQ interrupts can be 'switched off' or ignored by setting a bit in the processor status register, using the SEI instruction.
Typically the interrupt service routine needs to determine the cause of the interrupt (disc drive, keyboard etc.) and to make sure the interrupt condition is cleared and perform any processing (e.g. putting key presses into a buffer). It can normally do this by reading/writing specific memory locations which are mapped to hardware.
There is more information at this link: https://www.pagetable.com/?p=410
Some more information on how interrupts work in a real 8 bit machine (pages 59, 86, 295): BBC Microcomputer Advanced User Guide
And more information on the physical chip package where you can see the NMI, RES(ET) and IRQ pins on the chip package itself (pages 2,3): 6502 Datasheet
I guess you are asking about hardware interrupts (IRQ or NMI). At your step 2, the program counter and flags register are stored on the stack (not in the stack register). Later you call RTI to resume program execution. The program counter is loaded with the start address of the "something", which is the interrupt subroutine or program that processes the interrupt. It has to save the A, X and Y registers if it needs to modify their values, and restore them before RTI. The IRQ interrupt can be masked (delayed) with the I flag, while NMI is non-maskable, i.e. it is always processed. They have different addresses for their subroutines.
An interrupt is a signal to the running processor, raised by hardware or software, which makes the processor give attention to the event and act according to the interrupt.
There are three kinds of interrupt:
Internal interrupt: includes things like timer (clock) interrupts, where the CPU performs a certain action until a particular time and then has to go off and perform another operation.
Software interrupt: this occurs when a problem or error arises in the software itself, for example when the user tries to divide something by zero; the error raises an interrupt.
External interrupt: caused by I/O devices, for example a mouse or keyboard.
The CPU is designed to handle these kinds of interrupt and then resume the process that was running before the interrupt occurred.
I've recently been playing around with the ATtiny85 as a means of prototyping some simple electronics in a very small package. I'm having a spot of trouble with this since the language used for many of its functions is very different (and a lot less intuitive!) than that found in a standard Arduino sketch. I'm having some difficulty finding a decent reference for the hardware-specific functions too.
Primarily, what I'd like to do is listen for both a pin change and a timer at the same time. A change of state in the pin will reset the timer, but at the same time the code needs to respond to the timer itself if it ends before the pin's state changes.
Now, from the tutorials I've managed to find it seems that both pin change and timer interrupts are funnelled through the same function - ISR(). What I'd like to know is:
Is it possible to have both a pin and a timer interrupt going at the same time?
Assuming they both call the same function, how do you tell them apart?
ISR() is not a function, it's a construct (macro) that is used to generate the stub for an ISR as well as inject the ISR into the vector table. The vector name passed to the macro determines which interrupt it services.
ISR(INT0_vect)
{
// Handle external interrupt 0 (PB2)
...
};
ISR(TIM0_OVF_vect)
{
// Handle timer 0 overflow
...
};
According to the datasheet ATtiny85 doesn't have the same interrupt vector for PCINT0 and TIMER1 COMPA/OVF/COMPB, so you can define different ISR handlers for each one of them.
If you're using the same handler for multiple interrupts, it might be impossible to differentiate between them, as interrupt flags are usually cleared by hardware on ISR vector execution.
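As a rough sketch of the arrangement described in the question (a pin change restarts the timer, and the timer overflow fires if the pin does not change in time), assuming PB0 as the monitored pin and Timer0 overflow as the timeout; the prescaler and pin choices are placeholders:
#include <avr/io.h>
#include <avr/interrupt.h>
volatile uint8_t timer_expired = 0;   // set by the timer ISR, consumed by the main loop
ISR(PCINT0_vect)                      // any enabled pin change
{
    TCNT0 = 0;                        // restart the timer on a pin change
}
ISR(TIM0_OVF_vect)                    // Timer0 overflow: no pin change in time
{
    timer_expired = 1;
}
int main(void)
{
    GIMSK  |= _BV(PCIE);              // enable pin-change interrupts...
    PCMSK  |= _BV(PCINT0);            // ...for PB0 only
    TCCR0B |= _BV(CS02) | _BV(CS00);  // run Timer0 from clk/1024
    TIMSK  |= _BV(TOIE0);             // enable the Timer0 overflow interrupt
    sei();
    for (;;) {
        // react to timer_expired here
    }
}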
As an addition to the accepted answer:
Is it possible to have both a pin and a timer interrupt going at the same time?
The interrupts can occur at exactly the same time at the hardware level, and the corresponding interrupt flags will be set accordingly. The flags indicate that the ISR for the respective interrupt should be executed. But the actual ISRs are (more or less obviously) not executed at the same time / in parallel. Which ISR is executed first (in case multiple interrupts are pending) depends on the interrupt priority, which is specified in the interrupt vector table in the data sheet.
I've been using FreeRTOS for a while now on my project and I have to say I love it.
Though I'm facing a bug which is killing me.
My project contains a large amount of code, about 80 files; it uses several Microchip stacks and runs about 10 tasks.
The problem is that about 2-3 times a day, the chip will go into an address error interrupt and I haven't really been able to find out the source of the problem.
I believe that this error occurs at the moment of an interrupt, because I've been able to reduce the occurrence of the crash by using DMA transfers on one UART, which reduces the interrupt rate by a factor of 80.
I've been reading a lot of examples and forum threads about it, but there always seems to be a different approach to handling interrupts, in particular whether or not to use taskYIELD, specifically on the PIC24EP.
Another point is interrupt nesting. It is currently enabled and I haven't tested with it disabled. I've seen some threads about it without a real answer as to whether it should be kept enabled or not.
This is the way I'm currently handling my DMA interrupt. I use a queue instead of a semaphore for compatibility with previous code, but it does the same job.
void __attribute__((interrupt, auto_psv)) _DMA1Interrupt(void)
{
char val = 55;
IFS0bits.DMA1IF = 0; // Clear the DMA1 Interrupt Flag
xQueueSendFromISR( RS485_Queue, &val, NULL );
}
Some examples from the RTOS library show no task yield after the interrupt.
Shall I add the yield into each interrupt?
void __attribute__((interrupt, auto_psv)) _DMA1Interrupt(void)
{
char val = 55;
portBASE_TYPE xTaskWoken = pdFALSE;
IFS0bits.DMA1IF = 0; // Clear the DMA1 Interrupt Flag
xQueueSendFromISR( RS485_Queue, &val, &xTaskWoken);
if( xTaskWoken )
taskYIELD();
}
Shall I disable nested interrupts?
Shall I add more stack space to my tasks?
I'm not a specialist on processor stacks or the way the RTOS works at the stack level.
If two interrupts happen at the same time, does the currently running task need a bigger stack? Might my problem be related to having two (or more) interrupts at the same time, with nested interrupts using more task stack than is actually allocated?
Thanks!
Edit:
Problem solved. It seems I'm not the only one to have run into this problem; unfortunately it doesn't appear in the documentation or the code examples, but only on some FreeRTOS web pages that need to be dug up.
An interrupt that uses the FreeRTOS kernel API should not have a priority higher than configKERNEL_INTERRUPT_PRIORITY, which has a default value of 1, while interrupts on Microchip parts default to priority 3. This means that using interrupts at their default priority level will lead to sporadic trap errors.
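For illustration, the kind of change that fixes it looks something like this (a sketch only; IPC3bits.DMA1IP and IEC0bits.DMA1IE are assumptions for this particular device, so check the datasheet for the actual IPCx/IECx names):
IPC3bits.DMA1IP = configKERNEL_INTERRUPT_PRIORITY;  // drop the DMA1 priority to the kernel's (1 by default)
IEC0bits.DMA1IE = 1;                                // then enable the DMA1 interrupt as before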
I have some code that needs to be run as the result of a particular interrupt going off.
I don't want to execute it in the context of the interrupt itself but I also don't want it to execute in thread mode.
I would like to run it at a priority that's lower than the high-level interrupt that precipitated its running, but also at a priority higher than thread level (and some other interrupts as well).
I think I need to use one of the other interrupt handlers.
Which ones are the best to use and what the best way to invoke them?
At the moment I'm planning on just using the interrupt handlers for some peripherals that I'm not using and invoking them by setting bits directly through the NVIC, but I was hoping there's a better, more official way.
Thanks,
ARM Cortex supports a very special kind of exception called PendSV. It seems that you could use this exception exactly to do your work. Virtually all preemptive RTOSes for ARM Cortex use PendSV to implement the context switch.
To make it work, you need to prioritize PendSV low (write 0xFF to the PRI_14 register in the NVIC). You should also prioritize all IRQs above the PendSV (write lower numbers in the respective priority registers in the NVIC). When you are ready to process the whole message, trigger the PendSV from the high-priority ISR:
*((uint32_t volatile *)0xE000ED04) = 0x10000000; // trigger PendSV
The ARM Cortex CPU will then finish your ISR and all other ISRs that possibly were preempted by it, and eventually it will tail-chain to the PendSV exception. This is where your code for parsing the message should be.
Please note that PendSV could be preempted by other ISRs. This is all fine, but you need to obviously remember to protect all shared resources by a critical section of code (briefly disabling and enabling interrupts). In ARM Cortex, you disable interrupts by executing __asm("cpsid i") and you enable interrupts by __asm("cpsie i"). (Most C compilers provide built-in intrinsic functions or macros for this purpose.)
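A minimal sketch of that structure, using CMSIS-style names (the device header is assumed to be included; the UART handler name and the message-parsing step are placeholders):
void setup_pendsv(void)
{
    NVIC_SetPriority(PendSV_IRQn, 0xFF);  // PendSV at the lowest priority
    // give the peripheral IRQs higher (numerically lower) priorities as usual
}
void UART_IRQHandler(void)                // high-priority ISR; name is illustrative
{
    // ...read the received byte and store it in the message buffer...
    SCB->ICSR = SCB_ICSR_PENDSVSET_Msk;   // same effect as writing 0x10000000 to 0xE000ED04
}
void PendSV_Handler(void)                 // tail-chained after all pending higher-priority ISRs
{
    // parse the complete message here, guarding shared data with cpsid i / cpsie i
}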
Are you using an RTOS? Generally this type of thing would be handled by having a high priority thread that gets signaled to do some work by the interrupt.
If you're not using an RTOS, you only have a few tasks, and the work being kicked off by the interrupt isn't too resource intensive, it might be simplest having your high priority work done in the context of the interrupt handler. If those conditions don't hold, then implementing what you're talking about would be the start of a basic multitasking OS itself. That can be an interesting project in its own right, but if you're looking to just get work done, you might want to consider a simple RTOS.
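As a hedged sketch of that pattern with FreeRTOS (the handler and task names are illustrative; the semaphore is created with xSemaphoreCreateBinary() at startup):
#include "FreeRTOS.h"
#include "semphr.h"
static SemaphoreHandle_t wake;
void Peripheral_ISR(void)                   // the high-priority interrupt handler
{
    BaseType_t woken = pdFALSE;
    // ...clear the peripheral's interrupt flag and capture its data...
    xSemaphoreGiveFromISR(wake, &woken);
    portYIELD_FROM_ISR(woken);              // switch immediately if the worker task was waiting
}
void WorkerTask(void *arg)                  // created with a priority above the other tasks
{
    (void)arg;
    for (;;) {
        xSemaphoreTake(wake, portMAX_DELAY);  // block until the ISR signals
        // ...do the deferred, less time-critical processing here...
    }
}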
Since you mentioned some specifics about the work you're doing, here's an overview of how I've handled a similar problem in the past:
For handling received data over a UART, one method that I've used when dealing with a simpler system that doesn't have full support for tasking (i.e., the tasks are round-robined in a simple while loop) is to have a shared queue for data that's received from the UART. When a UART interrupt fires, the data is read from the UART's RDR (Receive Data Register) and placed in the queue. The trick to dealing with this in such a way that the queue pointers aren't corrupted is to carefully make the queue pointers volatile, and to make certain that only the interrupt handler modifies the tail pointer and that only the 'foreground' task that's reading data off the queue modifies the head pointer (there's a sketch of this after the steps below). A high-level overview:
producer (the UART interrupt handler):
read queue.head and queue.tail into locals;
increment the local tail pointer (not the actual queue.tail pointer). Wrap it to the start of the queue buffer if you've incremented past the end of the queue's buffer.
compare local.tail and local.head - if they're equal, the queue is full, and you'll have to do whatever error handling is appropriate.
otherwise you can write the new data to where local.tail points
only now can you set queue.tail == local.tail
return from the interrupt (or handle other UART related tasks, if appropriate, like reading from a transmit queue)
consumer (the foreground 'task')
read queue.head and queue.tail into locals;
if local.head == local.tail the queue is empty; return to let the next task do some work
read the byte pointed to by local.head
increment local.head and wrap it if necessary;
set queue.head = local.head
goto step 1
Make sure that queue.head and queue.tail are volatile (or write these bits in assembly) to make sure there are no sequencing issues.
Now just make sure that your UART received-data queue is large enough that it'll hold all the bytes that could be received before the foreground task gets a chance to run. The foreground task needs to pull the data off the queue into its own buffers to build up the messages to give to the 'message processor' task.
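Here is a minimal C sketch of that queue discipline (the names and queue size are illustrative; in this variant the tail points at the next free slot rather than the last byte written, but the update ordering follows the same rules):
#include <stdint.h>
#define QSIZE 256u                          // size for the worst-case foreground latency
static volatile uint8_t  qbuf[QSIZE];
static volatile uint16_t qhead = 0;         // written only by the consumer (foreground task)
static volatile uint16_t qtail = 0;         // written only by the producer (UART ISR)
/* Producer, called from the UART receive ISR. Returns 0 if the queue is full. */
int queue_put(uint8_t byte)
{
    uint16_t tail = qtail;                            // local copies
    uint16_t next = (uint16_t)((tail + 1u) % QSIZE);  // wrapped
    if (next == qhead)                                // full: would catch up with the head
        return 0;
    qbuf[tail] = byte;                                // write the data first...
    qtail = next;                                     // ...then publish the new tail
    return 1;
}
/* Consumer, called from the foreground task. Returns 0 if the queue is empty. */
int queue_get(uint8_t *byte)
{
    uint16_t head = qhead;
    if (head == qtail)                                // empty
        return 0;
    *byte = qbuf[head];                               // read the data first...
    qhead = (uint16_t)((head + 1u) % QSIZE);          // ...then publish the new head
    return 1;
}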
What you are asking for is pretty straightforward on the Cortex-M3. You need to enable the STIR register so you can trigger the low priority ISR with software. When the high-priority ISR gets done with the critical stuff, it just triggers the low priority interrupt and exits. The NVIC will then tail-chain to the low-priority handler, if there is nothing more important going on.
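For example, with CMSIS-style names (SPARE_IRQn stands for whichever otherwise-unused interrupt number you choose to repurpose):
NVIC_SetPriority(SPARE_IRQn, 0xFF);  // make the repurposed IRQ the lowest priority
NVIC_EnableIRQ(SPARE_IRQn);
...
NVIC->STIR = SPARE_IRQn;             // pend it from software inside the high-priority ISR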
The "more official way" or rather the conventional method is to use a priority based preemptive multi-tasking scheduler and the 'deferred interrupt handler' pattern.
Check your processor documentation. Some processors will interrupt if you write the bit that you normally have to clear inside the interrupt. I am presently using a SiLabs c8051F344 and in the spec sheet section 9.3.1:
"Software can simulate an interrupt by setting any interrupt-pending flag to logic 1. If interrupts are enabled for the flag, an interrupt request will be generated and the CPU will vector to the ISR address associated with the interrupt-pending flag."