Interrupts can be enabled for specific pins on a digital I/O port, correct? How would the ISR determine which pin caused the interrupt?
I ask because the vector table has only one slot for the Port1 ISR, so the same ISR function gets called no matter which input pin on Port1 needs attention - unless I'm wrong...
As other people have suggested in comments, this can be MCU dependent, but for ARM (the core behind the MSP432) the general answer is: it doesn't know, it looks for it.
ARM has a vectored interrupt system, which means that every interrupt source has its own vector, so the CPU can easily find out which source is triggering the interrupt. So far so good.
But then it happens that a single device can trigger multiple interrupts, like the GPIO port you mentioned. In that case the CPU knows which port triggered the interrupt, so it fires that port's ISR, but it is then the ISR's responsibility to poll the device registers to figure out the exact interrupt source. There are many peripherals with multiple interrupt sources - timers and DMAs, just to name a few.
This is exactly why peripherals normally have an interrupt enable bit that lets them trigger interrupts, but also mask bits that control what exactly can trigger that interrupt internally.
Also have a look at this link for an in-action example, especially at their ISR, which does exactly what is described above.
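To make that concrete, here is a minimal sketch of such a handler for the MSP432 mentioned in the question, assuming the CMSIS-style device headers (P1->IFG, P1->IE) and an arbitrary choice of pins P1.1 and P1.4:

#include "msp.h"   /* TI MSP432 device header (assumption) */

void PORT1_IRQHandler(void)
{
    /* snapshot of the enabled pins whose interrupt flags are set */
    uint8_t pending = P1->IFG & P1->IE;

    if (pending & BIT1) {
        P1->IFG &= ~BIT1;   /* acknowledge pin P1.1 */
        /* ... handle the P1.1 source ... */
    }
    if (pending & BIT4) {
        P1->IFG &= ~BIT4;   /* acknowledge pin P1.4 */
        /* ... handle the P1.4 source ... */
    }
}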
In a typical MCU, there are hundreds, or at a stretch even thousands of potential interrupt sources. Depending on the application, only some will be important, and even fewer will be genuinely timing critical.
For a GPIO port, you typically enable interrupt generation only on the pins you are interested in. If you can arrange for only one pin of a port to be generating the interrupt, the job is done: your handler for that port can do the work, safe in the knowledge that it will only be called when the right pin is active.
When you care about the cause within a single peripheral, and don't have the luxury of individually vectored handlers, you need to fall back on the 'non-vectored' approach, and check the status registers before working out which eventual handler function needs to be called.
Interestingly, you can't strictly work out which pin caused the interrupt - all you can see is which pins are still active by the time you get around to polling the status register. If you care about the phasing between two pulses, you may not be able to achieve this discrimination within a single GPIO port unless there is dedicated hardware support. Even multiple exception vectors wouldn't help, unless you can be sure that the first exception is always taken before the second pin could become set.
I am evaluating the ATtiny806 running at 20MHz to build a cycle-accurate Intel 4004 microprocessor emulator. (I know it will be a bit too slow, but AVRs have a huge community.)
I need to synchronize to the external, two-phase non-overlapping clocks. These are not fast clocks (the original 4004 ran at 750kHz), but if I spin-wait for every clock edge, I risk wasting most of my time budget.
The TinyAVR 0-series has a very nice pin-change interrupt facility that can be configured to trigger only on rising edges.
But, an interrupt routine round-trip is 8 cycles (3 in, 5 out).
My question is:
Can I leverage the pin-change sensing mechanism while never visiting an ISR?
(Other processor families let you poll for interruptible conditions without enabling interrupts from that peripheral). Can polling be done with a tight skip-on-bit/jump-back loop, followed by a set-bit instruction?
Straightforward way
You can always just poll the level of the GPIO pin using the single-cycle skip-if-bit-set/clear instruction on the appropriate port register and bit.
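For example, a minimal sketch with avr-gcc, assuming an arbitrary pin choice of PA4 and its virtual port:

#include <avr/io.h>

static inline void wait_for_pa4_high(void)
{
    while (!(VPORTA.IN & PIN4_bm))  /* with optimization on, an SBIS-based spin */
        ;
}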
But as you mention, polling does burn cycles, so I'm not sure exactly what you want here - either you poll (which burns cycles but has low latency) or you take an interrupt (which has higher latency but allows processing to continue until the condition is true).
Note that if things get really tight and you are looking for, say, power savings by sleeping between clock signal transitions, then you can do tricks like having an ISR that never returns (saving the RETI cycles), but that requires some careful coding, probably with something like a state machine.
INTFLAG way
Alternately, if you want to use the internal pin state-machine logic and you can live without interrupts, then you can use the INTFLAGS register to check for the pin change configured in the ISC bits of the PINxCTRL register. As long as global interrupts are not enabled in SREG, you can spin-poll on the appropriate INTFLAGS bit to check/wait for the desired condition, and then write a 1 to that bit to clear the flag.
Note that if you want to make this fast, you will probably want to map the appropriate PORT to a VPORT, since the VPORT registers are in I/O memory. This lets you use SBIS to test the INTFLAGS bit in a single cycle and SBI to clear it in a single cycle (these instructions only work on I/O memory, and the normal PORT registers are not in I/O memory).
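A minimal sketch of the whole INTFLAGS approach, again assuming avr-gcc and the arbitrary choice of PA4:

#include <avr/io.h>

int main(void)
{
    PORTA.PIN4CTRL = PORT_ISC_RISING_gc;      /* flag set on rising edges only */
    /* no sei(): the flag is set by hardware but never vectors to an ISR */
    for (;;) {
        while (!(VPORTA.INTFLAGS & PIN4_bm))  /* SBIS spin via the virtual port */
            ;
        VPORTA.INTFLAGS = PIN4_bm;            /* write 1 to clear the flag */
        /* ... rising edge seen; do the time-critical work here ... */
    }
}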
Finally, one more complication: if you need to leave interrupts on while doing this, it is probably possible by hacking the interrupt priority registers. You'd set the pin change to be on level 0, make sure the interrupts you care about are level 1 or higher, and then trick the interrupt controller into thinking a level 0 interrupt is already running, so that these interrupts never actually fire. There are other restrictions to this strategy as well, so avoid it if at all possible.
Programmable logic way
If you want to get really esoteric, you could likely route the input value of a pin into one of the chip's configurable custom logic (CCL) LUTs and route the output of that module to a bit you can test with a 1-cycle bit test (maybe an unused I/O pin). To do this, you'd feed the output of the LUT back into one of its inputs and use the LUT to create a strobe on the edge you are looking for. This is very complex, and since the strobe has no acknowledgement, any edge that arrives while you are not looking (in a spin check) will be lost and you will have to wait for the next one (probably fatal in your application).
I know how an interrupt routine is executed on the 8086. What isn't clear to me is how the different types of interrupts (i.e. hardware, software, and exception) use the control flags (the Interrupt Flag and the Trap Flag) during their execution.
The other thing is: what is a non-maskable interrupt, and what is it used for?
So please help me with this. Thanks.
An interrupt handler doesn't "do" anything with the IF and TF flags. They are cleared so the interrupt handler can do its job properly and safely. You need to understand what those flags do, then it becomes obvious why they are cleared during an interrupt.
When the Interrupt Flag or IF is set, the processor will allow external hardware signals (usually from a Programmable Interrupt Controller or PIC) to trigger interrupts. When it's cleared, hardware interrupt signals are ignored.
(The NMI or Non-Maskable Interrupt is an exception, a special case intended for "emergency-type" or "real-time" events, and it will trigger even if the IF is cleared.)
The Trap Flag or TF is used by debuggers. When the flag is set, the processor will execute exactly one instruction, then trigger an INT 1. A debugger can use this to single-step machine code without having to temporarily modify it (e.g. to temporarily insert an INT 3 instruction), which is not always even possible (e.g. single-stepping code stored in ROM).
Now why are both flags cleared during interrupts?
The IF is cleared because Intel didn't want to impose the restriction that interrupt handlers be reentrant. Reentrant code is code that can be safely suspended at any time and called again from the top. If you allow interrupts while an interrupt handler is running, it is quite possible for a second interrupt to trigger in the middle of handling the first one, which would cause the handler to be re-entered. Note that software interrupt handlers (like the DOS interrupt handler 21h) typically don't have this concern, because they are not called by asynchronous hardware signals; therefore, just about the first thing they do is execute STI to re-enable interrupts.
The situation with TF is very similar but a bit trickier to understand. I don't have experience writing an x86 debugger, so I don't know the ins and outs. The short version is that the TF is cleared during interrupts to avoid chaos. What follows is a speculative exercise of mine.
First of all, it should be obvious that at least the single-step interrupt (type 1, or INT 1 if you will) MUST clear the flag, otherwise the debugger's single-step handler would itself trigger single-step interrupts, or not run at all. Second, let's imagine that the TF were not cleared for every interrupt: if a hardware interrupt triggers while the debugger is trying to use the TF, the interrupt handler itself might be the one triggering the single-step interrupt, instead of the code being debugged. Worse, now interrupts are suspended (see IF above), and not only are you looking at the wrong code (or have thoroughly confused the debugger), but your keyboard doesn't work anymore. (As I said, I'm speculating: I have no idea what happens if IF is cleared but TF is set.)
Asynchronous hardware interrupts need to be handled without "bothering" the current running program, that is, they need to execute without the program being aware of them. That includes "not bothering" a single-stepping debugger.
I should first share all that I know - and it is complete chaos. There are several different questions on the topic here, so please don't get irritated :).
1) To find an ISR, the CPU is provided with an interrupt number. On x86 machines (286/386 and above) there is an IVT holding the ISR pointers, each entry 4 bytes in size, so we multiply the interrupt number by 4 to find the ISR. So the first bunch of questions is: I am completely confused about the mechanism by which the CPU receives the interrupt. To raise an interrupt, the device first asserts an IRQ line - then what? Does the interrupt number travel "on the IRQ line" towards the CPU? I also read something about the device putting the ISR address on the data bus; what is that about? And what is this concept of devices overriding the ISR? Can somebody give me a few example devices where the CPU polls for interrupts, and where it finds the ISRs for them?
2) If two devices share an IRQ (which is very much possible), how does the CPU distinguish between them? What if both devices raise an interrupt of the same priority simultaneously? I gather that same-type and lower-priority interrupts get masked - but how does this communication happen between the CPU and the device controller? I have studied the role of the PIC and APIC for this problem, but could not understand it.
Thanks for reading.
Thank you very much for answering.
CPUs don't poll for interrupts, at least not in a software sense. With respect to software, interrupts are asynchronous events.
What happens is that hardware within the CPU recognizes the interrupt request, which is an electrical input on an interrupt line, and in response, sets aside the normal execution of events to respond to the interrupt. In most modern CPUs, what happens next is determined by a hardware handshake particular to the type of CPU, but most of them receive a number of some kind from the interrupting device. That number can be 8 bits or 32 or whatever, depending on the design of the CPU. The CPU then uses this interrupt number to index into the interrupt vector table, to find an address to begin execution of the interrupt service routine. Once that address is determined, (and the current execution context is safely saved to the stack) the CPU begins executing the ISR.
When two devices share an interrupt request line, they can cause different ISRs to run by returning a different interrupt number during that handshaking process. If you have enough vector numbers available, each interrupting device can use its own interrupt vector.
But two devices can even share an interrupt request line and an interrupt vector, provided that the shared ISR is clever enough to go back to all the possible sources of the given interrupt, and check status registers to see which device requested service.
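In C, such a 'clever enough' shared handler might look like the following sketch; the device list and the status-register layout are hypothetical:

#include <stdint.h>

struct shared_dev {
    volatile uint32_t *status;              /* device interrupt status register */
    uint32_t pending_mask;                  /* bit(s) meaning "needs service"   */
    void (*service)(struct shared_dev *d);  /* device-specific handler          */
};

extern struct shared_dev devs[];  /* every device wired to this vector */
extern int ndevs;

void shared_vector_isr(void)
{
    for (int i = 0; i < ndevs; i++) {
        struct shared_dev *d = &devs[i];
        if (*d->status & d->pending_mask)  /* did this device request service? */
            d->service(d);                 /* service it; it clears its request */
    }
}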
A little more detail
Suppose you have a system composed of a CPU, an interrupt controller, and an interrupting device. In the old days these would have been separate physical devices; now all three might even reside in the same chip, but all the signals are still there inside the ceramic case. I'm going to use a PowerPC (PPC) CPU with an integrated interrupt controller, connected to a device on a PCI bus, as an example that should serve nicely.
Let's say the device is a serial port that's transmitting some data. A typical serial port driver will load a bunch of data into the device's FIFO, and the CPU can do regular work while the device does its thing. Typically these devices can be configured to generate an interrupt request when the device is running low on data to transmit, so that the device driver can come back and feed more into it.
The hardware logic in the device will expect a PCI bus interrupt acknowledge, at which point, a couple of things can happen. Some devices use 'autovectoring', which means that they rely on the interrupt controller to see to it that the correct service routine gets selected. Others will have a register, which the device driver will pre-program, that contains an interrupt vector that the device will place on the data bus in response to the interrupt acknowledge, for the interrupt controller to pick up.
A PCI bus has only four interrupt request lines, so our serial device will have to assert one of those. (It doesn't matter which at the moment; it's usually somewhat slot-dependent.) Next in line is the interrupt controller (e.g. PIC/APIC), which will decide whether to acknowledge the interrupt based on mask bits that have been set in its own registers. Assuming it acknowledges the interrupt, it either obtains the vector from the interrupting device (via the data bus lines) or, if so programmed, may use a 'canned' value provided by the APIC's own device driver. So far, the CPU has been blissfully unaware of all these goings-on, but that's about to change.
Now it's time for the interrupt controller to get the attention of the CPU core. The CPU will have its own interrupt mask bit(s) that may cause it to just ignore the request from the PIC. Assuming that the CPU is ready to take interrupts, it's now time for the real action to start. The current instruction usually has to be retired before the ISR can begin, so with pipelined processors this is a little complicated, but suffice it to say that at some point in the instruction stream, the processor context is saved off to the stack and the hardware-determined ISR takes over.
Some CPU cores have multiple request lines, and can start the process of narrowing down which ISR runs via hardware logic that jumps the CPU instruction pointer to one of a handful of top-level handlers. The old 68K, and possibly others, did it that way. The PowerPC (and, I believe, the x86) has a single interrupt request input. The x86 itself behaves a bit like a PIC, and can obtain a vector from the external PIC(s), but the PowerPC just jumps to a fixed address, 0x00000500.
In the PPC, the code at 0x0500 is probably just going to immediately jump out to somewhere in memory where there's room enough for some serious decision making code, but it's still the interrupt service routine. That routine will first go to the PIC and obtain the vector, and also ask the PIC to stop asserting the interrupt request into the CPU core. Once the vector is known, the top level ISR can case out to a more specific handler that will service all the devices known to be using that vector. The vector specific handler then walks down the list of devices assigned to that vector, checking interrupt status bits in those devices, to see which ones need service.
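As a sketch, the decision-making code reached from 0x0500 amounts to something like this, where pic_acknowledge() and pic_end_of_interrupt() are hypothetical stand-ins for the real interrupt-controller accesses:

#include <stdint.h>

typedef void (*isr_fn)(void);
extern isr_fn vector_handlers[256];           /* filled in by drivers at init   */
extern uint8_t pic_acknowledge(void);         /* read vector, drop the request  */
extern void pic_end_of_interrupt(uint8_t v);  /* tell the PIC servicing is done */

void external_interrupt_from_0x500(void)
{
    uint8_t v = pic_acknowledge();  /* go to the PIC and obtain the vector      */
    vector_handlers[v]();           /* case out to the vector-specific handler  */
    pic_end_of_interrupt(v);
}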
When a device, like the hypothetical serial port, is found wanting service, the ISR for that device takes appropriate actions, for example, loading the next FIFO's worth of data out of an operating system buffer into the port's transmit FIFO. Some devices will automatically drop their interrupt request in response to being accessed, for example, writing a byte into the transmit FIFO might cause the serial port device to de-assert the request line. Other devices will require a special control register bit to be toggled, set, cleared, what-have-you, in order to drop the request. There are zillions of different I/O devices and no two of them ever seem to do it the same way, so it's hard to generalize, but that's usually the way of it.
Now, obviously there's more to say - what about interrupt priorities? What happens in a multi-core processor? What about nested interrupt controllers? But I've burned enough space on the server. Hope some of this helps.
I came across this question after about 3 years... hope I can help ;)
The Intel 8259A, or simply "the PIC", has 8 pins, IRQ0-IRQ7, and every pin connects to a single device.
Let's suppose you pressed a key on the keyboard: the voltage on the IRQ1 pin, which is connected to the keyboard, goes high. After the CPU gets interrupted, acknowledges the interrupt, and so on, the PIC simply adds 8 to the number of the IRQ line, so IRQ1 means 1 + 8 = 9.
So the CPU loads its CS and IP from the 9th entry in the vector table - and because the IVT is an array of 4-byte entries, it just multiplies the entry number by 4 ;)
CPU.CS = IVT[9].CS
CPU.IP = IVT[9].IP
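In C terms, that lookup amounts to this sketch (assuming flat access to the bottom of physical memory, where the real-mode IVT lives):

#include <stdint.h>

/* each IVT entry is 4 bytes: a 16-bit offset (IP), then a 16-bit segment (CS) */
static void ivt_lookup(uint16_t vector, uint16_t *cs, uint16_t *ip)
{
    *ip = *(volatile uint16_t *)(uintptr_t)(vector * 4u + 0u);
    *cs = *(volatile uint16_t *)(uintptr_t)(vector * 4u + 2u);
}
/* e.g. ivt_lookup(9, &cs, &ip) for IRQ1 with the PIC's base offset of 8 */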
The ISR then deals with the device through the I/O ports ;)
Sorry for my bad English - I'm an Arabic speaker :)
I have some code that needs to run as the result of a particular interrupt going off.
I don't want to execute it in the context of the interrupt itself, but I also don't want it to execute in thread mode.
I would like to run it at a priority that's lower than the high-level interrupt that precipitated its running, but also at a priority higher than thread level (and some other interrupts as well).
I think I need to use one of the other interrupt handlers.
Which ones are the best to use and what the best way to invoke them?
At the moment I'm planning on just using the interrupt handlers for some peripherals that I'm not using and invoking them by setting bits directly through the NVIC but I was hoping there's a better, more official way.
Thanks,
ARM Cortex supports a very special kind of exception called PendSV. It seems that you could use this exception exactly to do your work. Virtually all preemptive RTOSes for ARM Cortex use PendSV to implement the context switch.
To make it work, you need to prioritize PendSV low (write 0xFF to the PRI_14 register in the NVIC). You should also prioritize all IRQs above the PendSV (write lower numbers in the respective priority registers in the NVIC). When you are ready to process the whole message, trigger the PendSV from the high-priority ISR:
*((uint32_t volatile *)0xE000ED04) = 0x10000000; // trigger PendSV
The ARM Cortex CPU will then finish your ISR and all other ISRs that possibly were preempted by it, and eventually it will tail-chain to the PendSV exception. This is where your code for parsing the message should be.
Please note that PendSV could be preempted by other ISRs. This is all fine, but you obviously need to remember to protect all shared resources with a critical section of code (briefly disabling and enabling interrupts). In ARM Cortex, you disable interrupts by executing __asm("cpsid i") and enable them with __asm("cpsie i"). (Most C compilers provide built-in intrinsic functions or macros for this purpose.)
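Putting it together, here is a sketch using the CMSIS names (SCB->ICSR is the register behind the magic address above); the UART handler and the message functions are hypothetical:

#include "core_cm3.h"  /* CMSIS core header; normally pulled in via the device header */

extern int message_complete(void);
extern void parse_message(void);

void UART0_IRQHandler(void)          /* hypothetical high-priority ISR */
{
    /* ... grab the incoming bytes quickly ... */
    if (message_complete())
        SCB->ICSR = SCB_ICSR_PENDSVSET_Msk;  /* same effect as the write above */
}

void PendSV_Handler(void)            /* tail-chained after all pending ISRs */
{
    parse_message();  /* the long processing: below every IRQ, above thread mode */
}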
Are you using an RTOS? Generally this type of thing would be handled by having a high priority thread that gets signaled to do some work by the interrupt.
If you're not using an RTOS, you only have a few tasks, and the work being kicked off by the interrupt isn't too resource intensive, it might be simplest having your high priority work done in the context of the interrupt handler. If those conditions don't hold, then implementing what you're talking about would be the start of a basic multitasking OS itself. That can be an interesting project in its own right, but if you're looking to just get work done, you might want to consider a simple RTOS.
Since you mentioned some specifics about the work you're doing, here's an overview of how I've handled a similar problem in the past:
For handling received data over a UART, one method that I've used on a simpler system without full tasking support (i.e., the tasks are round-robined in a simple while loop) is to have a shared queue for data received from the UART. When a UART interrupt fires, the data is read from the UART's RDR (Receive Data Register) and placed in the queue. The trick to doing this in such a way that the queue pointers aren't corrupted is to make the queue pointers volatile, and to make certain that only the interrupt handler modifies the tail pointer and only the 'foreground' task that reads data off the queue modifies the head pointer. A high-level overview (a code sketch follows the steps below):
producer (the UART interrupt handler):
read queue.head and queue.tail into locals;
increment the local tail pointer (not the actual queue.tail pointer). Wrap it to the start of the queue buffer if you've incremented past the end of the queue's buffer.
compare local.tail and local.head - if they're equal, the queue is full, and you'll have to do whatever error handling is appropriate.
otherwise you can write the new data to where local.tail points
only now can you set queue.tail == local.tail
return from the interrupt (or handle other UART related tasks, if appropriate, like reading from a transmit queue)
consumer (the foreground 'task'):
read queue.head and queue.tail into locals;
if local.head == local.tail the queue is empty; return to let the next task do some work
read the byte pointed to by local.head
increment local.head and wrap it if necessary;
set queue.head = local.head
goto step 1
Make sure that queue.head and queue.tail are volatile (or write these bits in assembly) so that there are no sequencing issues.
Now just make sure that your UART receive queue is large enough to hold all the bytes that could be received before the foreground task gets a chance to run. The foreground task needs to pull the data off the queue into its own buffers to build up the messages to give to the 'message processor' task.
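Here is the sketch promised above - the queue in C, with a hypothetical UART_RDR standing in for the real receive register:

#include <stdint.h>

#define QSIZE 128u  /* size for the worst-case burst between foreground runs */

static volatile uint8_t qbuf[QSIZE];
static volatile uint16_t qhead;    /* modified only by the consumer */
static volatile uint16_t qtail;    /* modified only by the producer */

extern volatile uint8_t UART_RDR;  /* hypothetical receive data register */

void uart_rx_isr(void)  /* producer */
{
    uint8_t byte = UART_RDR;
    uint16_t next = (uint16_t)((qtail + 1u) % QSIZE);
    if (next == qhead)
        return;          /* queue full: do whatever error handling fits */
    qbuf[qtail] = byte;  /* write the data first...                     */
    qtail = next;        /* ...then publish by moving the tail          */
}

int uart_rx_pop(void)  /* consumer; returns -1 when the queue is empty */
{
    if (qhead == qtail)
        return -1;
    uint8_t byte = qbuf[qhead];
    qhead = (uint16_t)((qhead + 1u) % QSIZE);
    return byte;
}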
What you are asking for is pretty straightforward on the Cortex-M3. You need to enable the STIR register so you can trigger the low-priority ISR from software. When the high-priority ISR is done with the critical stuff, it just triggers the low-priority interrupt and exits. The NVIC will then tail-chain to the low-priority handler, if there is nothing more important going on.
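A sketch of that with CMSIS, where LOWPRIO_IRQn stands in for whichever unused peripheral interrupt you borrow:

#include "core_cm3.h"  /* CMSIS core header; IRQn_Type comes from the device header */

#define LOWPRIO_IRQn ((IRQn_Type)42)  /* hypothetical unused device IRQ number */

void deferred_work_setup(void)
{
    NVIC_SetPriority(LOWPRIO_IRQn, 0xFF);  /* as low as it goes */
    NVIC_EnableIRQ(LOWPRIO_IRQn);
}

void HighPrio_IRQHandler(void)  /* hypothetical time-critical ISR */
{
    /* ... the critical stuff ... */
    NVIC->STIR = LOWPRIO_IRQn;  /* pend the borrowed low-priority handler;
                                   the NVIC tail-chains to it on exit */
}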
The "more official way" or rather the conventional method is to use a priority based preemptive multi-tasking scheduler and the 'deferred interrupt handler' pattern.
Check your processor documentation. Some processors will interrupt if you set the bit that you normally have to clear inside the interrupt handler. I am presently using a SiLabs C8051F344, and in the spec sheet, section 9.3.1, it says:
"Software can simulate an interrupt by setting any interrupt-pending flag to logic 1. If interrupts are enabled for the flag, an interrupt request will be generated and the CPU will vector to the ISR address associated with the interrupt-pending flag."
Maybe the question should be, are external interrupts even vectored on the PowerPC at all? I've been looking at http://www.ibm.com/developerworks/eserver/library/es-archguide-v2.html, 'book 3', trying to figure out how the processor locates the appropriate interrupt service routine in response to an external interrupt. It seems to suggest that when the PPC recognizes an external interrupt, it just jumps execution to 0x0000_0500.
I may be laboring under a misconception about how the PPC works. With x86, the processor responds to interrupt requests with an interrupt acknowledge cycle, and obtains a 'vector' directly from the device. The vector (really an index) then allows the cpu to pick an appropriate handler routine from its interrupt vector table. Most importantly, this acknowledge/vector fetch is a hardware, bus-protocol thing, nobody has to write any code to make it happen. The only code that needs writing (read, software) is the ISRs themselves.
Does the PPC do something similar? Would there be a table of vectors at 0x500? Or does it do something radically different, and offload the job of getting the device's vector to an external interrupt controller? I suppose it could just jump to code at 0x500, where actual software would then interrogate the (hypothetical?) interrupt controller to get the vector... and then use it in a jump table or what-have-you, but I can't find documentation to verify whether this is the case, one way or the other.
The PowerPC CPU has no concept of an interrupt vector table, and only provides a single interrupt pin and interrupt vector.