What is meant by disabling interrupts? - interrupt-handling

When entering an interrupt handler, we first "disable interrupts" on that CPU (using something like the cli instruction on x86). Suppose that while interrupts are disabled, the user presses the letter 'a' on the keyboard, which would normally cause an interrupt. Since interrupts are disabled, does that mean that:
the interrupt handler for 'a' will never be invoked, since interrupts are disabled in the critical section, or
the interrupt will be handled by the OS, but delayed until interrupts are enabled again?
Specifically, will the user need to press 'a' again if the first press happened at a time when interrupts were disabled?

Often, one interrupt is "queued" by hardware.
[An interrupt is often just a logic gate that can stick on; once it's on, it stays on for a while.]
If the user hit 'a' once only during the interval when interrupts were disabled, it would register as an interrupt when they were re-enabled.
If the user somehow managed to hit 'a' twice during the interval when interrupts were disabled, only one would register as an interrupt when they were re-enabled. Whether it was the first or the second depends on the exact logic gate configuration.

The answer is that it depends on whether you were already handling a keyboard interrupt.
Most interrupt service routines (ISRs) have code at their end which informs the hardware that it has been "serviced." In the case of the keyboard controller, commands are written to it acknowledging the received bytes. It is at the moment of acknowledgement that the keyboard controller hardware stops electrically signalling an interrupt condition.
If you are handling a non-keyboard interrupt, let's say the fire alarm interrupt, then the keyboard hardware electrically asserts its interrupt line as the key is pressed. The electrical signal is ignored until the CPU has interrupts enabled again. At the end of servicing the fire alarm interrupt, the fire alarm ISR acknowledges whatever data it has and re-enables interrupts on the CPU. Immediately, the CPU enters an interrupt, because the keyboard controller is still electrically signalling an interrupt condition.
If you are handling a keyboard interrupt, and the user quickly types a second keystroke during the execution of your keyboard ISR, then there is a chance of missing the data from the second keystroke, or of receiving it late if at all. In particular, if the ISR acknowledges and thereby resets the keyboard controller before it has actually read all the available bytes out of the controller, that is a problem.
Often, an ISR will first handle the interrupt that triggered its activation, then, after acknowledging the interrupt, poll the device to see whether it has received more data since the first interrupt. If so, it generates a software interrupt to re-enter the ISR and service the device.
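As an illustration of that drain-then-acknowledge pattern, here is a minimal sketch for the classic PC 8042 keyboard controller. The inb/outb port-I/O helpers and the scancode queue are assumptions, not something from the answer above; this is a sketch of the idea, not production code.

```c
#include <stdint.h>

/* Hypothetical port-I/O helpers (platform-specific). */
extern uint8_t inb(uint16_t port);
extern void    outb(uint16_t port, uint8_t value);
extern void    enqueue_scancode(uint8_t sc);  /* hypothetical deferred-work queue */

#define KBD_STATUS_PORT 0x64  /* 8042 status register          */
#define KBD_DATA_PORT   0x60  /* 8042 output buffer            */
#define PIC_CMD_PORT    0x20  /* master 8259A PIC command port */
#define PIC_EOI         0x20  /* end-of-interrupt command      */

void keyboard_isr(void)
{
    /* Drain every byte the controller has buffered, not just the one
     * that raised this interrupt, so a fast second keystroke is not
     * lost. Bit 0 of the status register means "output buffer full". */
    while (inb(KBD_STATUS_PORT) & 0x01) {
        enqueue_scancode(inb(KBD_DATA_PORT));
    }

    /* Acknowledge at the PIC; only after this can the keyboard line
     * raise a fresh interrupt. */
    outb(PIC_CMD_PORT, PIC_EOI);
}
```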

The simple answer is that an interrupt automatically disables further interrupts. Interrupts should be, and are, disabled only for the shortest possible time. The first instruction in the original AT BIOS keyboard ISR was STI, to re-enable interrupts.
The happy answer is that the PIC prioritizes the hardware interrupts, so even with interrupts enabled only the timer interrupt, IRQ0, can interrupt the keyboard ISR. Of course an NMI can occur either way, but happily this never occurs on a current PC.

It would be physically impossible for a user to press 'a' twice during the normal processing of an interrupt. It would be terribly unlikely even if he pressed two keys at a time, and the hardware should hold at least one key until the CPU is ready to get it.
On PCs (this is reaching WAY back to my PC/XT days) the keyboard subsystem may hold in the area of 13 key presses for the CPU.

Disabling interrupts comes up for different reasons, for example:
1. Hardware faults
2. Exceptions (for example: divide by zero)
and so on.
When a hardware fault occurs, the OS must keep operating. When an exception occurs, the OS must manage the system and switch to another process in order to handle the interrupt. The same goes for I/O devices: if there were no interrupts, the computer would not be efficient!


Does the operation of the CAN peripheral in STM32 wait for the execution of the ISR routine code?

I'm developing a stack layer on the STM32L433 microcontroller that uses the CAN protocol; a fundamental part of the stack is the authentication of the devices.
During authentication it can happen that two (or more) devices start to send a CAN message (the authentication message) with the same identifier and different payloads (a true random value). In this case every device should be able to detect whether this message was sent first by another device.
I have studied this case and three situations can occur:
the devices start to send the message at the same time; in this case only one device is able to send the message, because all the other devices detect an error and abort their transmission.
only one device is able to send the message and occupy the bus before all the other devices load the transmission MAILBOX of their CAN peripheral, or before the CAN peripherals of the other devices set the message about to be sent to the SCHEDULED state.
In this case, the devices that have not been able to send the message will receive the reception interrupt; within the reception ISR I am able to abort the transmission.
only one device is able to send the message and occupy the bus, while the CAN peripherals of all the other devices have their message in the SCHEDULED state and are waiting for the bus to become idle.
In this case the devices that have not been able to send the message will receive the reception interrupt. In this situation too I thought of aborting the transmission within the reception ISR (as in situation 2), but I'm not sure this is guaranteed for all messages, because if the CAN peripheral moves the message about to be sent into the TRANSMIT state before the code inside the ISR executes, the abort operation will have no effect.
My question (related to situation 3) is: is the message in the transmission MAILBOX guaranteed to stay in the SCHEDULED state until the code in the receiving ISR has executed, or is this not guaranteed?
To answer your third case first: no, it is not guaranteed that your message is not already on the bus while you are receiving, because interrupts have some latency too, and within that time the mailbox may go ahead with the transmission.
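For what it is worth, a minimal sketch of aborting a still-scheduled transmission from the reception callback with the STM32 HAL could look like the following. The message identifier and the assumption that the authentication frame sits in mailbox 0 are hypothetical; and as noted above, this is inherently racy: once the mailbox has left the SCHEDULED state, the abort has no effect.

```c
#include "stm32l4xx_hal.h"  /* assumed HAL header for the STM32L433 */

#define AUTH_MSG_ID 0x123u  /* hypothetical authentication identifier */

void HAL_CAN_RxFifo0MsgPendingCallback(CAN_HandleTypeDef *hcan)
{
    CAN_RxHeaderTypeDef header;
    uint8_t data[8];

    if (HAL_CAN_GetRxMessage(hcan, CAN_RX_FIFO0, &header, data) != HAL_OK)
        return;

    /* Another node already sent the authentication message with our
     * identifier: try to withdraw our own scheduled copy. This only
     * succeeds while the mailbox is still SCHEDULED; if transmission
     * has already started, the request silently does nothing. */
    if (header.StdId == AUTH_MSG_ID &&
        HAL_CAN_IsTxMessagePending(hcan, CAN_TX_MAILBOX0)) {
        HAL_CAN_AbortTxRequest(hcan, CAN_TX_MAILBOX0);
    }
}
```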
Your "authentication" also sounds a bit troublesome, since nobody from outside could also actually decide which ECU was actually the one that won the arbitration and actually sent that specific message.
We have ECUs in vehicles which decide at runtime, according to certain methods, where they are mounted by pin and some CAN reception, but only in listen mode. TX is actually disabled in the stack. After that, detection has completed, we switch configurations and restart the communications stack and further initialize the software going up.
But these "setups" are usually defined beforehand, e.g. due to master/slave (vehicle/private bus communication), or maybe some connector pins connected to GND / OPEN / UBAT, or maybe some bus message which tells on which bus it is on.
That seems to be more reliable than your method.

CAN error counters and interrupts

I'm using the bxCAN peripheral of an STM32F3 MCU in an environment where
1.) it is essential that the node is detached from the network once the REC/TEC has reached the warning level (waiting for the bus-off condition is not an option)
2.) the baud rate of the host network is unknown
3.) the connection might be sporadic as the node is connected by the user
Due to 1.), the STM32 HAL CAN driver is used in IT mode, and whenever the error callback is called with the EWG flag set, it shuts down the transceiver and deinitializes the bxCAN. In case the REC is over the limit, it is easily recovered by configuring the bxCAN in silent mode, assuming there is traffic on the CAN bus. However, if the TEC is over the limit, the bxCAN won't be able to transmit another frame, as the error interrupt will be instantly triggered once enabled -> there we are in a deadlock.
I tried decrementing the TEC by transmitting frames in silent loopback mode, but successful transmissions do not seem to affect the TEC in this mode.
I suppose the question is not specific to this peripheral but valid for other CAN implementations.
Any suggestions are welcome.
I have implemented a work-around that seems to work fine, with the following requirements:
1.) whenever the CAN error ISR is triggered, it disconnects the node from the bus (the transceiver is powered off)
2.) not all interrupt sources are enabled, only the ones that are of higher severity than the last error state (e.g. in PASSIVE state the WARNING and PASSIVE interrupts are disabled and the BUSOFF interrupt is enabled)
3.) the last error state and thus the interrupt sources are updated whenever a.) an error ISR is triggered or b.) polling the CAN peripheral with a high frequency shows change in the error state
4.) whenever attempting a connection to the bus the REC must heal in listen-only mode first. For this, traffic is required on the bus.
With these requirements implemented the node is able to fail silently but recover to normal operation.
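A minimal sketch of requirements 1.) and 2.) with the STM32 HAL might look like the following; the transceiver control function is an assumption, and only the interrupt-source bookkeeping of the work-around is shown.

```c
#include "stm32f3xx_hal.h"  /* assumed HAL header for the STM32F3 */

extern void transceiver_power(int on);  /* assumed board-specific control */

void HAL_CAN_ErrorCallback(CAN_HandleTypeDef *hcan)
{
    uint32_t err = HAL_CAN_GetError(hcan);

    /* Requirement 1: leave the bus as soon as an error state is hit. */
    transceiver_power(0);

    /* Requirement 2: keep only interrupt sources of higher severity
     * than the error state we are already in. */
    if (err & HAL_CAN_ERROR_EWG) {
        HAL_CAN_DeactivateNotification(hcan, CAN_IT_ERROR_WARNING);
        HAL_CAN_ActivateNotification(hcan, CAN_IT_ERROR_PASSIVE | CAN_IT_BUSOFF);
    }
    if (err & HAL_CAN_ERROR_EPV) {
        HAL_CAN_DeactivateNotification(hcan,
                                       CAN_IT_ERROR_WARNING | CAN_IT_ERROR_PASSIVE);
        HAL_CAN_ActivateNotification(hcan, CAN_IT_BUSOFF);
    }
}
```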

How to program factory reset switch in a small embedded device

I am building a small embedded device. It has a reset switch, and when this is pressed for more than 5 seconds, the whole device should reset, clear all its data, and go to the factory reset state.
I know what to clear when this event happens. What I want to know is how to raise this event: when the switch is pressed, how do I design the system to know that 5 seconds have elapsed and it has to reset now? I need a high-level design with any timers and interrupts. Can someone please help me?
It depends on the device, but here are a few rough ideas:
The device manual may state the number of interrupts per second produced by holding down the switch (switch down). If you have this value, you can easily count out the 5 seconds.
If not, you will need to use a timer too: start the timer when you get the first "switch down" interrupt and count up to 5 seconds.
Note that you should also monitor for "switch up", that is, the release of the switch. There will hopefully be an interrupt for that too (possibly with a different status value).
You should break out of the above loop (i.e. not perform the reset) when you see this interrupt.
Hope this helps.
Interrupt-driven means low level, close to the hardware. An interrupt-driven solution, with for example a bare metal microcontroller, would look like this:
Like when reading any other switch, sample the switch n number of times and filter out the signal bounce (and potential EMI).
Start a hardware timer. Usually the on-chip timers are far too fast to count out a whole 5 seconds, even when set to run as slowly as possible, so you need to configure the timer with a prescaler value picked so that one whole timer cycle equals a known time unit (for example, 10 milliseconds).
Upon timer overflow, trigger an interrupt. Inside the interrupt, check that the switch is still pressed, then increase a counter. When the counter reaches a given value, execute the reset code. For example, if you get a timer overflow every 10 milliseconds, your counter should count up to 5000ms/10ms = 500.
If the switch is released before the time is elapsed, reset the counter and stop the timer interrupt.
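A bare-metal sketch of the scheme just described, with every helper name hypothetical since the answer targets no specific chip:

```c
#include <stdint.h>

/* All names below are hypothetical; adapt them to your MCU. */
extern int  switch_is_pressed(void);         /* debounced read of the switch   */
extern void timer_start_10ms_overflow(void); /* overflow interrupt every 10 ms */
extern void timer_stop(void);
extern void do_factory_reset(void);

#define HOLD_TICKS (5000u / 10u)  /* 5000 ms at 10 ms per overflow = 500 */

static volatile uint16_t hold_ticks;

/* Called from the timer-overflow interrupt, once every 10 ms. */
void timer_overflow_isr(void)
{
    if (switch_is_pressed()) {
        if (++hold_ticks >= HOLD_TICKS)
            do_factory_reset();   /* held for a full 5 seconds */
    } else {
        hold_ticks = 0;           /* released early: start over */
        timer_stop();
    }
}

/* Called when the (debounced) switch-down edge is detected. */
void switch_pressed_event(void)
{
    hold_ticks = 0;
    timer_start_10ms_overflow();
}
```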
How to reset the system is highly system-specific. You should put the system in a safe state, then overwrite your current settings by overwriting the NVM where the settings are stored with the default factory settings stored elsewhere in NVM. Once that is done, force the processor to reset itself and reboot with the new settings in place.
This means that you must have a system with electronically-erasable NVM. Depending on the size of the data, this NVM could either be data flash on-chip in a microcontroller, or some external memory circuit.
Detecting a 5 s or 30 s timeout can be done using a GPIO pin with an interrupt.
If using an RTOS:
- The interrupt wakes a thread from sleep and disables itself.
- All the thread does is count the time the switch is pressed for (you scan the switch at regular intervals).
- If the switch is pressed for the desired time, set a global variable/setting in EEPROM which will trigger the factory reset function.
- Else, enable the interrupt again and put the thread back to sleep.
- Also, use a de-bounce circuit to avoid issues.
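Such a thread might look like the following FreeRTOS-style sketch; the GPIO helpers, the 100 ms scan interval, and the EEPROM flag writer are all assumptions.

```c
#include "FreeRTOS.h"
#include "task.h"

extern int  switch_is_pressed(void);       /* assumed debounced GPIO read */
extern void gpio_irq_enable(int on);       /* assumed interrupt control   */
extern void set_factory_reset_flag(void);  /* assumed EEPROM flag write   */

static TaskHandle_t reset_task;  /* set by xTaskCreate() elsewhere */

/* GPIO ISR: disable further edge interrupts and wake the thread. */
void switch_gpio_isr(void)
{
    BaseType_t woken = pdFALSE;
    gpio_irq_enable(0);
    vTaskNotifyGiveFromISR(reset_task, &woken);
    portYIELD_FROM_ISR(woken);
}

void reset_switch_thread(void *arg)
{
    (void)arg;
    for (;;) {
        ulTaskNotifyTake(pdTRUE, portMAX_DELAY);  /* sleep until the ISR fires */

        /* Scan the switch every 100 ms while it stays pressed. */
        int held_ms = 0;
        while (switch_is_pressed() && held_ms < 5000) {
            vTaskDelay(pdMS_TO_TICKS(100));
            held_ms += 100;
        }

        if (held_ms >= 5000)
            set_factory_reset_flag();  /* factory reset runs on next boot */
        else
            gpio_irq_enable(1);        /* released early: re-arm and sleep */
    }
}
```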
Also, define what you mean by factory reset.
There are two kinds in general; in both cases EEPROM helps.
Revert all configurations (low cost, easier)
In this case, you partition the EEPROM into a working configuration and a factory configuration. You copy the factory configuration over the working partition and perform a software reset.
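A sketch of this first option, with the EEPROM layout and access helpers hypothetical; NVIC_SystemReset() is the standard CMSIS software-reset call.

```c
#include <stdint.h>

/* Hypothetical EEPROM layout and helpers. */
#define FACTORY_CFG_ADDR 0x0000u
#define WORKING_CFG_ADDR 0x0400u
#define CFG_SIZE         0x0400u

extern uint8_t eeprom_read_byte(uint16_t addr);
extern void    eeprom_write_byte(uint16_t addr, uint8_t value);
extern void    NVIC_SystemReset(void);  /* provided by CMSIS */

void factory_reset(void)
{
    /* Copy the factory partition over the working partition... */
    for (uint16_t i = 0; i < CFG_SIZE; i++)
        eeprom_write_byte(WORKING_CFG_ADDR + i,
                          eeprom_read_byte(FACTORY_CFG_ADDR + i));

    /* ...then reboot so the firmware starts with the defaults. */
    NVIC_SystemReset();
}
```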
Restore the complete firmware (costly, needs more testing)
This is more tricky, but can be done with the help of bootloaders that allow flashing from EEPROM or an SD card.
In this case the binary firmware blob is also stored, together with the factory configuration, in the safe partition, and is used to reflash the controller's flash and configurations.
It all depends on the size/memory and cost; it can be designed in many more ways, I am just laying out the simplest examples.
I created some products with a combined switch too. I did so by using a capacitor to initiate a reset pulse on the reset pin of the device (current and levels limited by some resistors and/or diodes). At start-up I monitor the state of the input pin connected to the switch and simply wait until this pin goes high, with a time-out of 5 seconds. In case of a time-out I reset my configuration to the defaults.

Understanding the concept of running a program in an interrupt handler

Early Cisco routers running the IOS operating system enhanced their packet processing speed by doing packet switching within the interrupt handler instead of in a "regular" operating system process. Doing packet processing in the interrupt handler ensured that context switching within the operating system did not affect the packet processing. As I understand it, an interrupt handler is a piece of software in the operating system meant for handling interrupts. How should I understand the concept of packet switching being done within the interrupt handler?
Use of interrupts is preferred when an event requires immediate attention by the operating system, or by a program which installed an interrupt service routine. This is as opposed to polling, where software periodically checks whether a condition exists which indicates that the event has occurred.
Interrupt service routines aren't commonly meant to do a lot of work themselves. They are rather written to reach their end as quickly as possible, so that normal execution can resume ("normal execution" meaning the location and state at which the previous processing was interrupted). The reason is that it must be avoided that the same interrupt occurs again while its handler is still executing; otherwise it may be ignored, lead to incorrect results, or, even worse, to software failure (crashes). So what an interrupt service routine usually does is: read any data associated with the event and store it in a queue, signal that the queue has been mutated, set things up so that another interrupt may occur, then resume by restoring the pre-interrupt context. The queued data associated with that interrupt can then be processed asynchronously, without the risk that interrupts pile up.
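That enqueue-and-defer pattern, as a minimal C sketch; the device-access helpers and the single-producer/single-consumer ring buffer are illustrative assumptions.

```c
#include <stdint.h>

extern uint8_t read_device_data(void);    /* assumed: fetch the event's data   */
extern void    acknowledge_device(void);  /* assumed: allow the next interrupt */
extern void    handle_event(uint8_t data);

#define QUEUE_SIZE 64u  /* power of two so the indices wrap cheaply */

static volatile uint8_t  queue[QUEUE_SIZE];
static volatile uint32_t head, tail;  /* head: ISR writes, tail: consumer reads */

/* Interrupt service routine: do the minimum and return quickly. */
void device_isr(void)
{
    queue[head % QUEUE_SIZE] = read_device_data();  /* store for later */
    head++;                     /* signals the consumer: queue mutated */
    acknowledge_device();       /* another interrupt may now occur     */
}

/* Called from normal (non-interrupt) context, asynchronously. */
void process_pending_events(void)
{
    while (tail != head) {
        uint8_t data = queue[tail % QUEUE_SIZE];
        tail++;
        handle_event(data);     /* the actual, possibly slow, work */
    }
}
```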
The following is the procedure for executing interrupt-level switching:
Look up the memory structure to determine the next-hop address and outgoing interface.
Do an Open Systems Interconnection (OSI) Layer 2 rewrite, also called MAC rewrite, which means changing the encapsulation of the packet to comply with the outgoing interface.
Put the packet into the tx ring or output queue of the outgoing interface.
Update the appropriate memory structures (reset timers in caches, update counters, and so forth).
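Conceptually, the four steps map to something like the sketch below. Every type and function in it is hypothetical (real IOS internals are not public); it only illustrates the shape of the fast path.

```c
#include <stdint.h>
#include <stddef.h>

struct interface;                       /* opaque here */
struct packet { uint32_t dst_addr; };   /* simplified  */
struct route_entry {
    struct interface *out_if;
    uint32_t          next_hop;
    uint32_t          last_used;
};

extern struct route_entry *route_cache_lookup(uint32_t dst);
extern void rewrite_l2_header(struct packet *p, struct interface *i, uint32_t nh);
extern void tx_ring_enqueue(struct interface *i, struct packet *p);
extern void enqueue_for_process_switching(struct packet *p);
extern uint32_t now(void);

void rx_interrupt_fast_path(struct packet *pkt)
{
    /* 1. Look up the next-hop address and outgoing interface. */
    struct route_entry *r = route_cache_lookup(pkt->dst_addr);
    if (r == NULL) {
        /* A step cannot be performed: fall back to the next,
         * slower switching layer and dismiss the interrupt. */
        enqueue_for_process_switching(pkt);
        return;
    }

    /* 2. OSI Layer 2 (MAC) rewrite for the outgoing encapsulation. */
    rewrite_l2_header(pkt, r->out_if, r->next_hop);

    /* 3. Put the packet on the outgoing interface's tx ring. */
    tx_ring_enqueue(r->out_if, pkt);

    /* 4. Update memory structures: cache timers, counters, etc. */
    r->last_used = now();
}
```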
The interrupt which is raised when a packet is received from the network interface is called the "RX interrupt". This interrupt is dismissed only when all the above steps are executed. If any of the first three steps above cannot be performed, the packet is sent to the next switching layer. If the next switching layer is process switching, the packet is put into the input queue of the incoming interface for process switching and the interrupt is dismissed. Since interrupts cannot be interrupted by interrupts of the same level and all interfaces raise interrupts of the same level, no other packet can be handled until the current RX interrupt is dismissed.
Different interrupt switching paths can be organized in a hierarchy, from the one providing the fastest lookup to the one providing the slowest lookup. The last resort used for handling packets is always process switching. Not all interfaces and packet types are supported in every interrupt switching path. Generally, only those that require examination and changes limited to the packet header can be interrupt-switched. If the packet payload needs to be examined before forwarding, interrupt switching is not possible. More specific constraints may exist for some interrupt switching paths. Also, if the Layer 2 connection over the outgoing interface must be reliable (that is, it includes support for retransmission), the packet cannot be handled at interrupt level.
The following are examples of packets that cannot be interrupt-switched:
Traffic directed to the router (routing protocol traffic, Simple Network Management Protocol (SNMP), Telnet, Trivial File Transfer Protocol (TFTP), ping, and so on). Management traffic can be sourced by and directed to the router; it is handled by specific task-related processes.
OSI Layer 2 connection-oriented encapsulations (for example, X.25). Some tasks are too complex to be coded in the interrupt-switching path because there are too many instructions to run, or timers and windows are required. Some examples are features such as encryption, Local Area Transport (LAT) translation, and Data-Link Switching Plus (DLSW+).
More here: http://www.cisco.com/c/en/us/support/docs/ios-nx-os-software/ios-software-releases-121-mainline/12809-tuning.html

How can the processor recognize the device requesting the interrupt?

1) How can the processor recognize the device requesting the interrupt?
2) Given that different devices are likely to require different ISR, how can the processor obtain the starting address in each case?
3) Should a device be allowed to interrupt the processor while another interrupt is being serviced?
4) How should two or more simultaneous interrupt requests be handled?
1) How can the processor recognize the device requesting the interrupt?
The CPU has several interrupt lines, and if you need more devices than there are lines there's an "interrupt controller" chip (sometimes called a PIC) which will multiplex several devices and which the CPU can interrogate.
2) Given that different devices are likely to require different ISRs, how can the processor obtain the starting address in each case?
That's difficult. It may be by convention (same type of device always on the same line); or it may be configured, e.g. in the BIOS setup.
3) Should a device be allowed to interrupt the processor while another interrupt is being serviced?
When there's an interrupt, further interrupts are disabled. However, the interrupt service routine (i.e. the device-specific code which the CPU is executing) may re-enable interrupts if it's willing to be interrupted.
4) How should two or more simultaneous interrupt requests be handled?
Each interrupt has a priority: the higher-priority interrupt is handled first.
The concept of defining the priority among devices, so as to know which one is to be serviced first in case of simultaneous requests, is called a priority interrupt system. This can be done with either software or hardware methods.
SOFTWARE METHOD – POLLING
In this method, all interrupts are serviced by branching to the same service program. This program then checks each device to see whether it is the one generating the interrupt. The order of checking is determined by the priority that has been set: the device with the highest priority is checked first, and then the devices are checked in descending order of priority.
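A sketch of such a polling dispatcher in C, with all device-query hooks hypothetical; the table is simply ordered by priority, highest first.

```c
#include <stddef.h>

/* Hypothetical per-device hooks. */
typedef struct {
    int  (*is_interrupting)(void);  /* reads the device's status flag */
    void (*service)(void);          /* the device's service routine   */
} device_t;

extern device_t     devices[];      /* ordered highest priority first */
extern const size_t num_devices;

/* Common entry point: every interrupt branches here. */
void common_interrupt_service(void)
{
    /* Poll each device in priority order; the first one found
     * asserting an interrupt is the one that gets serviced. */
    for (size_t i = 0; i < num_devices; i++) {
        if (devices[i].is_interrupting()) {
            devices[i].service();
            return;
        }
    }
}
```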
HARDWARE METHOD – DAISY CHAINING
The daisy-chaining method involves connecting all the devices that can request an interrupt in a serial manner. This configuration is governed by the priority of the devices. The device with the highest priority is placed first.