How often do operating systems poll key inputs?

Is it at each screen refresh or exactly when keys are pressed (through interrupts etc.)?

That largely depends on the device. There were effectively three generations of devices:
Polling. The software repeatedly checks a device status register to see whether a character has arrived (see the sketch after this list).
Character interrupts. Each key press generates an interrupt.
Programmed interrupts. The device is configurable so that it only generates an interrupt when necessary. For example, some terminal devices support programming such that the user can enter a string of characters (and even edit those characters) and there is only an interrupt when the user hits <RETURN>.
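For a concrete picture of the first (polling) generation, here is a minimal C sketch, assuming a hypothetical memory-mapped device with a status register and a data register; the addresses and the RX_READY bit are invented for illustration:

    /* Hypothetical device registers; addresses are made up for this sketch. */
    #define DEV_STATUS (*(volatile unsigned char *)0x10000000)
    #define DEV_DATA   (*(volatile unsigned char *)0x10000004)
    #define RX_READY   0x01   /* assumed "a byte has arrived" flag */

    unsigned char poll_for_key(void)
    {
        /* Busy-wait until the device reports a byte, then read it. */
        while ((DEV_STATUS & RX_READY) == 0)
            ;   /* this spinning is what "polling" means */
        return DEV_DATA;
    }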

On all non-trivial systems, I/O events are signaled by hardware interrupts that cause drivers to be run. No polling is required or desired.
Any thread waiting on KB input will be made ready, and hopefully running, when the KB driver exits. It can then handle the KB event.
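As a user-space illustration (a sketch, not specific to any one OS): a thread doing a blocking read on the terminal sleeps inside the kernel, consuming no CPU, until the keyboard driver delivers data and the scheduler makes the thread ready again:

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[64];
        /* The thread blocks here until the driver (via the terminal
         * layer) has keyboard input to deliver. */
        ssize_t n = read(STDIN_FILENO, buf, sizeof buf);
        if (n > 0)
            printf("got %zd byte(s)\n", n);
        return 0;
    }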

Understanding how operating systems store/retrieve IO device input

I am a bit confused on how I/O devices like keyboards store their input for use by the operating system or an application. If I have a computer with a single processor (a CPU with a single core), and the currently executing process is a game, how is the game able to be "aware" of keyboard input? Even if a key press were to force a hardware interrupt (and thus a context switch), and then "feed" the key value to the game whenever the OS gives control back to the game process, there's no guarantee that the game loop would even be checking for player input at that time; it could just as well be updating game object positions or rendering the game world.
So my question is this...
Do I/O devices like keyboards store their input to some kind of on-board hardware-specific microprocessor or on-board memory buffer queue, which can later be "read" and flushed by external processes or the OS itself? Any insight is greatly appreciated, thanks!
Do I/O devices like keyboards store their input to some kind of on-board hardware-specific microprocessor or on-board memory buffer queue, which can later be "read" and flushed by external processes or the OS itself?
Let's split it into three parts.
The Device Specific Part
For old keyboards (before USB): a microcontroller in the keyboard regularly scans a grid of switches and detects when a key is pressed or released, converts that into a code (which could be multiple bytes), then sends the code one byte at a time to the computer. The computer also has a little microcontroller to receive these bytes. That microcontroller has a 1-byte buffer (not even big enough for multi-byte codes).
For newer keyboards (USB): the keyboard's internals are mostly the same (a microcontroller scanning a grid of switches, etc.); but the USB controller asks the keyboard "Anything happen?" regularly (typically every 8 milliseconds) and the keyboard's microcontroller replies.
In any case; the keyboard driver gets the code that came from the keyboard and processes it; typically converting it into a fixed length "key code", merging it with other data (if shift or capslock or... was active at the time; if there's unicode codepoint/s that make sense for the key, etc) and bundles all that into a data structure.
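As a sketch of the kind of data structure meant here (the struct and field names are invented for illustration; every OS defines its own):

    /* Hypothetical "key event" a keyboard driver might build from a raw code. */
    struct key_event {
        unsigned int  key_code;   /* fixed-length code for the physical key */
        unsigned int  unicode;    /* codepoint that makes sense for the key, or 0 */
        unsigned char pressed;    /* 1 = press, 0 = release */
        unsigned char shift;      /* modifier state at the time */
        unsigned char capslock;
    };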
The OS Specific Part
That data structure (created by the keyboard driver) is typically standardized by the OS as a "user input event" (so, same "event" data structure for keyboard, mouse, touchscreen, joystick, ...).
That "user input event" is sent from driver via. some form of inter-process communication (pipes, messages, ...) to something else (e.g. GUI). This inter-process communication has 2 common behaviors - if the receiving program is blocked waiting to receive an event then the scheduler unblocks it (cancels the waiting) and schedules it to get CPU time again; and if the receiving program isn't waiting the event is often put on a queue (in memory) of pending events.
Of course often there's many processes involved, and the "user input event" might be forwarded from one process (e.g. input method editor) to another process (e.g. GUI) to another process (e.g. whichever window has keyboard focus). Also (for old legacy command line stuff) it might end up at a translation layer (e.g. terminal emulator) that converts the events into a character stream (stdin) while destroying most of the information (e.g. when a key is released).
The Language Specific Part
To get the event from high-level code, it depends on what the language is and sometimes also which library is being used. The most common is some kind of "getEvent()" that causes the program to fetch the next event from its queue (from memory); it may cause the program to wait (and not use any CPU time) if there isn't any event yet. However, often that is buried further, such that you register a callback; then something else calls "getEvent()" and, when it receives an event, calls the callback you registered. So it might end up like this (e.g. for Java):

    public boolean handleEvent(Event evt) {
        switch (evt.id) {
            case Event.KEY_PRESS:
                ...
Keyboards are mostly USB today. On most computers, including ARM computers, you have a USB controller implementing the xHCI specification, developed by several major tech companies. If you google "xhci intel spec" or something similar, the first link or so should be the full specification.
The xHCI spec requires implementations to be in the form of a PCI device. PCI is another spec, developed by the PCI-SIG group, which specifies everything down to hardware requirements. It is not a free spec like xHCI; it is actually quite expensive to obtain (around $3k).
The OS detects PCI devices using ACPI or similar specifications, which can sometimes be board specific (especially on ARM, because all x86-based computers implement ACPI). ACPI tables, found at conventional positions in RAM, mention where to find the base addresses of the memory-mapped configuration space of each PCI device.
The OS interacts with PCI devices using registers that are memory mapped into the physical address space. The OS reads/writes at the positions specified by the ACPI tables and by the configuration spaces themselves to gather information about a certain device and to make the device do operations on its behalf. The configuration space of a PCI device has a general portion (the same for every device) and a more specific portion (device dependent). In the general portion, there are BAR registers that contain the base addresses of the device's memory-mapped register regions. Each implementer of the convention can do whatever they want with the device-specific registers. The general portion must be somewhat similar for every device so that the OS can properly detect and initialize the device.
Today, each type of device has to respect a certain convention to work with current computers. For example, hard disks will implement SATA and the board will have an AHCI (a PCI SATA controller). Similarly, keyboards will implement USB and the board has an xHCI.
The xHCI itself has complex interaction mechanisms. A summary for keyboards: the OS will "activate" the interrupt IN endpoint of the keyboard. The OS will then place transfer requests (TRBs) on a circular ring in RAM. For interrupt endpoints, the xHCI will read one transfer request per time interval, specified by the OS in a specific xHCI register. When the xHCI executes a TRB (when it does the transfer), it will post an event to another circular ring in RAM called the Event Ring. When a new event is posted, the xHCI triggers an interrupt. In the interrupt handler, the OS will read the Event Ring to determine what happened. For a keyboard, the OS will probably see that a transfer was done and read the data.
Most often, keyboards will return 8 bytes per transfer. The bytes contain conventional scancodes describing which keys were pressed at the moment of the transfer, so they are not directly in UTF-8 or ASCII format. There is one scancode per keyboard key, and it is up to the OS to determine what to do with each one. For example, if the data says that the 'A' key is pressed, the OS can check whether the 'SHIFT' key is also pressed to determine whether the 'A' should be uppercase or lowercase. If the next report says that the 'A' key is not pressed, then the OS should treat that as a release of the key (the user released the key). In other words, the keyboard is polled by the OS at certain intervals.
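The 8-byte report mentioned above follows the USB HID boot-protocol keyboard layout: byte 0 holds modifier bits (shift, ctrl, ...), byte 1 is reserved, and bytes 2-7 hold the HID scancodes of up to six currently pressed keys. A minimal parsing sketch (the handler name is invented; the constants come from the HID usage tables):

    #include <stdint.h>

    #define HID_MOD_LSHIFT 0x02   /* modifier bit in byte 0 of the report */
    #define HID_KEY_A      0x04   /* HID usage ID for the 'A' key */

    /* Hypothetical handler called with each 8-byte interrupt transfer. */
    void handle_boot_report(const uint8_t report[8])
    {
        int shift = report[0] & HID_MOD_LSHIFT;
        for (int i = 2; i < 8; i++) {
            if (report[i] == HID_KEY_A) {
                /* 'A' is currently held; deliver 'A' or 'a' depending
                 * on the modifier state. */
            }
        }
        /* A key present in the previous report but missing from this
         * one should be treated as a release. */
        (void)shift;
    }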
The interrupt handler will probably pass the key to other portions of the kernel and save the key to a process-specific structure. The process will probably poll a lock-protected buffer that contains events. It could also be a system-wide buffer that simply contains all events.
The higher-level implementation probably varies between OSes. If you understand the lower-level inner workings, then you can probably imagine how an OS will work. It is certain that the whole thing is very complex, because CPUs nowadays have several cores and caches and other complex mechanisms. The OS must make sure to work seamlessly with all these mechanisms, so it is quite complex but achievable. To understand the whole thing, you'd probably need to understand the whole OS; it has lots of ramifications in other OS concepts.

How would an ISR know what pin cause the interrupt?

Interrupts can be enabled for a specific pin(s) on a digital I/O port, correct? How would the ISR determine which pin caused the interrupt?
Because the vector table has only one slot for the Port1 ISR. So the same ISR function gets called no matter which input pin on Port1 needs attention unless I'm wrong...
As other people have suggested in comments, this can be MCU dependent, but for ARM (the core behind the MSP432) the answer is generally that it doesn't know; it looks for it.
ARM has a vectored interrupt system, which means that every interrupt source has its own vector, so the CPU can easily find out which source is triggering the interrupt. So far so good.
But then a single device can trigger multiple interrupts, like the GPIO port you mentioned. In this case, the CPU knows which port triggered the interrupt, so it fires that port's ISR; but it is then the ISR's responsibility to poll the device's registers to figure out the exact interrupt source. There are many peripherals like this with multiple interrupt sources: timers and DMAs, just to name a few.
This is exactly why peripherals normally have an interrupt enable bit that lets them trigger interrupts, but also bit masks that control what exactly can trigger that interrupt internally.
Also have a look at this link for an in-action example, especially their ISR, which does exactly what is described above.
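A minimal sketch of that pattern in C for an MSP432-style GPIO port (P1IFG is that part's Port 1 interrupt flag register; in a real project the vendor header, e.g. msp432.h, supplies the definition):

    #include <stdint.h>

    /* Declared by the vendor header in a real project. */
    extern volatile uint8_t P1IFG;   /* Port 1 interrupt flag register */

    void PORT1_IRQHandler(void)
    {
        uint8_t flags = P1IFG;            /* snapshot: which pin(s) raised it? */
        P1IFG &= (uint8_t)~flags;         /* clear the flags being serviced */

        if (flags & (1u << 1)) {
            /* pin P1.1 caused (or is among the causes of) the interrupt */
        }
        if (flags & (1u << 4)) {
            /* pin P1.4 */
        }
    }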
In a typical MCU, there are hundreds, or at a stretch even thousands of potential interrupt sources. Depending on the application, only some will be important, and even fewer will be genuinely timing critical.
For a GPIO port, you typically enable only the pins which are interesting to generate an interrupt. If you can arrange only one pin of a port to be generating the interrupt, the job is done, your handler for that port can do the work, safely knowing that it will only be called when the right pin is active.
When you care about the cause within a single peripheral, and don't have the luxury of individually vectored handlers, you need to fall back on the 'non vectored' approach, and check the status registers before working out which eventual handler function needs to be called.
Interestingly, you can't work out which pin caused the interrupt - all you can see is which pins are still active once you get round to polling the status register. If you care about the phasing between two pulses, you may not be able to achieve this discrimination within a single GPIO unless there is dedicated hardware support. Even multiple exception vectors wouldn't help, unless you can be sure that the first exception is always taken before the second pin could become set.

Operating System Basics

I am reading about process management, and I have a few doubts:
What is meant by an I/O request? For example: a process is executing and hence is in the running state; it is in the waiting state if it is waiting for the completion of an I/O request. I am not getting what is meant by an I/O request. Can you please give an example to elaborate?
Another doubt: let's say a process is executing and suddenly an interrupt occurs. The process then stops its execution and will be put in the ready state. Is it possible that some other process begins its execution while the interrupt is also being processed?
Regarding the first question:
A simple way to think about it...
Your computer has lots of components: CPU, hard drive, network card, sound card, GPU, etc. All of those work in parallel and independent of each other. They are also generally slower than the CPU.
This means that whenever a process makes a call that down the line (on the OS side) ends up communicating with an external device, there is no point in the OS being stuck waiting for the result, since the time it takes for that operation to complete is probably an eternity (from the CPU's point of view).
So, the OS fires up whatever communication the process requested (call it IO request), flags the process as waiting for IO, and switches execution to another process so the CPU can do something useful instead of sitting around blocked waiting for the IO request to complete.
When the external device finishes whatever operation was requested, it generates an interrupt, so the OS is informed the work is done, and it can then flag the blocked process as ready again.
This is all a very simplified view of course, but that's the main idea. It allows the CPU to do useful work instead of waiting for IO requests to complete.
Regarding the second question:
It's tricky, even for single CPU machines, and depends on how the OS handles interrupts.
For code simplicity, a simple OS might, for example, process each interrupt in one go whenever it happens, then resume whatever process it decides is appropriate once the interrupt handling is done. In that case, no other process would run until the interrupt handling is complete.
In practice, things get a bit more complicated for performance and latency reasons.
If you think about an interrupt lifetime as just another task for the CPU (From when the interrupt starts to the point the OS considers that handling complete), you can effectively code the interrupt handling to run in parallel with other things.
Just think of the interrupt as notification for the OS to start another task (that interrupt handling). It grabs whatever context it needs at the point the interrupt started, then keeps processing that task in parallel with other processes.
I/O request generally just means a request to do input, output, or both. The exact meaning varies depending on your context: HTTP, networks, console operations, or maybe some process in the CPU.
A process is waiting for IO: say, for example, you were writing a program in C to accept the user's name on the command line and then print 'Hello User' back (a sketch follows below). Your code will go into the waiting state until the user enters their name and hits Enter. This is a higher-level example, but even a very low-level process executing in your computer's processor works on the same basic principle.
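A sketch of that C program; the process sits in the waiting state inside fgets() until the terminal driver delivers a full line:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char name[64];
        printf("Enter your name: ");
        fflush(stdout);
        /* Blocks here: the process is in the waiting state until
         * the user hits Enter. */
        if (fgets(name, sizeof name, stdin)) {
            name[strcspn(name, "\n")] = '\0';   /* strip the newline */
            printf("Hello %s\n", name);
        }
        return 0;
    }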
Can the processor work on other processes when the current one is interrupted and waiting on something? Yes! You'd better hope it does; that's what scheduling algorithms and stacks are for. However, the real answer depends on what architecture you are on, whether it supports parallel or serial processing, etc.

polling hardware button's state

I need to implement the following feature for my device running embedded Linux on a 200 MHz MIPS CPU:
1) if a reset button is pressed and held for less than a second - proceed with reboot
2) if a reset button is pressed and held for at least 3 sec. - restore the system's configuration with default values from NVRAM and then reboot.
I'm thinking of two ways:
1) a daemon that constantly polls the button's state with proper timings via GPIO ioctls
(likely too big overhead, lots of context switching ?)
2) simple char driver polling the button, measuring timings and reporting the state, for example, via /proc to user space where daemon or a shell script can check and do what's required.
And for both cases I have no idea how to measure the time :(
What would you suggest/recommend?
You have to implement those in hardware. The purpose of the "restore defaults from NVRAM" feature is to recover a so-called "bricked" device.
For example, what if an NVRAM setting is modified (cosmic ray?) such that the device cannot boot? In that case, your proposed button-polling daemon will never execute.
For the one-second held reboot, use an RC (resistor + capacitor) circuit to "debounce" the button press. Select an RC time constant which is appropriate for the one second delay. Use a comparator watching the RC voltage to signal the RESET pin on the MIPS cpu.
For the three-second press functionality (restore NVRAM defaults), you have to do something more complicated, probably.
One possibility is to put a tiny PIC microcontroller into the reset circuit, but only use a microcontroller with fuse (non-erasable) ROM, not NVRAM.
An easier possibility is to have a ROM containing defaults on the same circuit and bus as the NVRAM. A J/K flip-flop latch can become part of your reset circuitry. You'll also need a three-second-tuned RC circuit and comparator. On sub-three-second presses, the flip-flop should latch a 0 output and on three-second-plus presses, the 2nd RC circuit should trigger the comparator after 3 seconds and present a 1 to the J/K latch, which will toggle its output.
The flip-flop output Q will store the single bit telling your circuit whether this reset cycle was subsequent to a three-second push. If so, that output Q is driving the chip select to the NVRAM and Q* is driving the chip select to ROM. (I assume chip select is negative logic on both NVRAM and ROM chips.)
Then when your CPU boots, it will fetch the settings from either the NVRAM or the ROM, depending on the chip select line.
Your boot code can detect that it booted with ROM chip select, and can later reset the J/K flip-flop with a GPIO line. Then the CPU will be able to write good values back into the NVRAM. That unbricks the device, hopefully.
You want to use ROM that is not erasable or reusable. That kind of ROM is the most resistant to static electricity, power supply trouble, and radiation. Radiation is much more present than we generally realize, and the amount of cosmic ray flux is multiplied by taking a device onboard an airliner, for example.
I am not familiar with the MIPS processor and the GPIO/interrupt capabilities of the pin that you could be using but a possible methodology could be as follows.
Configure the input pin as an interrupt input.
When the interrupt fires disable the interrupt and start a short 100ms-ish timer
When the timer triggers check that the button is still pressed (for debounce). If it is not then re-enable the GPIO interrupt and restart, otherwise set the timer to re-trigger after the 3 second timeout.
When the timer triggers this time then if the button is not pressed then do your reboot otherwise reset the system configuration and reboot.
If the pin cannot provide an interrupt then step 1 will be a polling task to look at the input.
The time between the reset button being pressed and the full reset process being run will always be 3 seconds from a debounced button press. In a reset situation this may not be important particularly if as part of step 3 you make it apparent to the user that a reset sequence has started - blank the display for example.
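A pseudo-C sketch of that sequence; the gpio_/timer_ helpers are hypothetical placeholders for whatever your kernel or BSP provides:

    /* Hypothetical helpers assumed to exist in your BSP/kernel: */
    extern void disable_button_irq(void);
    extern void enable_button_irq(void);
    extern int  gpio_read_button(void);   /* 1 = pressed */
    extern void start_timer_ms(int ms);
    extern void reboot(void);
    extern void restore_defaults_and_reboot(void);

    static enum { IDLE, DEBOUNCE, ARMED } state = IDLE;

    void button_irq(void)            /* edge interrupt on the reset pin */
    {
        disable_button_irq();
        state = DEBOUNCE;
        start_timer_ms(100);         /* short debounce window */
    }

    void timer_expired(void)
    {
        switch (state) {
        case DEBOUNCE:
            if (!gpio_read_button()) {        /* bounce: rearm and restart */
                state = IDLE;
                enable_button_irq();
            } else {
                state = ARMED;
                start_timer_ms(3000);         /* the 3-second timeout */
            }
            break;
        case ARMED:
            if (gpio_read_button())
                restore_defaults_and_reboot();  /* held for >= 3 s */
            else
                reboot();                       /* short press */
            break;
        default:
            break;
        }
    }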
If you want to do this in software, you need to put this in kernel (interrupt) code, rather than in a shell script or daemon. The better approach would be to put this in hardware.
In my experience, the likely reason for resetting the device will be bad user code which has locked or bricked the processor. If the issue is a corruption of memory due to RF energy or something of that nature, you may require hardware or an external (hardened) processor to reflash the device and fix the problem.
In the bad-user-code case, processor interrupts and kernel code should continue to run, while user code may be completely stalled. If you can poll the pin from an interrupt, you stand a much better chance of actually getting the reset you expect. Also, this enables you to do event-driven programming, rather than constantly polling the pin.
One other approach (not to the specs you listed, but a popular method for achieving the same goal) would be to have the startup routines check a GPIO line, and hold a button down when you want to re-initialize the device. On most embedded Linux devices which I've seen, the "Reset" button is wired to a dedicated reset pin on the microcontroller, and not to a GPIO pin. You may have to go with this route, unless you want to start cutting traces.

How CPU finds ISR and distinguishes between devices

I should first share all what I know - and that is complete chaos. There are several different questions on the topic, so please don't get irritated :).
1) To find an ISR, the CPU is provided with an interrupt number. In x86 machines (286/386 and above) there is an IVT with ISR addresses in it, each entry 4 bytes in size, so we need to multiply the interrupt number by 4 to find the ISR. So the first bunch of questions is: I am completely confused about the mechanism of the CPU receiving the interrupt. To raise an interrupt, first the device asserts an IRQ - then what? Does the interrupt number travel "on the IRQ line" towards the CPU? I also read something about the device putting the ISR address on the data bus; what's that then? What is the concept of devices overriding the ISR? Can somebody tell me a few example devices where the CPU polls for interrupts? And where does it find the ISR for them?
2) If two devices share an IRQ (which is very much possible), how does the CPU differentiate between them? What if both devices raise an interrupt of the same priority simultaneously? I got to know there will be masking of same-type and low-priority interrupts - but how does this communication happen between the CPU and the device controller? I studied the role of the PIC and APIC for this problem, but could not understand.
Thanks for reading.
Thank you very much for answering.
CPUs don't poll for interrupts, at least not in a software sense. With respect to software, interrupts are asynchronous events.
What happens is that hardware within the CPU recognizes the interrupt request, which is an electrical input on an interrupt line, and in response, sets aside the normal execution of events to respond to the interrupt. In most modern CPUs, what happens next is determined by a hardware handshake particular to the type of CPU, but most of them receive a number of some kind from the interrupting device. That number can be 8 bits or 32 or whatever, depending on the design of the CPU. The CPU then uses this interrupt number to index into the interrupt vector table, to find an address to begin execution of the interrupt service routine. Once that address is determined, (and the current execution context is safely saved to the stack) the CPU begins executing the ISR.
When two devices share an interrupt request line, they can cause different ISRs to run by returning a different interrupt number during that handshaking process. If you have enough vector numbers available, each interrupting device can use its own interrupt vector.
But two devices can even share an interrupt request line and an interrupt vector, provided that the shared ISR is clever enough to go back to all the possible sources of the given interrupt, and check status registers to see which device requested service.
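A sketch of such a shared handler (the device list, status-register layout, and IRQ_PENDING bit are invented for illustration):

    struct dev {
        volatile unsigned int *status;     /* device's status register */
        void (*service)(struct dev *);     /* device-specific handler */
        struct dev *next;
    };

    #define IRQ_PENDING 0x1u   /* assumed "I requested service" bit */

    /* Walk every device known to share this vector and service each
     * one whose status register says it raised the request. */
    void shared_isr(struct dev *head)
    {
        for (struct dev *d = head; d != NULL; d = d->next) {
            if (*d->status & IRQ_PENDING)
                d->service(d);
        }
    }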
A little more detail
Suppose you have a system composed of a CPU, an interrupt controller, and an interrupting device. In the old days, these would have been separate physical devices, and now all three might even reside in the same chip, but all the signals are still there inside the ceramic case. I'm going to use a powerPC (PPC) CPU with an integrated interrupt controller, connected to a device on a PCI bus, as an example that should serve nicely.
Let's say the device is a serial port that's transmitting some data. A typical serial port driver will load a bunch of data into the device's FIFO, and the CPU can do regular work while the device does its thing. Typically these devices can be configured to generate an interrupt request when the device is running low on data to transmit, so that the device driver can come back and feed more into it.
The hardware logic in the device will expect a PCI bus interrupt acknowledge, at which point, a couple of things can happen. Some devices use 'autovectoring', which means that they rely on the interrupt controller to see to it that the correct service routine gets selected. Others will have a register, which the device driver will pre-program, that contains an interrupt vector that the device will place on the data bus in response to the interrupt acknowledge, for the interrupt controller to pick up.
A PCI bus has only four interrupt request lines, so our serial device will have to assert one of those. (It doesn't matter which at the moment, it's usually somewhat slot dependent..) Next in line is the interrupt controller (e.g. PIC/APIC), that will decide whether to acknowledge the interrupt based on mask bits that have been set in its own registers. Assuming it acknowledges the interrupt, it either then obtains the vector from the interrupting device (via the data bus lines), or may if so programmed use a 'canned' value provided by the APIC's own device driver. So far, the CPU has been blissfully unaware of all these goings-on, but that's about to change.
Now it's time for the interrupt controller to get the attention of the CPU core. The CPU will have its own interrupt mask bit(s) that may cause it to just ignore the request from the PIC. Assuming that the CPU is ready to take interrupts, it's now time for the real action to start. The current instruction usually has to be retired before the ISR can begin, so with pipelined processors this is a little complicated, but suffice it to say that at some point in the instruction stream, the processor context is saved off to the stack and the hardware-determined ISR takes over.
Some CPU cores have multiple request lines, and can start the process of narrowing down which ISR runs via hardware logic that jumps the CPU instruction pointer to one of a handful of top level handlers. The old 68K, and possibly others did it that way. The powerPC (and I believe, the x86) have a single interrupt request input. The x86 itself behaves a bit like a PIC, and can obtain a vector from the external PIC(s), but the powerPC just jumps to a fixed address, 0x00000500.
In the PPC, the code at 0x0500 is probably just going to immediately jump out to somewhere in memory where there's room enough for some serious decision making code, but it's still the interrupt service routine. That routine will first go to the PIC and obtain the vector, and also ask the PIC to stop asserting the interrupt request into the CPU core. Once the vector is known, the top level ISR can case out to a more specific handler that will service all the devices known to be using that vector. The vector specific handler then walks down the list of devices assigned to that vector, checking interrupt status bits in those devices, to see which ones need service.
When a device, like the hypothetical serial port, is found wanting service, the ISR for that device takes appropriate actions, for example, loading the next FIFO's worth of data out of an operating system buffer into the port's transmit FIFO. Some devices will automatically drop their interrupt request in response to being accessed, for example, writing a byte into the transmit FIFO might cause the serial port device to de-assert the request line. Other devices will require a special control register bit to be toggled, set, cleared, what-have-you, in order to drop the request. There are zillions of different I/O devices and no two of them ever seem to do it the same way, so it's hard to generalize, but that's usually the way of it.
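A sketch of that "feed the transmit FIFO" step (the register names and the FIFO-full bit are hypothetical):

    /* Hypothetical serial-port registers. */
    extern volatile unsigned char UART_STATUS;   /* status bits */
    extern volatile unsigned char UART_TXDATA;   /* write = enqueue one byte */
    #define TX_FIFO_FULL 0x20u                   /* assumed status bit */

    /* Refill the FIFO from the OS buffer; on this hypothetical device,
     * writing bytes is what de-asserts the interrupt request line. */
    void uart_tx_service(const unsigned char *buf, int *pos, int len)
    {
        while (*pos < len && !(UART_STATUS & TX_FIFO_FULL))
            UART_TXDATA = buf[(*pos)++];
    }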
Now, obviously there's more to say - what about interrupt priorities? what happens in a multi-core processor? What about nested interrupt controllers? But I've burned enough space on the server. Hope any of this helps.
I came across this question after like 3 years... hope I can help ;)
The Intel 8259A, or simply the "PIC", has 8 pins, IRQ0-IRQ7; every pin connects to a single device.
Let's suppose that you pressed a button on the keyboard. The voltage of the IRQ1 pin, which is connected to the keyboard, goes high. So after the CPU gets interrupted, acknowledges the interrupt, and so on, the PIC simply adds 8 (its programmed base vector) to the number of the IRQ line, so IRQ1 means 1+8, which means 9.
So the CPU loads its CS and IP from the 9th entry in the vector table; and because the IVT is an array of 4-byte entries, it just multiplies the vector number by 4 ;)
    CPU.CS = IVT[9].CS
    CPU.IP = IVT[9].IP
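In C terms, the same lookup might look like this (a sketch assuming real mode, where the IVT sits at physical address 0 and each 4-byte entry stores the ISR's offset word followed by its segment word):

    #include <stdint.h>

    /* Real-mode IVT: 256 entries of 4 bytes, starting at physical 0. */
    static uint16_t *const ivt = (uint16_t *)0;

    void lookup_isr(uint8_t vector, uint16_t *seg, uint16_t *off)
    {
        *off = ivt[vector * 2];       /* vector * 4 bytes = 2 words in */
        *seg = ivt[vector * 2 + 1];
    }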
The ISR then deals with the device through the I/O ports ;)
Sorry for my bad English... I am an Arab though :)