Distributed interrupts

I'm just looking for info on how the OS handles interrupts in a distributed environment.

Not at all, if you are referring to hardware interrupts like NMI. Distributed computing always works at a higher level.

Related

Will a semaphore corrupt data transmission of peripherals like a UART in a microcontroller?

The semaphore disables interrupts, so will this cause other operations, like receiving data over SPI, to become corrupted?
Disabling interrupts cannot corrupt the data on the hardware interface.
The problem is this: if the data is received by the hardware peripheral, which then raises an interrupt to have the processor collect it, that collection will be delayed. If it is delayed for too long, more data may arrive in the meantime. Depending on the peripheral, either the new data or the old data will have to be discarded; either way the stream of data will be incomplete.
In most cases it is difficult to predict or test how long it is safe to disable interrupts for, so if possible it is best to avoid turning interrupts off.
If the peripheral includes a FIFO buffer, then the length of time that it is safe to disable interrupts for may be increased (although still difficult to predict).
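To make the overrun mechanism concrete, here is a minimal sketch of a UART receive ISR draining the hardware FIFO into a ring buffer. The register names and addresses (UART_SR, UART_DR, the RXNE bit) are hypothetical placeholders, not any particular MCU's real map; substitute the definitions from your device header.

```c
#include <stdint.h>

/* Hypothetical register map - replace with your MCU's real definitions. */
#define UART_SR      (*(volatile uint32_t *)0x40011000u)  /* status register */
#define UART_DR      (*(volatile uint32_t *)0x40011004u)  /* data register   */
#define UART_SR_RXNE (1u << 5)                            /* "RX not empty"  */

#define RX_BUF_SIZE 128u            /* power of two for cheap wrap-around */

static volatile uint8_t  rx_buf[RX_BUF_SIZE];
static volatile uint32_t rx_head;   /* written only by the ISR   */
static volatile uint32_t rx_tail;   /* written only by main code */

void uart_rx_isr(void)
{
    /* Drain every byte the hardware FIFO currently holds. If this ISR is
     * held off for longer than the FIFO depth covers, the peripheral
     * overruns and bytes are silently lost - this is exactly the failure
     * mode described above. */
    while (UART_SR & UART_SR_RXNE) {
        uint8_t  byte = (uint8_t)UART_DR;   /* reading the data clears RXNE */
        uint32_t next = (rx_head + 1u) & (RX_BUF_SIZE - 1u);
        if (next != rx_tail) {              /* drop the byte if buffer full */
            rx_buf[rx_head] = byte;
            rx_head = next;
        }
    }
}
```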
Most modern microcontrollers have many ways to avoid disabling interrupts:
A better approach is to have the peripheral transfer the data to memory with DMA, so no interrupt is required at all.
Most modern processor cores provide ways to implement a semaphore that do not even need to disable interrupts, as the sketch below shows.
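As a concrete illustration of that last point, here is a minimal sketch of a binary semaphore built on C11 atomics, so no interrupts are disabled at all. It assumes a compiler and target with lock-free <stdatomic.h> support; on ARM Cortex-M3 and later this typically compiles down to LDREX/STREX.

```c
#include <stdatomic.h>
#include <stdbool.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;

/* Non-blocking take: returns true if we got the semaphore. Because it
 * never spins with interrupts off, real-time behaviour is unaffected. */
static bool sem_try_take(void)
{
    return !atomic_flag_test_and_set_explicit(&lock, memory_order_acquire);
}

static void sem_give(void)
{
    atomic_flag_clear_explicit(&lock, memory_order_release);
}
```

A caller that fails to take the semaphore can retry or yield; an ISR can safely call sem_try_take() and simply defer its work on failure.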
There's no standard way of implementing a semaphore. Disabling all interrupts on the MCU is one way to do it, but it's a very poor, amateurish way, because in more complex applications with multiple interrupts it makes all real-time considerations and calculations a nightmare.
It creates subtle but severe bugs, particularly when some quack has done so from deep inside some driver code: you import the driver into your project and suddenly previously working code breaks. In particular, be very careful about the various libs provided by silicon vendors - they are often of very poor quality.
There are better ways to do it, including:
Ensuring atomic access to shared variables, which can only be done with inline assembler or C11 _Atomic, if supported.
Disabling one specific interrupt for a specific hardware peripheral, if the real-time considerations allow it. This should then be handled by the driver for that hardware peripheral, in the form of setter/getter functions.
Use a "poor man's semaphore" in the form of a plain flag variable, relying on the interrupt mechanism of the MCU blocking all other interrupts while the ISR is executing. See the sketch below.
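A minimal sketch of that last pattern, assuming a core where aligned reads and writes of the flag are atomic:

```c
#include <stdbool.h>
#include <stdint.h>

static volatile bool     busy;    /* set by main code around its critical section */
static volatile uint32_t shared;  /* the resource being protected */

void some_isr(void)
{
    /* While this ISR runs it cannot be pre-empted by main code, so a
     * single check of the flag is enough. */
    if (busy)
        return;                   /* main code owns `shared` - skip or defer */
    shared++;
}

void main_code_update(uint32_t value)
{
    busy = true;                  /* from here on the ISR leaves `shared` alone */
    shared = value;               /* critical section */
    busy = false;
}
```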

Imbalanced IRQs on virtio devices

I noticed in top on my Linux server that one CPU had a far higher number of software interrupts than the other 7 cores. Digging further, I noticed that this core is pinned to a particular IRQ, which happens to be a virtio device. In fact, each core has an affinity toward a particular virtio device:
virtio0-config
virtio0-control
virtio0-event
virtio0-request
virtio2-config
virtio2-input.0
virtio2-output.0
virtio3-config
virtio3-input.0
virtio3-output.0
virtio4-config
virtio4-input.0
virtio4-output.0
In this list, virtio4-input.0 in particular has a very high number of interrupts, and I am not able to figure out what is special about this particular device. Any clues would be very helpful. The machine in question is a Nutanix VM running on a Linux host.
IIRC, that's your virtualised KVM network device (virtio4), and -input.0 is its input queue. I don't know why, but the interrupts appear to only be handled by one CPU. You can read more about someone's investigation, and their attempts to spread the IRQ handling over multiple CPUs, here:
http://www.9bitwizard.eu/packets-part-2
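If you want to experiment with spreading the load yourself, the standard Linux knob is /proc/irq/<N>/smp_affinity, which is what the article above is tuning. A minimal sketch follows; the IRQ number and CPU mask are example values only (look up the real IRQ number for virtio4-input.0 in /proc/interrupts), it needs root, and a running irqbalance daemon may overwrite the setting.

```c
#include <stdio.h>

int main(void)
{
    const int   irq  = 45;   /* example: find the real one in /proc/interrupts */
    const char *mask = "f";  /* example hex bitmask: allow CPUs 0-3 */
    char path[64];

    snprintf(path, sizeof path, "/proc/irq/%d/smp_affinity", irq);

    FILE *f = fopen(path, "w");
    if (!f) { perror(path); return 1; }
    fprintf(f, "%s\n", mask);  /* kernel routes the IRQ within this mask */
    fclose(f);
    return 0;
}
```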

In GNU Radio, how much work is done in the FPGA?

Some of these Ettus boxes have some serious (& seriously expensive) FPGAs in them. It seems like a waste if all they do is pass data from the ADC to the Ethernet bus. When I build something in GRC, how much signal processing is done in the FPGA and how much is done by my PC?
GNU Radio itself is host software. So, all the processing you program in GNU Radio is done on your CPUs, unless you use special hardware accelerator blocks, for example:
gr-theano: GPU acceleration
gr-fosphor: OpenCL-accelerated Waterfall spectrogram
gr-ettus: Employing RFNoC to implement specific functionality on the X3x0's FPGA. This requires you to build an FPGA image including the functionality you use as gr-ettus block.
Generally, the FPGA in the X3x0 already does a lot: physically, the ADC and DAC of the X3x0 run at 200 MHz by default, and you can select integer fractions of that as the "user sampling rate"; the interpolation/decimation from/to that rate to match these hardware clocks is done in the FPGA with relatively large filters. Also, you can digitally shift your signal in frequency by setting a digital tuning offset, which is also done by a CORDIC in the FPGA.
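For intuition, here is a host-side C sketch of what that tuning-offset stage does: the FPGA's CORDIC is effectively multiplying the sample stream by a numerically controlled oscillator. This is an illustrative software equivalent, not the actual FPGA implementation.

```c
#include <complex.h>
#include <math.h>
#include <stddef.h>

/* Rotate a complex baseband buffer by offset_hz at sample rate fs -
 * the software analogue of the FPGA's CORDIC tuning-offset stage. */
void freq_shift(complex float *buf, size_t n, double fs, double offset_hz)
{
    const double step = 2.0 * M_PI * offset_hz / fs; /* phase per sample */
    double phase = 0.0;

    for (size_t i = 0; i < n; i++) {
        buf[i] *= (complex float)cexp(-I * phase);   /* rotate each sample */
        phase = fmod(phase + step, 2.0 * M_PI);      /* keep phase bounded */
    }
}
```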

Microcontroller interrupt priority changing

Is it recommended to change interrupt priority settings? I know, for example, that the Texas Instruments MSP430 has a hard-wired vector table, so it is not possible to change them. Some architectures support static or dynamic priority selection, but as far as I know it is not recommended. What are the disadvantages of changing the priorities?
Interrupt priority affects the scheduling and pre-emption of interrupts; this can be critical in some hard real-time systems with deadlines on the order of microseconds or less. Since Linux is generally unsuited to such applications in any case, if interrupt priority matters to the correct operation of your application you probably would not use Linux.
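For contrast with the MSP430's fixed scheme, here is a minimal sketch of how priorities are changed on an ARM Cortex-M through the NVIC, using the standard CMSIS-Core calls. The device header and IRQ names (STM32F4, USART1_IRQn, TIM2_IRQn) and the priority values are example placeholders for whatever your part provides.

```c
#include "stm32f4xx.h"  /* assumed vendor device header pulling in CMSIS-Core */

void configure_irq_priorities(void)
{
    /* Lower number = more urgent on Cortex-M. Give the UART priority over
     * the timer so a long timer ISR cannot delay RX handling until the
     * UART overruns. */
    NVIC_SetPriority(USART1_IRQn, 1);
    NVIC_SetPriority(TIM2_IRQn, 3);

    NVIC_EnableIRQ(USART1_IRQn);
    NVIC_EnableIRQ(TIM2_IRQn);
}
```

Getting this wrong is the main disadvantage: every priority change alters which ISRs can pre-empt which, so all worst-case latency calculations have to be redone.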

How are interrupts handled by dual processor machines?

I have an idea of how interrupts are handled by a dual-core CPU, but I was wondering how interrupt handling is implemented on a board with more than one physical processor.
Is any of the interrupt responsibility determined by the physical board's configuration? Each processor must be able to handle some types of interrupts, like disk I/O - unless there is some circuitry to manage and dispatch interrupts to the appropriate processor? My guess is that the scheme must be processor-neutral, so that any processor or core can run the interrupt handler.
If a core is waiting on a disk read, will that core be the one to run the interrupt handler when the disk is ready?
On x86 systems each CPU gets its own local APIC (Advanced Programmable Interrupt Controller); the local APICs are wired to each other and to an I/O APIC that handles routing device interrupts to them.
The OS can program the APICs to determine which interrupts get routed to which CPUs (or to let the APICs make that decision).
I imagine that a multi-core CPU would have a local APIC for each core, but I'm honestly not certain about that.
See these links for more details:
http://osdev.berlios.de/pic.html
http://www.microsoft.com/whdc/archive/io-apic.mspx
http://en.wikipedia.org/wiki/Intel_APIC_Architecture
What you're interested in is SMP processor affinity. Here is an excellent article about how it is handled in Linux. The Advanced Programmable Interrupt Controller (APIC) is how you manage this in a modern system. Basically, the default would be for everything to go to processor 0 unless you had an OS that utilized this interface to set things up properly. Also, you don't necessarily want the core that issued a command to wait on a particular interrupt; you want the less loaded cores to receive it.
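A quick way to see that distribution in practice is to read /proc/interrupts, which lists one count column per CPU for every IRQ line; a heavily lopsided column means one CPU is taking most of the interrupt load. A minimal sketch that filters it by device name:

```c
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    const char *needle = (argc > 1) ? argv[1] : "";  /* e.g. "virtio" or "eth0" */
    FILE *f = fopen("/proc/interrupts", "r");
    if (!f) { perror("/proc/interrupts"); return 1; }

    char line[1024];
    while (fgets(line, sizeof line, f))
        if (strstr(line, needle))    /* columns: IRQ, per-CPU counts, type, device */
            fputs(line, stdout);

    fclose(f);
    return 0;
}
```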
I already asked this question a while back. Maybe it can offer you some insight :)
how do interrupts in multicore/multicpu machines work
I would say that it would depend on the hardware manufacturer...
However, this link makes me believe most are probably handled by the primary processor and/or the first core.
Another link