How are interrupts handled by dual-processor machines?

I have an idea of how interrupts are handled on a dual-core CPU. I was wondering how interrupt handling is implemented on a board with more than one physical processor.
Is any of the interrupt responsibility determined by the physical board's configuration? Each processor must be able to handle some types of interrupts, like disk I/O. Or is there circuitry that manages and dispatches interrupts to the appropriate processor? My guess is that the scheme must be processor-neutral, so that any processor or core can run the interrupt handler.
If a core is waiting on a disk read, will that core be the one to run the interrupt handler when the disk is ready?

On x86 systems, each CPU has its own local APIC (Advanced Programmable Interrupt Controller); the local APICs are wired to each other and to an I/O APIC, which routes device interrupts to the local APICs.
The OS can program the APICs to determine which interrupts get routed to which CPUs (or to let the APICs make that decision).
A multi-core CPU follows the same pattern: each core has its own local APIC.
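For the curious, here is a minimal sketch of how an OS might program one I/O APIC redirection entry. The register offsets follow the published I/O APIC interface, but the base address, pin, vector, and APIC ID below are illustrative assumptions; real code takes the base address from the MP/ACPI tables.

#include <stdint.h>

/* Conventional default physical base of the I/O APIC; the real
   address should come from the MP/ACPI tables. */
#define IOAPIC_BASE 0xFEC00000u
#define IOREGSEL    0x00  /* register-select window */
#define IOWIN       0x10  /* register-data window   */

static volatile uint32_t *const ioapic = (volatile uint32_t *)IOAPIC_BASE;

static void ioapic_write(uint8_t reg, uint32_t val) {
    ioapic[IOREGSEL / 4] = reg;   /* select the register... */
    ioapic[IOWIN / 4]    = val;   /* ...then write its value */
}

/* Route the given I/O APIC input pin to `vector` on the local APIC
   with the given APIC ID. Each 64-bit redirection entry occupies two
   32-bit registers starting at index 0x10. */
static void ioapic_route_irq(uint8_t pin, uint8_t vector, uint8_t apic_id) {
    uint8_t lo = 0x10 + 2 * pin;
    ioapic_write(lo + 1, (uint32_t)apic_id << 24); /* destination field  */
    ioapic_write(lo, vector);                      /* fixed delivery, unmasked */
}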
See these links for more details:
http://osdev.berlios.de/pic.html
http://www.microsoft.com/whdc/archive/io-apic.mspx
http://en.wikipedia.org/wiki/Intel_APIC_Architecture

What you're interested in is SMP IRQ affinity. Here is an excellent article about how it is handled in Linux. The Advanced Programmable Interrupt Controller (APIC) is how you manage this on a modern system. By default, interrupts typically all go to processor 0 unless the OS uses this interface to distribute them. Also, you don't necessarily want the core that issued a command to be the one that waits on a particular interrupt; you want a less-loaded core to receive it.
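On Linux, the affinity mechanism the article describes is exposed through /proc/irq/<n>/smp_affinity. A minimal sketch, assuming a hypothetical IRQ 19 (root privileges required):

#include <stdio.h>

/* Pin IRQ 19 to CPU 1 by writing a hex CPU bitmask to
   /proc/irq/19/smp_affinity. The IRQ number and mask here are
   purely illustrative. */
int main(void) {
    FILE *f = fopen("/proc/irq/19/smp_affinity", "w");
    if (!f) { perror("fopen"); return 1; }
    fputs("2", f);   /* bitmask 0x2 = CPU 1 only */
    fclose(f);
    return 0;
}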

I already asked this question a while back. Maybe it can offer you some insight :)
how do interrupts in multicore/multicpu machines work

I would say it depends on the hardware manufacturer...
However, this link makes me believe most are probably handled by the primary (bootstrap) processor and/or the first core.
Another link

Related

Is there any problem with computing the whole program in an ISR?

I have a program that should run periodically on an MCU (e.g., an STM32), say every 1 ms. If I program a 1 ms ISR and call my complete program from it, assuming it will not exceed 1 ms, is that a good approach? Are there any problems I could face? Will it be precise?
The usual answer is probably that ISRs should generally be kept to a minimum and most of the work should be performed in the "background loop" (since you are apparently using the traditional "foreground/background" architecture, a.k.a. "main+ISRs").
But ARM Cortex-M (e.g., STM32) has been specifically designed such that ISRs can be written as plain C functions. In that case, working with ISRs is no different than working with any other C code. This includes ease of debugging.
Moreover, ARM Cortex-M comes with the NVIC (Nested Vectored Interrupt Controller). The NVIC allows you to prioritize interrupts, and higher-priority interrupts preempt lower-priority ones. This means that you can quite easily build sets of periodic "tasks" (ISRs) that run under a preemptive, priority-based scheduler. Interestingly, this scheduler (implemented in the NVIC hardware) meets all the requirements of RMA/RMS (Rate-Monotonic Analysis/Scheduling), so you can prove the schedulability of your system. Of course, the ISR "tasks" cannot block internally, but this is not required for RMA/RMS. Also, if you have any resources shared between ISRs running at different priorities (or between an ISR and the background loop), you need to protect those resources properly by disabling interrupts.
So, your idea of using ISRs as "tasks" makes a lot of sense to me. Your system will be optimal, in the sense that any other approach would be less efficient (this includes the use of any kind of RTOS). This design can also be low-power, because you can use the "background loop" in main() to put your CPU and peripherals into a low-power sleep mode (the WFI instruction, etc.). In fact, you can view the "background loop" as the idle "task" in this "hardware-RTOS".
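A minimal sketch of this structure for an STM32 using CMSIS conventions; the device header name and task_1ms() are illustrative assumptions:

#include "stm32f10x.h"     /* CMSIS device header; adjust for your part */

extern void task_1ms(void);  /* hypothetical periodic work; must finish well under 1 ms */

/* The 1 ms "task" runs entirely inside the SysTick ISR. */
void SysTick_Handler(void) {
    task_1ms();
}

int main(void) {
    SystemCoreClockUpdate();
    SysTick_Config(SystemCoreClock / 1000u);  /* interrupt every 1 ms */
    NVIC_SetPriority(SysTick_IRQn, 8);        /* mid priority, so faster ISRs could preempt it */
    for (;;) {
        __WFI();   /* background "idle task": sleep until the next interrupt */
    }
}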

Can a computer work with software interrupts only?

Can software interrupts do some of what hardware interrupts do?
Can the machine detect power failure and similar events and then rely only on software interrupts,
so that we won't need special hardware like interrupt controllers?
While this may be technically possible, I doubt you'll end up with a system that's stable or even reliable. Interrupts are especially important as hardware because they, well, interrupt the processing of other tasks asynchronously. This allows physical components at their lowest level to respond quickly and correctly to events.
Let's play out the scenario you mention and imagine a component on the motherboard detecting a power failure. Without an interrupt, the best it can do is write to a register or cache. It must then rely on another piece of hardware, or even the operating system, to check that value. That basically means periodic polling, which is less efficient. Furthermore, if a long-running task is currently hogging the resources needed to check the value, you have no deterministic way of knowing when that check will occur. It could be near-instant, or it could be a second from now. If it's the latter, your computer loses power and shuts down before it can react.
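To illustrate, here is a sketch of what that polling would look like; the register address, bit, and helper functions are purely hypothetical:

#include <stdint.h>

/* Hypothetical memory-mapped status register set by a power monitor. */
#define PWR_STATUS (*(volatile uint32_t *)0x40001000u)
#define PWR_FAIL   (1u << 0)

extern void do_other_work(void);         /* hypothetical; may run arbitrarily long */
extern void handle_power_failure(void);  /* hypothetical emergency handler */

void main_loop(void) {
    for (;;) {
        do_other_work();             /* nothing bounds how long this takes...    */
        if (PWR_STATUS & PWR_FAIL)   /* ...so this check can be arbitrarily late */
            handle_power_failure();
    }
}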

Interrupt execution context

I'm trying to figure out this basic scenario:
Suppose my CPU receives an exception or an interrupt. What I do know is that the CPU starts to execute an interrupt service routine (it looks at the IDTR register to locate the IDT, and goes to the appropriate entry to get the ISR address), but in what context does that code run?
Meaning, if I have a thread currently running that generates an interrupt of some sort, in which context will the ISR run: in the process that "holds" the thread, or in some other magical thread?
Thanks!
Interesting question, which raises a few different issues.
The first is that interrupts don’t actually run inside any thread from the CPU’s perspective. Indeed, the CPU itself is barely aware of threads; it may know a bit more if it has hyper-threading or some similar technology, but a thread is really an operating-system concept (or, sometimes, an application concept).
The second is that ISRs (Interrupt Service Routines) generally run at some elevated privilege level; you don’t really say which processor family you’re talking about, so it’s difficult to be specific, but modern processors normally have at least one special mode that they enter for handling interrupts — often with its own register bank. One might also ask, as part of your question, whose page table is active during an interrupt?
Third is the question of whose memory map ISRs have when they are entered. The answer, again, is going to be highly processor specific; it’s possible to imagine architectures that disable paging on ISR entry, other architectures that switch automatically to an interrupt page table, and (probably the most common approach) those that decide not to bother doing anything about the page table when entering an ISR.
The fourth is that some operating systems have policies of their own on these kinds of things. A common approach on modern operating systems is to make ISRs themselves as short as possible, and where any significant work needs to be done, convert the interrupt into some kind of event that can be handled by a kernel thread (or even, potentially, by a user thread). In this kind of system, the code that actually handles an interrupt may well be running in a specific thread, though it probably isn’t actually an interrupt service routine at that point.
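Here is a minimal sketch of that "short ISR plus deferred work" pattern; all the helper names are illustrative (real kernels use mechanisms such as softirqs, DPCs, or work queues for this):

#include <stdbool.h>

volatile bool work_pending;   /* set by the ISR, consumed by a thread */

extern void acknowledge_device(void);   /* hypothetical: quiet the hardware */
extern void wake_worker_thread(void);   /* hypothetical: kick the scheduler */
extern void wait_until_woken(void);     /* hypothetical blocking wait       */
extern void process_device_data(void);  /* the real work                    */

/* "Top half": runs in interrupt context and does the bare minimum. */
void device_isr(void) {
    acknowledge_device();
    work_pending = true;
    wake_worker_thread();
}

/* "Bottom half": runs later, in an ordinary kernel-thread context. */
void worker_thread(void) {
    for (;;) {
        wait_until_woken();
        if (work_pending) {
            work_pending = false;
            process_device_data();
        }
    }
}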
Summary:
ISRs themselves don’t really run in the context of any given thread.
ISRs may run with the page table of the interrupted thread (depends on architecture).
ISRs may start with a copy of that thread’s registers (depends on architecture).
In modern systems, ISRs commonly try to schedule an event and then exit quickly. That event might be handled by a specific thread (e.g. for processor exceptions, it’s usually delivered as a signal or Structured Exception or similar to the thread that caused it); or by a pool of threads (e.g. to service I/O in the kernel).
If you’re interested in the specifics for x86 (I guess you are, as you use some Intel specific terms in your question), you need to look at the Intel 64 and IA-32 Architectures Software Developer’s Manual, volume 3B, and you’ll need to look at the operating system documentation. x86 is a very complicated architecture compared to some others — for instance, it can optionally perform a task switch on interrupt delivery (if you put a “task gate” in the IDT), in which case it will certainly have its own set of registers and quite possibly its own page table; even if this feature is used by a given operating system, there is no guarantee that x86 tasks map straightforwardly (or at all) to operating system processes and/or threads.
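As a concrete x86 illustration, here is a sketch of a 32-bit protected-mode IDT gate descriptor and a helper to install a handler, following the layout described in the Intel manual (the attribute byte is a standard kernel-mode interrupt gate; the GCC-style packing is an assumption):

#include <stdint.h>

/* Layout of a 32-bit protected-mode interrupt gate (Intel SDM, Vol. 3). */
struct idt_gate {
    uint16_t offset_low;   /* handler address, bits 0..15  */
    uint16_t selector;     /* kernel code segment selector */
    uint8_t  zero;         /* reserved                     */
    uint8_t  type_attr;    /* type, DPL, present bit       */
    uint16_t offset_high;  /* handler address, bits 16..31 */
} __attribute__((packed));

static struct idt_gate idt[256];

/* 0x8E = present, DPL 0, 32-bit interrupt gate (the CPU disables
   interrupts on entry, unlike a trap gate). */
static void set_gate(int vec, void (*handler)(void), uint16_t selector) {
    uint32_t addr = (uint32_t)handler;
    idt[vec].offset_low  = addr & 0xFFFFu;
    idt[vec].selector    = selector;
    idt[vec].zero        = 0;
    idt[vec].type_attr   = 0x8E;
    idt[vec].offset_high = (uint16_t)(addr >> 16);
}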

Is it true that Windows 7 does not support real time serial communication?

A colleague of mine is getting a quote from a software developer that involves serial communication, and in their quote the developer makes the following statement:
...Windows 7 operating system, which uses a non-real-time serial communication setup.
Is it true that Windows 7 does not support real-time serial communication? To clarify what is meant by "real-time": the project deals with robot automation, and any delays in communication (such as from buffering) could damage the product or stop the production line. I cannot find any evidence to either support or deny this claim. I don't believe it to be true, though; I think it probably has more to do with them using VB.Net for development.
The term 'real-time' used here does not actually refer to anything in the serial communications bus itself.
Rather, it has to do with the fact that the Windows multitasking scheduler is not designed to support real-time tasks with hard deadlines.
See this question for some info: Why is Windows not considered suitable for real time systems/high performance servers?
Let's pretend you have a particle accelerator hooked up to your computer and you have to ensure that every 10 microseconds the magnet train switches to power the next set of cells, but Windows decides that it's time to apply some Windows Update patches. Your photon stream wouldn't get redirected properly and could cause damage to the system.
It is a fairly nonsensical statement: Windows itself is not a real-time operating system and cannot provide hard guarantees that user-mode code will respond quickly enough. Beyond thread-scheduling delays, a simple mishap like having the process's pages swapped out to the paging file is enough to cause arbitrary delays before it runs again; that is an attribute of any demand-paged virtual-memory operating system. So of course a "serial communications setup" cannot be real-time either, assuming you are not contemplating writing ring 0 kernel code. Nobody does.
It is not a practical problem: the only point of using a serial port is to talk to the controller for the robot, which is what provides the real-time guarantee.
You could only get in trouble when you command the robot to make an unrestricted move and use an external sensor to stop it. That is not uncommon when you need to find an object whose location you don't know. A decent controller knows how to do that; avoid implementing it in your Windows code. Solid overtravel protection built into the robot itself, triggering an e-stop, is necessary anyway, since you can't trust that sensor either.
No, Windows 7 is not a real-time operating system (and in fact none of the mainstream Windows releases are). To clarify what is meant by a real-time operating system:
A real-time operating system (RTOS) is an operating system (OS) intended to serve real-time application requests. It must be able to process data as it comes in, typically without buffering delays. Processing time requirements (including any OS delay) are measured in tenths of seconds or shorter.
A key characteristic of an RTOS is the level of its consistency concerning the amount of time it takes to accept and complete an application's task; the variability is jitter.[1] A hard real-time operating system has less jitter than a soft real-time operating system. The chief design goal is not high throughput, but rather a guarantee of a soft or hard performance category. An RTOS that can usually or generally meet a deadline is a soft real-time OS, but if it can meet a deadline deterministically it is a hard real-time OS.[2]
An RTOS has an advanced algorithm for scheduling. Scheduler flexibility enables a wider, computer-system orchestration of process priorities, but a real-time OS is more frequently dedicated to a narrow set of applications. Key factors in a real-time OS are minimal interrupt latency and minimal thread switching latency; a real-time OS is valued more for how quickly or how predictably it can respond than for the amount of work it can perform in a given period of time.[3]
Note that real-time operating systems are usually less efficient (i.e., they have lower throughput), which is why none of the mainstream operating systems are real-time (for example, real-time editions of Linux use completely different kernels); it's only worth it in cases where timing at a very precise level is absolutely critical.
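You can observe the lack of timing guarantees directly. The following sketch, using standard Win32 timing APIs, measures how long Sleep(1) really takes; on a stock system the result varies, often landing near the default ~15 ms timer granularity:

#include <windows.h>
#include <stdio.h>

int main(void) {
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);
    for (int i = 0; i < 10; i++) {
        QueryPerformanceCounter(&t0);
        Sleep(1);                      /* ask for 1 ms...                 */
        QueryPerformanceCounter(&t1);  /* ...and see what we actually got */
        printf("%.3f ms\n",
               (double)(t1.QuadPart - t0.QuadPart) * 1000.0 / (double)freq.QuadPart);
    }
    return 0;
}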
Windows CE is a real-time operating system; see Real-Time Systems with Microsoft Windows CE 2.1.

What is the best way to implement my program for Keil MCB1700 evaluation board?

I want to develop a program for the MCB1700 evaluation board.
The client software on the PC reads a picture from the HDD.
It then sends the picture to the MCB1700 evaluation board through a socket (Ethernet).
The server on the MCB1700 receives pictures from the PC through the socket connection and displays them on the LCD.
The server must also perform tasks such as:
saving a picture to a USB stick;
reading a picture from a USB stick and sending it to the client through the socket;
sending and receiving information over CAN;
COM-port logging;
etc.
A socket connection can be implemented with the help of the CMSIS and RL-ARM libraries.
But, as far as I understand, in both cases the software has to listen for incoming TCP connections and handle network events in an endless loop; all of Keil's examples are based on this principle.
I always thought that endless loops were a poor approach for embedded programming.
Moreover, I read this interesting statement:
"it is certainly possible to create real-time programs without an RTOS
(by executing one or more tasks in a loop)"
http://www.keil.com/support/man/docs/rlarm/rlarm_ar_artxarm.htm
So, as I understand it, is it normal practice to execute a lot of tasks in a loop like this?
while (1) {
    task1();
    task2();
    /* ... */
    taskN();
}
I think it would be better to handle all events with interrupts.
Is it possible to use the socket connection from the CMSIS and RL-ARM libraries and organize all my software around interrupt handling?
My server (on the MCB1700) has to perform a lot of tasks. I guess I should use the RTX RTOS in my software. Is that right, or is it better to implement my software without RTX?
Simple real-time systems often use a "big-loop" architecture, with some time-critical events handled by interrupts. It is not a "bad" architecture, but it is somewhat inflexible, scales poorly, and any change may affect the timing of any or all parts of the system.
It is not true that RL-TCPnet must work this way, but it is designed to be able to run stand-alone, and the examples work that way so that there are no dependencies on other libraries, for the widest applicability. They are only examples, intended to demonstrate RL-TCPnet and nothing else; in that sense the examples are not realistic and should be adapted to your requirements.
You might devise a system where all your application code runs in interrupt handlers and the network stack is serviced in an endless loop in the thread context, but architecturally that is probably far worse than the "big-loop" architecture: you may end up with very long interrupt handlers, with the higher-priority ones starving and affecting the real-time response of the lower-priority ones. You end up with a system that is difficult to schedule satisfactorily. Moreover, not all library routines are suitable for execution in an interrupt handler, so it would be rather restrictive.
It is possible to service the network stack in a low-priority thread in an RTOS. The framework for such operation is described in the documentation, in the section Using RL-TCPnet with RTX kernel. This will require you to understand and use the RL-RTX kernel library, but given your reservations about "big-loop" code, and the description of the tasks your system must perform, it sounds like that is what you want to do in any case. If you choose to use a different RTOS, RL-TCPnet can work in a similar way with any RTOS. A sketch of that structure follows.
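A rough sketch, assuming the RL-RTX kernel API and the RL-TCPnet polling functions used in Keil's examples; verify the exact names and tick handling (RL-TCPnet also expects timer_tick() to be called at its configured tick rate) against your library version's documentation:

#include <RTL.h>

__task void net_task (void) {
    init_TcpNet ();              /* bring up the stack            */
    os_itv_set (1);              /* run roughly every kernel tick */
    for (;;) {
        main_TcpNet ();          /* service the network stack     */
        os_itv_wait ();          /* yield until the next interval */
    }
}

__task void init_task (void) {
    os_tsk_create (net_task, 1); /* low priority: runs when others are idle */
    /* create the LCD, USB, CAN, and logging tasks here, at higher
       priorities as appropriate */
    os_tsk_delete_self ();
}

int main (void) {
    os_sys_init (init_task);     /* start the RTX kernel */
}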