Interrupts execution context

I'm trying to figure out this basic scenario:
Suppose my cpu received an exception or an interrupt. What I do know, is that the cpu starts to perform an interrupt service routine (looks at the idtr register to locate the idt table, and goes to the appropriate entry to receive the isr address), but in what context is the code running?
Meaning if I have a thread currently running and generating an interrupt of some sort, in which context will the isr run, in the initial process that "holds" the thread, or in some other magical thread?
Thanks!

Interesting question, which raises a few different issues.
The first is that interrupts don't actually run inside any thread from the CPU's perspective. Indeed, the CPU itself is barely aware of threads; it may know a bit more if it has hyper-threading or some similar technology, but a thread is really an operating system thing (or, sometimes, an application thing).
The second is that ISRs (Interrupt Service Routines) generally run at some elevated privilege level; you don’t really say which processor family you’re talking about, so it’s difficult to be specific, but modern processors normally have at least one special mode that they enter for handling interrupts — often with its own register bank. One might also ask, as part of your question, whose page table is active during an interrupt?
Third is the question of whose memory map ISRs have when they are entered. The answer, again, is going to be highly processor specific; it’s possible to imagine architectures that disable paging on ISR entry, other architectures that switch automatically to an interrupt page table, and (probably the most common approach) those that decide not to bother doing anything about the page table when entering an ISR.
The fourth is that some operating systems have policies of their own on these kinds of things. A common approach on modern operating systems is to make ISRs themselves as short as possible, and where any significant work needs to be done, convert the interrupt into some kind of event that can be handled by a kernel thread (or even, potentially, by a user thread). In this kind of system, the code that actually handles an interrupt may well be running in a specific thread, though it probably isn’t actually an interrupt service routine at that point.
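As a rough illustration of that last policy, here is a Linux-flavoured sketch of the "top half / bottom half" split: the ISR itself only queues an event, and the real work later runs in a kernel worker thread. The IRQ number, device cookie and names are placeholders, not taken from any real driver.

#include <linux/module.h>
#include <linux/interrupt.h>
#include <linux/workqueue.h>

#define MY_IRQ 42                        /* placeholder IRQ number */
static int my_dev;                       /* placeholder cookie for the shared IRQ line */

/* "Bottom half": runs later in a kernel worker thread and may sleep. */
static void my_deferred_work(struct work_struct *work)
{
    pr_info("heavy lifting done in a kernel thread\n");
}
static DECLARE_WORK(my_work, my_deferred_work);

/* "Top half": the actual ISR. Keep it short: acknowledge and queue the event. */
static irqreturn_t my_isr(int irq, void *dev_id)
{
    schedule_work(&my_work);
    return IRQ_HANDLED;
}

static int __init my_init(void)
{
    return request_irq(MY_IRQ, my_isr, IRQF_SHARED, "isr_demo", &my_dev);
}

static void __exit my_exit(void)
{
    free_irq(MY_IRQ, &my_dev);
    flush_work(&my_work);
}

module_init(my_init);
module_exit(my_exit);
MODULE_LICENSE("GPL");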
Summary:
ISRs themselves don’t really run in the context of any given thread.
ISRs may run with the page table of the interrupted thread (depends on architecture).
ISRs may start with a copy of that thread’s registers (depends on architecture).
In modern systems, ISRs commonly try to schedule an event and then exit quickly. That event might be handled by a specific thread (e.g. for processor exceptions, it’s usually delivered as a signal or Structured Exception or similar to the thread that caused it); or by a pool of threads (e.g. to service I/O in the kernel).
If you’re interested in the specifics for x86 (I guess you are, as you use some Intel specific terms in your question), you need to look at the Intel 64 and IA-32 Architectures Software Developer’s Manual, volume 3B, and you’ll need to look at the operating system documentation. x86 is a very complicated architecture compared to some others — for instance, it can optionally perform a task switch on interrupt delivery (if you put a “task gate” in the IDT), in which case it will certainly have its own set of registers and quite possibly its own page table; even if this feature is used by a given operating system, there is no guarantee that x86 tasks map straightforwardly (or at all) to operating system processes and/or threads.
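To make the "IDTR / IDT entry" machinery a bit more concrete, here is a rough freestanding 32-bit protected-mode sketch of what an interrupt gate descriptor looks like and how the IDTR gets loaded. The selector value and handler addresses are placeholders, and real kernels do considerably more than this; the SDM Volume 3 has the authoritative layout.

#include <stdint.h>

/* One 32-bit interrupt gate descriptor, as laid out in the IDT. */
struct idt_gate {
    uint16_t offset_low;    /* handler address, bits 15..0  */
    uint16_t selector;      /* kernel code segment selector */
    uint8_t  zero;          /* reserved, must be 0          */
    uint8_t  type_attr;     /* 0x8E = present, DPL 0, 32-bit interrupt gate */
    uint16_t offset_high;   /* handler address, bits 31..16 */
} __attribute__((packed));

/* The 6-byte operand of the lidt instruction. */
struct idtr {
    uint16_t limit;         /* size of the IDT in bytes, minus 1 */
    uint32_t base;          /* linear address of the IDT         */
} __attribute__((packed));

static struct idt_gate idt[256];

static void set_gate(int vector, uint32_t handler, uint16_t kernel_cs)
{
    idt[vector].offset_low  = (uint16_t)(handler & 0xFFFFu);
    idt[vector].selector    = kernel_cs;      /* placeholder selector, e.g. 0x08 */
    idt[vector].zero        = 0;
    idt[vector].type_attr   = 0x8E;
    idt[vector].offset_high = (uint16_t)(handler >> 16);
}

static void load_idt(void)
{
    struct idtr reg = { sizeof(idt) - 1, (uint32_t)idt };
    __asm__ volatile ("lidt %0" : : "m"(reg));
}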

Related

Is there any problem with computing the whole program in an ISR?

I have a program that should run periodically on an MCU (e.g. an STM32), say every 1 ms. If I set up a 1 ms ISR and call my complete program from it, assuming it never takes longer than 1 ms, is that a good approach? Are there any problems I could be facing? Will it be precise?
The usual answer would probably be that ISRs should generally be kept to a minimum and that most of the work should be performed in the "background loop" (since you are apparently using the traditional "foreground-background" architecture, a.k.a. "main + ISRs").
But ARM Cortex-M (e.g., STM32) has been specifically designed such that ISRs can be written as plain C functions. In that case, working with ISRs is no different than working with any other C code. This includes ease of debugging.
Moreover, ARM Cortex-M comes with the NVIC (Nested Vectored Interrupt Controller). The NVIC allows you to prioritize interrupts and they can preempt each other. This means that you can quite easily build sets of periodic "tasks" (ISRs), which run under the preemptive, priority-based scheduler. Interestingly, this scheduler (implemented in the NVIC hardware) meets all requirements of RMA/RMS (Rate Monotonic Analysis/Scheduling), so you could prove the schedulability of your system. Of course, the ISR "tasks" cannot block internally, but this is not required for RMA/RMS. Also, if you have any shared resources between ISRs running at different priorities (or ISR and the background loop), you need to properly protect the resources by disabling interrupts.
So, your idea of using ISRs as "tasks" makes a lot of sense to me. Your system will be optimal, meaning that any other approach would be less efficient. (This includes the use of any kind of RTOS.) Also, this design can be low-power, because you can use the "background loop" in main() to put your CPU and peripherals into low-power sleep mode (the WFI instruction, etc.). In fact, you can view the "background loop" as the idle "task" in this "hardware-RTOS".
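A minimal sketch of this shape, assuming CMSIS-style names (SysTick_Handler, SysTick_Config, __disable_irq, __WFI) and an STM32-style device header; the two work functions are placeholders for whatever your periodic and background work actually is:

#include <stdint.h>
#include "stm32f4xx.h"                 /* assumed CMSIS device header; use the one for your part */

static volatile uint32_t tick_count;   /* shared between the ISR "task" and the background loop */

static void do_1ms_control_work(void)     { /* placeholder: the time-critical periodic work */ }
static void log_in_background(uint32_t t) { (void)t; /* placeholder: non-critical work */ }

/* Periodic 1 ms "task": just a plain C function named after the vector. */
void SysTick_Handler(void)
{
    tick_count++;
    do_1ms_control_work();
}

int main(void)
{
    SystemCoreClockUpdate();
    SysTick_Config(SystemCoreClock / 1000u);    /* 1 ms tick */
    NVIC_SetPriority(SysTick_IRQn, 1);          /* RMA-style: shorter period = higher priority */

    for (;;) {
        __disable_irq();                        /* critical section around data shared with the ISR */
        uint32_t now = tick_count;
        __enable_irq();

        log_in_background(now);
        __WFI();                                /* idle "task": sleep until the next interrupt */
    }
}

Under this scheme, additional periodic "tasks" would simply be further interrupt handlers (e.g. hardware timers) with NVIC priorities assigned by rate, as RMA prescribes.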

Can a Deadlock occur with CPU as a resource?

I am on my fourth year of Software Engineering and we are covering the topic of Deadlocks.
The generalization goes that a deadlock occurs when two processes A and B each hold one of two resources X and Y, and each waits for the other process to release its resource before releasing its own.
My question would be, given that the CPU is a resource in itself, is there a scenario where there could be a deadlock involving CPU as a resource?
My first thought on this problem is that you would require a system where a process cannot be taken off the CPU by timer interrupts (it could just be an FCFS algorithm). You would also require that there be no waiting queues for resources, because blocking in a queue would mean giving up the CPU. But then I also ask: can there be deadlocks when there are queues?
A CPU scheduler can be implemented in any way you like; you could build one that uses an FCFS algorithm and lets processes decide when to relinquish control of the CPU. But that kind of implementation is neither practical nor reliable, since the CPU is the single most important resource an operating system has: allowing a process to take control of it in such a way that it may never be preempted effectively makes that process the owner of the system, which contradicts the basic idea that the operating system should always be in control.
As far as contemporary operating systems (Linux, Windows, etc.) are concerned, this will never happen, because they do not allow such situations.

Can a computer work with software interrupts only?

Can software interrupts do some of what hardware interrupts do?
Can the computer detect a power failure and similar events while relying only on software interrupts?
If so, we wouldn't need special hardware like interrupt controllers.
While this may be technically possible, I doubt you'll end up with a system that's stable or even reliable. Interrupts are especially important as hardware because they, well, interrupt the processing of other tasks asynchronously. This allows physical components at their lowest level to respond quickly and correctly to events.
Let's play out the scenario you mention and imagine a component on the motherboard detecting a power failure. Without an interrupt, the best it can do is write to a register or cache. It must then rely on another piece of hardware or even the operating system to check that value, which basically means periodic polling, and that is not as efficient. Furthermore, if a long-running piece of code is currently hogging the resources needed to check that value, you have no deterministic way of knowing when the check will occur. It could be near-instant, or it could be a second from now. If it's the latter, your computer loses power and shuts down before it can react.
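To make the polling alternative concrete, here is a hedged sketch in C; the register name, address and bit are entirely invented, and the two functions are placeholders. The point is only that, without a hardware interrupt, some loop has to keep asking, so the reaction latency depends on whatever else that loop happens to be doing.

#include <stdint.h>

/* Hypothetical memory-mapped power status register; address and bit are invented. */
#define PWR_STATUS_REG   (*(volatile uint32_t *)0x40001000u)
#define PWR_FAIL_BIT     (1u << 3)

static void save_state_and_shut_down(void) { /* placeholder emergency handling */ }
static void do_other_work(void)            { /* placeholder: may run for a long time */ }

int main(void)
{
    for (;;) {
        do_other_work();                      /* reaction latency depends on how long this takes */

        if (PWR_STATUS_REG & PWR_FAIL_BIT) {  /* polled check instead of an interrupt */
            save_state_and_shut_down();
        }
    }
}

/* With a hardware interrupt, the check disappears from the loop entirely:
 * a (hypothetical) power-fail ISR would run as soon as the event occurs,
 * regardless of what do_other_work() is doing at that moment. */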

What is the best way to implement my program for Keil MCB1700 evaluation board?

I want to develop a program for the MCB1700 evaluation board.
The client software on the PC reads a picture from the HDD.
It then sends the picture to the MCB1700 evaluation board through a socket (Ethernet).
The server on the MCB1700 receives pictures from the PC over the socket connection and displays them on the LCD.
The server must also perform tasks such as:
saving the picture to a USB stick;
reading a picture from the USB stick and sending it to the client through the socket;
sending and receiving information over CAN;
COM (serial) logging;
etc.
The socket connection can be implemented with the help of the CMSIS and RL-ARM libraries.
But, as far as I understand, in both cases the software has to listen for incoming TCP connections and handle network events in an endless loop; all of Keil's examples are based on this principle.
I always thought that using endless loops was a poor way to do embedded programming.
Moreover, I read this interesting statement:
"it is certainly possible to create real-time programs without an RTOS
(by executing one or more tasks in a loop)"
http://www.keil.com/support/man/docs/rlarm/rlarm_ar_artxarm.htm
So, as I understand it, it is normal practice to execute a lot of tasks in a loop, like this?
while (1) {
    task1();
    task2();
    ...
    taskN();
}
I think it is better to handle all events with interrupts.
Is it possible to use the socket connection of the CMSIS and RL-ARM libraries and organize all my software around interrupt handling?
My server (on the MCB1700) has to perform a lot of tasks, so I guess I should use the RTX RTOS in my software. Is that right, or is it better to implement my software without RTX?
Simple real-time systems often operate in a "big-loop" architecture, with some time-critical events handled by interrupts. It is not a "bad" architecture, but it is somewhat inflexible, scales poorly, and any change may affect the timing of any or all parts of the system.
It is not true that RL-TCPnet must work this way, but it is designed to run stand alone, and the examples work that way so that there are no dependencies on other libraries for the widest applicability. They are only examples and are intended to demonstrate RL-TCPnet and nothing else. In that sense the examples are not realistic and should be adapted to your requirements.
You might devise a system where all your application code runs in interrupt handlers and the network stack is serviced in an endless loop in the thread context, but architecturally that is probably far worse than the "big-loop" architecture, since you may end up with very long interrupt handlers, with higher-priority handlers starving lower-priority ones and hurting their real-time response. You end up with a system that is difficult to schedule satisfactorily. Moreover, not all library routines are suitable for execution in an interrupt handler, so it would be rather restrictive.
It is possible to service the network stack in a low-priority thread in an RTOS. The framework for such operation is described in the documentation in the section "Using RL-TCPnet with RTX kernel". This will require you to understand and use the RL-RTX kernel library, but given your reservations about "big-loop" code and the description of the tasks your system must perform, it sounds like that is what you want to do in any case. If you choose to use a different RTOS, RL-TCPnet can work in a similar way with any RTOS.
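For orientation only, here is a rough sketch of that shape. The task and stack function names (os_sys_init, os_tsk_create, init_TcpNet, main_TcpNet, and so on) follow the older RL-ARM examples and are assumptions here; check the Keil documentation for your library version, which also describes the periodic timer tick the stack needs.

#include <RTL.h>                        /* assumed RL-ARM RTX / RL-TCPnet header */

/* Low-priority task that services the TCP/IP stack by polling it. */
__task void net_task(void)
{
    init_TcpNet();                      /* assumed RL-TCPnet init call */
    for (;;) {
        main_TcpNet();                  /* poll the stack; returns quickly */
        os_tsk_pass();                  /* yield so higher-priority tasks can run */
    }
}

/* Higher-priority application task: LCD, USB, CAN, COM logging, ... */
__task void app_task(void)
{
    for (;;) {
        /* block on events / mailboxes here and do the real work */
        os_dly_wait(10);                /* placeholder: sleep 10 ticks */
    }
}

__task void init_task(void)
{
    os_tsk_create(app_task, 10);        /* higher priority */
    os_tsk_create(net_task, 1);         /* low priority, as the documentation suggests */
    os_tsk_delete_self();
}

int main(void)
{
    os_sys_init(init_task);             /* start the RTX kernel; never returns */
}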

System call without context switching?

I was just reading up on how Linux works in my OS book when I came across this:
[...] the kernel is created as a single, monolithic binary. The main reason is to improve performance. Because all kernel code and data structures are kept in a single address space, no context switches are necessary when a process calls an operating-system function or when a hardware interrupt is delivered.
That sounded quite amazing to me; surely it must store the process's context before running off into kernel mode to handle an interrupt. But OK, I'll buy it for now. A few pages on, while describing a process's scheduling context, it said:
Both system calls and interrupts that occur while the process is executing will use this stack.
"this stack" being the place where the kernel stores the process's registers and such.
Isn't this a direct contradiction of the first quote? Am I misinterpreting it somehow?
I think the first quote is referring to the differences between a monolithic kernel and a microkernel.
Linux being monolithic, all its kernel components (device drivers, scheduler, VM manager) run at ring 0. Therefore, no context switch is necessary when performing system calls and handling interrupts.
Contrast microkernels, where components like device drivers and IPC providers run in user space, outside of ring 0. Therefore, this architecture requires additional context switches when performing system calls (because the performing module might reside in user space) and handling interrupts (to relay the interrupts to the device drivers).
"Context switch" could mean one of a couple of things, both relevant: (1) switching from user to kernel mode to process the system call, or an involuntary switch to kernel mode to process an interrupt against the interrupt stack, or (2) switching to run another user process in user space, with a jump to kernel space in between the two.
Any movement from user space to kernel space implies saving enough user-space state to return to it reliably. If the kernel-space code then decides that, while the user code for that process is no longer running, it is time to let another user process run, that other process gets switched in.
So at the least, you're talking about two or three stacks, or places to store a "context": hardware interrupts need a kernel-level stack recording what to return to; user function and subroutine calls use the ordinary user stack; and so on.
The original Unix kernels (and the model isn't that different now for this part) ran system calls like a short-order cook processing breakfast orders: move this pan over on the stove to make room for the order of bacon that just arrived, start the bacon, go back to the first order, all while switching context inside the kernel. It was not a huge monitoring application, which probably drove the IBM and DEC software folks mad.
When making a system call in Linux, a context switch is made from user space to kernel space (ring 3 to ring 0). Each process has an associated kernel-mode stack that is used during the system call. On entry to the system call, the process's CPU registers are saved so that user-mode execution can resume later (in Linux they are saved on the kernel-mode stack); this kernel-mode stack is separate from the user-mode stack, which the process uses for its user-space execution.
When a process is in kernel mode (or user mode), calling functions of the same mode does not require a context switch. This is what the first quote refers to.
The second quote refers to the kernel-mode stack, not the user-mode stack.
Having said this, I must mention some Linux optimisations where no transition to kernel space is needed to execute a system call, i.e. all processing related to the system call is done in user space itself (thus no context switch). vsyscall and the vDSO are such techniques. The idea behind them is quite simple: export to user space the data required to satisfy the corresponding system call. More info can be found in this LWN article.
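As a rough illustration (Linux with glibc; exact behaviour depends on the kernel, libc and architecture): clock_gettime() called through libc is usually satisfied entirely in user space via the vDSO, while forcing the same operation through syscall() always enters the kernel. Running the program under strace typically shows a clock_gettime system call only for the second variant.

#define _GNU_SOURCE
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    struct timespec ts;

    /* Usually resolved in user space via the vDSO: no kernel entry. */
    clock_gettime(CLOCK_MONOTONIC, &ts);
    printf("vDSO path:    %ld.%09ld\n", (long)ts.tv_sec, ts.tv_nsec);

    /* Force a real system call: always traps into the kernel. */
    syscall(SYS_clock_gettime, CLOCK_MONOTONIC, &ts);
    printf("syscall path: %ld.%09ld\n", (long)ts.tv_sec, ts.tv_nsec);

    return 0;
}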
In addition to this, there have been research projects in which all execution happens in the same ring: user-space programs and the OS code both reside in the same ring. The idea is to get rid of the overhead of ring switches. Microsoft's Singularity OS is one such project.