Virtual Processors and Logical Partitions

I basically wanted to know what exactly a virtual processor is. At IBM's site they define it as:
"A virtual processor is a representation of a physical processor core to the operating system of a logical partition that uses shared processors. "
I understand that if there are x processors, each of which can simultaneously perform two operations, then the system can perform 2x operations simultaneously. But where does the virtual processor fit into this? I also tried looking up the difference between a logical partition and other partitions, such as primary, but wasn't really sure.

I'd like to draw an analogy between virtual memory and virtual processors.
Start with expectations:
A user program is written against a set of expectations about what memory looks like (a nice flat, large, contiguous memory model is best...)
An OS is written against a set of expectations of how the hardware behaves (what CPU protection modes and operations are available, how interrupts arrive and are blocked and handled, how to talk to IO devices, etc...)
Realize that these expectations can be met directly by the hardware, or by an abstraction layer
Virtual memory is a set of (specialized, not found in simple chips) hardware tools and OS services that fool a user program into thinking that it has that nice, flat, large, contiguous memory space, even while the OS is busily dividing the real memory into little pieces, storing some of them on disk, bringing others back, and otherwise making a real hash of it. But your code doesn't care. Everything just works.
A virtual processor system is a set of (specialized, not found in consumer CPUs) hardware tools and hypervisor services that allow your OS to believe it has direct access to one or more processors with the expected protection modes, interrupts, etc. even though the hypervisor is busily swapping whole OS contexts onto and off of one or more real processors, starting and stopping access to IO busses, and so on and so forth. But the OS doesn't care. Everything just works.
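To make the virtual-memory half of the analogy concrete, here is a minimal C sketch of the translation an MMU performs on every access. The single-level table and 4 KiB pages are simplifying assumptions; real hardware uses multi-level tables and TLBs:

    #include <stdint.h>
    #include <stdbool.h>

    #define PAGE_SHIFT 12                      /* 4 KiB pages */
    #define PAGE_SIZE  (1u << PAGE_SHIFT)

    typedef struct {
        uint64_t frame;    /* physical frame number */
        bool     present;  /* page currently in RAM? */
    } pte_t;

    /* Translate a virtual address using a single-level page table;
       the OS fills this table, the hardware walks it. */
    uint64_t translate(const pte_t *table, uint64_t vaddr) {
        uint64_t vpn    = vaddr >> PAGE_SHIFT;      /* virtual page number */
        uint64_t offset = vaddr & (PAGE_SIZE - 1);  /* byte within the page */
        if (!table[vpn].present) {
            /* Page fault: the OS would fetch the page from disk here,
               update the table, and retry the access. */
            return (uint64_t)-1;
        }
        return (table[vpn].frame << PAGE_SHIFT) | offset;
    }

The user program only ever sees `vaddr`; where the data physically lives is entirely the OS's business.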
The hardware support to do this has only recently started to become available in "desktop" CPUs, but Big Iron has had it for ages. It is useful for a couple of reasons:
Protection. In a properly protected OS, it is tough for one process or user to spy on another. But since they can be resident in the same context, it may still be possible. Virtualizing OSs separates them by another, even thinner channel, and makes it that much harder for data to leak or for malicious things to be done.
Robustness. If you can swap OS contexts in and out, you can migrate them from one machine to another, and checkpoint and restart them. That allows for computers that detect failures on their own processors and recover gracefully.
These are the things (aside from millions of LOC of heavily debugged, mission critical code) that have kept people paying for Big Iron.

Related

How do computers switch between different processes (is it mainly an OS thing)?

Early computers, such as the ENIAC, had to have their program memory (a.k.a. instruction memory) changed manually in order to run different programs. This involved swapping out the tape or punchcards on which instructions were stored every time a new program was to be run.
This limitation of early computers was in part due to the low informational density of rolls of tape compared to modern HDDs, but it was also partly due to the idea (and please correct me if I'm wrong) that each roll of tape was supposed to store only a single program.
In contrast, modern computers can switch between many different programs without having to physically replace ROM. It's easy to switch from one active window to another, or to start running a new program with a few mouse clicks. We now have HDDs and SSDs rather than punchcards and tape, so we can simply keep all the programs we want to run on a single SSD and a few HDDs which are connected to our computers all the time. And we never need to change our storage devices until they break.
I hope the above is enough to motivate the following question:
What are some typical low-level features (w.r.t. hardware and/or software) that enable modern computers to switch between different processes or programs stored in ROM, as opposed to simply treating ROM as a container for single programs as did the computers of yore?
HDDs and SSDs are not equivalent to ROM. There is a distinction between a program and a process: a process is a program in execution. Multiple programs are stored on HDDs and SSDs. When a program is loaded into RAM it becomes a process; a loader, which is part of the OS, does that. Multiple programs can be loaded into RAM at the same time. A context switch is an OS function, and it requires saving and restoring not only the program's instructions but its data and CPU state as well. A lot of low-level features are involved; it is not possible to list all of them. In short: yes, it is mainly an OS thing.
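As an illustration of what saving and restoring CPU state means here, a heavily simplified C sketch of a context switch follows. Real kernels do the register save/restore in assembly, and the helper functions below are hypothetical stand-ins:

    typedef struct {
        unsigned long pc;              /* program counter */
        unsigned long sp;              /* stack pointer */
        unsigned long regs[16];        /* general-purpose registers */
        unsigned long addr_space_root; /* page-table base, e.g. CR3 on x86 */
    } cpu_context_t;

    typedef struct process {
        cpu_context_t   ctx;           /* saved CPU state while not running */
        struct process *next;          /* ready-queue link */
    } process_t;

    /* Hypothetical, architecture-specific primitives (assembly in reality). */
    extern void save_registers(cpu_context_t *ctx);
    extern void restore_registers(const cpu_context_t *ctx);
    extern void switch_address_space(unsigned long root);

    void context_switch(process_t *curr, process_t *next) {
        save_registers(&curr->ctx);                   /* freeze current task */
        switch_address_space(next->ctx.addr_space_root);
        restore_registers(&next->ctx);                /* resume next task */
    }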

Does laptop Embedded Controller have limited writes?

I am wondering if I should be worried about excessive writes to the embedded controller registers on my laptop. I am guessing that if they are true registers, they probably act more like RAM than flash memory, so this isn't a problem.
However, I have a script to modify the registers in my laptop's EC to better control the fan speed curve. It has to be re-applied after each power change event such as sleep/wake as well as power cable events, so it happens fairly often. I just want to make sure I am not burning out my chips in the process.
The script I am using to write to the EC is located here:
https://github.com/RayfenWindspear/perl-acpi-fanspeed
Well, it seems you're writing to ACPI registers. "Registers" here do not refer to any specific hardware; it just means there is a specific address that you can reach using a specific bus. It's highly unlikely, however, that something you have to re-write after every power cycle is overwriting permanent storage, so for all practical purposes I'd assume that you can rely on this for as long as your laptop lives.
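For the curious, what such a script typically ends up doing is the standard ACPI embedded-controller handshake over I/O ports 0x62/0x66 from the ACPI specification. A hedged C sketch (Linux/x86, needs root; the register offset 0x2F and the value are placeholders, not your laptop's actual fan register):

    #include <stdio.h>
    #include <sys/io.h>            /* inb/outb/ioperm; x86 Linux, run as root */

    #define EC_DATA 0x62           /* EC data port (ACPI spec) */
    #define EC_SC   0x66           /* EC command/status port (ACPI spec) */
    #define EC_WR   0x81           /* "write EC byte" command */
    #define EC_IBF  0x02           /* status bit: input buffer full */

    /* Spin until the EC is ready to accept the next byte. */
    static void ec_wait(void) {
        while (inb(EC_SC) & EC_IBF)
            ;
    }

    /* Write one byte into the EC's register space. */
    static void ec_write(unsigned char reg, unsigned char val) {
        ec_wait(); outb(EC_WR, EC_SC);
        ec_wait(); outb(reg, EC_DATA);
        ec_wait(); outb(val, EC_DATA);
    }

    int main(void) {
        if (ioperm(EC_DATA, EC_SC - EC_DATA + 1, 1)) {
            perror("ioperm");
            return 1;
        }
        ec_write(0x2F, 0x40);      /* placeholder register and duty value */
        return 0;
    }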
Hardware peripheral registers are almost universally implemented as SRAM cells. They will not be the first thing to wear out. The fan you are controlling has a limited number of start/stop cycles, so it is much more likely that the act of toggling these registers will wear something else out prematurely (rather than the SRAM-type memory cells themselves).
In your particular case, correctly driving a fan/motor can significantly improve its lifetime. Overdriving a fan/motor does not always make it go faster; instead it creates heat. The heat weakens the wiring, and eventually the coils short, reducing drive and wearing the fan out. That said, the element being cooled can be damaged by excess heat, so tuning things just to reduce noise may not make sense.
Background details
Generally, the element is called a flip-flop, which comes in various forms. SystemRDL is one example of where digital engineers model these, as are SystemC and others. In digital hardware, the flip-flops have default or reset values. These defaults are fixed like ROM on each chip and are not normally re-programmable; occasionally they use EEPROM technology (see Note 1), or are configured via input lines which the hardware designer can pull high/low with a resistor or connect to another element's 'GPIO'.
It is analogous to 'initdata': program values that aren't zero get copied from flash, disk, etc. to memory at program startup. So the flip-flops normally do not hold state over a power cycle; something else does that.
'Flash' technology is based on a floating gate and uses quantum tunnelling to program the floating gates. This process is slightly destructive. The floating-gate transistor was invented in 1967 (the tunnelling mechanism is named after Fowler and Nordheim), but the wider electronics industry did not start producing flash in volume until the early 90s, with NOR flash followed by NAND flash and many variants. The underlying physics is the same; just the digital connections are different. Besides the wear defect you are concerned about, note that flash technology actually arrived after many hardware chips such as the 68k, i386, etc. 'Flip-flops' were well established by then; the 'register' part of the logic is not a large fraction of a typical chip, and a flip-flop uses the same logic (gates) as the rest of the chip. Using flash for registers would therefore be extra overhead with little benefit.
An additional take-away is that the startup and shutdown of chips is usually the most destructive time. Poor hardware designs often lack proper voltage supervision, and some lines may be left floating with the expectation that system software will set them immediately. Reset events, ESD, overheating, etc. will all be more harmful than the mere act of writing a peripheral register.
Note 1: EEPROM typically endures 100,000+ write cycles. These features are typically used only once, at manufacture time, to set a chip configuration for the system. They are actually quite rare, but possible.
The MLC (multi-level cell) NAND flash in SSDs has pathetically low endurance, around 8,000 cycles in some cases. The SLC (single-level cell) old-school flash has 10,000+ cycles, but people demand large capacities.

Challenges in using flat memory model

The flat memory model (linear memory model) provides maximum execution speed, occupies minimal CPU real estate, and gives direct access to memory without any segmentation/paging. It seems that the flat memory model is ideal for small real-time applications or single-threaded real-time applications.
However, is it possible to run a real-time application that is multi-threaded/multi-tasking, with requirements for extensive resource allocation/protection, in a flat memory model?
Thanks
I don't think the memory model has much to do with it here, except for the (RT)OS itself, which you use to get multi-threading/multi-tasking done.
Paging or segmentation, if provided, is useful to the OS primarily for implementing memory protection features. Only this way can the OS protect itself and the running user-mode tasks against improperly written code in others, which would otherwise accidentally write to memory outside its intended domain. (You can't get memory protection without some kind of paging or segmentation, since you can't guard every single memory access.)
In 32-bit AVR processors there is even a distinction between a memory management unit (MMU) and a memory protection unit (MPU). The former is the more complex unit, supporting the kinds of paging features found in modern PC processors (even making it possible to implement virtual memory), while the latter is a simpler subset that only gives you tools for memory protection (for example by the OS, to protect itself and the tasks against each other) and has no remapping capability (a given address always accesses the same cell of memory), unlike the MMU. (Why the distinction? Because some cheaper AVR32s, where that's sufficient, only have an MPU.)
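To illustrate the MPU idea, here is a hypothetical C sketch; the register layout and names below are invented for clarity, so consult the actual part's datasheet for the real interface:

    #include <stdint.h>

    /* Hypothetical MPU region descriptor: base/size plus access rights.
       An MPU only checks accesses against these; it never remaps. */
    typedef struct {
        uint32_t base;    /* region start address */
        uint32_t size;    /* region length, power of two */
        uint32_t perms;   /* access-right bits */
    } mpu_region_t;

    #define MPU_PERM_R 0x1
    #define MPU_PERM_W 0x2
    #define MPU_PERM_X 0x4

    /* Give the current task read/write access to its own RAM block only;
       any access outside configured regions traps to the OS. */
    void mpu_protect_task(volatile mpu_region_t *mpu,
                          uint32_t ram_base, uint32_t ram_size) {
        mpu[0].base  = ram_base;
        mpu[0].size  = ram_size;
        mpu[0].perms = MPU_PERM_R | MPU_PERM_W;
    }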
So the important thing you won't get with a simple flat memory model is the protection features. If you can get by without those, it should go just fine.

Is it true that Windows 7 does not support real time serial communication?

A colleague of mine is getting a quote from a software developer that involves serial communication, and in their quote, the developer makes the following statement:
...Windows 7 operating system, which uses a non-real-time serial communication setup.
Is it true that Windows 7 does not support real-time serial communication? To clarify what is meant by "real-time": the project deals with robot automation, and any delays in communication (such as from buffering) could damage the product or stop the production line. I cannot find any evidence to either support or deny this claim. I don't believe it to be true, though; I think it probably has more to do with them using VB.Net for development.
The 'real-time' term used here does not actually refer to anything in the serial communications bus itself. Rather, it has to do with the fact that the Windows multitasking scheduler is not designed to accommodate real-time tasks with hard deadlines.
See this question for some info: Why is Windows not considered suitable for real time systems/high performance servers?
Let's pretend you have a particle accelerator hooked up to your computer, and you have to ensure that every 10 microseconds the magnets switch to power the next set of cells, but Windows decides that it's time to apply some Windows Update patches. Your particle beam wouldn't get redirected properly and could cause damage to the system.
It is a fairly nonsensical statement: Windows itself is not a real-time operating system. It cannot provide hard guarantees that user-mode code is going to respond quickly enough. Beyond thread-scheduling delays, a simple mishap like having the pages of the process swapped out to the paging file is enough to cause arbitrary delays in getting it running again, an attribute of any demand-paged virtual memory operating system. So of course a "serial communications setup" cannot be real-time either, assuming you are not contemplating writing ring 0 kernel code. Nobody does.
It is not a practical problem: the only point of using a serial port is to talk to the controller for the robot, which provides the real-time guarantee.
You could only get in trouble when you command the robot to make an unrestricted move and use an external sensor to get it to stop. Not uncommon when you need to find an object whose location you don't know. A decent controller knows how to do that, avoid implementing it in your Windows code. Solid overtravel protection built into the robot itself that triggers an e-stop is necessary anyway, you can't trust that sensor either.
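For reference, user-mode serial code on Windows looks like this Win32 sketch (the port name and timeout values are arbitrary). Note that the timeouts only bound how long the call blocks; they do not make delivery deterministic:

    #include <windows.h>
    #include <stdio.h>

    int main(void) {
        /* The "\\\\.\\" prefix is required for ports above COM9 but
           works for all of them. */
        HANDLE h = CreateFileA("\\\\.\\COM3", GENERIC_READ | GENERIC_WRITE,
                               0, NULL, OPEN_EXISTING, 0, NULL);
        if (h == INVALID_HANDLE_VALUE) { printf("open failed\n"); return 1; }

        DCB dcb = { .DCBlength = sizeof dcb };
        GetCommState(h, &dcb);
        dcb.BaudRate = CBR_115200;
        dcb.ByteSize = 8;
        dcb.Parity   = NOPARITY;
        dcb.StopBits = ONESTOPBIT;
        SetCommState(h, &dcb);

        /* Bound how long ReadFile may block; Windows still makes no
           hard guarantee about when the data actually arrives. */
        COMMTIMEOUTS to = { .ReadIntervalTimeout = 50,
                            .ReadTotalTimeoutConstant = 100 };
        SetCommTimeouts(h, &to);

        char buf[64]; DWORD got = 0;
        ReadFile(h, buf, sizeof buf, &got, NULL);
        printf("read %lu bytes\n", (unsigned long)got);
        CloseHandle(h);
        return 0;
    }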
No, Windows 7 is not a real-time operating system, and in fact neither is any of the mainstream Windows releases. To clarify what is meant by a real-time operating system:
A real-time operating system (RTOS) is an operating system (OS) intended to serve real-time application requests. It must be able to process data as it comes in, typically without buffering delays. Processing time requirements (including any OS delay) are measured in tenths of seconds or shorter.

A key characteristic of an RTOS is the level of its consistency concerning the amount of time it takes to accept and complete an application's task; the variability is jitter.[1] A hard real-time operating system has less jitter than a soft real-time operating system. The chief design goal is not high throughput, but rather a guarantee of a soft or hard performance category. An RTOS that can usually or generally meet a deadline is a soft real-time OS, but if it can meet a deadline deterministically it is a hard real-time OS.[2]

An RTOS has an advanced algorithm for scheduling. Scheduler flexibility enables a wider, computer-system orchestration of process priorities, but a real-time OS is more frequently dedicated to a narrow set of applications. Key factors in a real-time OS are minimal interrupt latency and minimal thread switching latency; a real-time OS is valued more for how quickly or how predictably it can respond than for the amount of work it can perform in a given period of time.[3]
Note that most of the time real-time operating systems are less efficient (i.e. have lower throughput), which is why none of the mainstream operating systems are real-time (e.g. real-time editions of Linux use completely different kernels); it's only worth it in cases where timing at a very precise level is absolutely critical.
Windows CE is a real-time operating system; see Real-Time Systems with Microsoft Windows CE 2.1.

Why is virtual memory needed in embedded systems? [closed]

Per my understanding, virtual memory is as follows:
Programs/applications/executables reside on a storage device. Storage device access is much slower than RAM, so programs are copied from storage to main memory for execution. Since computers have limited main memory (RAM), when all of the RAM is being used (e.g., if there are many programs open simultaneously, or if one very large program is in use), a computer with virtual memory enabled will swap data to the HDD and back to memory as needed, thus, in effect, increasing the total system memory.
As far as I know, most embedded devices do not have disk storage (smartphones or in-car infotainment systems, for example). Code is directly executed from flash memory, and RAM is mainly used as a scratchpad area (local variables, return addresses, etc.).
So why do we need virtual memory in embedded systems? (e.g. WinCE and QNX support virtual memory)
Your understanding is completely wrong. You are confusing virtual memory with swapping or page files. There are systems that have virtual memory and no swap or page files and there are systems that swap without virtual memory.
Virtual memory just means that a process has a view of memory that is different from the physical mapping. Among other things, it allows processes to have their own virtual address space.
Storage device access is much slower than RAM, so programs are copied from storage to main memory for execution. Since computers have limited main memory (RAM), when all of the RAM is being used (e.g., if there are many programs open simultaneously, or if one very large program is in use), a computer with virtual memory enabled will swap data to the HDD and back to memory as needed, thus, in effect, increasing the total system memory.
That's swapping (or paging). It has nothing to do with virtual memory except that most modern operating systems implement swapping using virtual memory. Swapping actually existed before virtual memory.
I think you're probably incorrect about these devices running code directly from flash memory. The read speed of flash is pretty low and RAM is very cheap. My bet is that most of the systems you mention don't run code directly from flash and instead use virtual memory to fault code into RAM as needed.
Embedded systems: the term itself covers a wide range of applications. You could call a small microcontroller, with flash program space measured in kilobytes or less and RAM measured in bits or bytes (not enough to be kilobytes), an embedded system. Likewise a TiVo, running a full-blown operating system on a pretty much full-blown computer motherboard, is an embedded system (replace TiVo with Xbox for another example). So you need to be less vague in your question; virtual memory has little to do with any of that, and its applications cross those boundaries.
There are many answers above; David S has the best, of course: virtual memory simply means the memory address on one side of the virtual memory boundary is different from the physical address used on the other side of that boundary. Where, how, and why there is a boundary varies.
A popular use for virtual memory, and I might argue the primary use case, is operating systems. One benefit is that all applications can be compiled for the same address space: every application might be compiled such that, from the program's perspective, it starts at, say, address 0x8000, and when it runs it accesses memory based on that address. A combination of the hardware and the operating system changes the virtual address the program uses into a physical address. If the operating system allows multitasking, then each task may think it is in the same address space, while the physical addresses are different for each task. I won't elaborate further on why an assumed, fixed address space is a benefit.

Another aspect operating systems exploit is memory management. Many MMUs let you segment memory however you like. If a user wants to allocate 100 megabytes of memory, the program may access that 100 MB in its virtual address space as if it were linear, and in that address space it is linear, but the 100 MB might be broken into, say, 4 KB chunks scattered all about the physical address space. It is certainly technically possible (if not always likely) that no two chunks of that physical memory are next to each other. Your memory management doesn't have to try to keep large physical chunks of memory available for applications to allocate. Note that not all MMUs are exactly the same, and 4 KB is just an example.

A third major benefit of a virtual address space to an operating system is protection. If the application is bound to its virtual address space, it is often quite easy to prevent it from touching the memory of any other application or the operating system. The application executes at a protection level such that all its accesses are considered virtual and have to go through a translation to physical, and the tables used to define that virtual-to-physical mapping can contain protection flags. If the application addresses a memory address in its virtual space that it has no business accessing, the hardware can trap that and let the operating system decide how to handle it (virtualize some hardware; pop up an error and kill the app; pop up a warning, not kill the app, but feed it bogus data for the transaction; etc.).
There are lots of ways this can be used in an embedded system. First off, many embedded systems run operating systems, so all of the above applies: ease of compiling the program for the address space, relative ease of memory management, and protection of the other applications and the operating system, plus other benefits not mentioned (virtualization being one; being able to enable/disable instruction/data caching on a block-by-block basis is another).
The bottom line, though, is what David S pointed out: virtual memory simply means the virtual address is not necessarily equal to the physical address (it can be, but doesn't have to be); there is some boundary, some hardware, usually table-driven, that translates the virtual address into a physical address. There are lots of reasons why you would want to do this, and since some embedded systems are indistinguishable from non-embedded systems, any reason that applies to a non-embedded system can apply to an embedded one.
As much as folks may want you to believe that a system has a flat address space, it is often an illusion. In a microcontroller, for example, you might have multiple flash banks and one or more RAM banks. Each of these banks has a physical, generally zero-based address. Even if there is no MMU or anything like it, there is a place somewhere between the address bus on the processor and the address bus on the flash or RAM that decodes the address on the processor and uses it to address into the specific memory bank. Often the lower bits match and the upper bits are responsible for the bank choice (this is often the case with an MMU as well), so in that sense the processor is living in a virtual address space. (This is not limited to microcontrollers; it is generally how processor address busses are treated.)

With microcontrollers, depending on a pin being pulled high or low, or some other mechanism, you might have a chip feature that allows one flash bank or another to be used to boot the processor. You might tie an input pin high, so the processor's built-in bootloader lets you access and debug the system, for example to reprogram the application flash; or tie that line low and boot the application flash instead of the vendor's debugger/boot flash. Some chips get even more complicated, letting you boot one flash, after which the program writes a register somewhere, instantly changing the memory architecture and moving things around: for example, allowing RAM to be used for the interrupt vector table, so your application can be changed after boot, rather than relying on a vector table in flash that is not as easy to change at will.
Now, when you talk about virtual memory in the sense of swapping to and from a disk, that is a trick often employed by operating systems to give the illusion of having more RAM. I mentioned it above under the category of virtualization: virtual memory in the sense that it isn't really there; I have X bytes but will let the software think there are Y bytes available (where Y is larger than X). The operating system, through the virtual-to-physical tables used by the hardware, manages which memory chunks are backed by physical RAM (and so are allowed to complete as-is by the hardware) and which are marked as not present, causing an exception to the operating system. Upon inspection, the operating system determines that this is a valid address for this application, but the data behind it has been swapped to disk. The operating system then finds, through some algorithm, another chunk of RAM belonging to whomever (part of the algorithm), copies that chunk of RAM to disk, marks its virtual-to-physical table entry as not valid, then copies the desired chunk from disk to RAM, marks that chunk as valid, and lets the hardware complete the memory cycle.
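The swap path just described, as a compact C sketch; every helper here is a hypothetical stand-in for real kernel machinery:

    typedef struct process process_t;   /* opaque task descriptor */
    typedef struct frame   frame_t;     /* one physical page frame */

    /* Hypothetical helpers standing in for real kernel machinery. */
    extern int      address_valid_for(process_t *p, unsigned long vaddr);
    extern frame_t *pick_victim_frame(void);        /* e.g. LRU policy */
    extern int      frame_is_dirty(frame_t *f);
    extern void     write_to_swap(frame_t *f);      /* save evicted page */
    extern void     read_from_swap(process_t *p, unsigned long vaddr,
                                   frame_t *into);  /* load wanted page */
    extern void     map_page(process_t *p, unsigned long vaddr, frame_t *f);
    extern void     kill_with_segfault(process_t *p);

    void handle_page_fault(process_t *p, unsigned long vaddr) {
        if (!address_valid_for(p, vaddr)) {
            kill_with_segfault(p);     /* address was never this process's */
            return;
        }
        frame_t *victim = pick_victim_frame();
        if (frame_is_dirty(victim))
            write_to_swap(victim);     /* preserve the page being evicted */
        read_from_swap(p, vaddr, victim);
        map_page(p, vaddr, victim);    /* fix the table; hardware retries */
    }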
This trap-and-fix flow is not any different from how VMware or other virtual machines work. You can execute instructions natively on the hardware, using virtual memory, until such time as you cause an exception. The virtual machine might think it has an xyz network interface and might have a driver accessing a register in that xyz network interface, but the reality is you have no xyz hardware, and/or you don't want the virtual machine's applications to access that hardware, so you virtualize it: you trap that register access and, using software that simulates the hardware, you fake the access and let the program on the virtual machine continue. This is obviously not the only way to do virtual machines, but it is one way, if the hardware supports it, to let a virtual machine run very fast as a percentage of the time it is actually running instructions on the hardware. The slowest way to virtualize, of course, is to virtualize everything including the processor; every instruction is then simulated. This is quite slow but has its own features (virtualizing an ARM system on an x86, or x86 on an ARM, xyz on an abc, fill in the blanks).

If that is the type of virtual memory you are talking about in an embedded system: where the embedded system is for the most part indistinguishable from a non-embedded system (an Xbox or TiVo, for example), then for the same reasons you could allow such a thing. If you were on a microcontroller, the use cases there would generally mean that if you needed more memory you would buy a bigger microcontroller, add more memory to the system, or change the needs of the application so that it doesn't need as much memory. There may be exceptions, but it mostly depends on your application and requirements: a general-purpose, or general-purpose-like, system that allows applications or their data to be larger than the available RAM will require some sort of solution. The microcontroller in your keyless-entry fob, your TV remote control, or your clock radio would normally have no need to let "applications" require more resources than are physically there.
The more important benefit of using virtual memory is that every process gets its own address space, isolated from every other process's. That way virtual memory helps keep faults contained and improves security and stability. I should note that it is still possible for two processes to share a bit of memory, to facilitate communication (shared-memory IPC).
You can also do other tricks, like conserving memory by mapping shared parts into more than one process's address space (libc comes to mind for embedded use) while having only one copy in physical memory. This also gives a speed boost. You can enhance it further the way Linux does to cheapen fork/clone: only the in-kernel descriptors are copied, and the memory image is left alone until the first write access, with a similar idea (copy-on-write).
As a last benefit: in modern systems, it's common to do file I/O by mapping the file into the process's address space (cf. mmap, for example).
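A minimal POSIX example of that last point; the file name is arbitrary:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("data.bin", O_RDONLY);   /* any existing file */
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        fstat(fd, &st);

        /* Map the file read-only; its pages are faulted in on demand
           rather than read() up front. */
        char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        long sum = 0;
        for (off_t i = 0; i < st.st_size; i++)
            sum += p[i];                       /* each touch may page fault */
        printf("byte sum: %ld\n", sum);

        munmap(p, st.st_size);
        close(fd);
        return 0;
    }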
It's interesting to note that one can get some of the benefits of "virtual memory" without needing a full-fledged MMU. The hardware requirements can sometimes be amazingly light. The PIC 16C505 has a 5-bit address space and 40 bytes of RAM; addresses 0x10 to 0x1F can map to either of two groups of 16 bytes of RAM. When writing an application which needed to manage two different data streams, I arranged so that all the variables associated with one data stream would be in the first group of 16 "switchable" memory locations, and those associated with the other would be at the corresponding addresses in the second group. I could then use the same code to manage both data streams. Simply set the banking bit one way, call the routine, set it the other way, and call the routine again.
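In C-flavoured pseudocode, that banking trick looks roughly like the sketch below (the real code is PIC assembly, and the bank-select register and bit names here are invented; see the part's datasheet for the actual core register):

    /* Hypothetical core register holding the bank-select bit. */
    extern volatile unsigned char BANK_REG;
    #define BANK_BIT 0x20               /* hypothetical bank-select bit */

    extern void service_stream(void);   /* same code; operates on whichever
                                           bank of variables is selected */

    void service_both_streams(void) {
        BANK_REG &= (unsigned char)~BANK_BIT;  /* bank 0: stream A's variables */
        service_stream();
        BANK_REG |= BANK_BIT;                  /* bank 1: stream B's variables */
        service_stream();
    }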
One of the reasons virtual memory exists is so that your device can multitask. It also lets backing storage act as an extension of your RAM, taking load off the physical RAM by swapping data back and forth.