Is the process type of a computer operating system fixed? User level or kernel level? - process

Recently, while studying operating systems, I suddenly wondered whether a process created by fork() is a kernel-level process or a user-level process. I asked my teacher, who said that in newer versions of Linux it is a kernel-level process. That raised a question in my mind: is the process type fixed on each computer? I look forward to your reply.

Related

Is there such thing as "main" OS in case of type 1 hypervisor?

When we work with type 2 hypervisors it is very easy to say which OS is the main one. For example, if you install a type 2 hypervisor on Win 7 and launch Win 95 inside it, the main OS is Win 7. The concept is obvious.
However, it's not so obvious with type 1 hypervisors. I have never worked with them before.
You have several operating systems on top of the hypervisor. So... which one of these OSes is the main one? How is this question resolved? And perhaps (just a guess) there is no such thing as a "main OS" in this case?
I don't think that "main" operating system is a defined term.
A type 2 hypervisor is an extension to an operating system, which is known as the host operating system when guest operating systems are running on top of it. A host operating system runs directly on the hardware and needs to have specific code to interact with the hardware (e.g. the NIC, the disk, etc.) and provide abstractions to user-level programs. The hypervisor simply extends the functionality of the host operating system to allow guest operating systems to run on top (e.g. when the guest operating system wants to write to the hard drive, the hypervisor translates this request to a form that the host OS can understand so that the host OS can make the disk access).
A type 1 hypervisor runs directly on the hardware without an operating system. A type 1 hypervisor is basically just a stripped down operating system with the functionality necessary to allow guest operating systems to run on top. When the guest needs to write to disk or do some other privileged operation, the type 1 hypervisor receives the request and acts on it. Perhaps the type 1 hypervisor is what you would consider the "main" OS? Regardless, I would avoid using that term.
I would argue that the "main" OS would be the Hypervisor software itself, as it runs directly on the hardware and supports the virtual operating systems, as well as boots on system startup.

Difference between "process virtual machine" with "system virtual machine"

What's the difference between a process virtual machine and a system virtual machine?
My guess is that a process VM does not provide a kind of operating system for a whole application; rather, it provides an environment for one specific application.
And a system VM provides an environment for an operating system to be installed, just like VirtualBox.
Am I getting it right?
Another question is the difference between the two different implementations of a system VM: hosted vs. stand-alone.
I'm a beginner studying OS, so easy and understandable answer would be greatly appreciated :)
A Process virtual machine, sometimes called an application virtual machine, runs as a normal application inside a host OS and supports a single process. It is created when that process is started and destroyed when it exits. Its purpose is to provide a platform-independent programming environment that abstracts away details of the underlying hardware or operating system, and allows a program to execute in the same way on any platform.
A System virtual machine provides a complete system platform which supports the execution of a complete operating system (OS). Just as you said, VirtualBox is one example.
A Host virtual machine is the server component of a virtual machine, which provides computing resources from the underlying hardware to support a guest virtual machine (guest VM).
The following is from http://airccse.org/journal/jcsit/5113ijcsit11.pdf :
System Virtual Machines
A System Virtual Machine gives a complete virtual hardware platform with support for execution of a complete operating system (OS).
The advantages of using a System VM are:
- Multiple operating system environments can run in parallel on the same piece of hardware, in strong isolation from each other.
- The VM can provide an instruction set architecture (ISA) that is slightly different from that of the real machine.
The main drawbacks are:
- Since the VM indirectly accesses the same hardware, efficiency is compromised.
- Multiple VMs running in parallel on the same physical machine may result in varied performance depending on the workload imposed on the system. Implementing proper isolation techniques may address this drawback.

System Virtualization : Understanding IO virtualization and role of hypervisor [closed]

I would like to obtain a correct understanding of I/O virtualization. The context is pure/full virtualization and not para-virtualization.
My understanding is that a hypervisor virtualizes hardware and offers virtual resources to each sandboxed application. Each sandbox thinks it is accessing the underlying hardware, but in reality it is not. Instead, it is the hypervisor that performs all the accesses. It is this aspect I need to understand better.
Let's assume a chip has a hardware timer meant to be used by the OS kernel as a tick timer, and that there are two virtual machines (e.g. Windows and Linux) running atop the hypervisor.
Neither of the virtual machines has modified its source code. So they continue to emit instructions that directly program the timer resource.
What is the role of the hypervisor really here? How are the two OSes really prevented from accessing the real stuff?
After a bit of reading, I have reached a certain level of understanding described at:
https://stackoverflow.com/a/13045437/1163200
I reproduce it wholly here:
This is an attempt to answer my own question.
System Virtualization : Understanding IO virtualization and role of hypervisor
Virtualization
Virtualization as a concept enables multiple/diverse applications to co-exist on the same underlying hardware without being aware of each other.
As an example, full-blown operating systems such as Windows, Linux, Symbian etc., along with their applications, can coexist on the same platform. All computing resources are virtualized.
What this means is none of the aforesaid machines have access to physical resources. The only entity having access to physical resources is a program known as Virtual Machine Monitor (aka Hypervisor).
Now this is important. Please read and re-read carefully.
The hypervisor provides a virtualized environment to each of the machines above. Since these machines access NOT the physical hardware BUT virtualized hardware, they are known as Virtual Machines.
As an example, the Windows kernel may want to start a physical timer (a system resource). Assume that the timer is memory-mapped I/O. The Windows kernel issues a series of Load/Store instructions on the timer addresses. In a non-virtualized environment, these Loads/Stores would have resulted in programming of the timer hardware.
However, in a virtualized environment, these Load/Store accesses of physical resources result in a trap/fault. The trap is handled by the hypervisor. The hypervisor knows that Windows tried to program the timer. The hypervisor maintains timer data structures for each of the virtual machines. In this case, the hypervisor updates the timer data structure it has created for Windows. It then programs the real timer. Any interrupt generated by the timer is handled by the hypervisor first. The data structures of the virtual machines are updated, and the latter's interrupt service routines are called.
To cut a long story short, Windows did everything that it would have done in a Non-Virtualized environment. In this case, its actions resulted in NOT the real system resource being updated, but virtual resources (The data structures above) getting updated.
Thus all virtual machines think they are accessing the underlying hardware; in reality, unknown to them, all accesses to physical hardware are mediated by the hypervisor.
Everything described above is full/classic virtualization. Most modern CPUs are unfit for classic virtualization: the trap/fault does not occur for all sensitive instructions, so the hypervisor can be bypassed on such devices.
Here is where para-virtualization comes in. The sensitive instructions in the source code of the virtual machines are replaced by calls to the hypervisor. The load/store snippet above may be replaced by a call such as
Hypervisor_Service(Timer Start, Windows, 10ms);
EMULATION
Emulation is a topic related to virtualization. Imagine a scenario where a program originally compiled for ARM is made to run on an ATMEL CPU. The ATMEL CPU runs an emulator program which interprets each ARM instruction and emulates the necessary actions on the ATMEL platform. Thus the emulator provides a virtualized environment.
In this case, virtualization of system resources is NOT performed via the trap-and-emulate model.

Getting started with writing MPI programs

In this coming semester, I am starting some research on large-scale distributed computing with MPI. What I am looking for help with is the initial stages, specifically getting a solid development environment set up. Does anyone have any recommendations for good tools to use for this?
I am also curious as to whether there exists a kind of simulator that would allow me to write MPI programs and distribute them to virtual (rather than physical) nodes.
You could download an MPI library such as Open MPI, MPICH, etc. and run it on a multi-core system (such as a recent desktop) with the number of processes equal to the number of cores. They would operate without a network interconnect (for instance, over shared memory). That should be enough to explore initially.
If you really want multiple nodes, you can experiment with multiple VMs with a VM network before actually moving on to a physical cluster. One of the VMs would have to be configured to act like a NFS server and the rest of the VMs could mount your home directories over NFS.
It depends on your favourite language. I dove into MPI using Python and the pypar module. It lets you concentrate on MPI procedures without worrying too much about pointers and complicated C/C++ details. MPI on a single machine is programmed no differently from MPI on hundreds. Getting cross-machine setups working is more about which MPI implementation and operating systems you use.
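Whichever setup you pick, a good first test of the environment is the classic MPI hello world in C. This assumes an MPI implementation such as Open MPI or MPICH is already installed; compile with `mpicc hello.c -o hello` and run with `mpirun -np 4 ./hello`.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);                  /* start the MPI runtime */

    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total process count */

    printf("hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                          /* shut down cleanly */
    return 0;
}
```

If `mpirun -np 4` prints four lines with ranks 0 through 3, the library, compiler wrapper, and launcher are all wired up correctly, whether the "nodes" are cores, VMs, or physical machines.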

usb target disk mode equivalent on running system

Is there any way to expose a local partition or disk image over your computer's USB port to another computer, so that it appears as an external drive on a Mac/Linux/BSD system?
I'm trying to get into something like kernel development, and I need one system for compiling and another for restarting/testing.
With USB: not a chance. USB is asymmetric (host vs. device), and your development system, acting as a host, has no way of emulating a mass storage device or any other kind of USB device.
With Firewire: Theoretically. (This is what Apple's target disk mode is using.) However, I can't find a readily available solution for that.
I'd advise you to try either virtualization or network boot. VirtualBox is free and open-source software, and has a variety of command-line options, which means it can be scripted. Network boot takes a little effort to set up, but can work really well.
Yet another option is to use a minimal Linux distribution as a bootstrap which sets up the environment you want and then uses kexec to launch your kernel, possibly with GRUB as an intermediary step.
What kind of kernel are you fiddling with? If it's your own code, will the kernel operate in real or protected mode? Do you strictly need disk access, or do you just want to boot the actual kernel?