What happens to the PCB of processes that have finished their execution?

Are they removed from memory, or are they kept in memory? If they are kept in memory, what is the purpose?

The PCB (process control block) holds the register state for the process (thread). It is one of many data structures an operating system maintains for a process. The PCB serves no purpose once the process is terminated, so it is typically freed or reused by another process (or thread). Most other data structures are freed or reused as well.

Related

Where does the PCB of a process lie inside main memory?

A process in main memory is stored in the form of a stack, a heap, a data section, and a static data section. Where does the PCB lie within this, or is it stored independently of it?
Is it stored at the bottom of the process's stack, or is it independent of the memory representation of the process?
PCBs (Process Control Blocks), which contain all the information about a process, are usually stored in specially reserved memory as part of the kernel space.
The kernel space, part of the logical address space in RAM, is the core of an operating system and has full access to the underlying hardware.
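As a rough, hypothetical sketch of what such a kernel structure contains (all field names here are illustrative, not from any real kernel; Linux, for example, keeps far more state in its task_struct):

    /* Hypothetical sketch of a process control block; real kernels
     * (e.g. Linux's task_struct) hold far more state than this. */
    #include <stdint.h>

    enum proc_state { PROC_NEW, PROC_READY, PROC_RUNNING, PROC_BLOCKED, PROC_TERMINATED };

    struct pcb {
        int32_t         pid;             /* process identifier */
        enum proc_state state;           /* scheduling state */
        uint64_t        regs[32];        /* saved CPU registers for context switches */
        uint64_t        program_counter; /* where to resume execution */
        void           *page_table;      /* root of the address-space mapping */
        int             open_files[16];  /* simplified file descriptor table */
        struct pcb     *parent;          /* link to the creating process */
    };

When a process terminates, the kernel can simply return this structure to its allocator (possibly after the parent has collected the exit status), which is why the PCB does not persist after exit.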

Understanding PostgreSQL shared memory

I've watched the presentation and still have one question about how shared buffers work. As slide 16 shows, when the server handles an incoming request, the postmaster process calls fork() to create a child process to handle it. Here is a picture from there:
So we have an entire copy of the postmaster process, except for its pid. Now, if the child process updates some data belonging to shared memory (putting it in shared buffers, as shown on slide 17), we need the other threads to be aware of the changes. The picture:
The synchronization process is what I don't understand. Each process owns a copy of the shared memory, and while copying, it doesn't know whether another thread will write something to its copy of the shared memory. What if, after proc1 is created by calling fork(), another process proc2 is created a little later and starts writing something into its copy of the shared memory?
Question: How does proc1 know what to do with the part of shared memory that is being modified by proc2?
The crucial thing to understand is that there are two different types of memory sharing used.
One is the copy-on-write sharing used by fork() (without exec()), where the child process inherits the parent process's memory and state. In this case, when the child or the parent modifies anything, a new private copy of the modified memory page is allocated. So the child doesn't see changes made by the parent after fork(), and the parent doesn't see changes made by the child after fork(). Peer children cannot see each other's changes either. They're all isolated as far as memory is concerned; they just share a common ancestor.
That memory is what's shown in the Program (text), data and stack sections of the diagram.
Because of that isolation, PostgreSQL also uses POSIX shared memory - or, in older versions, System V shared memory. These are explicitly shared memory segments that are mapped to a range of addresses. Each process sees the same memory, and it is not copy-on-write. It's fully read/write shared.
This is what is shown in the purple "shared memory" section of the diagram.
POSIX shared memory is used for inter-process communication, for locking, for shared_buffers, and so on - not the memory inherited from fork()ing.
While memory from fork is often shared copy-on-write, that's really an operating system implementation detail. The operating system could choose not to share it at all and make an immediate copy of the parent's whole address space for the child at fork time. The main way the copy-on-write sharing is really relevant is when looking at memory usage in tools like top.
When PostgreSQL refers to "shared memory" it's always talking about the POSIX or System V shared memory block(s) that are mapped into each process's address space. Not copy-on-write sharing from fork().
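A minimal sketch of the difference, assuming Linux/POSIX (this is illustrative, not PostgreSQL's actual code): the MAP_SHARED mapping obtained via shm_open()/mmap() is visible to both processes, while an ordinary global inherited across fork() is copy-on-write.

    /* Sketch: fork()ed memory is copy-on-write, while a POSIX shared
     * memory mapping (MAP_SHARED) is genuinely shared. Link with -lrt
     * if your libc requires it. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int private_counter = 0;             /* inherited copy-on-write across fork() */

    int main(void) {
        int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
        ftruncate(fd, sizeof(int));
        int *shared_counter = mmap(NULL, sizeof(int), PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, 0);
        *shared_counter = 0;

        if (fork() == 0) {               /* child */
            private_counter = 42;        /* forces a private page copy */
            *shared_counter = 42;        /* parent will see this */
            _exit(0);
        }
        wait(NULL);
        printf("private=%d shared=%d\n", private_counter, *shared_counter);
        shm_unlink("/demo_shm");
        return 0;                        /* prints: private=0 shared=42 */
    }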
I don't know about this special case, but generally in Linux and most other operating systems, in order to speed up creating a new process, when a process asks the operating system to create a new process, the OS creates the new one with minimum requirements (particularly in DB applications) and shares most of the parent's memory space with the child. When the child wants to modify some part of that shared memory, the OS uses the CoW (copy-on-write) mechanism and creates a new copy of that part of memory for the child process's use. That part then becomes specific to the child process and is no longer shared with the parent process.

What is copy-on-write memory?

As I continuously write data to Redis, the memory used by copy-on-write keeps increasing. Even though I make my program sleep long enough for Redis to finish every background save (the last memory message is 0 MB of memory used by copy-on-write), the next background save goes back to the high number.
Example,
1300MB of memory used by cow
1400MB of memory used by cow
0MB of memory used by cow
1500MB of memory used by cow
What exactly does all this mean? As far as I know, if the copy-on-write memory keeps increasing, there is no way there is enough RAM. Also, during each background save that uses a lot of memory, Redis seems non-functional: Jedis always hits a socket timeout exception.
Here I will explain a few things: what copy-on-write (CoW) is and how it consumes memory, why setting 'vm.overcommit_memory = 1' won't help with the memory usage and performance issue, and best practices for backing up Redis data.
Copy-on-Write and its memory usage
Redis' snapshot backup leverages the CoW semantics provided by modern operating systems to avoid the problem that, when forking a process, the memory of the parent process would be copied to the child, doubling the memory footprint. With CoW, the forked child process shares the original memory space of the parent process; a memory page is copied only when either process modifies it. Here is an illustration of the memory space before and after data modification:
While a Redis RDB backup is ongoing, data changes happen in the parent process, which keeps accepting new requests from clients and handling them in memory. If the QPS is high, the parent process will copy many memory pages for the new changes during the child process's backup time, and thus consume extra memory. In the extreme case where all memory pages are modified, the memory footprint of the Redis instance doubles. So yes, there is a possibility that the memory doubles, and this fact explains why Redis recommends the 'vm.overcommit_memory = 1' setting, what problem it can resolve, and what it cannot do (reduce the memory usage).
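To make that concrete, here is a hedged sketch of the snapshot pattern (not Redis source code): the child serializes its frozen copy-on-write view while the parent keeps writing, and every parent write to a still-shared page makes the kernel duplicate that page - that duplication is the "memory used by copy-on-write" reported in the log.

    /* Sketch of the fork()-based snapshot pattern used for RDB saves.
     * The child sees the data frozen at fork() time; parent writes after
     * the fork() make the kernel duplicate the touched pages (CoW). */
    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define N (1 << 20)
    static int dataset[N];               /* stand-in for the in-memory database */

    int main(void) {
        memset(dataset, 1, sizeof dataset);

        pid_t child = fork();
        if (child == 0) {                /* child: dump the frozen snapshot */
            FILE *f = fopen("dump.bin", "wb");
            fwrite(dataset, sizeof dataset, 1, f);
            fclose(f);
            _exit(0);
        }

        /* Parent: keep serving writes; each page still shared with the
         * child is duplicated on first write, consuming extra memory. */
        for (int i = 0; i < N; i++)
            dataset[i] = 2;

        waitpid(child, NULL, 0);
        return 0;
    }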
What "vm.overcommit_memory = 1" is, and what issues it resolves
During the RDB backup, you may see a log error like this:
10202:M 13 Sep 11:34:16.535 # Can't save in background: fork: Cannot allocate memory
It indicates there is not enough memory to fork the child process for the backup. If the Redis process consumes 2 GB of memory now, then when forking the child process the operating system assumes you need ANOTHER 2 GB, so that in the extreme CoW case there is sufficient memory to copy all dirty memory pages. Even though the extra memory is not actually used at fork time, the OS checks the idle memory up front to avoid later out-of-memory errors. In the Redis log, it suggests the solution:
10202:M 13 Sep 11:33:09.943 # WARNING overcommit_memory is set to 0! Background
save may fail under low memory condition. To fix this issue add
'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the
command 'sysctl vm.overcommit_memory=1' for this to take effect.
So setting 'vm.overcommit_memory = 1' allows you to fork the child process even when idle memory is low. If you know the dirty memory pages during the backup won't be too many, there won't be any actual problem, because the memory will be allocated successfully each time a new CoW operation happens.
Also, 'vm.overcommit_memory = 1' only guarantees that you can fork the child process to back up the Redis data; it cannot reduce the memory usage if write operations keep happening in the parent process.
Redis backup practice
There are three ways of persisting the Redis in-memory data: RDB (snapshotting), AOF, and a hybrid of the two. Any approach will impact the server response time to some extent, no matter how you configure the settings. To minimize the impact of the persistence process, we normally run the backup on a slave instance instead of on the master. However, there is a new risk if we do it on a slave: when a network partition happens, the slave may not be able to keep up to date, so backing up on a slave risks losing some data. One mitigation is to have multiple slaves, which lowers the chance of all of them being out of sync with the master at once. Another is to set up a robust monitoring system, so we can detect network issues sooner and reduce the duration of the partition.
From the Redis FAQ:
Redis background saving schema relies on the copy-on-write semantic of fork in modern operating systems: Redis forks (creates a child process) that is an exact copy of the parent. The child process dumps the DB on disk and finally exits. In theory, the child should use as much memory as the parent, being a copy, but actually, thanks to the copy-on-write semantic implemented by most modern operating systems, the parent and child process will share the common memory pages. A page will be duplicated only when it changes in the child or in the parent. Since, in theory, all the pages may change while the child process is saving, the operating system must be prepared for the worst case in which every page is duplicated.
The increased memory usage during the save process depends on the number of writes performed while the dump is in progress, because of the copy-on-write (COW) mechanism.
What you could do instead is configure a Redis slave and delegate the task of persistence to it.

What is easier for the OS to set up, a new process or a new thread?

Question as stated above: from the standpoint of the operating system, which one is easier to create, a thread or a process?
A new thread should be faster to create than a new process.
A process is a heavyweight system structure. It has its own virtual memory space, owns its handles (mutexes, semaphores, open files), and is protected from other processes. Cross-process communication has to go through the OS.
A thread is a "child" of a process. A thread is simply an execution context (registers, stack, and thread-local state) that can run on another hardware core or be co-scheduled on the same core as other threads within a process. Multiple threads share the resources of a single process, including the address space and the OS handles owned by the process.
There are structures even faster than dynamically created threads for achieving multitasking during a program's runtime, as sketched below.
Some systems and code libraries have thread pools. In this case, you tell the system how many threads you want to run and it creates them up front. Then, instead of creating and destroying threads (which is still a relatively slow process), you allocate and free threads from this pool.
Job tasking is a similar lightweight multicore structure in which several threads each have a job queue of tasks to execute. They run the tasks in their job queues and then sleep when the queues are empty.
For both thread pools and job tasking, there is no thread startup/shutdown cost beyond the one-time creation and destruction of the global pools and queues.
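As a rough sketch of the thread-pool idea (POSIX threads, minimal error handling, and a deliberately tiny fixed-capacity queue; compile with -pthread):

    /* Minimal thread-pool sketch: workers are created once, then jobs are
     * queued without any per-task thread creation or destruction cost. */
    #include <pthread.h>
    #include <stdio.h>

    #define NUM_WORKERS 4
    #define MAX_JOBS    64           /* sketch assumes < MAX_JOBS jobs total */

    typedef void (*job_fn)(int);
    struct job { job_fn fn; int arg; };

    static struct job queue[MAX_JOBS];
    static int head = 0, tail = 0, shutting_down = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  nonempty = PTHREAD_COND_INITIALIZER;

    static void *worker(void *unused) {
        (void)unused;
        for (;;) {
            pthread_mutex_lock(&lock);
            while (head == tail && !shutting_down)
                pthread_cond_wait(&nonempty, &lock);  /* sleep while idle */
            if (head == tail) {                       /* drained + shutdown */
                pthread_mutex_unlock(&lock);
                return NULL;
            }
            struct job j = queue[head++];
            pthread_mutex_unlock(&lock);
            j.fn(j.arg);                              /* run outside the lock */
        }
    }

    static void submit(job_fn fn, int arg) {
        pthread_mutex_lock(&lock);
        queue[tail++] = (struct job){ fn, arg };
        pthread_cond_signal(&nonempty);
        pthread_mutex_unlock(&lock);
    }

    static void print_square(int x) { printf("%d^2 = %d\n", x, x * x); }

    int main(void) {
        pthread_t workers[NUM_WORKERS];
        for (int i = 0; i < NUM_WORKERS; i++)
            pthread_create(&workers[i], NULL, worker, NULL);

        for (int i = 1; i <= 10; i++)
            submit(print_square, i);   /* no thread startup cost per task */

        pthread_mutex_lock(&lock);
        shutting_down = 1;
        pthread_cond_broadcast(&nonempty);
        pthread_mutex_unlock(&lock);
        for (int i = 0; i < NUM_WORKERS; i++)
            pthread_join(workers[i], NULL);
        return 0;
    }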
Well, traditionally threads are called "lightweight processes", so I guess they are easier to set up.
IIRC, in Linux both forking and starting a new thread (clone(2)) are implemented deep down by a call to the same function (do_fork), and the setup times are really comparable for modest counts. For large numbers of forks/clones (think thousands), they start to add up.
In TLPI there is a nice comparison:
Forking 100,000 times: 22.27 seconds
Cloning 100,000 times: 2.97 seconds
In particular, a really nice feature of clone is that its speed remains constant even as the size of the cloned process grows.
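A rough way to reproduce that kind of comparison yourself (a sketch, not a rigorous benchmark; compile with -pthread, and expect the numbers to vary by machine):

    /* Timing sketch: create-and-reap N processes vs. N threads. */
    #include <pthread.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <time.h>
    #include <unistd.h>

    #define N 10000

    static void *noop(void *p) { return p; }

    static double now(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    int main(void) {
        double t0 = now();
        for (int i = 0; i < N; i++) {
            pid_t p = fork();
            if (p == 0) _exit(0);        /* child does nothing and exits */
            waitpid(p, NULL, 0);
        }
        printf("fork   x %d: %.2f s\n", N, now() - t0);

        t0 = now();
        for (int i = 0; i < N; i++) {
            pthread_t t;
            pthread_create(&t, NULL, noop, NULL);
            pthread_join(t, NULL);
        }
        printf("thread x %d: %.2f s\n", N, now() - t0);
        return 0;
    }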
The real advantage of threads is that they don't need IPC.
A new thread is easier to create, since a new process requires more setup than a thread, e.g. a security context, inheritable handles, a current directory, etc.
The major differences between threads and processes are:
1. Threads share the address space of the process that created them; processes have their own address space.
2. Threads have direct access to the data segment of their process; processes have their own copy of the data segment of the parent process.
3. Threads can directly communicate with other threads of their process; processes must use inter-process communication to communicate with sibling processes.
4. Threads have almost no overhead; processes have considerable overhead.
5. New threads are easily created; new processes require duplication of the parent process.
6. Threads can exercise considerable control over threads of the same process; processes can only exercise control over child processes.
7. Changes to the main thread (cancellation, priority change, etc.) may affect the behavior of the other threads of the process; changes to the parent process do not affect child processes.
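Points 1 and 2 are easy to demonstrate; here is a minimal sketch (Linux/POSIX assumed, compile with -pthread): a write by a thread is visible to the rest of the process, while a write by a forked child only changes the child's private copy.

    /* Sketch: a thread shares the address space, a forked child does not. */
    #include <pthread.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int counter = 0;

    static void *thread_body(void *unused) {
        (void)unused;
        counter = 100;                   /* same address space: visible to main */
        return NULL;
    }

    int main(void) {
        pthread_t t;
        pthread_create(&t, NULL, thread_body, NULL);
        pthread_join(t, NULL);
        printf("after thread: counter = %d\n", counter);   /* 100 */

        if (fork() == 0) {
            counter = 999;               /* modifies the child's private copy */
            _exit(0);
        }
        wait(NULL);
        printf("after fork:   counter = %d\n", counter);   /* still 100 */
        return 0;
    }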
A thread has to be at least as easy to create as a process, since creating a process implies creating at least one thread to run the process's code.
Rgds,
Martin

What is the difference between a thread/process/task?

Process:
A process is an instance of a computer program that is being executed.
It contains the program code and its current activity.
Depending on the operating system (OS), a process may be made up of multiple threads of execution that execute instructions concurrently.
Process-based multitasking enables you to run the Java compiler at the same time that you are using a text editor.
When employing multiple processes with a single CPU, context switching between the various memory contexts is used.
Each process has a complete set of its own variables.
Thread:
A thread is a basic unit of CPU utilization, consisting of a program counter, a stack, and a set of registers.
A thread of execution results from a fork of a computer program into two or more concurrently running tasks.
The implementation of threads and processes differs from one operating system to another, but in most cases, a thread is contained inside a process. Multiple threads can exist within the same process and share resources such as memory, while different processes do not share these resources.
An example of threads in the same process is automatic spell checking and automatic saving of a file while writing.
Threads are basically processes that run in the same memory context.
Threads may share the same data during execution.
(Diagram: a single-threaded process vs. a multi-threaded process.)
Task:
A task is a set of program instructions that are loaded in memory.
Short answer:
A thread is a scheduling concept; it's what the CPU actually 'runs' (you don't run a process). A process needs at least one thread that the CPU/OS executes.
A process is a data-organizational concept. Resources (e.g. memory for holding state, the allowed address space, etc.) are allocated to a process.
To explain in simpler terms:
Process: a process is a set of instructions (code) which operates on related data, and a process has its own various states: sleeping, running, stopped, etc. When a program gets loaded into memory it becomes a process. Each process has at least one thread when the CPU is allocated; with only one, it is called a single-threaded program.
Thread: a thread is a portion of a process; more than one thread can exist as part of a process. Each thread has its own stack and register context, but threads inside one process share its memory and can access each other's data, so the program has to handle synchronization of threads to achieve the desired behaviour (see the sketch below).
Task: 'task' is not a widely used concept. When program instructions are loaded into memory, people call it either a process or a task; task and process are synonyms nowadays.
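As mentioned above, threads that share data must synchronize. A minimal sketch with POSIX threads (two threads incrementing a shared counter through a mutex; compile with -pthread):

    /* Sketch: threads share the process's data, so concurrent updates
     * must be synchronized (here with a mutex) to stay correct. */
    #include <pthread.h>
    #include <stdio.h>

    static long total = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *add_million(void *unused) {
        (void)unused;
        for (int i = 0; i < 1000000; i++) {
            pthread_mutex_lock(&lock);   /* without this, updates can race */
            total++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, add_million, NULL);
        pthread_create(&b, NULL, add_million, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("total = %ld\n", total);  /* reliably 2000000 with the lock */
        return 0;
    }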
A process invokes or initiates a program. It is an instance of a program; there can be multiple instances running the same application. A thread is the smallest unit of execution that lies within the process. A process can have multiple threads running. The execution of a thread results in a task; hence, in a multithreading environment, multitasking takes place.
A program in execution is known as a process. A program can have any number of processes. Every process has its own address space.
Threads use the address space of their process. The difference between a thread and a process is that when the CPU switches from one process to another, the current information needs to be saved in the process descriptor and the information of the new process loaded; switching from one thread to another is simple.
A task is simply a set of instructions loaded into memory. Threads can split themselves into two or more simultaneously running tasks.
For more details, refer to this link: http://www.careerride.com/os-thread-process-and-task.aspx
Wikipedia sums it up quite nicely:
Threads compared with processes
Threads differ from traditional multitasking operating system processes in that:
- processes are typically independent, while threads exist as subsets of a process
- processes carry considerable state information, whereas multiple threads within a process share state as well as memory and other resources
- processes have separate address spaces, whereas threads share their address space
- processes interact only through system-provided inter-process communication mechanisms
Context switching between threads in the same process is typically faster than context switching between processes.
Systems like Windows NT and OS/2 are said to have "cheap" threads and "expensive" processes; in other operating systems, there is not so great a difference, except for the cost of an address-space switch, which implies a TLB flush.
Task and process are used synonymously.
From the wiki, a clear explanation:
1:1 (Kernel-level threading)
Threads created by the user are in 1-1 correspondence with schedulable entities in the kernel.[3] This is the simplest possible threading implementation. Win32 used this approach from the start. On Linux, the usual C library implements this approach (via the NPTL or older LinuxThreads). The same approach is used by Solaris, NetBSD and FreeBSD.
N:1 (User-level threading)
An N:1 model implies that all application-level threads map to a single kernel-level scheduled entity;[3] the kernel has no knowledge of the application threads. With this approach, context switching can be done very quickly and, in addition, it can be implemented even on simple kernels which do not support threading. One of the major drawbacks, however, is that it cannot benefit from the hardware acceleration on multi-threaded processors or multi-processor computers: there is never more than one thread being scheduled at the same time.[3] For example: if one of the threads needs to execute an I/O request, the whole process is blocked and the threading advantage cannot be utilized. GNU Portable Threads uses user-level threading, as does State Threads.
M:N (Hybrid threading)
M:N maps some M number of application threads onto some N number of kernel entities,[3] or "virtual processors." This is a compromise between kernel-level ("1:1") and user-level ("N:1") threading. In general, "M:N" threading systems are more complex to implement than either kernel or user threads, because changes to both kernel and user-space code are required. In the M:N implementation, the threading library is responsible for scheduling user threads on the available schedulable entities; this makes context switching of threads very fast, as it avoids system calls. However, this increases complexity and the likelihood of priority inversion, as well as suboptimal scheduling without extensive (and expensive) coordination between the userland scheduler and the kernel scheduler.
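As a hedged illustration of the N:1 idea, the (real but obsolescent) POSIX ucontext API can multiplex cooperative user-level "threads" onto a single kernel thread; the kernel never schedules more than one entity here:

    /* Sketch of N:1 user-level threading: a cooperative task multiplexed
     * onto one kernel thread with the ucontext API; switches happen in
     * user space, so the kernel sees a single schedulable entity. */
    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, task_ctx;
    static char task_stack[64 * 1024];

    static void task(void) {
        for (int i = 0; i < 3; i++) {
            printf("user-level thread: step %d\n", i);
            swapcontext(&task_ctx, &main_ctx);   /* cooperative yield */
        }
    }

    int main(void) {
        getcontext(&task_ctx);
        task_ctx.uc_stack.ss_sp = task_stack;
        task_ctx.uc_stack.ss_size = sizeof task_stack;
        task_ctx.uc_link = &main_ctx;            /* where to go if task returns */
        makecontext(&task_ctx, task, 0);

        for (int i = 0; i < 3; i++) {
            printf("scheduler: resuming task\n");
            swapcontext(&main_ctx, &task_ctx);   /* switch without a new kernel thread */
        }
        return 0;
    }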