Today I attended a lecture about Linux processes. The teacher stated that:
after fork() returns, the child process is ready to be executed
because of the copy-on-write (COW) mechanism, the fork-exec sequence is guaranteed to avoid unnecessary copying of the parent's memory
By fork-exec sequence I mean something like this:
if (!fork())
{
    exec(...);
}
i = 0;
Which, as far as I know, translates into this (written in pseudo-asm):
call fork
jnz next
call exec(...)
next:
load 0
store i
Let's assume that the parent has been granted enough CPU time to execute all the lines above in one run.
In the parent, fork returns the child's PID (nonzero), so line 3 (the exec call) is skipped
when 0 is stored in "i", the child hasn't exec'ed yet, so COW kicks in,
copying the parent's memory unnecessarily.
So how is unnecessary copying prevented in this case?
It looks like it isn't, but I think the Linux developers were smart enough to do it ;)
Possible answer: the child always runs first (the parent is preempted after calling fork())
1. Is that true?
2. If yes, does that guarantee prevention of unnecessary copying in all cases?
Basically, two people can read the same book. But if one starts writing notes in the margin, then the other person needs a copy of that page before that happens. The person who has not written in the margin of the page does not want to see the other person's notes in the book.
The answer is essentially that necessary copying - of pages hosting any data which gets changed - happens, while unnecessary copying - of pages which have not been changed by either process since the fork - does not.
The latter would typically include not only unmodified data, but also the pages holding the program itself and the shared libraries it has loaded - typically many pages that can be shared, versus just a few which must be duplicated.
Once the child calls an exec function, the sharing (and any need for future copy-on-write) is terminated.
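For illustration, here is a minimal C sketch of the fork-exec pattern under discussion (the program exec'ed here, ls, is just an arbitrary example): the child replaces the shared address space almost immediately, so very few pages ever need to be copied.

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();

    if (pid == 0) {
        /* child: replace the shared, copy-on-write address space entirely */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");        /* only reached if exec fails */
        _exit(127);
    } else if (pid > 0) {
        waitpid(pid, NULL, 0);   /* parent: wait for the child */
    } else {
        perror("fork");
    }
    return 0;
}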
I was just introduced to the idea of a process.
The book defines a process as "an instance of the running program".
I am still a little confused as to what this means. Is a process a particular instruction that a program is running, or not?
What is the difference between a function call and a process? For instance let us say we have a function called main, and within it we are calling the printf function. Does printf count as a separate process? Why/why not?
What makes something a child vs. a parent process? I know that one way to create child processes is by calling fork(), and then, based on the integer value that fork returns, we can tell whether we are in the child or in the parent process. But other than that, is there something that makes a process a parent vs. a child?
Also, based on the answer to question 2, would printf count as a child process?
Talking strictly in terms of Linux, processes are "instances" of programs, as the book mentions. That means that they contain the information that your program needs to "execute".
A process doesn't mean the instruction that the program is running; it means the entire running program. The program you are referring to is, I assume, the code that you write, but that is just one aspect of the process. There are various other attributes like the stack memory space, heap memory space, process ID, etc., and all these details are stored in a data structure called the process control block (PCB).
Suppose you have a compiled version of your code Fibonacci.c called fibonacci. If you run it from two different terminals, it would spawn "two processes" of the same program.
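For instance, this trivial stand-in program just reports its own PID; compile it and run it from two terminals, and each run is a separate process with its own ID:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* each running instance of this binary is its own process */
    printf("my pid: %d\n", (int)getpid());
    pause();   /* keep the process alive so a second copy can be started */
    return 0;
}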
Function calls are something that happens inside a process. printf happens in the same process; it doesn't count as a separate process because it executes inside the same entity.
fork can create child processes. As a rule of thumb, I would say that any process created from within our current process is a child process, though this might not be a strict definition. What fork does is duplicate the current process; that means it creates a new entry by creating a new PCB. The child has the same code segment as the process that called fork, but it will have its own memory space, process ID, etc. I will not go deeper into how memory is handled when a fork occurs, but you can read more about it in the man pages.
printf also is not a child process. It resides in the current process itself.
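A minimal sketch of the fork() behaviour described above - parent and child both continue from the same point and are distinguished only by fork()'s return value:

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();

    if (pid < 0) {
        perror("fork");
        return 1;
    } else if (pid == 0) {
        /* in the child, fork() returned 0 */
        printf("child:  pid=%d  parent=%d\n", (int)getpid(), (int)getppid());
    } else {
        /* in the parent, fork() returned the child's PID */
        printf("parent: pid=%d  child=%d\n", (int)getpid(), (int)pid);
        wait(NULL);   /* reap the child */
    }
    return 0;
}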
I was reading Linux Kernel Development and trying to understand process address space semantics in the case of fork(). While I'm reading in the context of kernel v2.6, and in newer versions either the child or the parent may run first, I am confused by the following:
Back in do_fork(), if copy_process() returns successfully, the new child is woken up
and run. Deliberately, the kernel runs the child process first. In the common case of the
child simply calling exec() immediately, this eliminates any copy-on-write overhead
Based on my understanding of COW, if an exec() is used, COW will always happen, whether the child or the parent process runs first. Can someone explain how COW is eliminated when the child runs first? Does 'overhead' refer to the extra overhead that comes with COW compared to 'always copy' semantics?
fork() creates a copy of the parent's memory address space where all memory pages are initially shared between the parent and the child. All pages are marked as read-only, and on the first write to such a page, the page is copied so that parent and child each have their own. (This is what COW is about.)
exec() throws away the entire current address space and creates a new one for the new program.
If the child executes first and calls exec(), then none of the shared pages needs to be unshared.
If the parent executes first and modifies some data, then these pages are unshared. If the child then starts executing and calls exec(), the copied pages will be thrown away, i.e., the unsharing was not actually necessary.
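A sketch of that scenario (buffer size and the exec'ed program are arbitrary choices): the parent dirties a large buffer right after fork() while the child does nothing but exec. If the child is scheduled first, it execs before any page has to be duplicated; if the parent runs first, its writes force copy-on-write copies that the child's exec() then renders pointless.

#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define BUF_SIZE (64 * 1024 * 1024)

int main(void)
{
    char *buf = malloc(BUF_SIZE);
    if (!buf)
        return 1;
    memset(buf, 1, BUF_SIZE);        /* make the pages exist before the fork */

    pid_t pid = fork();
    if (pid == 0) {
        execlp("true", "true", (char *)NULL);   /* child: exec immediately */
        _exit(127);
    }

    memset(buf, 2, BUF_SIZE);        /* parent: each write unshares a page via COW */
    waitpid(pid, NULL, 0);
    free(buf);
    return 0;
}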
I have a routine which accepts an object and does some processing on it. The objects may or may not be mutable.
void CommandProcessor(ICommand command) {
    // do a lot of things
}
There is a probability that the same command instance loops back into the processor. Things turn nasty when that happens. I want to detect these return visitors and prevent them from being processed. The question is how I can do that transparently, i.e., without disturbing the objects themselves.
Here is what I tried:
Added a property Boolean Visited { get; set; } on ICommand.
I don't like this because the logic of one module shows up in another. The ShutdownCommand is concerned with shutting down, not with bookkeeping. Also, an EatIceCreamCommand may always return false in the hope of getting more. Some immutable objects have outright problems with a setter.
Privately maintain a lookup table of all processed instances. When an object comes in, first check it against the list.
I don't like this either: (1) performance - the lookup table grows large and we need to do a linear search to match instances; (2) we can't rely on the hash code - the object may forge a different hash code from time to time; (3) keeping the objects in a list prevents them from being garbage collected.
I need a way to put some invisible marker on the ICommand instance which only my code can see. Currently I don't discriminate between the invocations and just pray that the same instances don't come back. Does anyone have a better idea for implementing this functionality?
Assuming you can't stop this from happening just logically (try to cut out the loop) I would go for a HashSet of commands that you've already seen.
Even if the objects are violating the contracts of HashCode and Equals (which I would view as a problem to start with) you can create your own IEqualityComparer<ICommand> which uses System.Runtime.CompilerServices.RuntimeHelpers.GetHashCode to call Object.GetHashCode non-virtually. The Equals method would just test for reference identity. So your pool would contain distinct instances without caring whether or how the commands override Equals and GetHashCode.
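A sketch of what that could look like - the ICommand declaration here is only a stand-in for the real interface, and the class names are illustrative:

using System;
using System.Collections.Generic;
using System.Runtime.CompilerServices;

public interface ICommand { }   // placeholder for the real interface

// Compares commands purely by reference identity, ignoring any overridden
// Equals/GetHashCode on the command types themselves.
public sealed class ReferenceComparer : IEqualityComparer<ICommand>
{
    public bool Equals(ICommand x, ICommand y)
    {
        return ReferenceEquals(x, y);
    }

    public int GetHashCode(ICommand obj)
    {
        return RuntimeHelpers.GetHashCode(obj);
    }
}

public class Processor
{
    private readonly HashSet<ICommand> seen =
        new HashSet<ICommand>(new ReferenceComparer());

    public void CommandProcessor(ICommand command)
    {
        if (!seen.Add(command))
            return;            // this exact instance has been here before
        // do a lot of things
    }
}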
That just leaves the problem of accumulating garbage. Assuming you don't have the option of purging the pool periodically, you could use WeakReference<T> (or the non-generic WeakReference class for .NET 4) to avoid retaining objects. You would then find all "dead" weak references every so often to prevent even accumulating those. (Your comparer would actually be an IEqualityComparer<WeakReference<T>> in this case, comparing the targets of the weak references for identity.)
It's not particularly elegant, but I'd argue that's inherent in the design - you need processing a command to change state somewhere, and an immutable object can't change state by definition, so you need the state outside the command. A hash set seems a fairly reasonable approach for that, and hopefully I've made it clear how you can avoid all three of the problems you mentioned.
EDIT: One thing I hadn't considered is that using WeakReference<T> makes it hard to remove entries - when the original value is garbage collected, you're not going to be able to find its hash code any more. You may well need to just create a new HashSet with the still-alive entries. Or use your own LRU cache, as mentioned in comments.
I have two instances of NSManagedObjectContext: one is used on the main thread and the other is used on a background thread (via an NSOperation). For thread safety, these two contexts only share an NSPersistentStoreCoordinator.
The problem I'm having is that pending changes in the first context (on the main thread) are not available to the second context until a -save is performed. This is understandable, since the shared persistent store won't have copies of the NSManagedObjects tracked by -insertedObjects, -updatedObjects, and -deletedObjects until they are persisted.
Unfortunately, this presents a problem with the user experience: any unsaved changes won't appear in the (time consuming) reports that are generated in the background thread.
The only solution I can think of is nasty: take the inserted, updated and deleted objects from the first context and graft them onto the object graph of the second context. There are some pretty complex relationships in the dataset, so I'm hesitant to go in this direction. I'm hoping someone here has a better solution.
If this is under 10.7 there are some solutions: one is that you can have nested managed object contexts, so you can "save" in the one being modified and it won't save all the way to disk, but it will make the changes available to other children of the master context.
Before 10.7 you will probably have to copy the changes over yourself. This isn’t super-hard since you can just have a single object listen for NSManagedObjectContextObjectsDidChangeNotification and then just re-apply the changes exactly from the main context. (Should be about 20 lines of code.) You never have to save this second context I assume?
Not sure if you have any OS constraints, but in iOS 5 / Mac OS 10.7 you can use nested managed object contexts to accomplish this. I believe a child context is able to pull in unsaved changes in the parent by simply doing a new fetch.
Edit: Looks like Wil beat me to it but yeah, prior to iOS 5 / Mac OS 10.7 you'll have to listen for the NSManagedObjectContextDidSaveNotification and take a look at the userInfo dictionary for the added/updated/deleted objects.
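A rough Objective-C sketch of that pre-iOS 5 / pre-10.7 approach - the class and property names are placeholders, and the simplest way to consume the notification is to hand it to -mergeChangesFromContextDidSaveNotification: (which must be done on the background context's own thread):

#import <CoreData/CoreData.h>

@interface ReportGenerator : NSObject
@property (nonatomic, retain) NSManagedObjectContext *backgroundContext;
- (void)observeSavesOfContext:(NSManagedObjectContext *)mainContext;
@end

@implementation ReportGenerator

- (void)observeSavesOfContext:(NSManagedObjectContext *)mainContext
{
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(mainContextDidSave:)
                                                 name:NSManagedObjectContextDidSaveNotification
                                               object:mainContext];
}

- (void)mainContextDidSave:(NSNotification *)note
{
    // Pulls the inserted/updated/deleted objects from the notification's
    // userInfo into the background context.
    [self.backgroundContext mergeChangesFromContextDidSaveNotification:note];
}

@end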
An alternate solution might involve using a single managed object context and providing your own thread safety over access to it, or use the context's lock and unlock methods.
I would try to make the main thread do a normal save so that the second context can just merge the changes into its own context. "Fighting" an API's intended use is never a good idea.
You could mark the newly saved records with an attribute as intermediate and delete them later if the user finally cancels the edit.
Solving those problems with attributes in your entities and querying in the background thread with a matching predicate would be easy...
And that would be a stable solution as well. I am coming from a database-driven world (Oracle), where we often use such patterns (status attributes in records) to make data visible/invisible to other DB sessions (which would be the equivalent of threads in a Cocoa app). It always works without problems: other threads/sessions only ever see committed changes - that's how most RDBMSs work.
I need to fill a large (maybe not so large - several thousand entries) dataset into a Gtk::TreeModelColumn. How do I do that without locking up the application? Is it safe to put the processing into a separate thread? Which parts of the application do I have to protect with a lock then? Is it only the Gtk::TreeModelColumn class, or the Gtk::TreeView widget it is placed in, or maybe even the surrounding frame or window?
There are two general approaches you could take. (Disclaimer: I've tried to provide example code, but I rarely use gtkmm - I'm much more familiar with GTK in C. The principles remain the same, however.)
One is to use an idle function - that runs whenever nothing's happening in your GUI. For best results, do a small amount of calculation in the idle function, like adding one item to your treeview. If you return true from the idle function, then it is called again whenever there is more processing time available. If you return false, then it is not called again. The good part about idle functions is that you don't have to lock anything. So you can define your idle function like this:
bool fill_column(Gtk::TreeModelColumn* column)
{
    // add an item to column
    return !column_is_full();
}
Then start the process like this:
Glib::signal_idle().connect(sigc::bind(&fill_column, column));
The other approach is to use threads. In the C API, this would involve gdk_threads_enter() and friends, but I gather that the proper way to do that in gtkmm, is to use Glib::Dispatcher. I haven't used it before, but here is an example of it. However, you can also still use the C API with gtkmm, as pointed out here.
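Since, as noted, the C API can still be used alongside gtkmm, here is roughly what the idle-function approach looks like in plain C - a sketch only, assuming a GtkListStore with a single integer column and a known number of rows:

#include <gtk/gtk.h>

typedef struct {
    GtkListStore *store;   /* the model behind the tree view */
    gint          next;    /* next row index to append */
    gint          total;   /* total number of rows to add */
} FillData;

static gboolean fill_one_row(gpointer user_data)
{
    FillData *d = user_data;
    GtkTreeIter iter;

    gtk_list_store_append(d->store, &iter);
    gtk_list_store_set(d->store, &iter, 0, d->next, -1);

    d->next++;
    return d->next < d->total;   /* TRUE means "call me again when idle" */
}

/* kick it off without blocking the main loop: */
/* g_idle_add(fill_one_row, fill_data);        */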