Is a new process created when a syscall is made in Minix?

For example, when we call write(...) in a program in Minix, is a new process created (as with fork()), or is the call handled within the current process?
Is it efficient to make a lot of syscalls?

Process creation is strictly fork's / exec's job. What kind of process could a system call like write possibly spawn?
Now, Minix is a microkernel, meaning that things like file systems run in userland processes, and a syscall such as write is delivered to the file-system server as a message. Writing to a file could therefore, in principle, cause a new process to be spawned somewhere else, but that depends on your file-system driver. I haven't paid attention to the MinixFS driver so far, so I can't tell you whether that happens -- but it's not very likely, since process creation is still relatively expensive.
It's almost never efficient to make a lot of syscalls (context switches being involved). However, "performant", "efficient" and "a lot" are all very relative things, so I can't tell you something you probably don't know already.
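To make the message-passing point concrete, here is a rough sketch of how the Minix libc turns write() into a message send, loosely modeled on the Minix 2 sources (field and constant names such as m1_i1 and FS vary between Minix versions, so treat this as illustrative rather than exact):

    #include <lib.h>        /* message, _syscall() */
    #include <unistd.h>

    ssize_t write(int fd, const void *buffer, size_t nbytes)
    {
      message m;

      m.m1_i1 = fd;                  /* file descriptor */
      m.m1_i2 = nbytes;              /* byte count */
      m.m1_p1 = (char *) buffer;     /* user buffer */

      /* _syscall() performs a sendrec(): it blocks until the FS server
       * replies.  No process is created anywhere; the request travels
       * as a message between existing processes. */
      return _syscall(FS, WRITE, &m);
    }

So the cost of a syscall here is the message passing and the associated context switches, not process creation.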

Related

LabVIEW program changes behavior after looking at (not changing) the block diagram

My LabVIEW program works like a charm until I look at the block diagram. No changes are made and I do not save; just Ctrl+E and then Ctrl+R.
After that it does not work properly, and only a restart of LabVIEW fixes the problem.
My program controls two scanner arrays for laser cutting simultaneously. To force parallel operation, I use the error handler and loops that wait for a signal from the scanner. But suddenly some loops run more often than they should.
What happens in LabVIEW when I open the block diagram that could mess with my running code?
Edit:
It's hard to tell what is happening without violating my non-disclosure agreement.
I'm controlling two independent mirror arrays for laser cutting. While one is running one cutting job, the other is supposed to run the other jobs, just very fast. When the first is finished, they meet at the same position and run the same geometry at the same slow speed. The jobs are provided as *.XML and stored as .NET objects. The device only runs the most recent job and overwrites it when getting a new one.
I can check whether a job is still running; while this is true, I run a while loop for the other jobs. Now this loop runs a few times too often and even ignores WAIT blocks to a degree. It also skips the part where it reads the XML job file, changes the speed part back to fast, and saves it. It only runs fast one time.
@Joe: No, it does not. It only runs well once; afterwards it does not.
YouTube links
The way it is supposed to move
The wrong way
There is exactly one thing I can think of that changes solely by opening the block diagram.
When the block diagram opens, any commented-out or unreachable-code-compiler-eliminated sections of code will load their subVIs. If one of those commented-out sections of code were to somehow interfere with your running code, you might have an issue.
There are only two ways I know of for that to interfere... both of them are fairly improbable.
a) You have some sort of "check for all VIs in memory" or "check for all types in memory" that you're using as a plug-in system. When the commented-out sections load, that would change the VIs in memory. Such systems are not uncommon when parsing XML, so maybe.
b) You are using Run VI method for some dynamically invoked VI to execute as a top-level VI, but by loading the diagram, it discovers that it is a subVI of your current program. A VI cannot simultaneously be top-level and a subVI, so the call to Run VI returns an error.
That's it. I can't think of anything else. Both ideas seem unlikely, but given your claim and a lack of a block diagram, I figured I'd post it as a hypothesis.
In the improbable case that someone has a similar problem: the problem was an XML file that was read at run time. Sometimes multiple instances tried to access it, and this produced the error.
Quick point to check: are Debug and "retain data in wires" disabled? While that may not change the computations, it can certainly change the timing of very tight loops, and that was one of the unexpected program behaviors the OP was referring to.

See when an executable opens and closes

How can I get a callback within my application when an executable (such as pbs, cp, etc) launches and then exits? This would need to work only knowing the path to the executable.
You could move the original executable aside, and replace it with a wrapper that runs the original, reporting when it runs and exits.
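As a rough sketch, such a wrapper in C might look like this (REAL_BINARY and LOG_FILE are made-up placeholders; rename the real binary, e.g. /bin/cp to /bin/cp.real, and install the wrapper in its place):

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define REAL_BINARY "/bin/cp.real"     /* assumed rename of the original */
    #define LOG_FILE    "/var/log/wrap.log"

    static void report(const char *event, int code)
    {
        FILE *log = fopen(LOG_FILE, "a");
        if (log != NULL) {
            fprintf(log, "%s pid=%d code=%d\n", event, (int) getpid(), code);
            fclose(log);
        }
    }

    int main(int argc, char *argv[])
    {
        (void) argc;
        report("start", 0);

        pid_t pid = fork();
        if (pid == 0) {
            execv(REAL_BINARY, argv);   /* child becomes the real program */
            _exit(127);                 /* only reached if exec failed */
        }

        int status = 0;
        waitpid(pid, &status, 0);
        report("exit", WEXITSTATUS(status));
        return WEXITSTATUS(status);
    }

Passing argv straight through keeps the original command line intact, so callers don't notice the indirection.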
You could look at the accton and lastcomm commands, which record the start and exit of every process on the system.
You could look into using dtrace, which can definitely do what you're asking but it's rather complicated to use. You'd probably have to do a fair amount of learning to do this. I don't know much about writing dtrace scripts, but I'd probably start with execsnoop as my model.

Design principles as to how Linux repository managers update themselves?

I know there are other applications as well, but consider yum/apt-get/aptitude/pacman as the core package managers for Linux distributions.
Today I saw this on my Fedora 13 box:
(7/7): yum-3.2.28-4.fc13_3.2.28-5.fc13.noarch.drpm | 42 kB 00:00
And I started to wonder: how does such a package update itself? What design is needed to ensure a program can update itself?
Perhaps this question is too general, but I felt SO was more appropriate than Programmers.SE for such a question, since it is more technical in nature. If there is a more appropriate place for this question, feel free to let me know and I can close it, or a moderator can move it.
Thanks.
I've no idea how those particular systems work, but...
Modern unix systems will generally tolerate overwriting a running executable without a hiccup, so in theory you could just do it.
You could do the work in a chroot jail and then move the result into place, or something similar, to reduce the time during which the system is vulnerable. Add a journalling filesystem and this is a little safer still.
It occurs to me that the package manager needs to hold the package access database in memory as well, to guard against a race condition there. Again, the chroot-jail-and-copy option is available as a lower-risk alternative.
And I started to wonder: how does such a package update itself? What design is needed to ensure a program can update itself?
It's like a lot of things: you don't need to "design" specifically to solve this problem ... but you do need to be aware of certain "gotchas".
For instance, Unix helps by reference-counting inodes, so "you" can delete a file you are still using and it's fine. However, this implies a few things you have to do. For instance, if you have plugins then you need to load them all before you start a transaction ... even if a plugin would only run at the end of the transaction (because you might have a different version by then).
There are also some things that you need to do to make sure that anything you are updating keeps working, like: put new files down before removing old files, and don't truncate old files, just unlink them. But those rules also help you :).
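As a minimal sketch of that rule, an updater might replace a file like this (illustrative only; error handling is kept short):

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>

    /* Write the new contents to a temporary name, fsync, then rename()
     * over the old path.  rename() is atomic on POSIX filesystems, and
     * because the old file is unlinked rather than truncated, any
     * process that already has it open keeps a consistent copy. */
    int replace_file(const char *path, const void *data, size_t len)
    {
        char tmp[4096];
        snprintf(tmp, sizeof(tmp), "%s.tmp", path);

        int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0755);
        if (fd < 0)
            return -1;

        if (write(fd, data, len) != (ssize_t) len || fsync(fd) != 0) {
            close(fd);
            unlink(tmp);
            return -1;
        }
        close(fd);

        /* Atomic swap: readers see either the old file or the new one,
         * never a half-written mix. */
        return rename(tmp, path);
    }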
Using external programs that you communicate with can be tricky (because you can't exec a new copy of the old version after it's been updated). But this isn't often done, and when it is, it's for things like downloading ... which can fairly easily be made to happen before any updates.
There are also things which aren't a concern in command-line clients like yum/apt. For instance, if you have a program which is going to run two or more "updates", then you can have problems if the first update was to the package manager itself. Downgrades make this even more fun :).
Also, daemon-like processes should basically never "load" the package manager, but as with the other gotchas ... you tend to want to follow this rule anyway, for other reasons.

Daemon with Clojure/JVM

I'd like to have a small (not doing too damn much) daemon running on a little server, watching a directory for new files being added to it (and any directories in the main one), and calling another Clojure program to deal with that new file.
Ideally, each file would be added to a queue (a list represented by a ref in Clojure?) and the main process would take care of those files in the queue on a FIFO basis.
My question is: is having a JVM up running this little program all the time too much of a resource hog? And do you have any suggestions as to how to go about doing this?
Thank you very much!
EDIT: Another question I should ask: should I run this as its own instance (using less memory) and have it launch a new JVM when a file is seen, or have it run on the same JVM as the Clojure code that will process the file?
As long as it is running fine now and it has no memory leaks it should be fine.
From the daemon terminology I gather it is running on a Unix clone, in which case it is best to start it from an init script or from the rc.local script. Unfortunately the details differ from OS to OS, so I can't be more specific.
Limit the memory using -Xmx64m or something similar to make sure it fails before taking down the rest of the services. Play a bit with the number to find the lowest reliable size.
Also, since Clojure's claim to fame is its ability to deal with concurrency, it makes a lot of sense to run only one JVM with all functionality running on it in multiple threads. The overhead of spawning new processes is already very big, and if it is a JVM which needs to JIT and warm up its memory management, doubly so. On a resource-constrained machine this could pose a problem, and on a resource-rich machine it is a waste.
I have always found that the JVM is not made to quickly run something script-like and exit again; it is really not made for that use case, in my opinion.

How would I go about taking a snapshot of a process to preserve its state for future investigation? Is this possible?

Whether this is possible I don't know, but it would mighty useful!
I have a process that fails periodically (running in Windows 2000). I then have just one chance to react to it before having to restart it and painfully wait for it to fail again. I didn't write the process so don't have the source to debug. The failure is seemingly random.
With a snapshot of the process I could repeatedly and quickly test reactions to the failure.
I had thought of running inside a VM but this isn't possible in this instance.
EDIT:
@Jon Cage asked:
When you say a snapshot, you mean capturing a process when it's about to fail (including memory, program state, etc.) ... and then replaying its final few seconds repeatedly to see what effect it has on some other component?
This is exactly what I mean!
I think minidump is what you are looking for.
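As a rough illustration, a minidump of another process can be written with the Win32 dbghelp API (link with dbghelp.lib); the pid and output path here are placeholders, and error handling is kept minimal:

    #include <windows.h>
    #include <dbghelp.h>

    int write_minidump(DWORD pid, const char *path)
    {
        HANDLE proc = OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ,
                                  FALSE, pid);
        if (proc == NULL)
            return 0;

        HANDLE file = CreateFileA(path, GENERIC_WRITE, 0, NULL,
                                  CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
        if (file == INVALID_HANDLE_VALUE) {
            CloseHandle(proc);
            return 0;
        }

        /* MiniDumpWithFullMemory captures the whole address space, so
         * the dump can be examined later in a standard debugger. */
        BOOL ok = MiniDumpWriteDump(proc, pid, file, MiniDumpWithFullMemory,
                                    NULL, NULL, NULL);
        CloseHandle(file);
        CloseHandle(proc);
        return ok ? 1 : 0;
    }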
You can also use Userdump:
The User Mode Process Dumper (userdump) dumps any running Win32 process's memory image (including system processes such as csrss.exe, winlogon.exe, services.exe, etc.) on the fly, without attaching a debugger or terminating target processes. The generated dump file can be analyzed or debugged by using the standard debugging tools.
This article shows you how to use it.
My best bet is to start the process in a debugger (OllyDbg being my preferred tool).
The process will pause on an exception, and you can try to figure out what happened shortly before that.
This requires some understanding of assembler and does not let you create a snapshot of the process for later analysis. You would need to write your own debugger for that -- it should be theoretically possible.