How to enforce a single instance of an application under mono?

So, I am able to enforce a single instance of my application on Windows as follows:
[STAThread]
class method Program.Main(args: array of string);
begin
  // Named system-wide mutex; the GUID makes collisions with other apps unlikely.
  var mutex := new Mutex(true, "{8F6F0AC4-B9A1-45fd-A8CF-72F04E6BDE8F}");
  if mutex.WaitOne(TimeSpan.Zero, true) then
  begin
    Application.EnableVisualStyles();
    Application.SetCompatibleTextRenderingDefault(false);
    Application.ThreadException += OnThreadException;
    var lMainForm := new MainForm;
    lMainForm.ShowInTaskbar := true;
    lMainForm.Visible := false;
    Application.Run(lMainForm);
  end
  else
    MessageBox.Show("Another copy running!!!");
end;
However, running the same application on Linux under Mono, this code does NOT work at all: I am able to run multiple copies. I don't know if it has to do with the fact that I am starting the application from the terminal with mono MyPro.exe. If that is the problem, do I need to pass some values before executing the command line?
Thanks in advance,

You need to enable shared memory in Mono, as Adrian Faciu has mentioned, to make your approach work. However, this is not the best approach (there's a reason it's disabled by default in the first place, even if I can't remember now exactly why).
I've used two solutions in the past:
A file-based lock. Create a known file and write the pid into it. At startup, check whether the file exists; if it does, read the pid and check whether any running process has that pid (so the scheme can recover from crashes). Delete the file upon exit (in the instance that created it in the first place). The drawback is a race condition at startup if several instances are launched at pretty much the same time. You can improve this with file locking, but you may have to use P/Invokes to do proper file locking on Linux (I'm not entirely sure the managed API would do what you'd expect).
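A rough Python sketch of the file-based approach (the path /tmp/myapp.pid is just a placeholder, and the startup race described above still applies without real file locking):

import os, sys, atexit

LOCK_PATH = "/tmp/myapp.pid"   # hypothetical well-known location

def already_running():
    try:
        with open(LOCK_PATH) as f:
            pid = int(f.read().strip())
        os.kill(pid, 0)                 # signal 0 just probes whether the pid is alive
        return True
    except (FileNotFoundError, ValueError, ProcessLookupError):
        return False                    # no file, garbage content, or stale pid
    except PermissionError:
        return True                     # pid is alive but owned by another user

if already_running():
    sys.exit("Another copy running!!!")

with open(LOCK_PATH, "w") as f:
    f.write(str(os.getpid()))           # publish our pid
atexit.register(os.remove, LOCK_PATH)   # best effort: skipped on a crash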
A socket-based lock. Open a known port. The advantage over the above is that you don't need to do any cleanup and there are no race conditions. The drawback is that you need a fixed/known port, and some other program might happen to use that exact port at the same time.
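And a minimal sketch of the socket-based approach in Python (port 47806 is an arbitrary placeholder):

import socket, sys

# Only one process on the machine can bind a given loopback port.
lock_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    lock_socket.bind(("127.0.0.1", 47806))   # fixed, hopefully unused port
except OSError:
    sys.exit("Another copy running!!!")
# Keep lock_socket referenced for the whole process lifetime; the kernel
# releases the port automatically on exit, even after a crash.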

You probably need to enable shared handles using the MONO_ENABLE_SHM environment variable:
MONO_ENABLE_SHM=1 mono MyPro.exe

Adrian's solution works for me(tm).
Wild guess: did you need MONO_ENABLE_SHM in both invocations?

Related

Is it possible to execute a branch as a different user in *nix?

Is it possible to execute a method as a different user in Linux (or SELinux specifically)? The programs I have run in individual sandboxes, each with a different user and process id. I have a situation where I have to execute a branch of code as a different user and with a different process id, to prevent it from accessing the memory and disk space of the code that spawns it.
If that is not possible, can you throw some light on how much of the kernel code would have to change to achieve it? (I understand it's subjective. Alternatively, if you can suggest what to do and how to go about it, that would be much appreciated.)
Protecting some resources from other code executing on the same machine is precisely what led to the invention of processes and UIDs.
If you are searching for a mechanism that looks like a simple function call, I would say it's impossible because it requires the memory to be shared between the caller and the callee. However, using fork/exec (or wrappers like system()) will give you some isolation as long as you deal with parameters/results using system objects like program parameters or pipes.
However, the fact that *nix users are meant to protect processes from one another requires that an explicit relationship be built between two users before one can act on behalf of the other.
Actually, you may want to:
define a sudoers policy which gives your first user the right to run a command (or one particular command) as the second one.
use popen() (or system()) in your first program to call the less privileged code, as in the sketch below.
if any, pass the parameters on the command line and parse the result from stdout.
As an extra, you may use the same binary for both executions; this way, all the code can be in the same location.
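The same idea sketched in Python with subprocess (the user name sandbox and the worker path are placeholders, and a matching sudoers rule is assumed to exist):

import subprocess

# Run the less privileged branch as user "sandbox"; parameters go in
# via argv and results come back via stdout, as described above.
result = subprocess.run(
    ["sudo", "-u", "sandbox", "/usr/local/bin/worker", "--input", "job42"],
    capture_output=True, text=True, check=True)
print(result.stdout)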

Is there a way in vb.net to make process start closing my program?

My program checks if there is a new version of itself. If yes, it should exit and start an updater that replaces it and then restarts it.
My problem is that I haven't found any info on how to make the updater process start right after closing the actual program.
Any suggestions?
Thanks in advance
I intended to add a comment, but I'm too low on points here. The updater itself should probably check whether your application is still running, in a timeout loop that it enters on startup. That way you can start the updater and then close your application; once the updater determines your application is no longer running, it compares versions and performs the intended update operation (see the sketch below).
A possible solution would also be to create a task via the task scheduler or a cron job, starting an out-of-process application like CMD.exe, which brings me to my original comment-question: what operating system(s) and platform(s) is your program intended for?
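Assuming Windows, here is a rough sketch of the hand-off in Python (Updater.exe, MyApp.exe, MyApp.new and MyApp.old are placeholder names). On Windows a running executable is locked, so "can we rename it?" doubles as "has the application exited yet?":

import os, subprocess, sys, time

def hand_off_to_updater():
    # Called by the application just before it exits.
    subprocess.Popen(["Updater.exe"])       # fire and forget
    sys.exit(0)

def updater_main():
    # Poll until the application's executable is no longer locked.
    while True:
        try:
            os.replace("MyApp.exe", "MyApp.old")
            break
        except PermissionError:
            time.sleep(0.5)                 # still running; retry
    os.replace("MyApp.new", "MyApp.exe")    # drop in the new version
    subprocess.Popen(["MyApp.exe"])         # restart the program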

How to check if another instance of the app/binary is already running

I'm writing a command line application for the Mac using Objective-C.
At the start of the application, I want to check whether another instance of the same application is already running. If it is, I should either wait for it to finish, exit the current instance, quit the other instance, etc.
Is there any way of doing this?
The standard Unix solution for this is to create a "run file". When you start up, you try to create that file and write your pid to it if it doesn't exist; if it does exist, read the pid out of it, and if there's a running program with that pid and your process name, wait/exit/whatever.
The question is, where do you put that file, and what do you call it?
Well, first, you have to decide what exactly "already running" means. Obviously not "anywhere in the world", but it could be anything from "anywhere on the current machine" to "in the current desktop session". (For example, if User A starts your program, pauses it, then User B comes along and takes over the computer via Fast User Switching, should she be able to run the program, or not?)
For pretty much any reasonable answer to that question, there's an obvious pathname pattern. For example, on a Mac, /tmp is shared system-wide, while $TMPDIR is specific to a given session, so, e.g., /tmp/${ARGV[0]}.pid is a good way to say "only one copy on the machine, period", while ${TMPDIR}/${ARGV[0]}.pid is a good way to say "only one copy per session".
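The path choice, sketched in Python (falling back to /tmp when TMPDIR is unset):

import os, sys

name = os.path.basename(sys.argv[0])    # the ARGV[0] from above
machine_wide = os.path.join("/tmp", name + ".pid")                           # one copy per machine
per_session = os.path.join(os.environ.get("TMPDIR", "/tmp"), name + ".pid")  # one copy per session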
A simple but common way to do this is to check the process list for the name of your executable:
ps -A | grep <your executable name>
Thank you @abarnert.
This is how I have presently implemented it. At the start of main(), I check whether a file named .lock exists in the binary's own directory (I am considering moving it to /tmp). If it does, the application exits.
If not, it creates the file.
At the end of the application, the .lock file is removed.
I haven't yet written the pid to that file, but I will when exiting the previous instance is required (as of yet I don't need it, but I may in the future).
I think the PID can be retrieved using:
int myPID = [[NSProcessInfo processInfo] processIdentifier];
The program will be invoked by a custom scheduler which is running as a root daemon. So it would be run as root.
Seeing the answers, I assume that there is no direct method of solving the problem.

Is it possible to create a file that is deleted when its (owning) process goes away?

I want to create a file on Mac OS X (10.6) that will be deleted automatically when my process goes away. Is this possible? It would be very handy for a file locking scheme I am implementing. Preferably as a Cocoa or Carbon call.
I know that on Windows, this is possible. It's a very neat feature, but I don't know if it is something that needs to be supported by the file system.
On Win32 you can call CreateFile with FILE_FLAG_DELETE_ON_CLOSE.
In .NET you can create a FileStream with FileOptions.DeleteOnClose as an argument.
If you are writing your own program, you could use the tmpfile() call.
It creates a temporary file that is removed automatically on program termination.
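A rough Python analogue using the standard tempfile module:

import tempfile

# tempfile.TemporaryFile behaves like C's tmpfile(): on POSIX the file is
# unlinked as soon as it is created, so it disappears when the process
# goes away, even after a crash.
f = tempfile.TemporaryFile()
f.write(b"scratch data\n")
f.seek(0)
print(f.read())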
You could have your app delegate create and delete the file via NSApplicationDelegate; however, the file would remain there if the user force quits or shuts down. If force quitting is not a concern, then this should work. If it is, you can create a simple launch agent that checks whether your process exists and, if not, deletes the file.
You can register an atexit() handler to delete the file, but this will not necessarily be completely reliable, particularly if the program crashes.
If you want proper file locking, consider using flock(), although it is cooperative (advisory), as in the sketch below.
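A short Python sketch of the flock() approach (the lock path is a placeholder). Because the lock is tied to the open file descriptor, the kernel releases it when the process exits, crash or not, so no cleanup is needed:

import fcntl, sys

lock_file = open("/tmp/myapp.lock", "w")
try:
    fcntl.flock(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)   # non-blocking try
except BlockingIOError:
    sys.exit("Another instance holds the lock")
# ... do the real work; the lock dies with the process ...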

Platform independent file locking?

I'm running a very computationally intensive scientific job that spits out results every now and then. The job is basically to just simulate the same thing a whole bunch of times, so it's divided among several computers, which use different OSes. I'd like to direct the output from all these instances to the same file, since all the computers can see the same filesystem via NFS/Samba. Here are the constraints:
Must allow safe concurrent appends. Must block if some other instance on another computer is currently appending to the file.
Performance does not count. I/O for each instance is only a few bytes per minute.
Simplicity does count. The whole point of this (besides pure curiosity) is so I can stop having every instance write to a different file and manually merging these files together.
Must not depend on the details of the filesystem. Must work with an unknown filesystem on an NFS or Samba mount.
The language I'm using is D, in case that matters. I've looked, there's nothing in the standard lib that seems to do this. Both D-specific and general, language-agnostic answers are fully acceptable and appreciated.
Over NFS you face some problems with client-side caching and stale data. I have written an OS-independent lock module to work over NFS before. The simple idea of creating a [datafile].lock file does not work well over NFS. The basic idea to work around it is to create a lock file [datafile].lock which, if present, means the file is NOT locked; a process that wants to acquire the lock renames the file to a different name like [datafile].lock.[hostname].[pid]. The rename is an atomic enough operation that it works well over NFS to guarantee exclusivity of the lock. The rest is basically a bunch of fail-safes, loops, error checking, and lock recovery in case the process dies before releasing the lock and renaming the lock file back to [datafile].lock.
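A minimal Python sketch of that rename scheme (results.dat.lock is a placeholder and must be created once, up front; the fail-safes and stale-lock recovery are omitted):

import os, socket, time

FREE = "results.dat.lock"                       # present == NOT locked
MINE = "results.dat.lock.%s.%d" % (socket.gethostname(), os.getpid())

def acquire():
    while True:
        try:
            os.rename(FREE, MINE)               # atomic enough, even over NFS
            return
        except FileNotFoundError:
            time.sleep(1)                       # someone else holds the lock

def release():
    os.rename(MINE, FREE)                       # hand the lock back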
The classic solution is to use a lock file, or more accurately a lock directory. On all common OSes creating a directory is an atomic operation, so the routine (sketched after this list) is:
try to create a lock directory with a fixed name in a fixed location
if the create failed, wait a second or so and try again - repeat until success
write your data to the real data file
delete the lock directory
This has been used by applications such as CVS for many years across many platforms. The only problem occurs in the rare cases when your app crashes while writing and before removing the lock.
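A minimal sketch of the lock-directory routine in Python (the directory and file names are placeholders):

import os, time

LOCK_DIR = "results.lock.d"

while True:
    try:
        os.mkdir(LOCK_DIR)                      # atomic on all common OSes
        break
    except FileExistsError:
        time.sleep(1)                           # held by someone else; retry

with open("results.dat", "a") as f:             # write the real data
    f.write("one result line\n")

os.rmdir(LOCK_DIR)                              # release the lock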
Why not just build a simple server which sits between the file and the other computers?
Then if you ever wanted to change the data format, you would only have to modify the server, and not all of the clients.
In my opinion, building a server would be much easier than trying to use a network file system.
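For illustration, a tiny single-threaded Python server along these lines (the port and file name are placeholders); because it handles one connection at a time, appends are serialized without any file locking:

import socketserver

class AppendHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Read everything the client sends and append it to the data file.
        with open("results.dat", "ab") as f:
            f.write(self.rfile.read())

if __name__ == "__main__":
    socketserver.TCPServer(("0.0.0.0", 9000), AppendHandler).serve_forever()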
Lock File with a twist
Like other answers have mentioned, the easiest method is to create a lock file in the same directory as the data file.
Since you want to be able to access the same file from multiple PCs, the best solution I can think of is to include the identifier of the machine currently writing to the data file.
So the sequence for writing to the data file would be:
Check if there is a lock file present
If there is a lock file, see if I'm the one owning it by checking that its content has my identifier.
If that's the case, just write to the data file then delete the lock file.
If that's not the case, just wait a second or a small random length of time and try the whole cycle again.
If there is no lock file, create one with my identifier and try the whole cycle again to avoid race condition (re-check that the lock file is really mine).
Along with the identifier, I would record a timestamp in the lock file and check whether it's older than a given timeout value.
If the timestamp is too old, assume that the lock file is stale and just delete it, as it would mean one of the PCs writing to the data file may have crashed or lost its connection. A sketch of the whole scheme follows below.
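A rough Python sketch of this identifier-plus-timestamp scheme (the file name and timeout are placeholders; the re-read only narrows the race window, it does not close it):

import os, random, socket, time

LOCK = "results.dat.lock"
ME = "%s:%d" % (socket.gethostname(), os.getpid())
TIMEOUT = 60                                    # seconds before a lock is stale

def try_acquire():
    try:
        if time.time() - os.stat(LOCK).st_mtime > TIMEOUT:
            os.remove(LOCK)                     # stale: owner probably died
        return False                            # retry on the next cycle
    except FileNotFoundError:
        pass                                    # no lock file: try to claim it
    with open(LOCK, "w") as f:
        f.write(ME)
    time.sleep(0.1)                             # let any racing writer finish
    with open(LOCK) as f:
        return f.read() == ME                   # re-check: is it really mine?

while not try_acquire():
    time.sleep(random.uniform(0.5, 2.0))        # wait a small random time
# ... write to the data file, then os.remove(LOCK) ...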
Another solution
If you are in control of the format of the data file, another solution could be to reserve a structure at the beginning of the file to record whether it is locked or not.
If you just reserve a byte for this purpose, you could assume, for instance, that 00 means the data file isn't locked, and that any other value represents the identifier of the machine currently writing to it.
Issues with NFS
OK, I'm adding a few things because Jiri Klouda correctly pointed out that NFS uses client-side caching that will result in the actual lock file being in an undetermined state.
A few ways to solve this issue:
mount the NFS directory with the noac or sync options. This is easy but doesn't completely guarantee data consistency between client and server, so there may still be issues, although in your case it may be OK.
Open the lock file or data file using the O_DIRECT, O_SYNC, or O_DSYNC flags. This is supposed to disable caching altogether (see the sketch below).
This will lower performance but will ensure consistency.
You may be able to use flock() to lock the data file, but its implementation is spotty and you will need to check whether your particular OS actually uses the NFS locking service. It may do nothing at all otherwise.
If the data file is locked, then another client opening it for writing will fail.
Oh yeah, and it doesn't seem to work on SMB shares, so it's probably best to just forget about it.
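A small sketch of the synchronous-open idea in Python (the mount point and file name are placeholders; O_DIRECT is left out because it requires aligned buffers):

import os

# O_SYNC forces every write through the client cache to the server.
fd = os.open("/mnt/nfs/results.dat",
             os.O_WRONLY | os.O_APPEND | os.O_CREAT | os.O_SYNC, 0o644)
os.write(fd, b"one result line\n")
os.close(fd)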
Don't use NFS and just use Samba instead: there is a good article on the subject and why NFS is probably not the best answer to your usage scenario.
You will also find in this article various methods for locking files.
Jiri's solution is also a good one.
Basically, if you want to keep things simple, don't use NFS for frequently-updated files that are shared amongst multiple machines.
Something different
Use a small database server to store your data in and bypass the NFS/SMB locking issues altogether, or keep your current multiple-data-file system and just write a small utility to concatenate the results.
It may still be the safest and simplest solution to your problem.
I don't know D, but I think using a mutex file to do the job might work. Here's some pseudo-code you might find useful:
do {
  // Try to create a new file to use as the mutex.
  // Returns null if it already exists (another process holds the lock).
  mutex = try_create_file_exclusive('lock_file');
} while (mutex == null);

// Open your log file for appending and write the results.
log_file = open_file_for_appending('the_log_file');
write(log_file, data);
close_file(log_file);

// Free the mutex and allow other processes to create the same file.
close_file(mutex);
delete_file('lock_file');
So all processes will try to create the mutex file, but only the one that wins will be able to continue. Once you write your output, close and delete the mutex file so other processes can do the same.