In dotnet core how can I ensure only one copy of my application is running? - singleton

In the past I have done something like this
private static bool AlreadyRunning()
{
    var processes = Process.GetProcesses();
    var currentProc = Process.GetCurrentProcess();
    logger.Info($"Current process: {currentProc.ProcessName}");
    foreach (var process in processes)
    {
        if (currentProc.ProcessName == process.ProcessName && currentProc.Id != process.Id)
        {
            logger.Info($"Another instance of this process is already running: {process.Id}");
            return true;
        }
    }
    return false;
}
That has worked well in the past. In the new dotnet core world, everything has a process name of dotnet, so I can only run one dotnet app at a time! Not quite what I want :D
Is there an idiomatic way of doing this in dotnet core? I have seen a mutex suggested, but I am not sure I understand the possible downsides or error states when running on systems other than a Windows machine.

.NET Core now supports global named mutexes. From the description of the PR that added that functionality:
On systems that support pthread process-shared robust recursive mutexes, they will be used.
On other systems, file locks are used. File locks, unfortunately, don't have a timeout in the blocking wait call, and I didn't find any other sync object with a timed wait with the necessary properties, so polling is done for timed waits.
There is also a useful note in the "Named mutex not supported on Unix" issue about the mutex name that should be used:
By default, names have session scope and sessions are more granular on Unix (each terminal gets its own session). Try adding a "Global" prefix to the name minus the quotes.

In the end I used a mutex and it seems okay.
I grabbed the code from here:
What is a good pattern for using a Global Mutex in C#?
The version of core I am using does not seem to have the security settings overloads, so I just deleted that part. I am sure it will be fine. (The Mutex constructor I use only takes three parameters.)
private static void Main(string[] args)
{
    LogManager.Configuration = new XmlLoggingConfiguration("nlog.config");
    logger = LogManager.GetLogger("console");
    logger.Info("Trying to start");

    const string mutexId = @"Global\{guid-guid-guid-guid-guid}";

    bool createdNew;
    using (var mutex = new Mutex(false, mutexId, out createdNew))
    {
        var hasHandle = false;
        try
        {
            try
            {
                hasHandle = mutex.WaitOne(5000, false);
                if (!hasHandle)
                {
                    logger.Error("Timeout waiting for exclusive access");
                    throw new TimeoutException("Timeout waiting for exclusive access");
                }
            }
            catch (AbandonedMutexException)
            {
                // Another process exited without releasing the mutex; we still acquired it.
                hasHandle = true;
            }

            // Perform your work here.
            PerformWorkHere();
        }
        finally
        {
            if (hasHandle)
            {
                mutex.ReleaseMutex();
            }
        }
    }
}

Related

How to determine job's queue at runtime

Our web app allows the end user to set the queue of recurring jobs in the UI. (We create a queue for each server, using the server name, and allow users to choose which server the job runs on.)
How the job is registered:
RecurringJob.AddOrUpdate<IMyTestJob>(input.Id, x => x.Run(), input.Cron, TimeZoneInfo.Local, input.QueueName);
It worked properly, but sometimes when we check the logs on Production we find that a job has run on the wrong queue (server). We don't have much access to Production, so we tried to reproduce it in Development, but it has not happened there.
To temporarily work around this issue, we want to get the queue name while the job is running, compare it with the current server name, and stop the job when they are different.
Is it possible and how to get it from PerformContext?
Noted: We use HangFire version: 1.7.9 and ASP.NET Core 3.1
You may have a look at https://github.com/HangfireIO/Hangfire/pull/502
A dedicated filter intercepts the queue changes and restores the original queue.
I guess you can just stop the execution in a very similar filter, or set a parameter to cleanly stop execution during the IElectStateFilter.OnStateElection phase by changing the CandidateState to FailedState (see the sketch after the filter code below).
Maybe your problem comes from an already existing filter which messes with the queues.
Here is the code from the link above :
public class PreserveOriginalQueueAttribute : JobFilterAttribute, IApplyStateFilter
{
    public void OnStateApplied(ApplyStateContext context, IWriteOnlyTransaction transaction)
    {
        var enqueuedState = context.NewState as EnqueuedState;

        // Activating only when enqueueing a background job
        if (enqueuedState != null)
        {
            // Checking if an original queue is already set
            var originalQueue = JobHelper.FromJson<string>(context.Connection.GetJobParameter(
                context.BackgroundJob.Id,
                "OriginalQueue"));

            if (originalQueue != null)
            {
                // Override any other queue value that is currently set (by other filters, for example)
                enqueuedState.Queue = originalQueue;
            }
            else
            {
                // Queueing for the first time, we should set the original queue
                context.Connection.SetJobParameter(
                    context.BackgroundJob.Id,
                    "OriginalQueue",
                    JobHelper.ToJson(enqueuedState.Queue));
            }
        }
    }

    public void OnStateUnapplied(ApplyStateContext context, IWriteOnlyTransaction transaction)
    {
    }
}
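For the other suggestion (failing the job from an IElectStateFilter when it is about to run on the wrong server), a minimal sketch could look like the following. This is not from the linked PR; the "OriginalQueue" parameter reuse and the queue-name-from-machine-name convention are assumptions based on the setup described in the question.
using System;
using System.Linq;
using Hangfire.Common;
using Hangfire.States;

// Hypothetical sketch: elect FailedState instead of ProcessingState when the
// queue recorded for the job does not match this server's queue.
public class FailOnWrongQueueAttribute : JobFilterAttribute, IElectStateFilter
{
    public void OnStateElection(ElectStateContext context)
    {
        // Only intervene when the job is about to start processing.
        if (!(context.CandidateState is ProcessingState))
        {
            return;
        }

        var originalQueue = JobHelper.FromJson<string>(context.Connection.GetJobParameter(
            context.BackgroundJob.Id,
            "OriginalQueue"));

        // Same server-name-to-queue convention as in the question.
        var currentServerQueue = string.Concat(
            Environment.MachineName.ToLowerInvariant().Where(char.IsLetterOrDigit));

        if (originalQueue != null && originalQueue != "default" && originalQueue != currentServerQueue)
        {
            context.CandidateState = new FailedState(
                new InvalidOperationException(
                    $"Job was queued for '{originalQueue}' but picked up by '{currentServerQueue}'."));
        }
    }
}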
I have found a simple solution: since we know the recurring job Id, we can get its information from JobStorage and compare its queue with the current queue (the current server name):
public bool IsCorrectQueue()
{
    List<RecurringJobDto> recurringJobs = Hangfire.JobStorage.Current.GetConnection().GetRecurringJobs();
    var myJob = recurringJobs.FirstOrDefault(x => x.Id.Equals("My job Id"));
    var definedQueue = myJob.Queue;
    var currentServerQueue = string.Concat(Environment.MachineName.ToLowerInvariant().Where(char.IsLetterOrDigit));
    return definedQueue == "default" || definedQueue == currentServerQueue;
}
Then check it inside the job:
public async Task Run()
{
    // Check correct queue
    if (!IsCorrectQueue())
    {
        Logger.Error("Wrong queue detected");
        return;
    }

    // Job logic
}

Can I have per AppDomain Environment Variables in C#/.net?

In a multi-AppDomain setup, is there a way to make Environment.SetEnvironmentVariable and Environment.GetEnvironmentVariable work within the AppDomain only, so each AppDomain can have different values for the same variable?
No. :(
This example:
namespace ConsoleApplication
{
    class Program
    {
        static void Main(string[] args)
        {
            var newDomain = AppDomain.CreateDomain("Alternative");
            Proxy proxyObj = (Proxy)newDomain.CreateInstanceAndUnwrap(
                typeof(Proxy).Assembly.GetName().FullName,
                typeof(Proxy).FullName);

            Environment.SetEnvironmentVariable("HELLO_MSG", "Hello World", EnvironmentVariableTarget.Process);

            proxyObj.ShowEnvironmentVariable();
            Console.ReadKey();
        }
    }

    class Proxy : MarshalByRefObject
    {
        public void ShowEnvironmentVariable()
        {
            var msg = Environment.GetEnvironmentVariable("HELLO_MSG");
            Console.WriteLine(String.Format("{0} (from '{1}' AppDomain)", msg, AppDomain.CurrentDomain.FriendlyName));
        }
    }
}
Will output:
Hello World (from 'Alternative' AppDomain)
The process is the most specific level of encapsulation for environment variables, and AppDomains will still "live inside" the same process.
Note that this applies to all other process-level information as well (such as Directory.GetCurrentDirectory(), command-line args, etc.).
One possible solution would be to create worker processes (".exe" applications spawned from the main process), but that would certainly add some complexity to your application.
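If you go the worker-process route, each child process gets its own process-level environment, so the same variable can hold a different value per worker. A minimal sketch (worker.exe is a hypothetical helper executable):
using System.Diagnostics;

class Launcher
{
    static void Main()
    {
        // UseShellExecute must be false to modify the child's environment variables.
        var startInfo = new ProcessStartInfo("worker.exe")
        {
            UseShellExecute = false
        };

        // This value is visible only to the spawned worker process.
        startInfo.EnvironmentVariables["HELLO_MSG"] = "Hello from worker 1";

        Process.Start(startInfo);
    }
}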

Injecting Variables into a running Process

Is there a way to inject a variable into a running process without a process listening for RPC requests?
For example if a process was running and using an environment variable, could I change that environment variable at runtime and make the process use the new value?
Are there alternative solutions for dynamically changing variables in a running process? Assume that this process is something like a PHP process or a JavaScript (Node.js) process, so I can change the source code... etc.
I think this is similar to passing state or communicating to another process, but I need a really lightweight way of doing so, without going over the network or using libraries or preferably not setting up an RPC server.
Solution does not have to be cross-platform. Prefer Linux.
You can do it in Java. Imagine this is your thread class:
public class ThreadClass extends Thread {
    // volatile so a change made from another thread is visible to run()
    private volatile boolean state;

    ThreadClass(boolean b) {
        state = b;
    }

    public void StopThread() {
        state = false;
    }

    public void run() {
        while (state) {
            // Do whatever you want here
        }
    }
}
Now all you have to do is start this thread from your main class:
ThreadClass thread = new ThreadClass(true);
thread.start();
And if you want to change the value of state, call the StopThread method on the thread like so:
thread.StopThread();
This will change the state of the Boolean while the thread is running.
It appears that local IPC mechanisms like shared memory are the way to go: Fastest technique to pass messages between processes on Linux?
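As a minimal C# sketch of the shared-memory idea: two processes map the same small file and one re-reads a value the other has written. The file path and the one-byte layout are illustrative assumptions, not something from the linked answer; the same mmap-based approach applies from PHP or Node.js.
using System;
using System.IO;
using System.IO.MemoryMappedFiles;

class SharedFlag
{
    // Both processes open the same file-backed mapping (named maps are not supported on Unix).
    const string FilePath = "/tmp/myapp-shared-flag";

    static void Main(string[] args)
    {
        using (var mmf = MemoryMappedFile.CreateFromFile(FilePath, FileMode.OpenOrCreate, null, 1))
        using (var accessor = mmf.CreateViewAccessor(0, 1))
        {
            if (args.Length > 0 && args[0] == "set")
            {
                accessor.Write(0, (byte)1);       // "inject" the new value from another process
            }
            else
            {
                byte flag = accessor.ReadByte(0); // the running process re-reads it
                Console.WriteLine($"flag = {flag}");
            }
        }
    }
}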

How can a RabbitMQ Client tell when it loses connection to the server?

If I'm connected to RabbitMQ and listening for events using an EventingBasicConsumer, how can I tell if I've been disconnected from the server?
I know there is a Shutdown event, but it doesn't fire if I unplug my network cable to simulate a failure.
I've also tried the ModelShutdown event, and CallbackException on the model but none seem to work.
EDIT:
The one I marked as the answer is correct, but it was only part of the solution for me. There is also heartbeat functionality built into RabbitMQ. The server specifies it in its configuration file; it defaults to 10 minutes, but of course you can change that.
The client can also request a different interval for the heartbeat by setting the RequestedHeartbeat value on the ConnectionFactory instance.
I'm guessing that you're using the C# client library (but even so, I think the others have a similar event).
You can do the following:
public class MyRabbitConsumer
{
    private IConnection connection;

    public void Connect()
    {
        connection = CreateAndOpenConnection();
        connection.ConnectionShutdown += connection_ConnectionShutdown;
    }

    public IConnection CreateAndOpenConnection() { ... }

    private void connection_ConnectionShutdown(IConnection connection, ShutdownEventArgs reason)
    {
    }
}
This is an example of it, but the marked answer is what led me to this.
var factory = new ConnectionFactory
{
    HostName = "MY_HOST_NAME",
    UserName = "USERNAME",
    Password = "PASSWORD",
    RequestedHeartbeat = 30
};

using (var connection = factory.CreateConnection())
{
    connection.ConnectionShutdown += (o, e) =>
    {
        // handle disconnect
    };

    using (var model = connection.CreateModel())
    {
        model.ExchangeDeclare(EXCHANGE_NAME, "topic");
        var queueName = model.QueueDeclare();
        model.QueueBind(queueName, EXCHANGE_NAME, "#");

        var consumer = new QueueingBasicConsumer(model);
        model.BasicConsume(queueName, true, consumer);

        while (!stop)
        {
            BasicDeliverEventArgs args;
            consumer.Queue.Dequeue(5000, out args);

            if (stop) return;
            if (args == null) continue;
            if (args.Body.Length == 0) continue;

            Task.Factory.StartNew(() =>
            {
                // Do work here on a different thread than this one
            }, TaskCreationOptions.PreferFairness);
        }
    }
}
A few things to note about this.
I'm using # for the topic. This grabs everything. Usually you want to limit by a topic.
I'm setting a variable called "stop" to determine when the process should end. You'll notice the loop runs forever until that variable is true.
The Dequeue waits 5 seconds then leaves without getting data if there is no new message. This is to ensure we listen for that stop variable and actually quit at some point. Change the value to your liking.
When a message comes in I spawn the handling code on a new thread. The current thread is reserved for just listening to the RabbitMQ messages, and if a handler takes too long to process, I don't want it slowing down the other messages. You may or may not need this depending on your implementation. Be careful, however, when writing the code that handles the messages: if it takes a minute to run and you're getting messages at sub-second rates, you will run out of memory, or at least run into severe performance issues.

.NET 4.0 Threading.Tasks

I've recently started working on a new application which will utilize task parallelism. I have just begun writing a tasking framework, but have recently seen a number of posts on SO regarding the new System.Threading.Tasks namespace which may be useful to me (and I would rather use an existing framework than roll my own).
However, looking over MSDN I haven't seen how / if I can implement the functionality I'm looking for:
Dependency on other tasks completing.
Able to wait on an unknown number of tasks performing the same action (maybe wrapped in the same task object which is invoked multiple times).
Set maximum concurrent instances of a task (since they use a shared resource, there is no point running more than one at once).
Hint at priority, so the scheduler places tasks with a lower maximum-concurrent-instances value at a higher priority (so as to keep said resource in use as much as possible).
Edit: ability to vary the priority of tasks which are performing the same action (a pretty poor example, but PredictWeather(Tomorrow) would have a higher priority than PredictWeather(NextWeek)).
Can someone point me towards an example / tell me how I can achieve this? Cheers.
C# use case (typed in SO, so please forgive any syntax errors / typos):
Note: Do() / DoAfter() shouldn't block the calling thread.
class Application
{
    Task LeafTask = new Task(LeafWork) { PriorityHint = High, MaxConcurrent = 1 };
    var Tree = new TaskTree(LeafTask);
    Task TraverseTask = new Task(Tree.Traverse);
    Task WorkTask = new Task(MoreWork);
    Task RunTask = new Task(Run);
    Object SharedLeafWorkObject = new Object();

    void Entry()
    {
        RunTask.Do();
        RunTask.Join(); // Use this thread for task processing until all invocations of RunTask are complete
    }

    void Run()
    {
        TraverseTask.Do();
        // Wait for TraverseTask to make sure all leaf tasks are invoked before waiting on them
        WorkTask.DoAfter(new[] { TraverseTask, LeafTask });
        if (running)
        {
            RunTask.DoAfter(WorkTask); // Keep at least one RunTask alive to prevent Join from 'unblocking'
        }
        else
        {
            TraverseTask.Join();
            WorkTask.Join();
        }
    }

    void LeafWork(Object leaf)
    {
        lock (SharedLeafWorkObject) // Fake a shared resource
        {
            Thread.Sleep(200); // 'work'
        }
    }

    void MoreWork()
    {
        Thread.Sleep(2000); // this one takes a while
    }
}

class TaskTreeNode<TItem>
{
    Task LeafTask; // = Application::LeafTask
    TItem Item;

    void Traverse()
    {
        if (isLeaf)
        {
            // LeafTask set in C-Tor or elsewhere
            LeafTask.Do(this.Item);
            // Edit:
            // LeafTask.Do(this.Item, this.Depth); // Deeper items get higher priority
            return;
        }

        foreach (var child in this.children)
        {
            child.Traverse();
        }
    }
}
There are numerous examples here:
http://code.msdn.microsoft.com/ParExtSamples
There's a great white paper which covers a lot of the details you mention above here:
"Patterns for Parallel Programming: Understanding and Applying Parallel Patterns with the .NET Framework 4"
http://www.microsoft.com/downloads/details.aspx?FamilyID=86b3d32b-ad26-4bb8-a3ae-c1637026c3ee&displaylang=en
Off the top of my head I think you can do all the things you list in your question.
Dependencies etc: Task.WaitAll(Task[] tasks)
Scheduler: the library supports numerous options for limiting the number of threads in use, and it supports providing your own scheduler. I would avoid altering the priority of threads if at all possible; this is likely to have a negative impact on the scheduler unless you provide your own.
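To make the first two points concrete, here is a minimal sketch (not from the samples above) of expressing "depend on other tasks" and "wait on an unknown number of tasks performing the same action" with the .NET 4.0 TPL; limiting concurrency would be handled by passing a custom TaskScheduler (such as the LimitedConcurrencyLevelTaskScheduler from the samples) to Task.Factory.StartNew.
using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

class TplSketch
{
    static void Main()
    {
        // An unknown (runtime-determined) number of tasks performing the same action.
        var leafTasks = Enumerable.Range(0, 10)
            .Select(i => Task.Factory.StartNew(state => LeafWork(state), i))
            .ToArray();

        // A task that depends on all of the leaf tasks completing first.
        var workTask = Task.Factory.ContinueWhenAll(leafTasks, _ => MoreWork());

        // Block the calling thread until the whole chain is done.
        Task.WaitAll(workTask);
    }

    static void LeafWork(object item)
    {
        Thread.Sleep(200); // 'work'
    }

    static void MoreWork()
    {
        Thread.Sleep(2000); // this one takes a while
    }
}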