I have a task that should run continuously on one node in an Ignite cluster. When the task completes or fails, it should be started again on the oldest node. How can I do this?
class MyTask {
    @PostConstruct
    public void start() {
        ignite.executorService(ignite.cluster().forOldest())
              .submit(() -> myTask());
    }
}
You can deploy a singleton Ignite Service (https://apacheignite.readme.io/docs/service-grid) with a node filter for the oldest node. The Service Grid will guarantee failover safety.
Inside the service's execute method, you can run a while loop that starts the task. That way you can handle the task completing or failing and restart it again.
If I have a simple IntegrationFlow like this:
@Bean
public IntegrationFlow downloadFlow() {
    return IntegrationFlows.from("rabbitQueue1")
            .handle(longRunningMessageHandler)
            .channel("rabbitQueue2")
            .get();
}
... and if rabbitQueue1 is filled with messages, what should I do to handle multiple messages at the same time? Is that possible?
It seems that, by default, the handler executes one message at a time.
Yes, that's true: by default, endpoints are wired with a DirectChannel. That's like executing plain Java statements one by one, so to do some work in parallel in Java you need an Executor to shift calls to separate threads.
The same is possible with Spring Integration via an ExecutorChannel. You can declare rabbitQueue1 as an ExecutorChannel bean, or use this instead of the plain channel name:
IntegrationFlows.from(MessageChannels.executor("rabbitQueue1", someExecutorBean))
All the messages arriving at this channel will be handed off to threads provided by the executor, and longRunningMessageHandler will process your messages in parallel.
See more info in the Reference Manual: https://docs.spring.io/spring-integration/docs/current/reference/html/#channel-implementations
I'd like to add a service that executes some initialization operations for the system when it's first created.
I'd imagine it would be a stateless service (with cluster admin rights) that should self-destruct when it's done its thing. I am under the impression that exiting the RunAsync method lets me indicate that I'm finished (or in an error state). However, the service then still hangs around in the application's context, annoyingly looking like it's "active" when it's not really doing anything at all.
Is it possible for a service to remove itself?
I think maybe we could try calling FabricClient.ServiceManager's DeleteServiceAsync (using parameters based on the service context) inside an OnCloseAsync override, but I haven't been able to prove that it works, and it feels a little funky:
var client = new FabricClient();
await client.ServiceManager.DeleteServiceAsync(new DeleteServiceDescription(Context.ServiceName));
Is there a better way?
Returning from RunAsync will end the code in RunAsync (indicating completion), so SF won't start RunAsync again (it would if RunAsync exited with an exception, for example). RunAsync completing doesn't cause the service to be deleted. As mentioned, the service might be done with background work but still listening for incoming messages.
The best way to shut down a service is to call DeleteServiceAsync. This can be done by the service itself, by another service, or from outside the cluster. Services can self-delete, so for services whose work is done we typically see await DeleteServiceAsync as the last line of RunAsync, after which the method just exits. Something like:
protected override async Task RunAsync(CancellationToken ct)
{
    bool workCompleted = false;
    while (!workCompleted && !ct.IsCancellationRequested)
    {
        if (!DoneWithWork())
        {
            DoWork();
        }
        if (DoneWithWork())
        {
            workCompleted = true;
            await DeleteServiceAsync(...);
        }
    }
}
The goal is to ensure that if your service is actually done doing its work, it cleans itself up, but doesn't trigger its own deletion for the other reasons a CancellationToken can get signaled, such as shutting down due to an upgrade or cluster resource balancing.
As mentioned already, returning from RunAsync will end this method only, but the service will continue to run and hence not be deleted.
DeleteServiceAsync certainly is the way to go - however, it's not quite as simple as just calling it, because if you're not careful it will deadlock on the current thread (especially in a local developer cluster). You would also likely get a few short-lived health warnings about RunAsync taking a long time to terminate and/or the target replica size not being met.
In any case, the solution is quite simple - just do this:
private async Task DeleteSelf(CancellationToken cancellationToken)
{
    using (var client = new FabricClient())
    {
        await client.ServiceManager.DeleteServiceAsync(new DeleteServiceDescription(this.Context.ServiceName), TimeSpan.FromMinutes(1), cancellationToken);
    }
}
Then, in the last line of my RunAsync method, I call:
await DeleteSelf(cancellationToken).ConfigureAwait(false);
The ConfigureAwait(false) helps with the deadlock issue, as the continuation will essentially run on a new synchronization context - i.e. it won't try to return to the "caller context".
I have a logging issue with Serilog 2.3 inside Hangfire 1.6.8.
I have an enqueued Hangfire job that uses Serilog to log, and after some random number of jobs it will stop logging. I will re-queue those jobs and it will randomly stop logging on a different job that previously logged. When it does fail, it's at random points.
I have a scheduled job using Serilog that logs just fine.
There are no errors in the Hangfire log, which is using NLog.
The jobs continue to run and the results are correct.
I'm using the AppSettings configuration for the sink:
Log.Logger = new LoggerConfiguration()
    .ReadFrom.AppSettings(settingPrefix: "MyJob")
    .CreateLogger();
I have no idea what to do or where to look.
I think Hangfire creates the object, and every time a job runs it calls a method on my object. Is there some odd async issue I need to handle with Serilog?
Please help!
I have created a new job that only logs and it has the exact same behavior.
public class Logging
{
    public Logging()
    {
        // Configure logging sinks
        Log.Logger = new LoggerConfiguration()
            .ReadFrom.AppSettings(settingPrefix: "LoggingTest")
            .CreateLogger().ForContext<Logging>();
    }

    public void LogSomething(string something)
    {
        Log.Information("Log: {0}", something);
    }
}
DOH!
I see now that it's some sort of static issue with Serilog. Log.Logger is static, Hangfire reuses objects (my IoC), and everything runs under Hangfire's app domain, so the jobs were logging to each other's files. It's not really that the log stops: the enqueued jobs run until the scheduled job runs a minute later and moves the file location, and then both the enqueued and the scheduled jobs log to the path defined for the scheduled job.
Moved the job to NLog. I guess only one file can be used per static logger, and with lots of different jobs we log each one to a different file.
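For anyone who wants to stay on Serilog: one way around the static Log.Logger collision is to give each job its own logger instance instead of reconfiguring the shared static one. A minimal sketch, assuming the Serilog.Sinks.File package; the JobLogging class name and the file path scheme are made up for illustration:
using Serilog;
using Serilog.Core;

public class JobLogging
{
    // An instance logger rather than the static Log.Logger.
    private readonly Logger _log;

    public JobLogging(string jobName)
    {
        // Each job writes to its own file, so concurrently running jobs
        // can no longer reconfigure each other's sinks.
        _log = new LoggerConfiguration()
            .WriteTo.File("logs\\" + jobName + ".txt")
            .CreateLogger();
    }

    public void LogSomething(string something)
    {
        _log.Information("Log: {Something}", something);
    }
}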
I understand that you compile the Quartz solution into an exe that can run as a Windows service, and that somehow this Quartz server runs jobs. What I don't get is: where do I put all my job code that actually does the work? Let's say I want my "job" to call a stored proc, or to call a web service every hour. Or what if I have native VB code I want to execute? (And I don't want to create yet another standalone VB app that gets called from the command line.) It isn't obvious where all these types of jobs would get stored, coded, and configured.
I would suggest you read the tutorials first.
You can find a lot of useful information on Jay Vilalta's blog:
Getting Started With Quartz.Net: Part 1
Getting Started With Quartz.Net: Part 2
Getting Started With Quartz.Net: Part 3
and
Scheduling Jobs Programmatically in Quartz.Net 1.0
or
Scheduling Jobs Programmatically in Quartz.Net 2.0
and then you can have a look at the GitHub code and try the examples, which are very well documented.
UPDATE:
The server is the scheduler.
It's a Windows service application which runs constantly and runs the jobs you've scheduled with a client application.
You can even write your own server (Windows service), as I did, since I didn't want to use remoting to talk to that layer.
You can decide to schedule and run jobs in a console application (I wouldn't suggest that) with a few lines of code:
using System;
using Quartz;
using Quartz.Impl;

class Program
{
    public static StdSchedulerFactory SchedulerFactory;
    public static IScheduler Scheduler;

    static void Main(string[] args)
    {
        SchedulerFactory = new StdSchedulerFactory();
        Scheduler = SchedulerFactory.GetScheduler();

        // Fire immediately, then every 5 seconds, forever.
        var simpleTrigger = TriggerBuilder.Create()
            .WithIdentity("Trigger1", "GenericGroup")
            .StartNow()
            .WithSimpleSchedule(x => x.RepeatForever().WithIntervalInSeconds(5))
            .Build();

        var simpleJob = JobBuilder.Create<SimpleJob>()
            .WithIdentity("simpleJob", "GenericGroup")
            .Build();

        Scheduler.ScheduleJob(simpleJob, simpleTrigger);
        Console.WriteLine("Scheduler started");
        Scheduler.Start();

        Console.WriteLine("Running jobs ...");
        Console.ReadLine();

        Scheduler.Shutdown(waitForJobsToComplete: false);
        Console.WriteLine("Shutdown scheduler!");
    }
}
This is the job:
public class SimpleJob : IJob
{
    public virtual void Execute(IJobExecutionContext context)
    {
        JobKey jobKey = context.JobDetail.Key;
        Console.WriteLine("{0} Executing job {1} ", DateTime.Now.ToString(), jobKey.Name);
    }
}
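And to connect this to the original question: the code that does the actual work - calling a stored procedure, hitting a web service - simply goes in the body of Execute. A minimal sketch, where the connection string and procedure name are placeholders:
using System.Data;
using System.Data.SqlClient;
using Quartz;

public class StoredProcJob : IJob
{
    public virtual void Execute(IJobExecutionContext context)
    {
        // Placeholder connection string and stored procedure name.
        using (var connection = new SqlConnection("Data Source=.;Initial Catalog=MyDb;Integrated Security=true"))
        using (var command = new SqlCommand("usp_HourlyMaintenance", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}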
You can download a test project here (VerySimpleQuartzApp).
The application above does not store trigger/job information.
You can decide whether you want to save the information for your jobs/triggers in an XML file, as explained here and here.
Or you can store your jobs/triggers in a database so that the scheduler - usually a Windows service - can read that information and run the jobs.
If you're looking for an example where you can communicate with the server (via remoting) to schedule a job and run it, this is the one.
There are a few open source projects (managers) which you can use to start/stop or schedule jobs. Read my answer here.
I can't see a way to find out which tasks are running. There is the Task.Current property, but what if there are multiple tasks running? Is there a way to get this kind of information?
Alternatively, is there a built-in way to get notified when a task starts or completes?
Hey Mike, there is no public way of accessing the list of pending tasks in TPL. The mechanism that makes it available to the debugger relies on the fact that all threads will be frozen at enumeration time, so it can't be used at runtime.
Yes, there's a built-in way to get notified when a task completes. Check out the Task.ContinueWith APIs. Basically, this API creates a new task that will be fired up when the target task completes.
I'm assuming you want to do some quick accounting / progress reporting based on this. If that's the case, I'd recommend that you call task.ContinueWith() with the TaskContinuationOptions.ExecuteSynchronously flag. When you specify that, the continuation action is run right there on the same thread as soon as the target task finishes (if you don't specify it, the continuation task is queued up like any other regular task).
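A minimal sketch of that pattern (the task body and messages here are just placeholders):
using System;
using System.Threading.Tasks;

class ContinuationDemo
{
    static void Main()
    {
        Task work = Task.Factory.StartNew(() => Console.WriteLine("working..."));

        // ExecuteSynchronously: runs on the same thread that completed 'work'.
        Task notify = work.ContinueWith(
            t => Console.WriteLine("completed (faulted: {0})", t.IsFaulted),
            TaskContinuationOptions.ExecuteSynchronously);

        notify.Wait();
    }
}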
Hope this helps.
Huseyin
You can also get the currently running task (or a Task's parent) with reflection:
using System.Reflection;
using System.Threading;
using System.Threading.Tasks;

public static class Extensions
{
    // Reads the internal m_parent field of a Task. This is an
    // implementation detail of .NET's Task and may change between versions.
    public static Task Parent(this Task t)
    {
        FieldInfo info = typeof(Task).GetField("m_parent",
            BindingFlags.NonPublic | BindingFlags.Instance);
        return info != null ? (Task)info.GetValue(t) : null;
    }

    // Starts an empty task attached to the current one, then asks for its
    // parent - which is the task that is currently running.
    public static Task Self
    {
        get
        {
            return Task.Factory.StartNew(
                () => { },
                CancellationToken.None,
                TaskCreationOptions.AttachedToParent,
                TaskScheduler.Default).Parent();
        }
    }
}
You can create a TaskScheduler class deriving from the provided one. Within that class you have full control and can add logging on either side of the execution. See, for example: http://msdn.microsoft.com/en-us/library/ee789351.aspx
You'll also need to use a TaskFactory with an instance of your class as the scheduler.
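A minimal sketch of such a scheduler, assuming plain thread-pool execution is acceptable and using Console writes as the "logging" (the class name is made up):
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

public class LoggingTaskScheduler : TaskScheduler
{
    protected override void QueueTask(Task task)
    {
        // Log on either side of the actual execution.
        ThreadPool.QueueUserWorkItem(_ =>
        {
            Console.WriteLine("Task {0} starting", task.Id);
            TryExecuteTask(task);
            Console.WriteLine("Task {0} completed", task.Id);
        });
    }

    protected override bool TryExecuteTaskInline(Task task, bool taskWasPreviouslyQueued)
    {
        // Allow tasks to run inline on the current thread.
        return TryExecuteTask(task);
    }

    protected override IEnumerable<Task> GetScheduledTasks()
    {
        // Only used by the debugger; nothing to enumerate in this sketch.
        return Enumerable.Empty<Task>();
    }
}
Tasks created via new TaskFactory(new LoggingTaskScheduler()).StartNew(...) will then run through the logging scheduler.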