I understand that you compile the Quartz solution into an exe that can run as a Windows Service. And that somehow this Quartz server runs jobs. What I don't get is, where do I put all my Job code that actually does the work? Like let's say I want my "job" to call a stored proc, or to call a web service every hour. Or what if I have native VB code I want to execute? (And I don't want to create yet another standalone VB app that gets called from the command line.) It wasn't obvious where all these types of job config would get stored, coded, configured.
I would suggest you read the tutorials first.
You can find a lot of useful information on Jay Vilalta's blog:
Getting Started With Quartz.Net: Part 1
Getting Started With Quartz.Net: Part 2
Getting Started With Quartz.Net: Part 3
and
Scheduling Jobs Programmatically in Quartz.Net 1.0
or
Scheduling Jobs Programmatically in Quartz.Net 2.0
and then you can have a look at the GitHub code and try the examples, which are very well documented.
UPDATE:
The server is the scheduler.
It's a Windows Service application which runs constantly and runs the jobs you've scheduled with a client application.
You can even write your own server (Windows Service), as I did, since I didn't want to use remoting to talk to that layer.
You can also decide to schedule and run jobs in a console application (I wouldn't suggest that)
with a few lines of code:
using System;
using Quartz;
using Quartz.Impl;

class Program
{
    public static StdSchedulerFactory SchedulerFactory;
    public static IScheduler Scheduler;

    static void Main(string[] args)
    {
        SchedulerFactory = new StdSchedulerFactory();
        Scheduler = SchedulerFactory.GetScheduler();

        // Fire immediately, then every 5 seconds, forever.
        var simpleTrigger = TriggerBuilder.Create()
            .WithIdentity("Trigger1", "GenericGroup")
            .StartNow()
            .WithSimpleSchedule(x => x.RepeatForever().WithIntervalInSeconds(5))
            .Build();

        var simpleJob = JobBuilder.Create<SimpleJob>()
            .WithIdentity("simpleJob", "GenericGroup")
            .Build();

        Scheduler.ScheduleJob(simpleJob, simpleTrigger);

        Console.WriteLine("Scheduler started");
        Scheduler.Start();

        Console.WriteLine("Running jobs ...");
        Console.ReadLine();

        Scheduler.Shutdown(waitForJobsToComplete: false);
        Console.WriteLine("Scheduler shut down!");
    }
}
This is the job:
using System;
using Quartz;

public class SimpleJob : IJob
{
    public virtual void Execute(IJobExecutionContext context)
    {
        JobKey jobKey = context.JobDetail.Key;
        Console.WriteLine("{0} Executing job {1}", DateTime.Now.ToString(), jobKey.Name);
    }
}
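To come back to the original question about where the actual work goes: it lives in the job's Execute method. As a rough sketch, a job that calls a stored procedure could look like this (the connection string and procedure name are placeholders; it assumes System.Data.SqlClient):

using System.Data;
using System.Data.SqlClient;
using Quartz;

public class CallStoredProcJob : IJob
{
    public virtual void Execute(IJobExecutionContext context)
    {
        // Placeholder connection string and procedure name - replace with your own.
        using (var connection = new SqlConnection("Server=.;Database=MyDb;Integrated Security=true"))
        using (var command = new SqlCommand("dbo.usp_DoNightlyWork", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}

Calling a web service or running native code works the same way: put that logic inside Execute and let the trigger decide when it runs.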
You can download a test project here (VerySimpleQuartzApp).
The application above does not store trigger/job information.
You can decide to save the information about your jobs/triggers in an XML file, as explained here and here.
Or you can store your jobs/triggers in a database so that the scheduler - usually a Windows Service - can read that information and run the jobs.
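A rough sketch of the database option (assuming Quartz.NET 2.x, SQL Server, and the standard QRTZ_ tables already created with the SQL scripts that ship with Quartz; the connection string is a placeholder):

// Requires: using System.Collections.Specialized; using Quartz; using Quartz.Impl;
var properties = new NameValueCollection
{
    { "quartz.jobStore.type", "Quartz.Impl.AdoJobStore.JobStoreTX, Quartz" },
    { "quartz.jobStore.driverDelegateType", "Quartz.Impl.AdoJobStore.SqlServerDelegate, Quartz" },
    { "quartz.jobStore.tablePrefix", "QRTZ_" },
    { "quartz.jobStore.dataSource", "default" },
    { "quartz.dataSource.default.provider", "SqlServer-20" },
    // Placeholder - point this at the database that holds the QRTZ_ tables.
    { "quartz.dataSource.default.connectionString", "Server=.;Database=QuartzStore;Integrated Security=true" }
};

IScheduler scheduler = new StdSchedulerFactory(properties).GetScheduler();
// Jobs and triggers scheduled on this scheduler are persisted to the database,
// so a separate Windows Service scheduler pointing at the same store can run them.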
If you're looking for an example where you can communicate with the server (via remoting) and schedule a job and run it, this is the one.
There are a few open source projects (managers) which you can use to start/stop or schedule jobs. Read my answer here.
Related
I have a logging issue with Serilog 2.3 inside Hangfire 1.6.8.
I have an enqueued Hangfire job that uses Serilog to log, and after some random number of jobs it will stop logging. I will re-queue those jobs and it will randomly stop logging on a different job that previously logged. When it does fail, it's at random points.
I have a scheduled job using Serilog that logs just fine.
There are no errors in the Hangfire log, which is using NLog.
The jobs continue to run and the results are correct.
I'm using the AppSettings configuration (ReadFrom.AppSettings) to set up the sinks.
Log.Logger = new LoggerConfiguration()
    .ReadFrom.AppSettings(settingPrefix: "MyJob")
    .CreateLogger();
I have no idea what to do or where to look.
I think Hangfire creates the object, and every time a job is called it calls a method on my object. Is there some odd async issue I need to handle with Serilog?
Please help!
I have created a new job that only logs and it has the exact same behavior.
using Serilog;

public class Logging
{
    public Logging()
    {
        // Configure logging sinks
        Log.Logger = new LoggerConfiguration()
            .ReadFrom.AppSettings(settingPrefix: "LoggingTest")
            .CreateLogger().ForContext<Logging>();
    }

    public void LogSomething(string something)
    {
        Log.Information("Log: {0}", something);
    }
}
DOH!
I see now that it's a static issue with Serilog. Because Log.Logger is static and Hangfire reuses objects (my IoC) under its single app domain, each job ends up logging to the others' files. So the log isn't really stopping: the enqueued jobs log fine until the scheduled job runs a minute later and moves the file location, and then both the enqueued and scheduled jobs log to the path defined for the scheduled job.
I moved the job to NLog. With a single static logger only one file can be used at a time, and with lots of different jobs we log each one to a different file.
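For reference, the static sharing can also be avoided in Serilog itself by giving each job its own logger instance instead of reassigning the global Log.Logger. A rough sketch (the file path, the jobName parameter, and the Serilog.Sinks.File sink are assumptions for illustration):

using Serilog;

public class Logging
{
    private readonly ILogger _log;

    public Logging(string jobName)
    {
        // Each job keeps its own logger instead of overwriting the shared static
        // Log.Logger, so concurrent Hangfire jobs cannot redirect each other's output.
        _log = new LoggerConfiguration()
            .WriteTo.File(@"C:\Logs\" + jobName + ".log")
            .CreateLogger();
    }

    public void LogSomething(string something)
    {
        _log.Information("Log: {Something}", something);
    }
}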
While developing an app that uses Azure Storage Queues and a WebJob, I feel like I need some sort of health check (via an API) to ensure my Azure Storage Queue is properly configured for each environment up to prod. I don't have (direct) access to view the Dashboard or Kudu.
My thought so far was to just create an API route that returns a bool telling me whether I was able to create the queue (if it doesn't already exist) and peek at a message (even if one doesn't exist), like:
public async Task<bool> StorageQueueHealthCheck()
{
    return await _queueManager.HealthCheck();
}
And the implementation:
public async Task<bool> HealthCheck()
{
    try
    {
        CloudQueue queue = _queueClient.GetQueueReference(QueueNames.reportingQueue);
        queue.CreateIfNotExists();
        CloudQueueMessage peek = await queue.PeekMessageAsync();
        return true; // as long as we were able to peek at messages
    }
    catch (Exception ex)
    {
        return false;
    }
}
Is this a bad approach? Is there another way to "health check" certain Azure functionality when the Dashboard is abstracted away? If I absolutely needed to, I could view Kudu, but I would rather just use an API and hit it via Swagger.
Looks good. You can also try CloudQueue.FetchAttributesAsync(), since the payload would be smaller when the message size is large.
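For example, the body of the try block in the question could become (a sketch, using the same _queueClient field and queue name):

CloudQueue queue = _queueClient.GetQueueReference(QueueNames.reportingQueue);
await queue.CreateIfNotExistsAsync();

// Only queue metadata (e.g. ApproximateMessageCount) is retrieved,
// so nothing message-sized comes over the wire.
await queue.FetchAttributesAsync();
return true;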
This is a good approach; just make sure you have a retry mechanism so that your health check does not return false for intermittent failures.
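For instance, a simple wrapper around the HealthCheck method shown in the question (a sketch; the attempt count and back-off are arbitrary):

public async Task<bool> HealthCheckWithRetry(int maxAttempts = 3)
{
    for (int attempt = 1; attempt <= maxAttempts; attempt++)
    {
        if (await HealthCheck())
            return true;

        // Brief back-off so a transient network blip does not immediately
        // report the queue as unhealthy.
        await Task.Delay(TimeSpan.FromSeconds(2 * attempt));
    }

    return false;
}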
Second approach:
Instead of an API that performs the check only when triggered, have a console app (WebJob) that runs the check on a regular interval (e.g. every minute) and, based on some logic (say, all creates in the last 10 minutes threw errors), sends an email. This can be used in all environments.
In the Apache Brooklyn web interface we would like to display some content for the system managers. The content is too long to be served as a simple sensor value.
Our idea was to create a task, write the content into the output stream of the task, and then offer the REST-based URL to the managers like this:
/v1/activities/{task}/stream/stdout (of course the link would be masked with some nice text)
The stream and task are created like this:
LOG.info("{} Creating Activity for ClusterReport Feed", this);
activity = Tasks.builder().
displayName("clusterReportFeed").
description("Output for the Cluster Report Feed").
body(new Runnable() {
#Override
public void run() {
//DO NOTHING
}
}).
parallel(true).
build();
LOG.info("{} Task Created with Id: " + activity.getId(), this);
Entities.submit(server, activity).getUnchecked();
The task seems to be created and the interaction works perfectly fine.
However, when I want to access the task's output stream from my browser using the prepared URL, I get an error saying the task does not exist.
Our guess is that we are not in the right management/execution context: the web page runs in a different context than the entities and their sensors. How can we submit a task so that it is also visible in the web console's context?
Alternatively, is it possible to write the content into a file and then offer it for download via Jetty (Brooklyn's web server)? That would be a much simpler way.
Many tasks in Brooklyn default to being transient - i.e. they are deleted shortly after they complete (things like effector invocations are by default non-transient).
You can mark your task as non-transient using the code below in your use of the task builder:
.tag(BrooklynTaskTags.NON_TRANSIENT_TASK_TAG)
However, note that (as of Brooklyn version 0.9.0) tasks are kept in-memory using soft references. This means the stdout of the task will likely be lost at some point in the future, when that memory is needed for other in-memory objects.
For your use-case, would it make sense to have this as an effector result perhaps?
Or could you write to an object store such as S3 instead? The S3 approach would seem best to me.
For writing it to a file, care must be taken when used with Brooklyn high-availability. Would you write to a shared volume?
If you do write to a file, then you'd need to provide a web-extension so that people can access the contents of that file. As of Brooklyn 0.9.0, you can add your own WARs in code when calling BrooklynLauncher (which calls BrooklynWebServer).
Is there an elegant way of scheduling tasks using NServiceBus? I found one way while searching the net. Does NServiceBus provide internal APIs for scheduling?
NServiceBus now has this built in.
From here: http://docs.particular.net/nservicebus/scheduling-with-nservicebus
public class ScheduleStartUpTasks : IWantToRunWhenTheBusStarts
{
    public void Run()
    {
        Schedule.Every(TimeSpan.FromMinutes(5)).Action(() =>
            Console.WriteLine("Another 5 minutes have elapsed."));

        Schedule.Every(TimeSpan.FromMinutes(3)).Action(
            "MyTaskName", () =>
            {
                Console.WriteLine("This will be logged under 'MyTaskName'.");
            });
    }
}
Note the caveat:
When not to use it: you can look at a scheduled task as a simple never-ending saga, but as soon as your task starts to get some logic (if-/switch-statements) you should consider moving to a saga.
Note: This answer was valid for NServiceBus version 2.0, but is no longer correct. Version 3 has this functionality built in. Go read Simon's answer; it is valid for version 3!
NServiceBus does not have a built-in scheduling system. It is (at a very simple level) a message processor.
You can create a class that implements IWantToRunAtStartup (Run and Stop methods) and from there, create a timer or do whatever logic you need to do to drop messages onto the bus at certain times.
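A rough sketch of that approach (the message type PollSomethingMessage and the timer interval are illustrative, and the bus is assumed to be injected by the container):

using System;
using System.Threading;
using NServiceBus;

public class SendPollMessageAtStartup : IWantToRunAtStartup
{
    public IBus Bus { get; set; } // injected by the container

    private Timer timer;

    public void Run()
    {
        // Drop a message onto the bus every 5 minutes; a normal message handler does the work.
        timer = new Timer(state => Bus.SendLocal(new PollSomethingMessage()),
                          null, TimeSpan.Zero, TimeSpan.FromMinutes(5));
    }

    public void Stop()
    {
        if (timer != null)
            timer.Dispose();
    }
}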
Other people have used Quartz.NET together with NServiceBus to get more granular scheduling functionality.
I can't see a way to see which tasks are running. There is the Task.Current property, but what if there are multiple tasks running? Is there a way to get this kind of information?
Alternatively, is there a built-in way to get notified when a task starts or completes?
Hey Mike, there is no public way of accessing the list of pending tasks in TPL. The mechanism that makes it available to the debugger relies on the fact that all threads are frozen at enumeration time, so it can't be used at runtime.
Yes, there's a built-in way to get notified when a task completes. Check out the Task.ContinueWith APIs. Basically this API creates a new task that will be fired up when the target task completes.
I'm assuming you want to do some quick accounting / progress reporting based on this. If that's the case, I'd recommend that you call task.ContinueWith() with the TaskContinuationOptions.ExecuteSynchronously flag. When you specify that, the continuation action runs right there on the same thread when the target task finishes (if you don't specify it, the continuation is queued up like any other regular task).
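A quick sketch of that pattern:

using System;
using System.Threading.Tasks;

class Example
{
    static void Main()
    {
        Task work = Task.Factory.StartNew(() => Console.WriteLine("Doing the work..."));

        // ExecuteSynchronously runs the continuation on the thread that completed
        // 'work', right after it finishes, instead of queueing it as a separate task.
        Task done = work.ContinueWith(
            t => Console.WriteLine("Task {0} finished with status {1}", t.Id, t.Status),
            TaskContinuationOptions.ExecuteSynchronously);

        done.Wait();
    }
}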
Hope this helps.
Huseyin
You can also get the currently running task (or a Task's parent) with reflection:
using System.Reflection;
using System.Threading;
using System.Threading.Tasks;

public static class Extensions
{
    // Reads the private m_parent field of a Task via reflection.
    public static Task Parent(this Task t)
    {
        FieldInfo info = typeof(Task).GetField("m_parent",
            BindingFlags.NonPublic | BindingFlags.Instance);
        return info != null ? (Task)info.GetValue(t) : null;
    }

    // Starts a dummy child task attached to the current one and asks for its
    // parent, which is the task that is currently executing.
    public static Task Self
    {
        get
        {
            return Task.Factory.StartNew(
                () => { },
                CancellationToken.None,
                TaskCreationOptions.AttachedToParent,
                TaskScheduler.Default).Parent();
        }
    }
}
You can create a TaskScheduler class deriving from the provided one. Within that class you have full control and can add logging on either side of the execution. See for example: http://msdn.microsoft.com/en-us/library/ee789351.aspx
You'll also need to use a TaskFactory with an instance of your class as the scheduler.
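A rough sketch of such a scheduler (console logging only; a real implementation would also keep track of queued tasks so GetScheduledTasks can report them to the debugger):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

public class LoggingTaskScheduler : TaskScheduler
{
    protected override void QueueTask(Task task)
    {
        ThreadPool.QueueUserWorkItem(_ =>
        {
            Console.WriteLine("Task {0} starting", task.Id);
            TryExecuteTask(task);
            Console.WriteLine("Task {0} finished with status {1}", task.Id, task.Status);
        });
    }

    protected override bool TryExecuteTaskInline(Task task, bool taskWasPreviouslyQueued)
    {
        Console.WriteLine("Task {0} executing inline", task.Id);
        return TryExecuteTask(task);
    }

    // This sketch keeps no queue of its own, so there is nothing to report here.
    protected override IEnumerable<Task> GetScheduledTasks()
    {
        return Enumerable.Empty<Task>();
    }
}

// Usage:
// var factory = new TaskFactory(new LoggingTaskScheduler());
// factory.StartNew(() => Console.WriteLine("work"));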