Filter Hangfire logs into separate Serilog output

Hangfire (v1.3+) has a 'clever' feature where it picks up your application's existing logging setup and uses it.
According to the docs, starting from Hangfire 1.3.0 you are not required to do anything if your application already uses one of the supported logging libraries; Hangfire discovers them through reflection, so it does not itself depend on any of them.
Because I don't want Hangfire logging mixed in with my application logs, I would like to filter it out into a separate log file.
Serilog has filters to do this, but it needs something to filter on.
Does Hangfire include any useful context that I can specify when filtering?

I think the filter you can use will look something like:
Log.Logger = new LoggerConfiguration()
    .WriteTo.ColoredConsole()
    .Filter.ByIncludingOnly(Matching.FromSource("Hangfire"))
    .CreateLogger();
See also this post.

I couldn't get the Serilog Matching.FromSource(...) approach to work; the Hangfire events don't appear to carry that property. I ended up with the following solution:
var logFile = "...";
var hangfireFile = "...";

// Match events whose logger name starts with "Hangfire"
var hangfireEvents = Matching.WithProperty<string>("Name", x => x.StartsWith("Hangfire"));

Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Verbose()
    // Application log: everything except Hangfire events
    .WriteTo.Logger(lc =>
        lc.Filter.ByExcluding(hangfireEvents)
          .WriteTo.RollingFile(new CompactJsonFormatter(new SafeJsonFormatter()), logFile))
    // Hangfire log: only Hangfire events
    .WriteTo.Logger(lc =>
        lc.Filter.ByIncludingOnly(hangfireEvents)
          .WriteTo.RollingFile(new CompactJsonFormatter(new SafeJsonFormatter()), hangfireFile))
    .CreateLogger();

Related

Use multiple instances of Hangfire with a single database

Has anyone used multiple instances of Hangfire (in different applications) with the same SQL database for configuration? Instead of creating a new SQL database for each Hangfire instance, I would like to share the same database between multiple instances.
As per the Hangfire documentation here, this is supported since v1.5. However, the forum discussions here and here show we still have issues running multiple instances with the same database.
Update 1
So, based on the suggestions and documentation, I configured Hangfire to use a queue:
public void Configure(IApplicationBuilder app, IHostingEnvironment env,
    ILoggerFactory loggerFactory)
{
    app.UseHangfireServer(new BackgroundJobServerOptions()
    {
        Queues = new string[] { "instance1" }
    });
}
The method to invoke:
[Queue("instance1")]
public async Task Start(int requestID)
{
}
This is how I enqueue the job:
_backGroundJobClient.Enqueue<IPrepareService>(x => x.Start(request.ID));
However, when I check the [JobQueue] table, the new job has the queue name default, and because of that Hangfire will never pick it up, since the server only fetches jobs from its configured queues.
I think this is a bug.
Update 2
Found one more thing. I am using an instance of IBackgroundJobClient, which is injected automatically by .NET Core's built-in container.
If I use that instance to enqueue the job, Hangfire creates the new job with the default queue name:
_backGroundJobClient.Enqueue<IPrepareService>(x => x.Start(request.ID));
However, if I use the static method, Hangfire creates the new job with the configured queue name instance1:
BackgroundJob.Enqueue<IPrepareService>(x => x.Start(prepareRequest.ID));
How do I configure Hangfire in .NET Core so that the injected IBackgroundJobClient instance uses the configured queue name?
This is possible by simply setting the SQL server options with a different schema name for each instance.
Instance 1:
configuration.UseSqlServerStorage(
    configuration.GetConnectionString("Hangfire"),
    new SqlServerStorageOptions { SchemaName = "PrefixOne" }
);
Instance 2:
configuration.UseSqlServerStorage(
    configuration.GetConnectionString("Hangfire"),
    new SqlServerStorageOptions { SchemaName = "PrefixTwo" }
);
Both instances use the same connection string and will create their own copies of all the required tables under the schema name specified in the options.
Queues, on the other hand, are for separating work within the same Hangfire instance. If you want to use different queues, you need to specify the queues the BackgroundJobServer should listen to and then specify that queue when creating jobs. That doesn't sound like what you're trying to accomplish here.
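For completeness, a minimal sketch of that queue-based approach; the queue name "reports", the IReportJob/ReportJob types, and the Run method are made-up examples, not from the question:

using System.Threading.Tasks;
using Hangfire;

// The QueueAttribute on the method decides which queue the job lands in.
public interface IReportJob
{
    [Queue("reports")]
    Task Run(int reportId);
}

public class ReportJob : IReportJob
{
    public Task Run(int reportId)
    {
        // ... do the actual work ...
        return Task.CompletedTask;
    }
}

// In Startup.Configure: this server only listens to the "reports" queue.
app.UseHangfireServer(new BackgroundJobServerOptions
{
    Queues = new[] { "reports" }
});

// Anywhere in the application: enqueue as usual; the job is placed on the
// "reports" queue because of the attribute, so only servers listening to
// that queue will pick it up.
BackgroundJob.Enqueue<IReportJob>(x => x.Run(42));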

Can we use Spark SQL for reporting queries in REST web services?

A basic question regarding Spark: can we use Spark only in the context of processing jobs? In our use case we have a stream of position and motion data, which we refine and save to Cassandra tables. That is done with Kafka and Spark Streaming. But for a web user who wants to view a report with some search criteria, can we use Spark (Spark SQL), or should we restrict ourselves to CQL for this purpose? If we can use Spark, how can we invoke Spark SQL from a web service deployed on a Tomcat server?
Well, you can do it by passing the SQL request as part of an HTTP request URL, like:
http://yourwebsite.com/Requests?query=WOMAN
At the receiving point, the architecture will be something like:
Tomcat+Servlet --> Apache Kafka/Flume --> Spark Streaming --> Spark SQL inside a SS closure
In the servlet (if you don't know what a servlet is, it's worth looking up) in the web application folder of your Tomcat, you will have something like this:
public class QueryServlet extends HttpServlet {
    @Override
    public void doGet(HttpServletRequest request, HttpServletResponse response) {
        String requestChoice = request.getQueryString().split("=")[0];
        String requestArgument = request.getQueryString().split("=")[1];

        Properties properties = new Properties();
        properties.setProperty("bootstrap.servers", "localhost:9092");
        properties.setProperty("acks", "all");
        properties.setProperty("retries", "0");
        properties.setProperty("batch.size", "16384");
        properties.setProperty("auto.commit.interval.ms", "1000");
        properties.setProperty("linger.ms", "0");
        properties.setProperty("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        properties.setProperty("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        properties.setProperty("block.on.buffer.full", "true");

        // Forward the query to Kafka: topic = query key, message = query argument.
        KafkaProducer<String, String> producer = new KafkaProducer<>(properties);
        producer.send(new ProducerRecord<String, String>(requestChoice, requestArgument));
        producer.close();
    }
}
In the running Spark Streaming application (which needs to already be running in order to catch the queries; otherwise you know how long it takes Spark to start), you need a Kafka receiver:
JavaStreamingContext jssc = new JavaStreamingContext(sparkConf, new Duration(batchInt * 1000));
Map<String, Integer> topicMap = new HashMap<>();
topicMap.put("wearable", 1);
// The first DStream is a pair made of the topic and the value written to the topic
JavaPairReceiverInputDStream<String, String> kafkaStream =
    KafkaUtils.createStream(jssc, "localhost:2181", "test", topicMap);
After this, what happens is:
1. You do a GET, passing the query argument either in the request body or in the query string.
2. The GET is caught by your servlet, which immediately creates, sends with, and closes a Kafka producer (it is possible to avoid the Kafka step entirely and send your Spark Streaming app the information some other way; see the Spark Streaming receivers documentation).
3. Spark Streaming runs your Spark SQL code like any other submitted Spark application, but it keeps running, waiting for further queries to come in.
Of course, in the servlet you should check the validity of the request, but this is the main idea, or at least the architecture I've been using.

Serilog MSSqlServer sink not writing to table

I have the following statement in my Startup.cs:
Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Debug()
    .WriteTo.ColoredConsole()
    .WriteTo.MSSqlServer("Server=(localdb)\\MSSQLLocalDB;Database=myDb.Logging;Trusted_Connection=True;", "Logs", autoCreateSqlTable: true)
    .WriteTo.RollingFile(pathFormat: Path.Combine(logPath, "Log-{Date}.txt"))
    .CreateLogger();
And in my Configure method:
loggerFactory.AddSerilog();
When I start the application, the table is created, so I know the connection works. I get logged output to the console and to the file; however, there is no output to the database table.
What am I failing to do?
Other information: using asp.net core rc2-final, targeting net461, and using Serilog.Sinks.MSSqlServer 4.0.0-beta-100
At first glance it doesn't look like you're missing anything. It's likely that an exception is being thrown by the SQL Server Sink when trying to write to the table.
Have you tried checking the output from Serilog's self log?
Serilog.Debugging.SelfLog.Enable(msg => Console.WriteLine(msg));
Update:
Looks like a permission issue with your SQL Server/LocalDB. The error message suggests the sink is trying to run an ALTER TABLE statement and the user running the application doesn't have permission to execute ALTER TABLE statements.
Update 2: I suggest you write a simple console app using the full .NET Framework + Serilog v1.5.14 + Serilog.Sinks.MSSqlServer v3.0.98 to see if you get the same behavior, just to rule out the possibility that there's a problem with the .NET Core implementation or with the beta sink version you're using.
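A minimal sketch of such a test console app, reusing the connection string and table name from your snippet; it assumes the same WriteTo.MSSqlServer signature and SelfLog API shown earlier in this thread, so adjust those calls if the older package versions differ:

using System;
using Serilog;

class Program
{
    static void Main()
    {
        // Surface any errors thrown inside the sink.
        Serilog.Debugging.SelfLog.Enable(msg => Console.WriteLine(msg));

        var log = new LoggerConfiguration()
            .MinimumLevel.Debug()
            .WriteTo.MSSqlServer(
                "Server=(localdb)\\MSSQLLocalDB;Database=myDb.Logging;Trusted_Connection=True;",
                "Logs",
                autoCreateSqlTable: true)
            .CreateLogger();

        log.Information("Test event {Number}", 42);

        // The SQL Server sink writes in batches; dispose the logger so
        // pending events are flushed before the process exits.
        log.Dispose();
    }
}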

Running hangfire single threaded "mode"

Is there any way of configuring hangfire to run single threaded? I'd like the jobs to be processed sequentially, rather than concurrently.
Something like:
app.UseHangfire(config =>
{
    config.RunSingleThreaded();
    config.UseServer();
});
Either this or the ability to "chain" jobs together so they happen in sequence.
Something like:
BackgroundJob
.Enqueue(() => taskContainer.PublishBatch(batchId, accountingPeriodId, currentUser, filePath))
.WithDependentJobId(23); // does not run until this job has finished...
Should have read the docs, obviously...
http://docs.hangfire.io/en/latest/background-processing/configuring-degree-of-parallelism.html
To configure a single worker thread, use the BackgroundJobServerOptions type and specify WorkerCount:
var server = new BackgroundJobServer(new BackgroundJobServerOptions
{
    WorkerCount = 1
});
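If you're configuring the server through the OWIN pipeline as in the question, the same option can be passed there as well; a minimal sketch, assuming the standard UseHangfireServer overload that accepts BackgroundJobServerOptions:

app.UseHangfireServer(new BackgroundJobServerOptions
{
    // A single worker processes jobs one at a time, so they run sequentially.
    WorkerCount = 1
});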
Also, it appears job chaining is a feature of the Hangfire Pro version.
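For what it's worth, simple continuations (as opposed to Pro batch continuations) are available in the open-source package in later versions via BackgroundJob.ContinueJobWith (formerly ContinueWith). A sketch reusing the names from the question; NotifyBatchPublished is a made-up follow-up method:

// Run the follow-up only after the first job has finished (hypothetical example).
var parentId = BackgroundJob.Enqueue(
    () => taskContainer.PublishBatch(batchId, accountingPeriodId, currentUser, filePath));

BackgroundJob.ContinueJobWith(
    parentId,
    () => taskContainer.NotifyBatchPublished(batchId));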

Correct implementation of Ninject filter binding with Web API

When binding a filter, should I use the BindFilter extension method included in Ninject.Web.WebApi, the new convention below, or both?
GlobalConfiguration.Configuration
.Filters.Add(new ApiValidationFilter(kernel.Get<IApiAuthenticationService>()));
I am using the latter right now but keep getting the error message below. I didn't get this in my project before adding the Web API filter.
The operation cannot be completed because the DbContext has been disposed.
I eventually had to resort to the following:
var apiRepository = new ApiRepository(new DatabaseFactory());
var apiAuthenticationService = new ApiAuthenticationService(apiRepository, new UnitOfWork(new DatabaseFactory()), new ValidationProvider(null));