How to keep Hangfire job state between runs?

I have a class that Hangfire runs as a recurring job. Its constructor is heavy: all initialization happens there. The recurring job then calls the Execute method to do the actual work using the initialized state.
Is there a way to keep this class instance between executions instead of initializing it from scratch every time?

Hangfire uses a job activator to get an instance of your class, so if you register the class as a singleton in a dependency injection container, the same instance will be used every time and the constructor will be called only once.
For example, in .NET Core you can register your class in Startup like this:
public void ConfigureServices(IServiceCollection services)
{
...
services.AddSingleton<MyClass>();
...
}
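With that registration in place, Hangfire resolves the same MyClass instance for every run, so the heavy constructor executes only once. A minimal sketch of scheduling the job against the singleton (the job id, Execute method, and schedule are illustrative):
// The singleton registered above is reused on every run; only Execute
// is called again, not the constructor.
RecurringJob.AddOrUpdate<MyClass>("my-recurring-job", x => x.Execute(), Cron.Hourly());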

Related

DryIoc ASP.Net Core 3.1 DbContext store in Scope Container

I am using DryIoc (latest release version) for dependency injection.
In my application (ASP.NET Core 3.1), I am using Entity Framework.
My AppDbContext inherits DbContext and implements IDisposable.
I also use the UnitOfWork pattern, and that class is disposable too.
These two objects are registered as Transient.
I followed the DryIoc documentation that explains how transient disposable objects are handled:
https://github.com/dadhi/DryIoc/blob/master/docs/DryIoc.Docs/ReuseAndScopes.md
I resolve my AppDbContext manually, and the same for my UnitOfWork. At the end I call the Dispose method.
But these two instances are not destroyed; they are kept in the singleton scope of the DryIoc container.
I ran some tests with JetBrains dotMemory.
My test calls a method 100 times:
call the controller
open the UnitOfWork
create the AppDbContext
call the database to get my data
close / dispose the objects.
At the end, I have 100 instances of AppDbContext and UnitOfWork in the scope of the container.
I tried many combinations when creating the container, but the result is always the same:
var container = new Container(rules =>
    rules.With(propertiesAndFields: request =>
            request.ServiceType.Name.EndsWith("Controller") ? PropertiesAndFields.Auto(request) : null)
        // .WithoutThrowOnRegisteringDisposableTransient()
        // .WithTrackingDisposableTransients()
        .WithoutThrowIfDependencyHasShorterReuseLifespan())
    .WithDependencyInjectionAdapter(services);
Result: memory grows quickly because these two kinds of objects are kept in the scope.
Even with .WithoutThrowOnRegisteringDisposableTransient() commented out, my code still works (I expected an exception to be thrown).
I also tried declaring these services as Scoped (per HTTP request), but that does not work because I don't create a scope for each query: an exception is thrown, even though ASP.NET Core automatically opens a scope for each web request.
Maybe I need to dispose of a scope at the end of each request?
How can I force the destruction of these objects?
Thanks to the author of the library, I found a solution:
https://github.com/dadhi/DryIoc/issues/261
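For reference, the usual pattern for keeping transient disposables out of the container's singleton scope is to register them as scoped and resolve them from an explicitly opened scope; disposing the scope disposes everything resolved from it. A minimal sketch using DryIoc's OpenScope (illustrative, not the exact fix from the linked issue):
container.Register<AppDbContext>(Reuse.Scoped);
container.Register<UnitOfWork>(Reuse.Scoped);
// One scope per unit of work; disposing the scope disposes the scoped
// services resolved from it, so nothing accumulates in the container.
using (var scope = container.OpenScope())
{
    var uow = scope.Resolve<UnitOfWork>();
    // ... query the database ...
} // AppDbContext and UnitOfWork are disposed here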

WebApplicationFactory and TestServer in integration tests for ASP.NET Core

I have two integration test classes defined as below. When I run the tests in ApiControllerIT, all of them pass. The same goes for FoundationControllerIT. However, when I run both together (by running the enclosing folder), the tests fail.
The error message is:
Scheduler with name 'DefaultQuartzScheduler' already exists.
I have this registration in my Startup.cs file:
services.AddSingleton<IHostedService, QuartzHostedService>();
So obviously this line causes the issue (if I remove it, running everything together works fine). My question is this: I'm a newbie coming from Java, so I don't have much insight into the .NET Core integration test framework, but my understanding is that a TestServer is created for each test class, e.g. one TestServer for ApiControllerIT and another for FoundationControllerIT. Is that incorrect? I'm just frustrated that I get the message:
Scheduler with name 'DefaultQuartzScheduler' already exists.
when I run two separate test classes. How can the TestServers interfere with each other?
public class ApiControllerIT : IClassFixture<WebApplicationFactory<Startup>>
{
private readonly WebApplicationFactory<Startup> _factory;
public ApiControllerIT(WebApplicationFactory<Startup> factory)
{
_factory = factory;
}
// tests ...
}
public class FoundationControllerIT : IClassFixture<WebApplicationFactory<Startup>>
{
private readonly WebApplicationFactory<Startup> _factory;
public FoundationControllerIT(WebApplicationFactory<Startup> factory)
{
_factory = factory;
}
// tests ...
}
I might be a bit late on this, but I also had this problem, and the answer might still be useful for others in the future.
The problem arises because WebApplicationFactory creates two instances of your Startup class. This is drastically different from a normal service start, where you have only one Startup instance.
(It might be a bit different in your case, but I am using a singleton instance to create and manage my schedulers throughout my application.)
WebApplicationFactory also calls ConfigureServices and Configure on both of them. So even your singletons exist twice, one per Startup instance. That by itself is not a problem, because each Startup instance has its own ServiceProvider. It only becomes a problem when (multiple) singleton instances access the same static state. In our case that is the SchedulerBuilder, which uses SchedulerFactory, which uses SchedulerRepository within Quartz; the repository is a >real< singleton and uses this code:
/// <summary>
/// Gets the singleton instance.
/// </summary>
/// <value>The instance.</value>
public static SchedulerRepository Instance { get; } = new SchedulerRepository();
This means that your independent singleton classes still share the same SchedulerRepository within Quartz, which explains why you get the exception.
Depending on what you are testing, you have a few options to tackle this issue:
1. The SchedulerRepository has a lookup method, public virtual Task<IScheduler?> Lookup(string schedName, CancellationToken cancellationToken = default), which you can use to check whether the scheduler was already created by another instance. You can then either reuse the existing scheduler or create another one with a different name.
2. Catch the exception and do nothing. (This only really makes sense if your tests do not need Quartz, which is probably unlikely, but I still wanted to list the option.)
I cannot tell you what makes the most sense in your case, as that depends entirely on what your application does and what you want to test, but I would probably stick with a variant of option 1.
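A minimal sketch of option 1, using Quartz.NET's SchedulerRepository and StdSchedulerFactory (the helper class and its name are illustrative):
using System.Collections.Specialized;
using System.Threading.Tasks;
using Quartz;
using Quartz.Impl;

public static class SchedulerProvider
{
    // Reuses a scheduler if another Startup instance already registered one
    // under this name; otherwise creates a new scheduler with that name.
    public static async Task<IScheduler> GetOrCreateAsync(string name)
    {
        var existing = await SchedulerRepository.Instance.Lookup(name);
        if (existing != null)
            return existing;
        var props = new NameValueCollection
        {
            ["quartz.scheduler.instanceName"] = name
        };
        return await new StdSchedulerFactory(props).GetScheduler();
    }
}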

Hangfire job enqueued using interface ignores specified job filters on class/method level

Consider the following class:
[AutomaticRetry(Attempts = 3)]
public class EmailSender : IEmailSender
{
[ErrorReporting(Attempts = 1)]
public async Task Send()
{
}
}
public interface IEmailSender
{
Task Send();
}
And we enqueue the job in this way:
backgroundJobClient.Enqueue<IEmailSender>(s => s.Send());
Just to mention, I use SimpleInjector and its Hangfire job activator.
First of all, the Attempts property from the AutomaticRetry attribute is not taken into account. The custom ErrorReporting attribute is not executed at all.
It seems Hangfire only checks the attributes defined on the registered type (the interface in my case), not on the instance type that will be resolved.
In my case IEmailSender is defined in a separate project. One solution would be to keep it together with EmailSender and the custom attribute implementations, and to define the attributes at the interface level, but I would rather not do that: my Hangfire jobs are processed in a Windows Service and enqueued by clients (through interfaces), so the clients have no need to know anything about the implementation.
Do you have any idea how I could solve this in a good way? Can we somehow configure those filters when creating the BackgroundJobServer in the Windows Service?
I solved it this way:
https://gist.github.com/rwasik/80f1dc1b7bbb8b8a9b47192f0dfd4664
If you have any other ideas, please let me know.
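As a coarser alternative to the gist, filters can also be registered globally on the server side when the Windows Service configures Hangfire; global filters apply to every job, regardless of the interface it was enqueued through. A minimal sketch (the retry count is illustrative, and storage configuration is omitted):
using Hangfire;

// Applies to all jobs this server processes, so the attribute does not
// need to be visible on the interface the client enqueues against.
GlobalJobFilters.Filters.Add(new AutomaticRetryAttribute { Attempts = 3 });
using (var server = new BackgroundJobServer())
{
    Console.WriteLine("Hangfire server started. Press ENTER to exit.");
    Console.ReadLine();
}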

Using Non Serializable objects in Activiti BPMN

I want to use an Activiti BPMN process for a database update task. My process is as follows:
Start Event -> Service Task 1 -> Service Task 2 -> Service Task 3 -> End Event
In the service implementation class of Service Task 1, I create a java.sql.Connection for a MySQL database. I need to pass the same Connection object to Service Task 2 and Service Task 3. Basically, those two classes will do some insertions into the database using the same Connection object.
I tried the following (dbConn is the class which contains the java.sql.Connection field dbConnection):
execution.setVariable("DBConn", dbConn);
But it throws an exception, since the connection object is not serializable:
"org.activiti.engine.ActivitiException: Couldn't serialize value"
So what is the best way to pass such non-serializable variables between the service tasks of a process? Or is there a way to define such shared objects for multiple service tasks in one place and use them within the service tasks (something like global variables for the process)?
You can use a ThreadLocal in Java to pass the connection object to the different service tasks.
For example, use a base class like the one below and extend each service task from it. Then you can set the dbConnection once and retrieve it wherever required via the get method.
import java.sql.Connection;

public class BaseServiceTask
{
    // Holds the connection for the current thread; this works because all
    // synchronous service tasks of the process run on the same thread.
    public static final ThreadLocal<Connection> localConnectionContext = new ThreadLocal<Connection>();

    public static void initDBConnector(Connection dbConn)
    {
        localConnectionContext.set(dbConn);
    }

    public static Connection getDBConnector()
    {
        return localConnectionContext.get();
    }
}
Notice:
This approach assumes that all service tasks are executed on the same thread, which is the case for this particular question, but as soon as you add a user task / timer (or any asynchronous logic) it is no longer a viable solution!
First, you should be aware that there is absolutely no way to serialize a connection instance once it has been created, according to this.
The reason is that a connection uses a network resource (such as a TCP/IP socket), which relies on the network stack of the machine and, ultimately, on the machine's hardware.
Which leaves you only this alternative:
1. Set up a bean that will store the connection instances for you; let's call it myConnectionRegistry. This bean should be scoped as a singleton and injected into all your Java delegates (service task implementations).
2. In the first task, create the connection and then register it into myConnectionRegistry with something like connectionRegistry.register(conn, wfId), which would add the connection instance to a private map.
3. In the subsequent tasks, retrieve the connection from that same bean using a method that fetches the connection object from the private map and throws an exception if no connection was registered.
4. Have a boundary event that fires on that exception and do whatever is necessary to ensure data integrity (the use case I described in my comment, for instance).
5. In the last service task, unregister your connection (you should also close it) in order to prevent memory leaks.
Make sure to take the DB pool etc. into account while designing your solution!
Cheers!

Ninject - Resolve instance per method call

I'm looking for a way to resolve an instance per method call.
Something like this:
public class ServiceAPI
{
    public void ServiceAction()
    {
        // Call a repository action, e.g.:
        Kernel.Get<RepositoryA>().Insert(new object());
    }
}

public class RepositoryA
{
    public void Insert(object a)
    {
        // Get the logger per service call?
        var logger = Kernel.Get<ILogger>();
    }
}
I want the logger instance to be created once per service call and then used throughout the repository.
I tried the Ninject.Extensions.NamedScope extension, but I haven't gotten it to work yet.
Is there any way to deal with this scenario?
It is not possible to achieve this with a scoping mechanism (InCallScope(), InNamedScope(...), ...).
Scoping is only relevant when Ninject is calling the constructor of a type.
Ninject cannot - ever - replace an instance that has already been passed to an object.
If you want to do this, you have to program it yourself.
Here are two design alternatives to achieve what you want:
1. Instantiate an object tree per method invocation. If there is some service infrastructure like WCF or Web API in place, there are probably hooks you can use to do so.
2. Replace the object that should be instantiated per method call with a proxy. The proxy can then use Ninject to create the target for each method call and execute the method on it (see the sketch below).
For proxying you can use tools like Castle DynamicProxy or LinFu. There is also Ninject.Extensions.Interception, which may be helpful.
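A minimal sketch of the proxy alternative using Castle DynamicProxy (the interceptor and interface names are illustrative):
using Castle.DynamicProxy;
using Ninject;

// Resolves a fresh target from the kernel for every intercepted call, so
// anything bound InCallScope() (e.g. the logger) is created once per call.
public class PerCallInterceptor<T> : IInterceptor where T : class
{
    private readonly IKernel _kernel;

    public PerCallInterceptor(IKernel kernel)
    {
        _kernel = kernel;
    }

    public void Intercept(IInvocation invocation)
    {
        var target = _kernel.Get<T>();
        invocation.ReturnValue = invocation.Method.Invoke(target, invocation.Arguments);
    }
}

// Usage: hand out the proxy instead of the real repository.
// var generator = new ProxyGenerator();
// IRepository repo = generator.CreateInterfaceProxyWithoutTarget<IRepository>(
//     new PerCallInterceptor<IRepository>(kernel));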