Log correlation id using Hangfire Filter

I am trying to use the Hangfire Filter and ILogger.BeginScope to log the correlation id. The requirement is that each job execution will have its own correlation id so that it's easier to group the logs of the same job execution together if something happens.
My approach is that in the IServerFilter.OnPerforming method, I first create a GUID, then use the code below to begin the scope:
logger.BeginScope(new Dictionary<string, object>
{
    ["CorrelationId"] = correlationId
});
The subsequent log statements in IServerFilter.OnPerforming have the correlation id attached. But unfortunately, during job execution, the log statements don't have the correlation id scope. The ILogger instance of the job class is resolved via constructor injection, and the ILogger instance in IServerFilter.OnPerforming is resolved using IServiceProvider.GetRequiredService.
I am wondering why that is, and how we can fix this issue. I am open to other approaches to logging a correlation id, as long as they work.

Finally figured it out. The reason is that the IDisposable scope created in IServerFilter.OnPerforming by a local ILogger instance goes out of scope; since nothing references it any more, the GC eventually collects it, and the locally created scope is lost. The solution is simple: keep an AsyncLocal<IDisposable> field on the filter to hold a reference to the created scope, and only dispose it after the job finishes.
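A minimal sketch of that idea, assuming Hangfire's IServerFilter and Microsoft.Extensions.Logging (the filter name is illustrative, not the exact code from the project):

using System;
using System.Collections.Generic;
using System.Threading;
using Hangfire.Server;
using Microsoft.Extensions.Logging;

public class CorrelationIdFilter : IServerFilter
{
    // AsyncLocal keeps the scope reachable (and flowing with the execution
    // context) for the whole job, so the GC cannot collect it early.
    private static readonly AsyncLocal<IDisposable> _scope = new AsyncLocal<IDisposable>();
    private readonly ILogger<CorrelationIdFilter> _logger;

    public CorrelationIdFilter(ILogger<CorrelationIdFilter> logger)
    {
        _logger = logger;
    }

    public void OnPerforming(PerformingContext context)
    {
        // One fresh correlation id per job execution.
        _scope.Value = _logger.BeginScope(new Dictionary<string, object>
        {
            ["CorrelationId"] = Guid.NewGuid().ToString()
        });
    }

    public void OnPerformed(PerformedContext context)
    {
        // Dispose the scope only after the job has finished.
        _scope.Value?.Dispose();
        _scope.Value = null;
    }
}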

Camunda: Set Assignee to all UserTasks of the process instance

I have a requirement where I need to set assignees for all the user tasks in a process instance as soon as the instance is created, based on the candidate group set on each user task.
I tried getting the user tasks using this:
Collection<UserTask> userTasks = execution.getBpmnModelInstance().getModelElementsByType(UserTask.class);
which is correct in a way, but I am not able to set the assignees. Also, it looks like this applies to the process definition itself and not the process instance.
Secondly, I tried getting them from a TaskQuery, which gives me only the next task and not all the user tasks inside the process.
Please help!
It does not work that way. A process flow can be simplified to "a token moves through the BPMN diagram" ... only the current position of the token is relevant. So naturally, the task list only gives you the current task, not what could happen afterwards ... which you cannot know anyway, because what if a gateway continues differently based on the task outcome? So drop playing with the BPMN meta model and focus on the runtime.
You have two choices to dynamically assign user tasks:
1.) in the modeler, instead of hard-assigning the task to "a-user", use an expression like ${taskAssignment.assignTask(task)}, where "taskAssignment" is a bean that provides a method returning the user as a String.
2.) add a taskListener on "create" to the task and set the assignee in the listener.
For option 2 you can use the Camunda Spring Boot events (or the (outdated) camunda-bpm-reactor extension) to register one central component rather than adding a listener to every task; see the sketch below.
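A minimal sketch of both options, assuming a Camunda Spring Boot setup with task eventing enabled (camunda.bpm.eventing.task: true) and each class in its own file; the bean name matches the expression above, while the user lookup is illustrative:

import org.camunda.bpm.engine.delegate.DelegateTask;
import org.camunda.bpm.engine.delegate.TaskListener;
import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Component;

// Option 1: a bean referenced from the modeler as ${taskAssignment.assignTask(task)}
@Component("taskAssignment")
public class TaskAssignment {
    public String assignTask(DelegateTask task) {
        // illustrative: derive the user from the task's candidate group
        return "some-user";
    }
}

// Option 2: one central component that reacts to the "create" event of every task
@Component
public class TaskAssignmentListener {
    @EventListener
    public void onTaskEvent(DelegateTask task) {
        if (TaskListener.EVENTNAME_CREATE.equals(task.getEventName())) {
            task.setAssignee("some-user"); // illustrative assignment logic
        }
    }
}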

RuntimeService.getVariables does not work because it can't find the process instance id

I'm new to Flowable and I'm trying to start a process instance with variables. params here is the Map<String, Object> that I'm using to start the process. It all goes well, but if I try to get my variables back it tells me
"execution 22f42f67-5f88-11e9-9df0-d46d6dbfea92 doesn't exist"
But if I search for it in my process instance list, it is there. This is what I do:
pi = runtimeService.startProcessInstanceById(processDefinitionId, params);
runtimeService.getVariables(pi.getId());
I'm stuck with this problem and I do not understand why it keeps doing this. What am I missing?
Flowable has the concept of a RuntimeService and a HistoryService. The first one contains only the runtime data (what is currently active); the second one has all the data. The runtime data is a subset of the history data.
The reason why you can't find the variables via the RuntimeService is that the process instance has already completed.
If you use the HistoryService instead, it works as expected.
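For example, a minimal sketch (assuming pi is the ProcessInstance from the question and a HistoryService instance is available):

import java.util.Map;
import org.flowable.engine.history.HistoricProcessInstance;

// The runtime query fails once the instance has completed, but history still has it.
HistoricProcessInstance hpi = historyService.createHistoricProcessInstanceQuery()
        .processInstanceId(pi.getId())
        .includeProcessVariables()
        .singleResult();
Map<String, Object> vars = hpi.getProcessVariables();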

Activiti BPMN - How to pass the username of the user who completed a task in variables/expressions?

I am very new to Activiti BPMN. I am creating a flow diagram in Activiti. I am looking for how the username of the user who completed a task can be passed into the shell task arguments, so that I can fetch it and save in the database which user completed that task.
Any help would be highly appreciated.
Thanks in advance...
Here's something I prepared for Java developers, based on (I think) a blog post I saw.
Edit: https://community.alfresco.com/thread/224336-result-variable-in-javadelegate
RESULT VARIABLE
Option (1) – use expression language (EL) in the XML
<serviceTask id="serviceTask"
             activiti:expression="#{myService.toUpperCase(myVar)}"
             activiti:resultVariable="myVar" />
Java
public class MyService {
    public String toUpperCase(String val) {
        return val.toUpperCase();
    }
}
The returned String is assigned to the variable named by activiti:resultVariable.
HACKING THE DATA MODEL DIRECTLY
Option (2) – use the execution environment
Java
import org.activiti.engine.delegate.DelegateExecution;
import org.activiti.engine.delegate.JavaDelegate;

public class MyService implements JavaDelegate {
    public void execute(DelegateExecution execution) throws Exception {
        String myVar = (String) execution.getVariable("myVar");
        execution.setVariable("myVar", myVar.toUpperCase());
    }
}
By contrast, here we are being passed an 'execution', and we are pulling values out of it, twiddling them, and putting them back.
This is somewhat analogous to a Servlet taking values it is passed in the HttpServletRequest and then, based on them, doing different things in the response. (A stronger analogy would be a servlet Filter.)
So in your particular case (depending on how you are invoking the shell script), using the Expression Language (EL) might be simplest and easiest.
Of course the value you want to pass has to be one that the process knows about (otherwise how can it pass a value it doesn't have a variable for?)
Hope that helps. :D
Usually in BPM engines you have a way to hook a listener into these kinds of events. In Activiti, if you are embedding it inside your service, you can add an extra event listener and then record the taskCompleted events, which contain the user who completed the task.
https://www.activiti.org/userguide/#eventDispatcher
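For example, a minimal sketch of such a listener (saveToDb is a hypothetical persistence helper; register the listener via the engine configuration's eventListeners property or runtimeService.addEventListener):

import org.activiti.engine.delegate.event.ActivitiEntityEvent;
import org.activiti.engine.delegate.event.ActivitiEvent;
import org.activiti.engine.delegate.event.ActivitiEventListener;
import org.activiti.engine.delegate.event.ActivitiEventType;
import org.activiti.engine.task.Task;

public class TaskCompletedListener implements ActivitiEventListener {

    @Override
    public void onEvent(ActivitiEvent event) {
        if (event.getType() == ActivitiEventType.TASK_COMPLETED) {
            // the entity of a TASK_COMPLETED event is the completed task
            Task task = (Task) ((ActivitiEntityEvent) event).getEntity();
            saveToDb(task.getId(), task.getAssignee());
        }
    }

    @Override
    public boolean isFailOnException() {
        // do not fail the task completion if our bookkeeping fails
        return false;
    }

    private void saveToDb(String taskId, String user) {
        // placeholder: persist the task id and user however your application stores audit data
    }
}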
Hope this helps.
I have used activiti:taskListener from the Activiti app. You need to configure the properties below:
1. I changed the properties in the task listener.
2. I used a JavaScript variable to hold the task.assignee value.
Code snippet:

Grails transactions (not GORM based but using Groovy Sql)

My Grails application is not using GORM but instead uses my own SQL and DML code to read and write the database (The database is a huge normalized legacy one and this was the only viable option).
So, I use the Groovy Sql Class to do the job. The database calls are done in Services that are called in my Controllers.
Furthermore, my datasource is declared via DBCP in Tomcat, so it is not declared in DataSource.groovy.
My problem is that I need to write some transaction code, that is, open a transaction, commit after a series of successful DML calls, or roll the whole thing back in case of an error.
I thought that it would be enough to use groovy.sql.Sql#commit() and groovy.sql.Sql#rollback() respectively.
But in these methods' Javadocs, the Groovy Sql documentation clearly states:
If this SQL object was created from a DataSource then this method does nothing.
So, I wonder: What is the suggested way to perform transactions in my context?
Even disabling autocommit in the datasource declaration seems to be irrelevant, since those two methods "... do nothing".
The Groovy Sql class has withTransaction
http://docs.groovy-lang.org/latest/html/api/groovy/sql/Sql.html#withTransaction(groovy.lang.Closure)
public void withTransaction(Closure closure)
throws java.sql.SQLException
Performs the closure within a transaction using a cached connection. If the closure takes a single argument, it will be called with the connection, otherwise it will be called with no arguments.
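A minimal sketch of how that could look in a service method (table and variable names are illustrative):

// sql is a groovy.sql.Sql instance created from the DataSource;
// both inserts commit together, and any exception rolls both back
sql.withTransaction {
    sql.executeInsert("insert into mytable1 (col1, col2) values (?, ?)", [val1, val2])
    sql.executeInsert("insert into mytable2 (col1, col2) values (?, ?)", [val1, val2])
}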
Give it a try.
Thanks James. I also found the following solution, reading http://grails.org/doc/latest/guide/services.html:
I declared my service as transactional
static transactional = true
This way, if an Error occurs, the previously performed DMLs will be rolled back.
For each DML statement, I catch the exception and throw an Error with a descriptive message. For example:
try {
    sql.executeInsert("""
        insert into mytable1 (col1, col2) values (${val1}, ${val2})
    """)
} catch (e) {
    throw new Error("you can't enter empty val1 or val2")
}

try {
    sql.executeInsert("""
        insert into mytable2 (col1, col2) values (${val1}, ${val2})
    """)
} catch (e) {
    throw new Error("you can't enter empty val1 or val2. The previous insert is rolled back!")
}
Final gotcha! The service, when called from the controller, must be wrapped in a try/catch, as follows:
try {
    myService.myMethod(params)
} catch (e) {
    // http://jts-blog.com/?p=9491
    Throwable t = e instanceof UndeclaredThrowableException ? e.undeclaredThrowable : e
    // use t.toString() to send info to the user (use in view)
    // redirect / forward / render etc.
}

Unit Of Work for non-trivial CRUD operations for multiple Repositories

I have seen the unit of work pattern implemented with something like the following code:
private HashSet<object> _newEntities = new HashSet<object>();
private HashSet<object> _updatedEntities = new HashSet<object>();
private HashSet<object> _deletedEntities = new HashSet<object>();
and then there are methods for adding entities to each of these HashSets.
On Commit, the UnitOfWork creates some Mapper instance for each entity and calls the Insert, Update, or Delete method on that imaginary Mapper.
The problem for me with this approach is: the names of the Insert, Update, and Delete methods are hard-coded, so it seems such a UnitOfWork is capable only of simple CRUD operations. But what if I need the following usage:
UnitOfWork uow = new UnitOfWork();
uow.Start();
ARepository arep = new ARepository();
BRepository brep = new BRepository();
arep.DoSomeNonSimpleUpdateHere();
brep.DoSomeNonSimpleDeleteHere();
uow.Commit();
Now the three-HashSet approach fails, because then I could register the A and B entities only for Insert, Update, or Delete operations, but I need those custom operations now.
So it seems that I cannot always stack the Repository operations and then perform them all with UnitOfWork.Commit();
How can I solve this problem? The first idea is that I could store references to the methods
arep.DoSomeNonSimpleUpdateHere();
brep.DoSomeNonSimpleDeleteHere();
in the UoW instance and execute them on uow.Commit(), but then I would also have to store all the method parameters. That sounds complicated.
The other idea is to make the Repositories completely UoW-aware: in DoSomeNonSimpleUpdateHere I can detect that there is a UoW running, so instead of performing DoSomeNonSimpleUpdateHere I save the operation parameters and a 'pending' status in some stack on the Repository instance (obviously I cannot save everything in the UoW, because the UoW shouldn't depend on concrete Repository implementations), and then I register the involved Repository in the UoW instance. When the UoW's Commit is called, it opens a transaction and calls something like Flush() on each pending Repository. But now every Repository method needs some machinery for UoW detection and for postponing operations until Commit().
So the short question is - what is the easiest way to register all the pending changes in multiple repositories in UoW and then Commit() them all in a single transaction?
It would seem that even complicated updates can be broken down into a series of modifications to one or more DomainObjects. Calling DoSomeNonSimpleUpdateHere() may modify several different DomainObjects, which would trigger corresponding calls to UnitOfWork.registerDirty(DomainObject) for each object. In the sample code below, I have replaced the call to DoSomeNonSimpleUpdateHere with code that removes inactive users from the system.
UnitOfWork uow = GetSession().GetUnitOfWork();
uow.Start();

UserRepository repository = new UserRepository();
UserList users = repository.GetAllUsers();
foreach (User user in users)
{
    if (!user.IsActive())
        users.Remove(user);
}

uow.Commit();
If you are concerned about having to iterate over all users, here is an alternative approach that uses a Criteria object to limit the number of users pulled from the database.
UnitOfWork uow = GetSession().GetUnitOfWork();
uow.Start();

Repository repository = new UserRepository();
Criteria inactiveUsersCriteria = new Criteria();
inactiveUsersCriteria.equal(User.ACTIVATED, 0);
UserList inactiveUsers = repository.GetMatching(inactiveUsersCriteria);
inactiveUsers.RemoveAll();

uow.Commit();
The UserList.Remove and UserList.RemoveAll methods will notify the UnitOfWork of each removed User. When UnitOfWork.Commit() is called, it will delete each User found in its _deletedEntities. This approach allows you to create arbitrarily complex code without having to write SQL queries for each special case. Using batched updates will be useful here, since the UnitOfWork will have to execute multiple delete statements instead of only one statement for all inactive users.
The fact that you have this problem suggests that you aren't using the Repository pattern as such, but something more like multiple table data gateways. Generally, a repository is for loading and saving an aggregate root. As such, when you save an entity, your persistence layer saves all the changes in that aggregate root entity instance's object graph.
If, in your code, you have roughly one "repository" per table (or entity), you're probably actually using a table data gateway or a data transfer object. In that case, you probably need a means of passing a reference to the active transaction (or the Unit of Work) into each Save() method.
In his DDD book, Evans recommends leaving transaction control to the client of the repository, and I would agree: controlling transactions inside the repository is not a good practice, though it may be harder to avoid if you're actually using a table data gateway pattern.
I finally found this one:
http://www.goeleven.com/Blog/82
The author solves the problem using three Lists for update/insert/delete, but he does not store entities there. Instead, repository delegates and their parameters are stored, and on Commit the author invokes each registered delegate. With this approach I could register even complex repository methods and so avoid a separate TableDataGateway.
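A minimal sketch of that delegate-registration idea (the names are illustrative, not the blog author's exact code):

using System;
using System.Collections.Generic;
using System.Transactions;

public class UnitOfWork
{
    // pending operations are stored as closures, which capture their own parameters
    private readonly List<Action> _pendingOperations = new List<Action>();

    public void Register(Action operation)
    {
        _pendingOperations.Add(operation);
    }

    public void Commit()
    {
        // run every registered operation inside one transaction
        using (var scope = new TransactionScope())
        {
            foreach (var operation in _pendingOperations)
                operation();
            scope.Complete();
        }
        _pendingOperations.Clear();
    }
}

Usage would then look like uow.Register(() => arep.DoSomeNonSimpleUpdateHere()); followed by uow.Register(() => brep.DoSomeNonSimpleDeleteHere()); and finally uow.Commit();. Since the closures capture any parameters, the UoW itself stays independent of concrete repository implementations.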