runtimeService.getVariables does not work because it can't find the process instance id - Flowable

I'm new to Flowable and I'm trying to start a process instance with variables. params here is the Map<String, Object> that I'm using to start the process. It all goes well, but if I try to get my variables back it tells me
"execution 22f42f67-5f88-11e9-9df0-d46d6dbfea92 doesn't exist"
But if I search for it in my process instances list, it is there. This is what I do:
ProcessInstance pi = runtimeService.startProcessInstanceById(processDefinitionId, params);
runtimeService.getVariables(pi.getId());
I'm stuck with this problem and I do not understand why it keeps doing this. What am I missing?

Flowable has the concepts of a RuntimeService and a HistoryService. The first one contains only runtime data (what is currently active), while the second one has all the data; the runtime data is a subset of the history data.
The reason why you can't find the variables via the RuntimeService is that the process instance has already completed.
If you use the HistoryService instead, it will work as expected.
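For illustration, a minimal sketch in Java (package names assume Flowable 6; historyService is the engine's HistoryService, obtained the same way as runtimeService):
import java.util.List;
import org.flowable.engine.runtime.ProcessInstance;
import org.flowable.variable.api.history.HistoricVariableInstance;
...
ProcessInstance pi = runtimeService.startProcessInstanceById(processDefinitionId, params);
// Historic variable instances survive process completion, unlike runtime variables,
// which is why this query succeeds where runtimeService.getVariables fails.
List<HistoricVariableInstance> variables = historyService
        .createHistoricVariableInstanceQuery()
        .processInstanceId(pi.getId())
        .list();
for (HistoricVariableInstance variable : variables) {
    System.out.println(variable.getVariableName() + " = " + variable.getValue());
}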

Related

Enable Impala Impersonation on Superset

Is there a way to make the logged-in user (on Superset) run the queries on Impala?
I tried to enable the "Impersonate the logged on user" option on Databases, but with no success: all the queries run on Impala as the superset user.
I'm trying to achieve the same thing! This will not completely answer the question, since it still does not fully work, but I want to share my research in order to maybe help another soul trying to use this tool outside very basic use cases.
I went deep into the code and found out that impersonation is not implemented for Impala, so you cannot achieve this from the UI. I found this PR https://github.com/apache/superset/pull/4699 that, for whatever reason, was never merged into the codebase, and I tried to copy and paste its code into my Superset version (1.1.0), but it didn't work. By adding some logs I can see that the impersonation configuration is updated, but the actual Impala query still runs as the user I used to start the process.
As you can imagine, I am a complete noob at this. However, I found out that the impersonation happens when you create a cursor, and there is a constructor parameter through which you can pass the impersonation configuration.
I managed to implement impersonation correctly (at least to my understanding) for the SQL Lab part.
In sql_lab.py you have to add the following lines to the execute_sql_statements method:
with closing(engine.raw_connection()) as conn:
    # closing the connection closes the cursor as well
    cursor = conn.cursor(**database.cursor_kwargs)
where cursor_kwargs is defined in db_engine_specs/impala.py as follows:
@classmethod
def get_configuration_for_impersonation(cls, uri, impersonate_user, username):
    logger.info(
        'Passing Impala execution_options.cursor_configuration for impersonation')
    return {'execution_options': {
        'cursor_configuration': {'impala.doas.user': username}}}

@classmethod
def get_cursor_configuration_for_impersonation(cls, uri, impersonate_user,
                                               username):
    logger.debug('Passing Impala cursor configuration for impersonation')
    return {'configuration': {'impala.doas.user': username}}
Finally, in models/core.py you have to add the following bits to the get_sqla_engine method:
params = extra.get("engine_params", {})  # that was already there, just to help you find the right spot
self.cursor_kwargs = self.db_engine_spec.get_cursor_configuration_for_impersonation(
    str(url), self.impersonate_user, effective_username)  # this is the line I added
...
params.update(self.get_encrypted_extra())  # already there
# new stuff
configuration = {}
configuration.update(
    self.db_engine_spec.get_configuration_for_impersonation(
        str(url),
        self.impersonate_user,
        effective_username))
if configuration:
    params.update(configuration)
As you can see, I just shamelessly pasted the code from the PR. However, as I already said, this only (kind of) works for SQL Lab. The dashboards use an entirely different way of querying Impala that I have not yet figured out.
This means that queries for the dashboards are handled in a different way, and there isn't an equivalent of
with closing(engine.raw_connection()) as conn:
    # closing the connection closes the cursor as well
    cursor = conn.cursor(**database.cursor_kwargs)
My gut (and debugging) feeling is that you need to first understand the SQLAlchemy part and extend a new ImpalaEngine class that uses a custom cursor with the impersonation configuration. Or something like that; however, it is not as simple (if we want to call this simple) as the sql_lab part. So the trick is to find out where the query is executed and create a cursor with the impersonation configuration. Easy, isn't it?
I hope this sheds some light for you and the others who have this issue. Let me know if you found another way to solve it, or if this comment was useful.
Update: something really useful
A colleague of mine successfully implemented impersonation with Impala without touching anything Superset-related, instead working directly with the impyla library. A PR was opened with the code changes. You can apply the patch directly in the impyla source used by Superset; you have to edit both dbapi.py and hiveserver2.py.
As a reminder: we are still testing this, and we do not know whether it works with different accounts using the same Superset instance.

Camunda: Set assignee on all UserTasks of the process instance

I have a requirement where I need to set assignees for all the user tasks in a process instance as soon as the instance is created, based on the candidate group set on each user task.
I tried getting the user tasks using this:
Collection<UserTask> userTasks = execution.getBpmnModelInstance().getModelElementsByType(UserTask.class);
which is correct in a way, but I am not able to set the assignees. Also, it looks like this would apply to the process definition itself and not to the process instance.
Secondly, I tried getting them from the task query, which gives me only the next task and not all the user tasks inside the process.
Please help!
It does not work that way. A process flow can be simplified to "a token moves through the BPMN diagram": only the current position of the token is relevant. So naturally, the task list only gives you the current task, not what could happen afterwards, which you cannot know anyway: what if you had a gateway that continues differently based on the task outcome? So drop playing with the BPMN meta model and focus on the runtime.
You have two choices to dynamically assign user tasks:
1.) in the modeler, instead of hard-assigning the task to "a-user", use an expression like ${taskAssignment.assignTask(task)}, where "taskAssignment" is a bean that provides a method returning the user as a String.
2.) add a task listener on the "create" event to the task and set the assignee in the listener.
For option 2 you can use the Camunda Spring Boot events (or the (outdated) camunda-bpm-reactor extension) to register one central component rather than adding a listener to every task; see the sketch below.
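For illustration, a minimal sketch of option 2 in Java: a TaskListener for the "create" event that derives the assignee from the task's candidate groups. The resolveUserForGroup(...) helper is hypothetical and stands in for your own user-management lookup:
import org.camunda.bpm.engine.delegate.DelegateTask;
import org.camunda.bpm.engine.delegate.TaskListener;
import org.camunda.bpm.engine.task.IdentityLink;

public class CandidateGroupAssignmentListener implements TaskListener {

    @Override
    public void notify(DelegateTask task) {
        // Inspect the candidate identity links created from the task's
        // candidate-group definition and pick a concrete user.
        for (IdentityLink link : task.getCandidates()) {
            if (link.getGroupId() != null) {
                task.setAssignee(resolveUserForGroup(link.getGroupId()));
                return;
            }
        }
    }

    // Hypothetical lookup: map a candidate group to a concrete user id.
    private String resolveUserForGroup(String groupId) {
        return "demo";
    }
}
Register it in the modeler as a task listener on the "create" event, or centrally via the Spring Boot task events mentioned above.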

SOAP UI - Set a node value in all test steps' requests of all test cases in a test suite

I'm trying to set a node value in the request XML of all test steps of all test cases in a test suite.
The Groovy script is in the first test case, and I get an error (XmlException: Unexpected Element: CDATA) as soon as the script tries to edit the same tag in the second test case.
def groovyUtils = new com.eviware.soapui.support.GroovyUtils(context)
def AlltestCases = testRunner.testCase.testSuite.project.testSuites[testRunner.testCase.testSuite.name]
0.upto(AlltestCases.getTestCaseCount()) {
    AlltestCases.getTestCaseList().each {
        it.getTestStepList().each {
            if (it.getClass() == com.eviware.soapui.impl.wsdl.teststeps.WsdlTestRequestStep) {
                if (it.getName().toLowerCase().contains("verify")) {
                    step = groovyUtils.getXmlHolder("${it.getName()}" + "#Request")
                    step.setNodeValue("//*:Name/text()", "\$" + "{#TestSuite#NAME_ID}")
                    step.updateProperty()
                }
            }
        }
    }
}
If I understand your question correctly, you want to "inject" a value into a number of requests?
I would advise against that. I would rather set some project property, and then let each of the requests simply use that particular variable.
The most important reason for me to prefer this approach is that it makes it more transparent what is happening in your test case, should someone else at some point (for instance if you get a different job) need to take over your SoapUI projects. Currently you have requests which hold values that appear to come out of nowhere. I would advise making it clear that the request contains some sort of variable, and where that variable comes from.
Besides, you will then also get more flexibility. If a few requests at some point change the path or name of the entity you want to change, you will need to make your code above handle that kind of situation. Not so if you are merely using a variable in each of your requests.

Set variables in JavaScript job entry at root level

I need to set variables at root scope in one job so they can be used in a different job. The first job has a JavaScript job entry with the statements:
parent_job.setVariable("customers_full_path", "C:\\customers22.csv", "r");
true;
But the compilation fails with:
Couldn't compile javascript:
org.mozilla.javascript.EvaluatorException: Can't find method
org.pentaho.di.job.Job.setVariable(string,string,string). (#2)
How do I set a variable at root level in a JavaScript job entry?
Sorry if this comes across as passive-aggressive, but:
I don't know if you are new to Pentaho, but the most common mistake for new users with previous programming knowledge is to be sort of 'addicted' to known methods, and as such you are using JavaScript for functionality that is built into the tool. Both transformations (KTR) and jobs (KJB) have a "Set Variables" step/entry for exactly this; you can manipulate this better in a KTR.
JavaScript steps slow down the flow considerably, so try to stay away from them as much as possible.
EDIT:
Reading this article, it seems the only thing you're doing wrong is the actual syntax of the command.
Correct usage:
parent_job.setVariable("variable_name", "value");
For your case: parent_job.setVariable("customers_full_path", "C:\\customers22.csv");
The command you described has 3 parameters, when it should have 2: first the variable name, then the value (the error message itself shows that Job.setVariable takes two strings, not three). If you have more than one variable to set, call the command once per variable. Try it out and see if it works.

Making sure data is loaded

I use the following command to load data.
/home/bigquery/bq load --max_bad_record=30000 -F '^' company.junelog entry.gz country:STRING,telco_name:STRING,datetime:STRING, ...
It has happened that the data was still loaded even though I got a non-zero return code. How do I make sure whether the command succeeded or not? Checking the return code does not seem to help: there have been times when I loaded the same file again because I got an error, but the data was already available in BigQuery.
You can use bq show -j <job_id> on the load job and check the job status.
If you are writing code to do the load and therefore don't know the job id, you can pass a job id into the load operation (as long as it is unique), so you will know which job to check.
For instance you can run
/home/bigquery/bq load --job_id=some_unique_job_id --max_bad_record=30000 -F '^' company.junelog entry.gz country:STRING,telco_name:STRING,datetime:STRING, ...
then
/home/bigquery/bq show -j some_unique_job_id
Note that if you are creating new tables for every load (as opposed to appending), you could use the write disposition WRITE_EMPTY to make sure you only perform the load if the table is empty, thus preventing adding the same data twice. This isn't directly supported in bq.py, but you could use the underlying bigquery_client.py to make this call, or use the REST API directly.
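For illustration only, here is a minimal sketch using the google-cloud-bigquery Java client rather than the bq CLI discussed above; the job id, dataset, table, and GCS URI are placeholders, and it assumes the source file has been staged in Google Cloud Storage. It sets an explicit job id plus WRITE_EMPTY, then checks the job status instead of relying on a process exit code:
import com.google.cloud.bigquery.*;

public class LoadWithExplicitJobId {
    public static void main(String[] args) throws InterruptedException {
        BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

        // An explicit, unique job id lets us look the job up later,
        // even if the process that started it crashed.
        JobId jobId = JobId.of("some_unique_job_id");

        LoadJobConfiguration config = LoadJobConfiguration
                .newBuilder(TableId.of("company", "junelog"), "gs://my-bucket/entry.gz")
                .setFormatOptions(CsvOptions.newBuilder().setFieldDelimiter("^").build())
                // Fail the load if the table already contains data,
                // preventing the same file from being loaded twice.
                .setWriteDisposition(JobInfo.WriteDisposition.WRITE_EMPTY)
                .build();

        Job job = bigquery.create(JobInfo.of(jobId, config));
        job = job.waitFor(); // poll until the job finishes

        // The job status, not the exit code, is the source of truth.
        if (job == null || job.getStatus().getError() != null) {
            System.out.println("Load failed: "
                    + (job == null ? "job no longer exists" : job.getStatus().getError()));
        } else {
            System.out.println("Load succeeded.");
        }
    }
}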