How to get results from a completed process instance in Flowable?

I have a simple Service task that sets a variable "foo" to "bar".
When a process contains just that one task, and I initiate it using "runtime/process-instances", I can see variable "foo" in the response.
When I add a user task before the service task, and finish the task using action: complete on "runtime/tasks", I just get a 200 result code.
How do I get the resulting variables?

Flowable has two sets of services:
RuntimeService - which provides access to runtime data only.
HistoryService - which provides access to all available data (both runtime and completed).
In order to access completed tasks / processes you'll need to use the history service. In the REST API those endpoints are under /history.
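For example, two of the history endpoints that return variables (a sketch; the exact parameter names can differ slightly between Flowable versions):
GET history/historic-variable-instances?processInstanceId={processInstanceId}
GET history/historic-process-instances?processInstanceId={processInstanceId}&includeProcessVariables=true
The first lists the variables of a (possibly completed) process instance; the second returns the historic process instance itself with its variables included.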

Counting the number of response codes in JMeter 4.0

I run some load tests (all endpoints) and we have a known issue in our code: if multiple POST requests are sent at the same time we get a duplicate error based on a timestamp field in our database.
All I want to do is count the timeouts (based on the message received, "Service is not available. Request timeout") in a variable and accept this as normal behavior (don't fail the tests).
For now I've added a Response Assertion for this (in order to keep the tests running), but I cannot tell if or how many timeouts actually happen.
How can I count this?
Thank you
I would recommend doing this as follows:
Add a JSR223 Listener to your Test Plan
Put the following code into the "Script" area:
if (prev.getResponseDataAsString().contains('Service is not available. Request timeout')) {
prev.setSampleLabel('False negative')
}
That's it: if a sampler's response body contains Service is not available. Request timeout, JMeter will change its label to False negative.
You can even mark it as passed by adding a prev.setSuccessful(true) line to your script. See the Apache Groovy - Why and How You Should Use It article for more information on what else you can do with Groovy in JMeter tests.
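If you also need an actual count of how many timeouts occurred, a minimal sketch for the same JSR223 Listener could look like the following (the property name timeoutCount is just an assumption, use whatever name you like):
if (prev.getResponseDataAsString().contains('Service is not available. Request timeout')) {
    prev.setSampleLabel('False negative')
    // treat the known timeout as a pass so the test keeps going
    prev.setSuccessful(true)
    // keep a running total in a JMeter property so it can be read after the run
    int soFar = (props.getProperty('timeoutCount', '0') as int) + 1
    props.put('timeoutCount', String.valueOf(soFar))
    log.info('Known timeouts so far: ' + soFar)
}
The increment is not atomic, so under heavy concurrency the figure may drift a little, but it is usually close enough; afterwards you can read the value with the __P(timeoutCount) function or find it in jmeter.log.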
If you just need to find out the count based on the response message, then you can save the performance results to a CSV file using a Simple Data Writer (configured for CSV only) and then filter the CSV by the response message to get the required count. Or you can use the "Display only errors" option to get all the errors and then filter on the expected error message.
If you need to find out at runtime, then you can use an Aggregate Report listener and its "Errors" checkbox to get the count of failures, but this will include other failures as well.
But if you need the count at runtime in order to use it later, then that is a different case. I am assuming that it is not the case.
Thanks,

JMeter: variable scope - How to use different random values for the same request

I want to use 2 variables for random values within the same request.
I defined both in User Parameters as follows: var1=${__Random(1,100)}; var2=${__Random(1000,2000)} (I also checked "Update once per iteration")
I have the requests:
Request1: GET user/${var1}
Request2: GET user/${var2}
During run-time, when it gets to Request2, var2 equals var1!
How do I fix that?
Well, User Parameters is a PreProcessor, so you should put it as a child of your HTTP Request in order to get the correct behavior. You can use a Debug Sampler and View Results Tree listener combination to validate variable values (see the How to Debug your Apache JMeter Script article for more details).
I would recommend discarding the User Parameters and injecting the __Random() function directly into your HTTP Request sampler Path, like:
/user/${__Random(1,100,var1)}
/user/${__Random(1000,2000,var2)}
This is a simpler way to generate random numbers and have them stored in JMeter Variables.

Mule Salesforce Create

With the Mule Salesforce Connector using sfdc:create, the documentation says we can send up to 200 records at a time (a single round-trip call). If that's the case, what benefit do we get from using a Mule Batch flow with Batch Commit and Salesforce (sfdc:create) inside the Batch Commit?
Example create below.
<sfdc:create type="Account">
    <sfdc:objects>
        <sfdc:object>
            <Name>MuleSoft</Name>
            <BillingStreet>I live here</BillingStreet>
            <BillingCity>My City</BillingCity>
            <BillingState>MA</BillingState>
            <BillingPostalCode>32423</BillingPostalCode>
            <BillingCountry>US</BillingCountry>
        </sfdc:object>
        .......200 such objects
    </sfdc:objects>
</sfdc:create>
Please keep in mind that in Salesforce, the SOAP API limit for a client application is up to 200 records in a single create() call. If a create request exceeds 200 objects, the entire operation fails.
Please refer to the reference documentation from Salesforce.

How to test if my application does not send a message with SoapUI?

I have a Java EE application that processes a bunch of messages from different interfaces. (Some of the) functional tests are performed with SoapUI.
In one of the cases I have created a SOAP VirtResponse step that receives the output of my application and checks the values in the received message. The test has the following steps:
Datasource step to load input and expected output (multiple scenarios)
JMS step to send input to application on a specific interface.
SOAP step to receive application output on another interface (no validation).
Groovy script to check results (e.g. no message received, or message received with specific values). See below for the script; I couldn't get it to work in between the list items.
Datasource loop to step 2.
There is a scenario (well, there are more scenarios) in which the input should not generate an output. I want to check that my application did not send an output in such a scenario.
Strategy 1:
I have added a fourth Groovy step in which I validate that the result of step 3 is an empty string. To make the test pass, I had to disable the checkbox in TestCase Options that says "Fail TestCase on Error". This works when the tests execute happily. However, if an error does occur (e.g. the application sent a response when it was not supposed to, or the application sent a wrong response), the entire TestCase is set to passed (because of the checkbox) and only the specific step deep down in the logs is failed. This makes it hard to see the results of the entire test suite.
Attempted strategy 2:
I started out by adding a conditional test step that skips step 3 based on the input. However, that way I no longer validate that my application does not send a message when it is not supposed to.
What is the best way to check these kinds of scenarios?
EDITS:
The entire testcase should fail if one of the scenarios from the datasource fails. (It is not a problem if this means that some scenarios were not evaluated yet)
Groovy script:
// Get flag from datasource that indicates if a message should be received
def soapBerichtOntvangen = context.expand('${DataSourceUISBerichten#SoapBerichtOntvangen}' );
// Get the message from the previous step.
def receivedSoapRequest = context.expand('${SOAPVirtResponse#Request#declare namespace out=\'http://application/messageprocessing/outbound/\'; out:SendOutboundMessage[1]/Message[1]}')
// If we don't expect a message the flag is set to "N"
if (soapBerichtOntvangen == "N") {
    assert(receivedSoapRequest == "")
} else if (receivedSoapRequest != null && receivedSoapRequest != "") {
    def slurpedMessage = new XmlSlurper().parseText(receivedSoapRequest)
    def messageType = slurpedMessage.MessageHeader.MessageReference.MessageType
    // Get expected values from context
    def verwachtMessageType = context.expand('${DataSourceOutboundCIBerichten#messageType}')
    assert(String.valueOf(messageType) == verwachtMessageType)
} else {
    // Should have received a message, but none came up.
    assert(false)
}

Batch Request to Google API making calls before the batch executes in Python?

I am working on sending a large batch request to the Google API through the Admin SDK, which will add members to certain groups based upon the groups on our in-house servers (a realignment script). I am using Python, and to access the Google API I am using the apiclient library. When I create my service and batch objects, the creation of the service object requests a URL.
batch_count = 0
batch = BatchHttpRequest()
service = build('admin', 'directory_v1')
logs
INFO:apiclient.discovery:URL being requested:
https://www.googleapis.com/discovery/v1/apis/admin/directory_v1/rest
which makes sense as the JSON object returned by that HTTP call is used to build the service object.
Now I want to add multiple requests into the batch object, so I do this:
for email in add_user_list:
    if batch_count != 999:
        add_body = dict()
        add_body[u'email'] = email.lower()
        for n in range(0, 5):
            try:
                batch.add(service.members().insert(groupKey=groupkey, body=add_body), callback=batch_callback)
                batch_count += 1
                break
            except HttpError, error:
                logging_message('Quota exceeded, waiting to retry...')
                time.sleep(2 ** n)
                continue
Every time it iterates through the outermost for loop it logs (group address redacted)
INFO:apiclient.discovery:URL being requested:
https://www.googleapis.com/admin/directory/v1/groups/GROUPADDRESS/members?alt=json
Isn't the point of the batch to not send out any calls to the API until the batch request has been populated with all the individual requests? Why does it make the call to the API shown above every time? Then when I batch.execute() it carries out all the requests (which is the only area I thought would be making calls to the API?)
For a second I thought that the call was to construct the object that is returned by .members() but I tried these changes:
service = build('admin', 'directory_v1').members()
batch.add(service.insert(groupKey=groupkey, body=add_body), callback=batch_callback)
and still got the same result. This is an issue because it doubles the number of requests to the API, and I have real quota concerns at the scale this is running at.