How to test Event Listeners in Camunda?

I have used Execution and Task Listeners in my process. How can I unit test them with JUnit in Camunda?

You could, for example, use the Camunda Model API and write a unit test for your execution listener.
The unit test could look like the following:
@Test
public void testEndExecutionListenerIsCalledOnlyOnce() {
    // given a process with an end execution listener on the user task
    BpmnModelInstance modelInstance = Bpmn.createExecutableProcess("process")
        .startEvent()
        .userTask()
        .camundaExecutionListenerClass(ExecutionListener.EVENTNAME_END, TestingExecutionListener.class.getName())
        .endEvent()
        .done();
    testHelper.deploy(modelInstance);
    ProcessInstance procInst = runtimeService.startProcessInstanceByKey("process");
    TaskQuery taskQuery = taskService.createTaskQuery().processInstanceId(procInst.getId());

    // when the task is completed
    taskService.complete(taskQuery.singleResult().getId());

    // then the end listener is called
    // assert its observable effect, for example that a variable was set
}
For more examples, see how Camunda tests execution listeners in ExecutionListenerTest.java.
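For reference, here is a minimal sketch of what the TestingExecutionListener used above could look like; how it records its invocation is up to you, and the "invocationCount" variable name is an assumption for this sketch:

import org.camunda.bpm.engine.delegate.DelegateExecution;
import org.camunda.bpm.engine.delegate.ExecutionListener;

public class TestingExecutionListener implements ExecutionListener {

    @Override
    public void notify(DelegateExecution execution) {
        // record each invocation so the test can assert the listener ran exactly once
        // (the "invocationCount" variable name is an assumption for this sketch)
        Integer count = (Integer) execution.getVariable("invocationCount");
        execution.setVariable("invocationCount", count == null ? 1 : count + 1);
    }
}

The test's final assertion could then check, for example via the history service, that invocationCount equals 1.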


Testing initial delay on CoroutineWorker with Dependencies

I know that WorkManager provides a work-testing artifact for testing workers, and that we can use TestListenableWorkerBuilder to test a CoroutineWorker (see this link for more information). I found a Medium article by Ian Roberts showing how to test a CoroutineWorker with dependencies by creating your own WorkerFactory.
According to the official documentation, we can test initial delays on a Worker using TestDriver, but nothing is said about testing delays, constraints, etc. on a CoroutineWorker. Is there a way to perform such tests on a CoroutineWorker using TestListenableWorkerBuilder?
After watching this video (at 13:00) from the 2019 Android Dev Summit, I found the answer to this question:
1. When initializing WorkManager for tests (via the WorkManagerTestInitHelper.initializeTestWorkManager method), pass your custom WorkerFactory through the configuration step.
2. Set up your work request as you normally would, using the OneTimeWorkRequestBuilder method.
3. By default, all constraints for WorkManager instances in test mode are unmet. Using an instance of TestDriver, we can mark those constraints as met (see the constraint snippet after the example below).
Here's an example that puts these steps together:
@Test
fun checkInitialDelay() {
    val config = Configuration.Builder()
        .setWorkerFactory(MyWorkFactory(myDependencies))
        .setMinimumLoggingLevel(Log.DEBUG)
        .setExecutor(SynchronousExecutor())
        .build()
    // Initialize WorkManager for testing
    WorkManagerTestInitHelper.initializeTestWorkManager(context, config)
    // Set up the work request with an initial delay
    val request = OneTimeWorkRequestBuilder<MyWork>()
        .setInitialDelay(10, TimeUnit.MINUTES)
        .build()
    val workManager = WorkManager.getInstance(context)
    // Get the TestDriver
    val testDriver = WorkManagerTestInitHelper.getTestDriver(context)
    // Enqueue the request and wait for it to be processed
    workManager.enqueue(request).result.get()
    // Tell the WorkManager test framework that the initial delay is now met
    testDriver?.setInitialDelayMet(request.id)
    // Get the WorkInfo and assert the work succeeded
    val workInfo = workManager.getWorkInfoById(request.id).get()
    assert(workInfo.state == WorkInfo.State.SUCCEEDED)
}
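The example above only marks the initial delay as met. If your request also declares constraints, or is a PeriodicWorkRequest, TestDriver can mark those as met in the same way; a minimal sketch, reusing the testDriver and request from the example:

// Mark all of the request's constraints (e.g. network) as met
testDriver?.setAllConstraintsMet(request.id)
// For a PeriodicWorkRequest, the period delay can be marked as met too
testDriver?.setPeriodDelayMet(request.id)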

How to Take Screenshot when TestNG Assert fails?

String Actualvalue = d.findElement(By.xpath("//*[@id=\"wrapper\"]/main/div[2]/div/div[1]/div/div[1]/div[2]/div/table/tbody/tr[1]/td[1]/a")).getText();
Assert.assertEquals(Actualvalue, "jumlga");
captureScreen(d, "Fail");
The assert should not be placed before your screen capture, because a failed assertion immediately aborts the test, so your call to
captureScreen(d, "Fail");
is never reached.
This is how I usually do it:
boolean result = false;
try {
    // do stuff here
    result = true;
} catch (Exception ex) {
    // code to handle the error and capture a screenshot
    captureScreen(d, "Fail");
}
// then assert
Assert.assertEquals(result, true);
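For completeness, here is one way the captureScreen helper used above could be implemented (a sketch; the "./screenshots/" folder and the Apache Commons IO dependency are assumptions):

import java.io.File;
import org.apache.commons.io.FileUtils;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;

public void captureScreen(WebDriver driver, String name) throws Exception {
    // take the screenshot as a file and copy it to a local folder
    // (the "./screenshots/" path is an assumption for this sketch)
    File src = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
    FileUtils.copyFile(src, new File("./screenshots/" + name + ".png"));
}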
1. A good solution is to use a reporting framework like allure-reports.
Read here: allure-reports
2. We don't want our tests to be ugly by adding a try/catch to every test, so we use Listeners, which use an annotation system to "listen" to our tests and act accordingly.
Example:
public class listeners extends commonOps implements ITestListener {
    @Override
    public void onTestFailure(ITestResult iTestResult) {
        System.out.println("------------------ Test: " + iTestResult.getName() + " Failed ------------------");
        if (platform.equalsIgnoreCase("web"))
            saveScreenshot();
    }
}
Please note I only implemented the method relevant to your question; I suggest you read here:
TestNG Listeners
Now we want allure-reports' built-in screenshot attachment to run every time a test fails, so we add this method inside our listeners class.
Example:
@Attachment(value = "Page Screen-Shot", type = "image/png")
public byte[] saveScreenshot() {
    return ((TakesScreenshot) driver).getScreenshotAs(OutputType.BYTES);
}
Test example:
@Listeners(listeners.class)
public class myTest extends commonOps {

    @Test(description = "Test01: Add numbers and verify")
    @Description("Test Description: Using Allure reports annotations")
    public void test01_myFirstTest() {
        Assert.assertEquals(result, true);
    }
}
Note we're using the @Listeners(listeners.class) annotation at the beginning of the class, which allows our listener to listen to our test; mind that listeners.class can be whatever class you named your listeners.
The @Description annotation is related to allure-reports and, as the code snippet suggests, lets you add additional info about the test.
Finally, our Assert.assertEquals(result, true) will take a screenshot if the assertion fails, because we enabled our listeners class for it.

Karate start-up feature

I need to execute a 'healthcheck' test (feature) before all the test cases execute.
This is a preliminary test before executing the bunch of test cases. I need a solution to exit the platform if any of these pre-checks fail.
Execute your health check feature from karate-config.js using karate.call / karate.callSingle; if your feature fails, use Java's System.exit to force-kill your test.
A snippet for karate-config.js:
try {
    var healthCheckInput = {};
    var healthCheckCall = karate.callSingle("healthCheck.feature", healthCheckInput);
    if (!<healthCheckCondition>) {
        java.lang.System.exit(0);
    }
} catch (e) {
    java.lang.System.exit(0);
}
If your health check condition fails, this will force-exit your execution.
I am not sure whether karate.abort() will give a soft exit from the platform, but if you plan to implement this, try that as well.
Note: since System.exit() force-kills your execution, you will not get proper reports, but you can refer to the console logs/Karate logs for further investigation.
EDIT:
Another approach: you can use the Karate Java API inside a JUnit @BeforeClass method to run your health check feature.
@BeforeClass
public static void startUpCheck() {
    Map<String, Object> args = new HashMap<>();
    args.put("inputOne", "valueOne");
    Map<String, Object> result = Runner.runFeature("classpath:stackoverflow/demo/healthCheck.feature", args, true);
    // also assert the 'result' if you want, or keep some assertions/matches in your feature
}

Programmatically execute Gatling tests

I want to use something like Cucumber JVM to drive performance tests written for Gatling.
Ideally the Cucumber features would somehow build a scenario dynamically, probably reusing predefined chain objects similar to the method described in the "Advanced Tutorial", e.g.
val scn = scenario("Scenario Name").exec(Search.search("foo"), Browse.browse, Edit.edit("foo", "bar"))
I've looked at how the Maven plugin executes the scripts, and I've also seen mention of using an App trait, but I can't find any documentation for the latter, and it strikes me that somebody else must have wanted to do this before...
Can anybody point (a Gatling noob) in the direction of some documentation or example code of how to achieve this?
EDIT 20150515
So to explain a little more:
I have created a trait which is intended to build up a sequence of, I think, ChainBuilders that are triggered by Cucumber steps:
trait GatlingDsl extends ScalaDsl with EN {

  private val gatlingActions = new ArrayBuffer[GatlingBehaviour]

  def withGatling(action: GatlingBehaviour): Unit = {
    gatlingActions += action
  }
}
A GatlingBehaviour would look something like:
object Google {
  class Home extends GatlingBehaviour {
    def execute: ChainBuilder =
      exec(http("Google Home")
        .get("/")
      )
  }
  class Search extends GatlingBehaviour { ... }
  class FindResult extends GatlingBehaviour { ... }
}
And inside the StepDef class:
class GoogleStepDefinitions extends GatlingDsl {

  Given("""^the Google search page is displayed$""") { () =>
    println("Loading www.google.com")
    withGatling(Home())
  }

  When("""^I search for the term "(.*)"$""") { (searchTerm: String) =>
    println("Searching for '" + searchTerm + "'...")
    withGatling(Search(searchTerm))
  }

  Then("""^"(.*)" appears in the search results$""") { (expectedResult: String) =>
    println("Found " + expectedResult)
    withGatling(FindResult(expectedResult))
  }
}
The idea being that I can then execute the whole sequence of actions via something like:
val scn = scenario(cucumberScenario).exec(gatlingActions)
setUp(scn.inject(atOnceUsers(1)).protocols(httpConf))
and then check the reports, or catch an exception if the test fails, e.g. a response time that is too long.
It seems that no matter how I use the 'exec' method, it tries to execute instantly, there and then, rather than waiting for the scenario.
Also, I don't know if this is the best approach to take; we'd like to build some reusable blocks for our Gatling tests that can be composed via Cucumber's Given/When/Then style. Is there a better or already existing approach?
Sadly, there's currently no supported way to have Gatling directly start a Simulation instance.
It's not that it's technically infeasible; you're just the first person to try to do this.
Currently, Gatling is usually in charge of compiling and can only be passed the name of the class to load, not an instance itself.
You could maybe start by forking io.gatling.app.Gatling and io.gatling.core.runner.Runner, and then provide a PR to support this new behavior. The former is the main entry point, and the latter is the one that can instantiate and run the simulation.
I recently ran into a similar situation and did not want to fork Gatling. While this solved my immediate problem, it only partially solves what you are trying to do; hopefully someone else will find this useful.
There is an alternative: Gatling runs on the JVM, so you can call Gatling.main directly and pass it the arguments you need to run the Gatling simulation you want. The problem is that the main method explicitly calls System.exit, so you also have to install a custom SecurityManager to prevent it from actually exiting.
You need to know two things:
1. the class (with the full package) of the Simulation you want to run, for example: com.package.your.Simulation1
2. the path where the binaries are compiled
The code to run a Simulation:
protected void fire(String gatlingGun, String binaries) {
    SecurityManager sm = System.getSecurityManager();
    System.setSecurityManager(new GatlingSecurityManager());
    String[] args = {"--simulation", gatlingGun,
                     "--results-folder", "gatling-results",
                     "--binaries-folder", binaries};
    try {
        io.gatling.app.Gatling.main(args);
    } catch (SecurityException se) {
        LOG.debug("gatling test finished.");
    }
    System.setSecurityManager(sm);
}
The simple SecurityManager I used:
public class GatlingSecurityManager extends SecurityManager {
    @Override
    public void checkExit(int status) {
        throw new SecurityException("Tried to exit.");
    }
    @Override
    public void checkPermission(Permission perm) {
        return;
    }
}
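Putting it together, a call could look like the following (the simulation class matches the earlier example; the binaries path is an assumption that depends on your build layout):

// "target/test-classes" is a typical Maven output directory (an assumption)
fire("com.package.your.Simulation1", "target/test-classes");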
The problem is then getting the information you want out of the simulation after it has been run.

markTestSkipped() not working with sausage-based Selenium tests via Sauce Labs

I am using the sausage framework to run parallelized phpunit-based Selenium WebDriver tests through Sauce Labs. Everything works well until I want to mark a test as skipped via markTestSkipped(). I have tried this via two methods:
Setting markTestSkipped() in the test method itself:
class MyTest
{
    public function setUp()
    {
        // Some set up
        parent::setUp();
    }

    public function testMyTest()
    {
        $this->markTestSkipped('Skipping test');
    }
}
In this case, the test gets skipped, but only after performing setUp, which performs a lot of unnecessary work for a skipped test. To top it off, phpunit does not track the test as skipped -- in fact it doesn't track the test at all. I get the following output:
Running phpunit in 4 processes with <PATH_TO>/vendor/bin/phpunit
Time: <num> seconds, Memory: <mem used>
OK (0 tests, 0 assertions)
The other method is by setting markTestSkipped() in the setUp method:
class MyTest
{
    public function setUp()
    {
        if (!$this->shouldRunTest()) {
            $this->markTestSkipped('Skipping test');
        } else {
            parent::setUp();
        }
    }

    protected function shouldRunTest()
    {
        $shouldrun = //some checks to see if test should be run
        return $shouldrun;
    }

    public function testMyTest()
    {
        //run the test
    }
}
In this case, setUp is skipped, but phpunit still fails to track the test as skipped and still returns the above output. Any ideas why phpunit is not tracking my skipped tests when they are executed in this fashion?
It looks like, at the moment, there is no support for logging markTestSkipped() and markTestIncomplete() results in phpunit when using paratest. More precisely, phpunit won't log tests which call markTestSkipped() or markTestIncomplete() if it is called with arguments['junitLogfile'] set -- and paratest calls phpunit with a junitLogfile.
For more info, see: https://github.com/brianium/paratest/issues/60
I suppose I can hack away at either phpunit or paratest...