Intern: Execute a helper function after every single functional test

I'm currently trying to execute a specific helper function after every test case.
The problem with the beforeEach function is that by the time it runs for the next test, the previous test has already been flagged as passed (its test lifecycle has already finished).
Is there any configuration option to execute a helper function after every test case, without pasting it into every single test case?
I'm using the Intern test framework with the BDD test interface.

The docs for the BDD interface used for Intern tests are here:
https://theintern.io/docs.html#Intern/4/api/lib%2Finterfaces%2Fbdd
You can use the afterEach() method to execute whatever you like once each test in your suite has finished.
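For example, a minimal sketch (myHelperFunction is a hypothetical helper):

const { describe, it, afterEach } = intern.getPlugin('interface.bdd');
const { expect } = intern.getPlugin('chai');

describe('my suite', () => {
    afterEach(() => {
        // runs after every test in this suite, whether it passed or failed
        myHelperFunction(); // hypothetical helper
    });

    it('adds numbers', () => {
        expect(1 + 1).to.equal(2);
    });

    it('checks a string', () => {
        expect('intern').to.have.lengthOf(6);
    });
});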

Tests retrieved from collection variable - test failures stop subsequent tests from running

I have tests that I want to use in multiple API calls.
Using JavaScript from external files has been an open issue for six years now and isn't officially supported (yet), so I'm storing tests in collection variables so they can be retrieved in each API's Tests tab.
The issue is that a failing test stops execution, as if it were a general JavaScript error.
Tests being stored in collection variables via an API’s Pre-req
In a setup API call I store the shared library of tests via a Pre-request Script. This is working fine.
A "normal" test failure
When a test is coded in an API's Test area, failures don't stop subsequent tests from running.
A failure for a test pulled in from a collection variable
I can pull tests from the collection variable and run them just fine. However, when a Chai expectation fails, it seems to be treated like a JavaScript error instead of a test/expectation failure.
The test run fails, and neither the subsequent tests for this API nor the other APIs in the collection run.
How can I have tests retrieved from a collection variable run/fail like hard coded tests?
The problem, I guess, is in how you call the function from utils:
just utils.payloadIs204, not utils.payloadIs204().
That works for me.
Update: reuse with passing parameters.
Pre-request tab:
// the trailing + "" converts the function into its source string for storage
pm.environment.set("abc", function print(text1, text2) {
    console.log(text1)
    console.log(text2)
} + "");
Test tab:
// eval() re-declares the stored function; the appended call then invokes it
let script = pm.environment.get("abc");
eval(script + "; print('name', 'age')")
The problem was in how I was calling the library function. It works as desired if the library function (with the expectations) is invoked inside the anonymous function passed to pm.test(), rather than being passed as the callback itself.
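A minimal sketch of the difference (the variable name and the payloadIs204 helper are hypothetical):

// assume an earlier Pre-request Script stored the shared library's source in a
// collection variable; eval() defines payloadIs204(), which contains the Chai expectations
eval(pm.collectionVariables.get("sharedTests"));

// passing the retrieved function directly is what failed "hard":
// pm.test("returns 204", payloadIs204);

// invoking it inside the anonymous callback behaves like a hard-coded test:
// pm.test() catches the thrown expectation error and subsequent tests still run
pm.test("returns 204", function () {
    payloadIs204();
});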

Cypress tagged hooks with mocha

I have been building UI automation frameworks with Cypress for some time, but always using the Cypress-Cucumber-Preprocessor.
Now I need to build one without Cucumber, just plain ol' Mocha, but I've hit a problem: it seems I can't use tagged hooks to execute code for specific tests (scenarios, in Cucumber terms).
The scenario is basically this: I have a spec file with several tests and a "before" hook that seeds test data into a Mongo DB, and eventually I might need to add a hook or hooks to execute something (whatever) before one specific test.
With Cucumber you can tag a given scenario (#tag) and then create a hook that will be executed ONLY before or after that specific scenario:
#tag
Scenario: Tagged scenario
    Given condition
    When I do this
    Then I should see that

before({ tag: '#tag' }, () => {
    // code
})
I haven't found a way to do this with Mocha in Cypress... Has anyone found a way?
thx
You can use beforeEach or before, which do essentially the same thing in Mocha.
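Mocha itself has no tagged hooks, but you can approximate them. A minimal sketch under that assumption (the '@tag' title convention is purely illustrative): either scope a before hook to a nested describe containing only the relevant test, or inspect the current test's title in beforeEach:

describe('my spec', () => {
    before(() => {
        // seed test data into the Mongo DB for the whole spec
    });

    // option 1: scope a hook to specific tests via a nested describe
    describe('special case', () => {
        before(() => {
            // runs only before the test(s) in this block
        });

        it('does the special thing', () => { /* ... */ });
    });

    // option 2: a "poor man's tagged hook" keyed off the test title;
    // note the function keyword, which makes this.currentTest available
    beforeEach(function () {
        if (this.currentTest.title.includes('@tag')) {
            // runs only before tests whose title contains @tag
        }
    });

    it('does something @tag', () => { /* ... */ });
    it('does something else', () => { /* ... */ });
});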

With TestCafe and Electron, is there a way to execute script after the last test but before the app has shut down?

I am using `TestCafe` to test our Electron app and need a way to know when the last test in a fixture has been executed BUT before `TestCafe` shuts our app down.
The standard hooks (fixture.after, fixture.afterEach) won't work. In particular, fixture.after won't work because it is called BETWEEN test runs (the test app will already have been shut down), and I need my app to still be around.
If I could get the number of tests active for this test run in the fixture, I could count the runs myself and call my custom code on the last test. If there is another way to do this, that would be appreciated as well.
Any insights appreciated,
m
You can create a special 'teardown' fixture, place all necessary code into it, and pass it at the end of the test file list:
testcafe chrome tests/* teardown.js
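A minimal sketch of such a teardown.js (the page and assertion are illustrative):

// teardown.js -- runs last because it is the final entry in the file list
fixture('Teardown')
    .page('../index.html'); // hypothetical entry page of the Electron app

test('run custom code before the app shuts down', async t => {
    // the app is still running at this point, so code that needs it can go here
    await t.expect(true).ok();
});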
Take a look at the testcafe-once-hook module, which allows you to execute test actions once per fixture. Here is an example of how to use it: https://github.com/AlexKamaev/testcafe-once-hook-example.

How to restrict a test data source method call to its respective test method when using the TestCaseSource attribute in NUnit

I am using NUnit for a Selenium C# project in which I have many test methods. To get data (from Excel) I use a public static method that returns IEnumerable<TestCaseData>, which I reference at the test method level via TestCaseSource. The challenge I'm facing is that as soon as I start executing one test method, NUnit invokes all of the static source methods in the project.
The code looks like this:
public static IEnumerable<TestCaseData> BasicSearch()
{
    BaseEntity.TestDataPath = PMTestConstants.PMTestDataFolder
        + ConfigurationManager.AppSettings.Get("Environment").ToString()
        + PMTestConstants.PMTestDataBook;
    return ExcelTestDataHelper.ReadFromExcel(
        BaseEntity.TestDataPath,
        ExcelQueryCreator.GetCommand(PMTestConstants.QueryCommand, PMTestConstants.PMPolicySheet, "999580"));
}

[Test, TestCaseSource("BasicSearch"), Category("Smoke")]
public void SampleCase(Dictionary<string, string> data)
{
    // do something
}
Can someone help me restrict the data source method call to its respective test method?
Your TestCaseSource is not actually called by the test method when you run it, but as part of test discovery. While it's possible to select a single test to execute, it's not possible to discover tests selectively: NUnit must examine the assembly and find all the tests before it can run any of them.
To make matters worse, if you are running under Visual Studio, the discovery process takes place multiple times: first before the tests are initially displayed, and then again each time the tests are run. This is made necessary by the architecture of the VS Test Window, which runs separate processes for the initial discovery and the execution of the tests.
That makes it particularly important to minimize the amount of work done during test discovery, especially when running under Visual Studio. Ideally, you should structure the code so that only the variable parameters are recorded during discovery; the actual data access should take place at execution time, in a OneTimeSetUp method, a SetUp method, or at the start of the test itself, as sketched below.
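A minimal sketch of that restructuring (ReadAllRows is a hypothetical helper that loads the whole sheet keyed by policy number; the other names mirror the question):

// discovery time: record only a lightweight key per test case
public static IEnumerable<TestCaseData> BasicSearchKeys()
{
    yield return new TestCaseData("999580");
}

private Dictionary<string, Dictionary<string, string>> testData;

[OneTimeSetUp]
public void LoadTestData()
{
    // execution time: do the expensive Excel read once, here
    BaseEntity.TestDataPath = PMTestConstants.PMTestDataFolder
        + ConfigurationManager.AppSettings.Get("Environment")
        + PMTestConstants.PMTestDataBook;
    testData = ExcelTestDataHelper.ReadAllRows(BaseEntity.TestDataPath); // hypothetical helper
}

[Test, TestCaseSource(nameof(BasicSearchKeys)), Category("Smoke")]
public void SampleCase(string policyKey)
{
    var data = testData[policyKey];
    // do something with data
}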
Finally, I'd say that your instinct is correct: it should be possible to set up a TestCaseSource that only runs if the test it feeds is about to be executed. Unfortunately, that's a feature NUnit doesn't have yet.

How to run a Karate Feature file after a Gatling simulation completes

I have two Karate Feature files
One for creating a user (CreateUser.feature)
One for getting a count of users created (GetUserCount.feature)
I have additionally one Gatling Scala file
This calls CreateUser.feature with rampUser(100) over (5 seconds)
This works perfectly. What I'd like to know is: how can I call GetUserCount.feature after Gatling finishes its simulation? It must be called only one time, to get the final created user count. What are my options and how can I implement them?
The best option is to use the Karate Java API to run a single feature from within the Gatling simulation:
Runner.runFeature("classpath:mock/feeder.feature", null, false)
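A minimal sketch of a simulation using Gatling's after hook, which runs once when the whole simulation has finished (the class name and protocol setup are illustrative):

import com.intuit.karate.Runner
import com.intuit.karate.gatling.PreDef._
import io.gatling.core.Predef._
import scala.concurrent.duration._

class UserSimulation extends Simulation {

    val protocol = karateProtocol()

    val createUsers = scenario("create users")
        .exec(karateFeature("classpath:CreateUser.feature"))

    setUp(
        createUsers.inject(rampUsers(100) over (5 seconds)).protocols(protocol)
    )

    // runs exactly once, after the simulation completes
    after {
        Runner.runFeature("classpath:GetUserCount.feature", null, false)
    }
}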