Tests retrieved from collection variable - test failures stop subsequent tests from running

I have tests that I want to use in multiple API calls.
Using JavaScript from external files has been an open issue for six years now and isn't officially supported (yet), so I'm storing the tests in collection variables so they can be retrieved in each API's Tests script.
The issue is that a failing test stops execution, as if it were a general JavaScript error.
Tests stored in collection variables via an API's Pre-request Script
In a setup API call I store the shared library of tests via a Pre-request Script. This is working fine.
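For illustration, a minimal sketch of what that setup Pre-request Script might look like; the variable name testUtils and the exact serialization are assumptions for this sketch, not the author's code (only the helper name payloadIs204 appears later in the thread):
// Setup call - Pre-request Script (sketch)
// Store the shared test helpers as source text in a collection variable.
pm.collectionVariables.set("testUtils", `({
    payloadIs204: function () {
        pm.expect(pm.response.code).to.eql(204);
    }
})`);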
A "normal" test failure
When a test is coded in an API's Test area, failures don't stop subsequent tests from running.
A failure for a test pulled in from a collection variable
I can pull tests from the collection variable and run them just fine. However, when a Chai expectation fails, it seems to be treated like a JavaScript failure instead of a test/expectation failure.
The test run fails; subsequent tests for this API don't run, and neither do the other API calls in the collection.
How can I have tests retrieved from a collection variable run/fail like hard coded tests?

The problem, I guess, is how you call the function from utils.
Use utils.payloadIs204, not utils.payloadIs204().
That works for me.
Update: reusing a stored function with parameters passed in.
Pre-request tab:
pm.environment.set("abc", function print(text1, text2) {
    console.log(text1);
    console.log(text2);
} + "");
Tests tab:
let script = pm.environment.get("abc");
eval(script + "print('name', 'age')");

The problem was in how I was calling the library function. It works as desired if the library function (with the expectations) is invoked inside the anonymous function passed to pm.test(), rather than being passed to pm.test() as the callback itself.
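To make that concrete, a sketch under the same assumptions as above (the library is rebuilt with eval() and exposes the payloadIs204 helper mentioned earlier):
// Tests script of any request in the collection
let utils = eval(pm.collectionVariables.get("testUtils"));

// Problematic per the question: the library function is passed as the callback itself
// pm.test("payload is 204", utils.payloadIs204);

// Works as desired: the library function is invoked inside the anonymous callback
pm.test("payload is 204", function () {
    utils.payloadIs204();
});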

Related

How to restrict test data method call for respective Test method by using TestCaseSource attribute in NUnit

I am using NUnit for a Selenium C# project in which I have many test methods. To get data (from Excel) I am using a public static method that returns IEnumerable<TestCaseData>, which I reference at the test-method level as a TestCaseSource. I am facing a challenge: as soon as I start executing one test method, it invokes all of the static source methods in the project.
Code looks like this:
public static IEnumerable<TestCaseData> BasicSearch()
{
    BaseEntity.TestDataPath = PMTestConstants.PMTestDataFolder + ConfigurationManager.AppSettings.Get("Environment").ToString() + PMTestConstants.PMTestDataBook;
    return ExcelTestDataHelper.ReadFromExcel(BaseEntity.TestDataPath, ExcelQueryCreator.GetCommand(PMTestConstants.QueryCommand, PMTestConstants.PMPolicySheet, "999580"));
}

[Test, TestCaseSource("BasicSearch"), Category("Smoke")]
public void SampleCase(Dictionary<string, string> data)
{
    // do something with data
}
Can someone help me restrict the data source method call to its respective test method?
Your TestCaseSource is not actually called by the test method when you run it, but as part of test discovery. While it's possible to select a single test to execute, it's not possible to discover tests selectively. NUnit must examine the assembly and find all the tests before it's possible to run any of them.
To make matters worse, if you are running under Visual Studio, the discovery process takes place multiple times: first before the tests are initially displayed, and then again each time the tests are run. This is made necessary by the architecture of the VS Test Window, which runs separate processes for the initial discovery and the execution of the tests.
That makes it particularly important to minimize the amount of work done in test discovery, especially when running under Visual Studio. Ideally, you should structure the code so that the variable parameters are recorded during discovery. The actual data access should take place at execution time. This can be done in a OneTimeSetUp method, a SetUp method or at the start of the test itself.
Finally, I'd say that your instinct is correct: it should be possible to set up a TestCaseSource that only runs if the test you select is about to be executed. Unfortunately, that's a feature NUnit doesn't yet have.

In Jest how to dynamically generate tests based on the API call response

I want to generate tests on the fly by getting JSON (an array of data) from an API that indicates what I should do in each test and how many tests I need.
I tried to put the fetch part in beforeAll, but it doesn't work because Jest requires all tests (it blocks) to be registered when the file is evaluated, before any beforeAll hook runs.
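A sketch of the attempted approach described above (the endpoint, field names, and availability of a global fetch are assumptions for illustration):
// cases is filled in beforeAll, but Jest has already collected the it() blocks
// by the time this hook runs, so the forEach below sees an empty array.
let cases = [];

beforeAll(async () => {
    const res = await fetch("https://example.test/cases");
    cases = await res.json();
});

describe("generated tests", () => {
    cases.forEach((c) => {
        it(c.name, () => {
            expect(c.expected).toBeDefined();
        });
    });
});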

Intern: Execute HelperFunction for after every single functional Test

I'm currently trying to execute a specific helper function after every test case.
The problem with the beforeEach function is that the test is already flagged as successful/passed by then (its test lifecycle has already finished).
Is there any configuration option to execute a helper function after every test case, without pasting it into every single test case?
I'm using the Intern test framework with the BDD test interface.
The docs for the BDD interface used for Intern tests are here:
https://theintern.io/docs.html#Intern/4/api/lib%2Finterfaces%2Fbdd
You can use the afterEach() method to execute whatever you like once each test in your suite has finished.
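A minimal sketch, assuming Intern 4's BDD interface is loaded via getPlugin and that cleanUpHelper stands in for the helper function in question:
const { describe, it, afterEach } = intern.getPlugin('interface.bdd');

// Hypothetical helper standing in for whatever should run after each test.
function cleanUpHelper() {
    // ... reset state, capture logs, etc. ...
}

describe('my functional suite', () => {
    // Runs once after every test in this suite, whether it passed or failed.
    afterEach(() => cleanUpHelper());

    it('does something', () => {
        // ... test body ...
    });
});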

Cannot spy on Titanium.Network methods with Jasmine

I'm using Jasmine to write tests for a Titanium project. I have a custom util JS file that provides information about network availability.
In this util there is a helper method that calls Titanium.Network.getNetworkType() to retrieve the currently active network type. The action I take depends on the network type returned by this call. To ensure test coverage of this, I'm writing Jasmine tests, but unfortunately I'm having issues spying on Titanium.Network.getNetworkType().
Code snippet:
console.log(Titanium.Network.getNetworkType()); // returns 1
spyOn(Titanium.Network, 'getNetworkType').andReturn(666);
console.log(Titanium.Network.getNetworkType()); // returns 1
Spying on a method of Titanium (e.g. getApiName()) does work. Any ideas on this?
Thanks.

How can I avoid conflicts running Selenium tests in parallel, when they must exercise an underlying REST API?

I have a web application which needs to be tested in multiple browsers in multiple environments (i.e. Chrome, Firefox, and Internet Explorer in both Windows and Linux* (*with the obvious exception of Internet Explorer)).
Tests have been written in Java using JBehave, Selenium, and SerenityBDD (Thucydides). These tests exercise an underlying REST API, checking whether objects can be successfully created and deleted via the UI.
I am using Selenium Grid, and would like to run the tests on parallel nodes; however, the concern is that as the tests exercise an underlying REST API, there could be conflicts.
Is it possible to pass in parameters to the tests as a parameter within the Jenkins job configuration which runs the tests, so that there is a slight difference in the tests dependent on the node in which they are executing? (e.g. An object named 'MYOBJECT-CHROME' is created on Chrome, versus an object named 'MYOBJECT-FIREFOX' on Firefox, meaning any REST API conflicts can be avoided?)
If the software under test (SUT) allows concurrent REST API requests, there is no need for you to worry about this part of your question:
"meaning any REST API conflicts can be avoided?"
The tests' concurrent requests should be set up as fixtures: every atomic test should set up and tear down the test data it needs, or restore the SUT's state. A good candidate here is a prebuilt fixture; it allows you to add it as a step in Jenkins and can reduce the overhead of creating all those test objects.
If you still need to parameterize the build, you can use your BDD suite #tags to define which set of tests will be executed.