Asynchronous execution of test cases in JS test driver - testing

We have CI build servers where different JS test cases need to be executed in parallel (asynchronously) against the same instance of the JsTestDriver server. The test cases are independent and should not interfere with each other. Is this possible using JsTestDriver?
Currently, I have written two test cases, just to check if this is possible:
Test Case 1:
GreeterTest = TestCase("GreeterTest");
GreeterTest.prototype.testGreet = function() {
    var greeter = new myapp.Greeter();
    alert("just hang in there");
    assertEquals("Hello World!", greeter.greet("World"));
};
Test Case2:
GreeterTest = TestCase("GreeterTest");
GreeterTest.prototype.testGreet = function() {
    var greeter = new myapp.Greeter();
    assertEquals("Hello World!", greeter.greet("World"));
};
I have added the alert statement in test case 1 to make sure that it hangs there.
I have started the JS test driver server with the following command:
java -jar JsTestDriver-1.3.5.jar --port 9877 --browser <path to browser exe>
I am starting the execution of both the test cases as follows:
java -jar JsTestDriver-1.3.5.jar --tests all --server http://localhost:9877
Test Case 1 executes and hangs at the alert statement. Test Case 2 fails with an exception (BrowserPanicException). The conf file is correct, as the second test case passes when executed by itself.
Are there any configuration changes required to make the second test case pass while the first test case is still executing?

This issue is caused by the fact that the "app" (the JsTestDriver slave that runs the tests) cannot really run them in parallel, because it is written in JavaScript, which is single-threaded.
The implementation presumably loops over all tests and runs them one by one; thus, when an alert pops up, the entire "app" is blocked and cannot even report back to the JsTestDriver server, resulting in a timeout that manifests as a BrowserPanicException.
Writing async tests won't help, since the entire "app" is stuck.
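For comparison, if the goal is only to keep the browser slave responsive, here is a minimal sketch of the first test without the blocking call (console.log is assumed to be available in the captured browser; it is not part of the original question):

GreeterTest = TestCase("GreeterTest");
GreeterTest.prototype.testGreet = function() {
    var greeter = new myapp.Greeter();
    // console.log does not block the single JS thread the way alert() does,
    // so the browser slave stays responsive to the JsTestDriver server.
    console.log("just passing through");
    assertEquals("Hello World!", greeter.greet("World"));
};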

Related

Karate API framework - test dependency

In my regression suite I have 600+ test cases. All of those tests have the #RegressionTest tag. See below how I am running them:
_start = LocalDateTime.now();
//see karate-config.js files for env options
_logger.info("karate.env = " + System.getProperty("karate.env"));
System.setProperty("karate.env", "test");
Results results = Runner.path("classpath:functional/Commercial/").tags("#RegressionTest").reportDir(reportDir).parallel(5);
generateReport(results.getReportDir());
assertEquals(0, results.getFailCount(), results.getErrorMessages());
I am thinking that I can create one test and give it a tag #smokeTest. I want to be able to run that test first, and only if it passes, run the entire regression suite. How can I achieve this? I am using JUnit 5 and the Karate Runner.
I think the easiest thing to do is run one test in JUnit itself, and if that fails, throw an exception or skip running the actual tests. So use the Runner two times.
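A minimal sketch of that idea, assuming JUnit 5 and the Karate 1.x Runner builder API; the tag strings mirror the question, and the fail-fast assertion on the smoke run is my addition:

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;
import com.intuit.karate.Results;
import com.intuit.karate.Runner;

class RegressionRunner {

    @Test
    void testAll() {
        // 1st Runner call: just the single smoke test
        Results smoke = Runner.path("classpath:functional/Commercial/")
                .tags("#smokeTest")
                .parallel(1);

        // if the smoke test failed, fail fast and never start the regression run
        assertEquals(0, smoke.getFailCount(), "Smoke test failed, skipping regression suite");

        // 2nd Runner call: the full regression suite, as in the question
        Results regression = Runner.path("classpath:functional/Commercial/")
                .tags("#RegressionTest")
                .parallel(5);
        assertEquals(0, regression.getFailCount(), regression.getErrorMessages());
    }
}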
Otherwise, consider this not supported directly in Karate, but code contributions are welcome.
Also refer to the answers to this question: How to rerun failed features in karate?

TestCafe unable to use testController (t) outside of test run (e.g. as a conditional to skip a test)

I'm trying to check which browser we're running tests on, and then skip a test/fixture based on the result (as mentioned in this TestCafe Issue).
import { t } from 'testcafe';

fixture `test`
    .page('https://testcafe.devexpress.com');

if (t.browser.name.includes('Chrome')) {
    test('is Chrome?', async () => {
        console.log(t.browser.name);
        await t.expect(t.browser.name.includes('Chrome')).ok();
    });
} else {
    test.skip('is Chrome?');
}
Results in...
ERROR Cannot prepare tests due to an error.
Cannot implicitly resolve the test run in the context of which the test controller action should be executed. Use test function's 't' argument instead.
Is there any way I can call the testObject (t) outside of the test?
I don't have a solution to exactly your question, but I think it's better to do it slightly differently, so the outcome is the same while the means to achieve it differ a bit. Let me explain.
Wrapping test cases in if statements is, in my opinion, not a good idea. It clutters test files, so you no longer see only test or fixture on the left side, but also if statements that make you stop when reading such files. It adds complexity when you just want to scan a test file quickly from top to bottom.
The solution could be you introduce meta data to your test cases (could work well with fixtures as well).
test
    .meta({
        author: 'pavelsaman',
        creationDate: '16/12/2020',
        browser: 'chrome'
    })
    ('Test for Chrome', async t => {
        // test steps
    });
Then you can execute only tests for Chrome like so:
$ testcafe --test-meta browser=chrome chrome
That's very much the same as what you wanted to achieve with the condition, but the code is a bit more readable.
In case you want to execute tests for both chrome and firefox, you can execute more commands:
$ testcafe --test-meta browser=chrome chrome
$ testcafe --test-meta browser=firefox firefox
or:
$ testcafe --test-meta browser=chrome chrome && testcafe --test-meta browser=firefox firefox
If your tests are in a pipeline, it would probably be done in two steps.
The better solution, as mentioned in one of the comments on this question, is to use the Runner object to run your tests instead of the command line. Instead of passing the browser(s) as a CLI argument, you would pass it as an optional argument to a top-level script.
You would then read the browser variable from either the script parameter or the .testcaferc.json file.
You would need to tag all tests/fixtures with the browser(s) they apply to, using meta data.
You then use the Runner.filter method to add a delegate that returns true if the browser in the meta data is equal to the browser variable in the top-level script:
var runner = testcafe.createRunner();
var browser = process.env.npm_package_config_browser || require("./.testcaferc.json").browser;

runner.filter((testName, fixtureName, fixturePath, testMeta, fixtureMeta) => {
    return fixtureMeta.browser === browser || testMeta.browser === browser;
});
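For context, here is a hedged sketch of how the surrounding top-level script might look; the createTestCafe bootstrap, the src path and the exit handling are assumptions on top of the answer, not part of it:

const createTestCafe = require('testcafe');

(async () => {
    const testcafe = await createTestCafe('localhost');
    const browser = process.env.npm_package_config_browser
        || require('./.testcaferc.json').browser;

    const failedCount = await testcafe.createRunner()
        .src(['tests/'])                       // assumed location of the test files
        .browsers(browser)
        .filter((testName, fixtureName, fixturePath, testMeta, fixtureMeta) =>
            fixtureMeta.browser === browser || testMeta.browser === browser)
        .run();

    await testcafe.close();
    process.exitCode = failedCount ? 1 : 0;
})();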

Running a main-like in a non-main package

We have a package with a fair number of complex tests. As part of the test suite, they run on builds etc.
func TestFunc(t *testing.T) {
    // lots of setup stuff and defining success conditions
    result := SystemModel.Run()
    // ... checks on result ...
}
Now, for one of these tests, I want to introduce some kind of frontend which will make it possible for me to debug a few things. It's not really a test, but a debug tool. For this, I want to just run the same test but with a Builder pattern:
func TestFuncWithFrontend(t *testing.T) {
    // lots of setup stuff and defining success conditions
    result := SystemModel.Run().WithHTTPFrontend(":9999")
    // ... checks on result ...
}
The test would then only start once I send a signal via HTTP from the frontend; basically, WithHTTPFrontend() just waits on a channel for an HTTP call from the frontend.
This of course would make the automated tests fail, because no such signal will be sent and execution will hang.
I can't just rename the package to main because the package has 15 files and they are used elsewhere in the system.
Likewise, I haven't found a way to run a test only on demand while excluding it from the test suite, so that TestFuncWithFrontend would only run from the command line - I don't care whether with go run, go test or whatever.
I've also thought of ExampleTestFunc(), but there's so much output produced by the test that it's useless, and without defining Output: ..., the Example won't run.
Unfortunately, there's also a lot of initialization code at (private, i.e. lower case) package level that the test needs. So I can't just create a sub-package main, as a lot of that stuff wouldn't be accessible.
It seems I have three choices:
1. Export all of this initialization code and these variables (upper-case names) so that I could use them from a sub-package main.
2. Duplicate the whole code.
3. Move the test into a sub-package main and then have a func main() for the test with the frontend and a _test.go for the normal test, which would have to import a few things from the parent package.
I'd rather like to avoid the second option... and the first is better, but isn't great either, IMHO. I think I'll go for the third, but...
am I missing some other option?
You can pass a custom command line argument to go test and start the debug port based on that. Something like this:
package hello_test

import (
    "flag"
    "log"
    "testing"
)

var debugTest bool

func init() {
    flag.BoolVar(&debugTest, "debug-test", false, "Setup debugging for tests")
}

func TestHelloWorld(t *testing.T) {
    if debugTest {
        log.Println("Starting debug port for test...")
        // Start the server here
    }
    log.Println("Done")
}
Then if you want to run just that specific test, go test -debug-test -run '^TestHelloWorld$' ./.
Alternatively it's also possible to set a custom environment variable that you check in the test function to change behaviour.
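A minimal sketch of that environment-variable variant; the variable name DEBUG_TEST is an assumption:

package hello_test

import (
    "log"
    "os"
    "testing"
)

func TestHelloWorld(t *testing.T) {
    // opt in to the debug frontend only when the environment variable is set
    if os.Getenv("DEBUG_TEST") == "1" {
        log.Println("Starting debug port for test...")
        // Start the server here
    }
    log.Println("Done")
}

Then run it on demand with: DEBUG_TEST=1 go test -run '^TestHelloWorld$' .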
I finally found an acceptable option. This answer to Skip some tests with go test put me on the right track.
Essentially, use build tags that are not present in normal builds but that I can provide when running the test manually.
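A sketch of that approach under stated assumptions (the tag name debugtest and the package name are mine): put the frontend test in its own file that is only compiled when the tag is supplied, so normal go test runs never see it.

//go:build debugtest

package systemmodel

import "testing"

// Only compiled when the debugtest build tag is supplied,
// so automated 'go test ./...' runs never execute this test.
func TestFuncWithFrontend(t *testing.T) {
    // lots of setup stuff and defining success conditions
    result := SystemModel.Run().WithHTTPFrontend(":9999")
    _ = result // ... checks on result ...
}

Run it manually with: go test -tags debugtest -run '^TestFuncWithFrontend$' .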

Protractor suites are not properly getting executed

I have multiple specs, so I created a suite for the different specs.
Let's take the scenario below. This is my suite structure in the conf file:
suites: {
    forms: ['specs/requestE.js'],
    search: ['specs/findaSpec.js'],
    offers: ['specs/offersPrograms.js', 'specs/destinationsSpec.js'],
    headerfooterlinks: ['specs/footerlinksSpec.js', 'specs/headerMenuSpec.js']
},
When I run each spec individually it works correctly and generates the test results, but when I run the whole suite only the first spec works and the others are not executed. As a result, it gives a timeout error.
Do you have any test cases in the first spec that use fit('', function(){}) instead of it('', function(){})?
If that's the case, it will execute just the focused spec while ignoring the rest.
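For illustration, a hypothetical focused spec that would cause exactly this behaviour, and its fix:

// While a focused spec like this exists anywhere in the loaded spec files,
// Jasmine runs only the focused specs and silently skips everything else.
fit('submits the request form', function() {
    // ...
});

// Change fit back to it so the remaining suites run again:
it('submits the request form', function() {
    // ...
});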

How to get the browser set in test configuration in VSTS?

I have set up an environment to build and automate UI test cases using Selenium. We can change the test configuration to run the test cases using a different browser. However, I wanted to know how we can get the configuration values used during the test run. For example, if I have three configurations, say IE, Chrome and Firefox, I want to run the selected automated test cases using the configuration that was set. I just need the variable name that can be used to get the configuration, e.g. $(test.rundid); is there something such as $(test.configuration)?
You can configure a Multi-configuration execution plan to do that:
Add a variable with the configuration values (e.g. TargetBrowser => IE, Chrome, Firefox)
Choose Multi-configuration and specify TargetBrowser as the multipliers
Article: Running tests in parallel using VSTS Release Management (apply to build)
Then, you can supply run time parameters to tests.
For example:
The TestRunParameters section in RunSettings File:
<TestRunParameters>
    <Parameter name="browser" value="IE"/>
</TestRunParameters>
Get the corresponding value by using TestContext.Properties:
string browser = TestContext.Properties["browser"].ToString();
// TODO: specify the UI test browser based on this value.
Specify the parameter in Override TestRunParameters of Visual Studio Test task:
browser=$(TargetBrowser)
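To connect the two pieces, here is a hedged sketch of using that run parameter to pick the Selenium driver; the property name "browser" comes from the RunSettings example above, while the method name and the MSTest TestContext wiring are assumptions:

// Assumes an MSTest [TestClass] with a public TestContext property
// and the Selenium WebDriver browser driver packages installed.
private IWebDriver CreateDriver()
{
    var browser = TestContext.Properties["browser"].ToString().ToLower();
    switch (browser)
    {
        case "chrome":
            return new ChromeDriver();
        case "firefox":
            return new FirefoxDriver();
        case "ie":
            return new InternetExplorerDriver();
        default:
            throw new ArgumentOutOfRangeException("browser");
    }
}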
I also couldn't find a way to call on Test Configuration variables.
But expanding on @pabrams' answer, I've implemented a release with multiple stages, each overwriting the pipeline variable 'TargetEnvironment' with the desired environment to test against (see screenshots: the release structure with stages, and where 'TargetEnvironment' is overwritten).
I also expanded this to configure the browser: I created another release pipeline that sets a 'Browser' variable (see screenshot: where I set the variable in the new pipeline).
Lastly, I had to edit my source code to read these when the pipeline runs. I call the following method where I set the environment:
public static string GetUrlBasedOnEnvironment()
{
    switch (Environment.GetEnvironmentVariable("TargetEnvironment").ToLower())
    {
        case "development":
            return Development.url;
        case "staging":
            return Staging.url;
        case "preview":
            return Preview.url;
        case "production":
            return Production.url;
        default:
            throw new ArgumentOutOfRangeException("TargetEnvironment");
    }
}
And here for browser:
public static IWebDriver getDriverBasedOnBrowser()
{
    switch (Environment.GetEnvironmentVariable("Browser").ToLower())
    {
        case "chrome":
            return new ChromeDriver(ChromeDriverService.CreateDefaultService(), new ChromeOptions(), TimeSpan.FromMinutes(5));
        case "edge":
            var options = new EdgeOptions();
            options.UseChromium = true; // needed to test on new Edge w/ Chromium
            return new EdgeDriver(options);
        case "firefox":
            return new FirefoxDriver();
        default:
            throw new ArgumentOutOfRangeException("Browser");
    }
}
You don't need a variable for this; you can just use different test plans or test suites. Set those to run only certain configurations, and associate each one with a different release. In each of those releases, you just pass the override parameter, specifying the browser explicitly based on the test plan/suite.
If you want to get fancy, you could use variables to pass to task groups to save on duplication.