Running a single test suite multiple times in parallel with Serenity

I have a single test suite (i.e. feature) written with Java/Appium. I want to run this test suite on several different devices (iPhone, Android phones, etc.). I want to do this in parallel as well, i.e. I want to run the same test suite in several separate threads at once.
How can I do this using Serenity with either JUnit, Cucumber, or JBehave? I have found lots of info on how Cucumber allows multiple features to be run in parallel (here and here), but the problem is that I want to run one single feature multiple times in parallel.

Why not run them in separate jobs on your build server and pass in the device as a parameter?
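For illustration only, a minimal sketch of that approach, assuming a Maven build and hypothetical `device`/`platform` system properties (the property names and the plain-Appium driver setup below are assumptions, not Serenity defaults):

```java
// Hypothetical sketch: each build-server job runs the same feature but passes a
// different device in, e.g.
//   mvn verify -Ddevice="iPhone 12" -Dplatform=iOS
//   mvn verify -Ddevice="Pixel 6"   -Dplatform=Android
// The parallelism then lives on the CI side (separate jobs), not inside the test runner.
import java.net.URL;

import org.openqa.selenium.remote.DesiredCapabilities;

import io.appium.java_client.AppiumDriver;

public class DeviceDriverFactory {

    public static AppiumDriver createDriver() throws Exception {
        // Property names are made up for this example.
        String device = System.getProperty("device", "Android Emulator");
        String platform = System.getProperty("platform", "Android");

        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("deviceName", device);
        caps.setCapability("platformName", platform);

        // The Appium server URL is a placeholder; point it at whatever each job uses.
        return new AppiumDriver(new URL("http://127.0.0.1:4723/wd/hub"), caps);
    }
}
```

Each job is then just the same suite with different -D values, so adding a new device becomes a new job (or job parameter) rather than a code change.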

Related

Spring Boot test files execution ordering

I have a project with several hundred test files. Some of the test files use the DataJpaTest annotation, some are MockMvc-based controller tests, and some use mocked objects without any database dependency. Depending on the test execution order, I see that the context needs to be re-initialized for the different flavors of test files. Is there a way to control the execution order of the test files so that context reloads can be avoided? Say, all mock tests first, followed by controller tests, and then DataJpaTest?
Right now test execution takes about 30 minutes, and I'm looking for a way to speed it up.
JUnit Jupiter provides options to control Test Execution Order.
However, you should look into your test setup and verify if your tests create too many Application Contexts.
Spring Test framework can cache Application Contexts and reuse them among different test suites. See Spring Test documentation for more info.
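As one possible sketch of that idea (assuming JUnit Jupiter 5.8+, Spring Boot test slices, and that the MockMvc controller tests use @WebMvcTest; the class name is made up), a custom ClassOrderer could group classes by flavour so that classes sharing a context configuration run next to each other:

```java
// Illustrative only: order test classes so that plain mocked tests run first,
// then @WebMvcTest controller tests, then @DataJpaTest slices. Classes with the
// same Spring context configuration then run together, which helps the context
// cache avoid repeated reloads.
import java.util.Comparator;

import org.junit.jupiter.api.ClassDescriptor;
import org.junit.jupiter.api.ClassOrderer;
import org.junit.jupiter.api.ClassOrdererContext;
import org.springframework.boot.test.autoconfigure.orm.jpa.DataJpaTest;
import org.springframework.boot.test.autoconfigure.web.servlet.WebMvcTest;

public class SpringContextGroupingOrderer implements ClassOrderer {

    @Override
    public void orderClasses(ClassOrdererContext context) {
        Comparator<ClassDescriptor> byGroup = Comparator.comparingInt(this::groupOf);
        context.getClassDescriptors().sort(byGroup);
    }

    private int groupOf(ClassDescriptor descriptor) {
        if (descriptor.isAnnotated(DataJpaTest.class)) {
            return 2;   // JPA slice tests last
        }
        if (descriptor.isAnnotated(WebMvcTest.class)) {
            return 1;   // MockMvc controller tests next
        }
        return 0;       // plain mocked unit tests first
    }
}
```

It would be activated with junit.jupiter.testclass.order.default=<fully qualified class name> in junit-platform.properties. Note that ordering only helps the cache work better; contexts are keyed on their configuration, so tests with identical configuration are what actually share a context.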

Why do my selenium xunit tests in Visual Studio run in parallel?

I'm writing unit tests using Selenium and xunit, and on their own they run great, but if I select all of the tests (multiple classes) in Test Explorer (they appear to be grouped by class; this is not intentional), they run in parallel. Run Tests in Parallel is not selected. Each one of my tests creates and then deletes test data, so they obviously can't run in parallel: one test might delete data right after another test created that data, and so the test would fail. So how can I run all of my tests and not have them run in parallel? I guess I could make them all use one partial class that spans multiple files, but that's not my first choice.
I found a solution (although not an explanation). Just put [Collection("Sequential")] as the first line under each namespace. This forces everything to run sequentially.

Data driven testing using Selenium Grid

I have to execute a large number of test cases in parallel using TestNG and Selenium. Each test case will be executed against a different data set using data-driven testing. How can I run these test cases in parallel on different machines? We can use the parallel attribute in TestNG, but that is restricted to a single machine.
Can Selenium Grid be tweaked and used for this purpose? If yes, how? Any other suggestions are welcome.
I want an example of this use case from the Selenium Grid documentation (https://www.seleniumhq.org/docs/07_selenium_grid.jsp#when-to-use-it): "To reduce the time it takes for the test suite to complete a test pass."
Basically it's quite complicated and needs a lot of understanding. I haven't done it myself, but I know that you need to set up one root (hub) machine, and the rest of the machines will be children (nodes) of that parent machine. Then you can run the test scripts in parallel, but you need to make sure those scripts aren't dependent on each other, otherwise there will be a lot of issues.
I have shared a link so you can check how to set it up:
https://medium.com/#appening/how-to-run-your-test-on-multiple-machines-using-selenium-grid-3aa37d5d2b63
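As a rough sketch of the idea (the hub URL, site URL, and test data below are placeholders):

```java
// Illustrative sketch: a data-driven TestNG test that asks a Selenium Grid hub for a
// browser session per iteration. With a parallel data provider (or parallel="methods"
// and a thread-count in testng.xml), the hub hands the sessions out to whichever
// registered node machines have free capacity.
import java.net.URL;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.remote.RemoteWebDriver;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class GridDataDrivenTest {

    @DataProvider(name = "users", parallel = true)   // run iterations in parallel threads
    public Object[][] users() {
        return new Object[][] { { "alice" }, { "bob" }, { "carol" } };
    }

    @Test(dataProvider = "users")
    public void loginWorks(String username) throws Exception {
        // The hub URL is a placeholder for wherever your Grid hub runs.
        WebDriver driver = new RemoteWebDriver(
                new URL("http://grid-hub:4444/wd/hub"), new ChromeOptions());
        try {
            driver.get("https://example.com/login");
            // ... drive the login flow for 'username' and assert the outcome ...
        } finally {
            driver.quit();
        }
    }
}
```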

Behat in Multiple Browsers in Parallel

We currently use Behat 3 to automate BDD tests for our website.
The current setup uses Jenkins to run Selenium which attaches to Firefox and uses XVFB to render (this allows us to save screenshots when anything goes wrong).
This is great for testing that the site (including JavaScript) works and that a user can perform each documented task successfully.
I am looking to expand our testing facilities, and one thing I would like to add is the ability to check multiple browsers. This is very important as we get occasional quirks that can break functionality.
Since the tests currently take slightly over an hour to run (and we have 4 suites for that site on Jenkins), I'd preferably like to run all the browsers at the same time. If I can't find a way to do it concurrently, then I likely will just set up multiple Behat profiles and run each one in series.
One thing I've been looking at as a possible solution is Ghostlab. This would allow us to test across multiple browsers and multiple devices, including mobile, at the same time. The problem is that I can't find a way of joining this to Behat in a meaningful way.
I could run one browser connected to Ghostlab, which would cause the same actions to be taken across all connected browsers, however, were a browser other than the one controlled by Selenium to break, I do not know how we would capture that information.
TL;DR: Is there any way for me to run BDD (preferable Behat) tests across multiple browsers in parallel, and capture information from any browser that fails?
This is what multi-configuration jobs (or matrix jobs) are designed for in Jenkins.
You specify your job configuration once, but add one or more variables that should change each time, building a matrix of combinations (in your case, the matrix has one dimension: browser).
Jenkins then runs one main build with multiple sub-builds in parallel — one for each combination in the matrix. You can then clearly see the results for each combination.
This requires that your test job can be parameterised, i.e. you can choose at runtime which browser should be run, rather than running all tests together in a single job.
The Jenkins wiki has minimal documentation on this feature, but there are a few good blog posts (and Stack Overflow questions) out there on how to set it up.
A matrix job will use all available "executors" in Jenkins, to run builds in parallel as much as possible.
In a default Jenkins installation, there are two executors available, but you can change this, or extend Jenkins by adding further build machines.
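To make the job parameterisable per browser on the Behat side, one option the question already hints at is one Behat profile per browser, so each matrix cell (or separate job) runs e.g. behat --profile=chrome. A very rough behat.yml sketch, with MinkExtension/Selenium2 keys that are indicative only and depend on your extension version:

```yaml
# Illustrative only: profile names, URLs, and driver keys are assumptions.
default:
  extensions:
    Behat\MinkExtension:
      base_url: "https://staging.example.com"
      sessions:
        default:
          selenium2:
            browser: firefox
            wd_host: "http://127.0.0.1:4444/wd/hub"

chrome:
  extensions:
    Behat\MinkExtension:
      sessions:
        default:
          selenium2:
            browser: chrome
            wd_host: "http://127.0.0.1:4444/wd/hub"
```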

What is a test harness?

I am facing some difficulties in understanding what a test harness is, along with related common terms like test case and test script in automation testing.
So this is what I got so far:
Automation testing is the use of special software (other than the software being tested) to control the execution of tests and compare the actual results with the expected results. It also involves the setting up of test pre-conditions. This kind of testing is most suitable for tests that are frequently carried out.
Now, I am having some problems with test harness. I read that it consists of a test suite of test cases, input files, output files, and test scripts.
Now my question is what is the difference between test case and test script?
How do you use the software to test the different functions of the application under test (AUT)? I also came across some terms like suite master and case agents.
Several broad questions there, will try to answer based on my experience.
Think of a Test Harness as an 'enabler' that actually does all the work of (1) executing tests using a (2) test library and (3) generating reports. It would require that your test scripts are designed to handle different (4) test data and (5) test scenarios. Essentially, when the test harness is in place and prerequisite data is prepared (aka data prep), someone should be able to click a button or run one command to execute all your tests and generate reports.
A test harness is most likely a collection of different things that make all of the above happen. If you wrote unit tests while developing your application, that would be part of a test harness. You would also have other tests for the functionality of your app, like: user logs in to site, sees favourites pane, recent messages and notifications. Then you add in a 'runner' of sorts that goes through all of your "test scripts" and runs them (instead of you having to execute tests one at a time). If it feels like a test harness is more of a conceptual collection rather than a single piece of software, then you're understanding this correctly :-)
Now my question is what is the difference between test case and test script?
Simple but not entirely correct answer: A Test Case defines test objectives, description, pre-conditions, steps (descriptive or specific), expected results. A Test Script would then be the actual automated script that you execute to do that test. That's in an Automation context. And it changes. A lot.
What certifications like ISTQB define as test scenarios is usually referred to as test cases in some companies and countries. In others, test cases are flipped with test scripts when referring to manual testing (when the steps are given in detail but not part of an automation harness). Others say that test scripts exclusively mean automated tests. On the other hand, one can also argue that several test cases can be combined in a test script and vice-versa. So that begs the question, how does a test procedure fit in?
A test development stage can have: "Test procedures, test scenarios, test cases, test datasets, test scripts to use in testing software."
If you assume a > (is larger than/collection of) relation, how would you relate those? Rhetorical question - that differs based on where you work, who your client is, etc. Best thing is to define it with your colleagues/clients and agree on the understanding of the terms rather than the definition. I currently go with test script = automated script, based on a pre-existing manual test case or a test scenario.
Also, how do you use the software to test the different functions of the AUT?
You write different tests to test different things. Each test does certain actions and checks if the AUT's output matches what you expected - if displayed_value == expected_value. An input file could be used to provide data for the test, e.g. a list of test usernames and passwords. Or run the same test with different data: log in as a different user with different messages, etc.
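As a small, made-up illustration of the input-file idea (the file name, format, and the logInAndReadGreeting helper are all hypothetical):

```java
// Illustrative only: test data (username, expected greeting) lives in a CSV file,
// and one test loops over it, comparing the displayed value to the expected value.
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class LoginGreetingCheck {

    public static void main(String[] args) throws Exception {
        // Each line looks like: alice,Hello Alice
        List<String> rows = Files.readAllLines(Path.of("test-users.csv"));
        for (String row : rows) {
            String[] parts = row.split(",");
            String username = parts[0];
            String expected = parts[1];

            String displayed = logInAndReadGreeting(username);  // would drive the AUT in a real harness
            System.out.println(displayed.equals(expected)
                    ? "PASS " + username
                    : "FAIL " + username + ": expected '" + expected + "' but saw '" + displayed + "'");
        }
    }

    // Placeholder for the part of the harness that actually drives the application under test.
    private static String logInAndReadGreeting(String username) {
        return "Hello " + Character.toUpperCase(username.charAt(0)) + username.substring(1);
    }
}
```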
Take a look at Robot Framework and Selenium. A Robot Framework test (written in text or HTML files) combined with the Selenium library would allow you to write an automated test which tests something specific, like a home page validation. You would write a separate test to ensure that a user can see all his/her messages, another to test clearing notifications, and so on.
test harness: A test environment comprised of stubs and drivers needed to execute a test.
Test harnesses and stubs will be used to replicate the missing items (components not yet included in the tests or external systems).
Often, when small-scale Integration Testing of several modules or components is performed, it is necessary to devise or improvise methods and tools to get the test data to the components under test. This is often called a test harness. Because of the need to understand the technicalities required to build a test harness, this testing is almost always done by the development team.
A test harness may facilitate the testing of components or part of a system by simulating the environment in which that test object will run. This may be done either because other components of that environment are not yet available and are replaced by stubs and/or drivers, or simply to provide a predictable and controllable environment in which any faults can be localized to the object under test. These are usually bespoke programs generated by developers to help in the testing process. If they are used in a mature organisation it is quite possible that these harnesses will be considered as ‘Test Assets’ and subject to Version Control & Configuration Management.
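A tiny, hypothetical illustration of the stub-and-driver idea (all class names are invented): the real payment gateway is not available yet, so a stub stands in for it while a driver exercises the component under test in a controlled environment:

```java
// The dependency the real environment would provide.
interface PaymentGateway {
    boolean charge(String accountId, long amountInCents);
}

// Stub: replaces the missing external system with predictable behaviour.
class AlwaysApprovesGatewayStub implements PaymentGateway {
    @Override
    public boolean charge(String accountId, long amountInCents) {
        return true;   // deterministic answer: no network, no real money
    }
}

// The component under test.
class CheckoutService {
    private final PaymentGateway gateway;

    CheckoutService(PaymentGateway gateway) {
        this.gateway = gateway;
    }

    String checkout(String accountId, long amountInCents) {
        return gateway.charge(accountId, amountInCents) ? "CONFIRMED" : "DECLINED";
    }
}

// Driver: feeds test data to the component under test and checks the result.
public class CheckoutServiceDriver {
    public static void main(String[] args) {
        CheckoutService service = new CheckoutService(new AlwaysApprovesGatewayStub());
        String result = service.checkout("acct-42", 1999);
        System.out.println("CONFIRMED".equals(result) ? "PASS" : "FAIL: got " + result);
    }
}
```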
A test harness contains all the information required to compile and run a test. This includes test cases, source files under test, stubs, and Target Deployment Port (TDP) configuration settings.
A Test Harness is the collection of all the items needed to test software at the unit, module, application or system level and provides the mechanism to execute the test. Every item such as input data, test parameters, test case, test script, expected output data, test tool, and test result report is part of the test harness.