Squish is a UI automation tool. Here I wanted to apply filters to select test cases, or to run test cases only when a certain condition is met.
Squish offers a few ways to run test cases from the command line and gives you some control over them.
1. To run a test suite from the command line:
squishrunner --testsuite /home/MyProject/suite_UI
2. To run a particular test case (or several):
squishrunner --testsuite /home/reggie/suite_addressbook [--testcase test_case_name]*
3. To run a particular scenario (using tags)
Here you need to do a little work. Let's say you want to categorise your tests into smoke tests and full regression.
You can do this by putting tags on top of scenarios. It's quite easy.
Example:
#smoke
Scenario: To connect to the device, start the emulator
Given I am in the Start Screen
When I Click on Manual connection option
Then I should be able to connect to the device
#FullRegression
Scenario: To connect to the device using the Manual connection option with connection type as Ethernet Only
Given Start Screen
When I Click on Manual connection option for Ethernet Connection
Then I should be able to connect to the sensor for Ethernet Only connection
To run all scenarios tagged #smoke from a particular test case:
squishrunner --testsuite <path_to_suite> --testcase "tst_com_device_ManualConnect" --tags #smoke
Skipping tests
You can also skip one or more test cases:
squishrunner --testsuite <path_to_suite> --skip-testcase "tst_com_device_ManualConnect" --tags #smoke
All except the ones with the #smoke tag will be run.
Related
I'm using GitLab pipelines to run e2e tests on various physical machines (each machine is connected to the test hardware in a 1-to-1 relationship). On each machine, a GitLab runner is installed. The pipeline consists of three major parts:
prepare the test hardware (deploy, configure)
execute the e2e tests (on the test hardware)
clean up the test hardware
Currently I'm doing all of this in one job, by using the before_script, script and after_script keywords. But I would like to use multiple jobs (or even stages) for this.
The problem I'm facing is that I can't be sure all jobs/stages are executed on the same runner. So it might happen that the prepare step is executed on runner1 and the execute step on runner2 (possibly even in parallel), which obviously is not what I want. The preparation is more than just creating artifacts, so I can't simply hand it over to the next job.
Tags also don't seem to solve this, because a tag can only be specified per job, not for multiple jobs or for a complete stage.
I understand that this is not the way how runners are used normally, but I still wonder if there is a way to achieve this.
Or can someone point out another approach to solve this?
I'm using GitLab Community Edition 14.3.2.
I think you have two options here for how you can split this up:
As sytech mentioned, you can tag each machine's runner with machine-1, machine-2, etc., which lets you make your jobs sticky to a specific runner. Since you can use variables in runner tags, you could have a job at the start that checks which runner is not currently running tests and sets RUNNER_TAG (or something similar) to that runner, so you don't have to hardcode your pipeline to a single box. A sketch of this follows below.
Alternatively, don't have the test boxes run the jobs directly (presumably you're using a shell runner to do this today); instead, use SSH or WinRM to access each box and modify it from there. Then the state of your runner doesn't matter at all. This is likely the "cleaner" way to do it, since your test boxes don't have to share resources or state with the runner.
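A minimal sketch of the first option, assuming each machine's runner is tagged machine-1, machine-2, etc., and that a variable (here called MACHINE_TAG, a made-up name) selects one of them; variable expansion in tags: requires GitLab 14.1 or later, and the script names are placeholders:

# .gitlab-ci.yml (sketch): all three stages share the same runner tag,
# so prepare, test and cleanup all land on the same machine.
variables:
  MACHINE_TAG: "machine-1"   # could also be set when triggering the pipeline

stages:
  - prepare
  - test
  - cleanup

prepare_hardware:
  stage: prepare
  tags: ["$MACHINE_TAG"]
  script:
    - ./deploy_and_configure.sh   # placeholder for your prepare step

run_e2e_tests:
  stage: test
  tags: ["$MACHINE_TAG"]
  script:
    - ./run_e2e_tests.sh          # placeholder for the e2e test run

cleanup_hardware:
  stage: cleanup
  tags: ["$MACHINE_TAG"]
  when: always                    # clean up even if the tests failed
  script:
    - ./cleanup.sh                # placeholder for the cleanup step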
As far as I know, TestCafe's default behaviour is to run tests in parallel.
Indeed, the browsers function accepts an array of browsers (which is cool).
What I would like to do, however, is quite different. I have fixtures based on areas of my portal (search, payment, etc.), and I'd like to know whether it's possible to run these tests in parallel from the CLI, since they are orthogonal.
The goal, of course, is to improve the execution time as the number of test cases grows.
On the other hand, I'd also like to catch failures: if a test run in parallel for a specific metadata filter fails, we would possibly like to stop the others too.
I am not using TestCafe's Docker image but a custom one with just Firefox and Chrome installed, and we launch the tests in headless mode.
As a last point, it would be great if we could run these scenarios/metadata filters in parallel but somehow gather the reports together at the end of the test suite.
I understand the question is not easy, especially because it involves both TestCafe and GitLab CI, but probably someone else has faced this problem too.
Thank you
If I understand you correctly, the behavior you described can be achieved by dividing the test execution among multiple CI jobs. For example, each CI job can test a particular area of your portal: run TestCafe with the metadata of the relevant fixtures/tests specified (see the commands below). Also, most CI systems allow you to cancel all other jobs in a pipeline if one of the jobs fails (unfortunately, GitLab hasn't released this feature yet).
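For instance, assuming each fixture carries a meta property such as area (the property name and values below are made up), each CI job could run only its own slice via the --fixture-meta filter:

testcafe chrome:headless,firefox:headless tests/ --fixture-meta area=search
testcafe chrome:headless,firefox:headless tests/ --fixture-meta area=payment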
On the other hand, you can use TestCafe's programmatic API: create multiple TestCafe runners, each running the desired subset of tests. However, at the end of the test execution you'll need to merge the generated reports into one report manually. Check this answer to get an idea of how to create multiple runners; a rough sketch follows below.
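This sketch assumes fixtures are tagged with a meta property named area and that one JSON report per area is acceptable (the area list, paths and file names are illustrative only):

const createTestCafe = require('testcafe');

(async () => {
    const testcafe = await createTestCafe('localhost');
    const areas = ['search', 'payment'];   // hypothetical portal areas

    // One runner per area, all started in parallel.
    const runs = areas.map(area =>
        testcafe
            .createRunner()
            .src('tests/')
            .filter((testName, fixtureName, fixturePath, testMeta, fixtureMeta) =>
                fixtureMeta.area === area)
            .browsers(['chrome:headless', 'firefox:headless'])
            .reporter('json', `reports/${area}.json`)   // merge these files afterwards
            .run()
    );

    const failedCounts = await Promise.all(runs);
    await testcafe.close();
    process.exit(failedCounts.some(failed => failed > 0) ? 1 : 0);
})();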
I'm running 11 test scenarios on 3 different systems, all together in parallel.
S1: Win7 Firefox46.0
S2: Win10 Chrome58.0
S3: Mac Safari9.0
After completion I can see the test failures in the TestNG report, but I can't track on which system a scenario failed.
Is there any way I can track on which system or environment a test failed?
How do you execute the test cases? Do you run them in your build with a CI system, or from the IDE?
The Selenium wiki (https://github.com/SeleniumHQ/selenium/wiki/Grid2) describes how to pass capabilities to the grid. You could keep them in String variables and look up their values when a test fails.
Maybe this could help you?
Using TestNG it can be very easy: just put the browser name as a parameter into a data provider and print it in your stack trace. It can be abbreviated, e.g. "ch" for Chrome or "ff" for Firefox.
A control variable like this can also be useful if you decide to run a test case in another browser tomorrow. A sketch of the idea follows below.
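A minimal sketch of that idea, assuming the three environments from the question; the class name, helper method and data values are illustrative only:

import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;
import static org.testng.Assert.assertTrue;

public class CrossBrowserLoginTest {

    // Each row describes one system from the grid: OS, browser, browser version.
    @DataProvider(name = "environments", parallel = true)
    public Object[][] environments() {
        return new Object[][] {
            { "Win7",  "firefox", "46.0" },
            { "Win10", "chrome",  "58.0" },
            { "Mac",   "safari",  "9.0"  },
        };
    }

    @Test(dataProvider = "environments")
    public void loginScenario(String os, String browser, String version) {
        boolean loggedIn = runLogin(os, browser, version);  // hypothetical helper driving the grid node

        // Putting the environment into the assertion message means a failure in the
        // TestNG report immediately shows which system it came from.
        assertTrue(loggedIn, "Login failed on " + os + " / " + browser + " " + version);
    }

    private boolean runLogin(String os, String browser, String version) {
        return true;  // placeholder for the real Selenium logic
    }
}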
I'm trying to simulate a Firefox load-testing situation. I want to test how 10 simultaneous logins would play out on my system. I already have a connected Selenium Grid hub and 10 open nodes.
So far, I know I can write the test case and run it 10 times, which isn't what I need because it isn't automated. I also know that I can use an invocation count on the test to make it run as many times as I want, but this only works on the same browser node.
Does anyone have any ideas on how to automatically distribute the same test case to multiple instances of the same driver profile?
i.e. run a login test case 10 times on the same Firefox profile, open in 10 different nodes in parallel.
Thanks!
P.S. I built my tests using TestNG, if that matters.
Basically, Selenium and TestNG are not meant for such a requirement; you should use a dedicated tool like JMeter for that.
However, you can run n methods in parallel. Say you want to log in with 10 different users in 10 threads/browsers: you can make the test data-driven and configure the methods to run in parallel (see the sketch below). Make sure you provide a proper value for the parallel thread count.
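For the data-driven route, the parallel thread counts are usually set in the suite file; a minimal sketch (suite, test and class names are made up):

<!-- testng.xml: run test methods in parallel on up to 10 threads;
     data-provider-thread-count applies when the data comes from a parallel data provider -->
<suite name="LoginLoadSuite" parallel="methods" thread-count="10" data-provider-thread-count="10">
  <test name="LoginLoad">
    <classes>
      <class name="com.example.tests.LoginTest"/>
    </classes>
  </test>
</suite>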
How about combining threadPoolSize with invocationCount? See http://testng.org/doc/documentation-main.html#parallel-running.
The Grid would take care of distributing the sessions across the 10 nodes; a sketch follows below.
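A minimal sketch of that combination; the driver setup is omitted and the method name is illustrative:

import org.testng.annotations.Test;

public class ParallelLoginTest {

    // invocationCount runs the method 10 times; threadPoolSize lets those invocations
    // run on up to 10 threads at once. With a RemoteWebDriver pointed at the grid hub,
    // each thread gets its own browser session, which the hub distributes across the
    // available Firefox nodes.
    @Test(invocationCount = 10, threadPoolSize = 10)
    public void loginScenario() {
        // create a RemoteWebDriver against the hub and perform the login here
    }
}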
Use a headless browser like Ghost and then invoke multiple threads; since Ghost has no UI, it would work in your case.
I am trying to create a unit test to run on two machines in Microsoft Test Manager 2010. In this test I want some client-side and server-side test code to run simultaneously, with the client-side test depending on the server-side test working successfully.
When putting together a Test Suite in Test Manager, I want to be able to set both tests to have the same order value (so they run at the same time), but the validation prevents this.
Is there any way I can achieve the simultaneous test execution I am after?
Sorry for the late answer... I've missed the notification about your answers to my question :-( Sorry for that!
In case you are still looking for a solution, here is my suggestion.
I suppose you have a test environment consisting of two machines (for server and client).
If so, you will not be able to run tests on both of them, or rather, you will not have enough control over how the tests are run. Check "How to Run automated tests on multiple computers at the same time".
Actually, I posted a related question to the Visual Studio Development Forum; you could check the answers I got here: "Is it possible to run tests on several virtual machines, which belong to the same environment, using the build-deploy-test workflow".
That all means you will end up creating two environments each consisting of one machine (one for server and one for client).
But then you will not be able to reference both environments in your build definition, as you can only select one environment in the DefaultLabTemplate.
That leads to the solution I can suggest:
Create two lab environments
Create three build definitions
the first one will only build your test code
the second one will deploy last successful build from the first one and start tests on the server environment
the third one will deploy last successful build from the first one and start tests on the client environment.
Run the first build definition automatically at night
Trigger the latter two simultaneously later.
It's not really nice, I know...
You will have to synchronize the build definition building the test code with the two build definitions running the tests.
I was thinking about setting up similar tests some months ago and it was the best solution I came up with...
Another option I have not tried yet could be:
Use a single test environment consisting of two machines and use different roles for them (server and client respectively).
In MTM create two Test Settings (one for the server role and one for the client role).
Create a batch file that starts the tests using the tcm.exe tool (see "How to: Run Automated Tests from the Command Line Using Tcm" for more details).
You will need two tcm.exe calls, one for each of the Test Settings you have created (see the sketch after this list).
Since a tcm.exe call just queues a test run and returns (more or less) immediately, this batch file will start the tests (more or less) simultaneously.
Create a build definition using DefaultLabTemplate.
This definition will:
build test code
deploy them to both machines in your environment
run your batch script as the last deployment step
(you will have to make sure this script is located on the build machine or deploy it there or make it accessible from the build machine)
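A rough sketch of such a batch file, assuming Test Settings named "ServerSettings" and "ClientSettings"; the plan/suite/config IDs, collection URL and project name are placeholders:

rem Queue the server-side and client-side runs back to back; each call returns as soon
rem as the run is queued, so both runs effectively start at the same time.
tcm run /create /title:"Server run" /planid:1 /suiteid:2 /configid:3 /settingsname:"ServerSettings" /collection:http://tfsserver:8080/tfs/DefaultCollection /teamproject:MyProject
tcm run /create /title:"Client run" /planid:1 /suiteid:2 /configid:3 /settingsname:"ClientSettings" /collection:http://tfsserver:8080/tfs/DefaultCollection /teamproject:MyProject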
As I've said, I have not tried it yet.
The disadvantage of this approach is that you will not see the test part in the build log, since the tests are not started by the means provided by the DefaultLabTemplate, so the build will not fail when tests fail.
But you will still be able to see test outcomes in MTM and will have test results for each machine.
But depending on what is more important to you (having test results, having a build definition that fails if tests fail, or both), it could be a solution for you.
Yes, you can, with a modified TestSettings file:
http://blogs.msdn.com/b/vstsqualitytools/archive/2009/12/01/executing-unit-tests-in-parallel-on-a-multi-cpu-core-machine.aspx
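For reference, the relevant part of such a .testsettings file looks roughly like the following (as described in the linked post); parallelTestCount controls how many unit tests run at once, with 0 meaning one per CPU core, and the settings name here is made up:

<?xml version="1.0" encoding="UTF-8"?>
<TestSettings name="ParallelSettings" xmlns="http://microsoft.com/schemas/VisualStudio/TeamTest/2010">
  <!-- 0 = run as many tests in parallel as there are CPU cores -->
  <Execution parallelTestCount="0">
    <TestTypeSpecific />
  </Execution>
</TestSettings>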