karate-gatling: how to force sequential execution of all existing feature files even if one of them fails?

Currently I have a workflow.feature for a Gatling performance test that calls all existing functional tests in the given order. If one of the tests breaks, the whole workflow is stopped.
How can I force the execution of all steps even if one step fails?
Feature: A workflow of all functional tests to be executed for performance/loading tests.
Scenario: Test all functional scenarios in the given order.
* call read('classpath:foo1/bar1.feature')
* call read('classpath:foo2/bar2.feature')
* call read('classpath:foo3/bar3.feature')
...
* call read('classpath:fooX/barX.feature')
This is a manually managed list of calls, but maybe there is a way to grab all existing feature files from all subfolders dynamically?

If one of the tests breaks, the whole workflow is stopped.
If you use a Scenario Outline, it processes all rows even if one fails. So maybe:
Scenario Outline:
* call read('classpath:' + file)
Examples:
| file |
| foo/bar.feature |
| baz/ban.feature |
maybe there is a dynamic way to grab all existing feature files from all subfolders
You should be able to write Scala code to do this if you insist, and this has nothing to do with Karate. Or the above dynamic feature may give you some ideas. Hint - you can mix Java into Karate feature files very easily.
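As a purely illustrative Java sketch of that hint (the FeatureCollector class name and the src/test/java root are assumptions, not part of Karate): walk the test sources and build classpath: style paths for every feature file found.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Hypothetical helper: walks a source tree and returns 'classpath:' style
// paths for every *.feature file, relative to the given root.
public class FeatureCollector {

    public static List<String> collect(String rootDir) throws IOException {
        Path root = Paths.get(rootDir); // e.g. "src/test/java" -- adjust to your project layout
        try (Stream<Path> paths = Files.walk(root)) {
            return paths
                    .filter(p -> p.toString().endsWith(".feature"))
                    .map(p -> "classpath:" + root.relativize(p).toString().replace('\\', '/'))
                    .collect(Collectors.toList());
        }
    }
}

A feature could then pick this list up through Karate's Java interop (Java.type) and loop over it with karate.call; check the Karate documentation for the exact looping idiom supported by your version.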
Is there a way to force the execution of a list of features, in any order, so that the next feature file is executed even if the previous one has failed?
See above. Also, don't ask so many questions in one; keep it simple please.

Related

Execute one feature at a time during application execution

I'm using Karate in this way: during application execution, I get the test files from another source and create feature files based on what I get.
Then I iterate over the list of tests and execute them.
My problem is that by using
CucumberRunner.parallel(getClass(), 5, resultDirectory);
I execute all the tests at every iteration, which causes tests to be executed multiple times.
Is there a way to execute one test at a time during application execution? (I'm fully aware of the empty test class with an annotation to specify one class, but that doesn't seem to serve me here.)
I thought about creating every feature file in a new folder so that I can specify the path of the folder that contains only one feature at a time, but CucumberRunner.parallel() accepts a Class and not a path.
Do you have any suggestions please?
You can explicitly set a single file (or even directory path) to run via the annotation:
@CucumberOptions(features = "classpath:animals/cats/cats-post.feature")
I think you already are aware of the Java API which can take one file at a time, but you won't get reports.
Well, you can try this: set a system property cucumber.options with the value classpath:animals/cats/cats-post.feature and see if that works. If you add tags (search the docs), each iteration can use a different tag, and that would give you the behavior you need.
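As a rough, non-authoritative sketch of the annotation approach above (the runner class name and feature path are placeholders, and the CucumberOptions package can vary between Karate/Cucumber versions):

import com.intuit.karate.junit4.Karate;  // the runner class linked below
import cucumber.api.CucumberOptions;     // adjust the package to your Karate version
import org.junit.runner.RunWith;

// Hypothetical empty runner pinned to a single feature via the annotation.
@RunWith(Karate.class)
@CucumberOptions(features = "classpath:animals/cats/cats-post.feature")
public class CatsPostRunner {
    // intentionally empty: the annotation decides which feature gets picked up
}

The system-property variant would be the same idea set at runtime, e.g. System.setProperty("cucumber.options", "classpath:animals/cats/cats-post.feature") or a --tags value before each iteration, assuming your runner version honours that property.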
Just got an interesting idea: why don't you generate a single feature, and in that feature make calls to all the generated feature files?
Also, how about programmatically deleting (or moving) the files after you are done with each iteration?
If all the above fails, I would try to replicate some of this code: https://github.com/intuit/karate/blob/master/karate-junit4/src/main/java/com/intuit/karate/junit4/Karate.java
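A possible sketch of the "single generated feature" idea above (the WrapperFeatureWriter class and file names are made up): write one wrapper feature per iteration that calls exactly the features generated for that iteration, point the runner at it, and delete or move the folder afterwards as suggested.

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Hypothetical sketch: generates a wrapper feature that calls every feature
// produced for the current iteration, so one runner invocation covers that batch.
public class WrapperFeatureWriter {

    public static Path write(List<String> classpathFeatures, Path targetDir) throws IOException {
        StringBuilder sb = new StringBuilder("Feature: generated wrapper for one iteration\n\n");
        sb.append("Scenario: call all generated features\n");
        for (String feature : classpathFeatures) {
            sb.append("* call read('").append(feature).append("')\n");
        }
        Files.createDirectories(targetDir);
        return Files.write(targetDir.resolve("iteration-wrapper.feature"),
                sb.toString().getBytes(StandardCharsets.UTF_8));
    }
}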

Management of TFS test plan regarding a new iteration

In TFS Test Hub, I have a reference test plan in which a few hundred test cases are ordered and sorted in a hierarchy of folders:
- FrontOffice
-- UserManagement
--- TestCase 1234
--- TestCase 5678
- BackOffice
-- etc.
When a new iteration has to be tested, I have two choices:
1- Add the existing test cases to a new test plan, which is good, but makes me lose the folder hierarchy
2- Clone the reference test plan, which preserves the folders, but makes clones of the test cases
In this last case, the link with the requirement is second order:
Requirement --TestedBy -> ReferenceTestCase --Cloned-> ThisIterationTestCase
Option #1 is good for reporting, but tedious for execution
Option #2 is good for execution, but makes it impossible to query test results bound to a requirement
Do you guys have any advice regarding this situation?
For your requirement, you can create test suites programmatically through the REST API or the client API (the structure can be defined in a JSON or XML file):
Create a test suite
The Test Management API – Part 2: Creating & Modifying Test Plans
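As a very rough Java sketch of the REST approach (the collection URL, project, IDs, api-version and JSON fields below are placeholders; verify the exact route and payload against the "Create a test suite" documentation linked above):

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Rough sketch only: creates a static suite under a parent suite via the TFS REST API.
// All values below are placeholders -- take the real route and api-version from the docs.
public class CreateSuite {

    public static void main(String[] args) throws Exception {
        String collectionUrl = "https://tfs.example.com/DefaultCollection"; // placeholder
        String project = "MyProject";                                       // placeholder
        int planId = 1;                                                     // placeholder
        int parentSuiteId = 1;                                              // root suite of the plan
        String pat = System.getenv("TFS_PAT");                              // personal access token

        URL url = new URL(collectionUrl + "/" + project
                + "/_apis/test/plans/" + planId + "/suites/" + parentSuiteId
                + "?api-version=1.0");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setRequestProperty("Authorization", "Basic "
                + Base64.getEncoder().encodeToString((":" + pat).getBytes(StandardCharsets.UTF_8)));
        conn.setDoOutput(true);

        String body = "{ \"suiteType\": \"StaticTestSuite\", \"name\": \"FrontOffice\" }";
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("HTTP " + conn.getResponseCode());
    }
}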

Separating building and testing jobs in Jenkins

I have a build job which takes a parameter (say, which branch to build) and, when it completes, triggers a testing job (actually several jobs) which does some stuff like downloading a bunch of test data and checking that the new version works with the test data.
My problem is that I can't seem to figure out a way to show the test results in a sensible way. If I just use one testing job, then the test results for "stable" and "dodgy-future-branch" get mixed up, which isn't what I want; and if I create a separate testing job for each branch that the build job understands, it quickly becomes unmanageable because of combinatorial explosion (say 6 branches and 6 different types of testing mean I need 36 testing jobs, and then when I want to make a change, say to save more builds, I need to update all 36 by hand).
I've been looking at the Job Generator plugin and ez-templates in the hope that I might be able to create and manage just the templates for the testing jobs and have the actual jobs be created/updated on the fly. I can't shake the feeling that this is so hard because my basic model is wrong. Is it just that separating the building and testing jobs like this is not recommended, or is there some other method to allow filtering the test results of a job based on build parameters that I haven't found yet?
I would define a set of simple use cases:
Check in on development branch triggers build
Successful build triggers UpdateBuildPage
Successful build of development triggers IntegrationTest
Successful IntegrationTest triggers LoadTest
Successful IntegrationTest triggers UpdateTestPage
Successful LoadTest triggers UpdateTestPage
etc.
So in particular I wouldn't look at all the Jenkins job results for an overview, but would create a web page or something like that.
I wouldn't expect the full matrix of builds/tests; the combinations that are actually used will become clear from the use cases.

Prefill new test cases in Selenium IDE

I'm using Selenium IDE 2.3.0 to record actions in my web application and create tests.
Before every test I have to clear all cookies, load the main page, log in with a specific user and submit the login form. These ~10 commands are fixed and every test case needs them, but I don't want to record them or copy them from other tests every time.
Is there a way to configure how "empty" test cases are created?
I know I could create a prepare.html file or something and prepend it to a test suite. But I need to be able to run either a single test or all tests at once, so every test case must include the commands.
OK, I finally came up with a solution that suits me. I wrote custom commands setUpTest and tearDownTest, so I only have to add those two manually to each test.
I used this post to get started:
Adding custom commands to Selenium IDE
Selenium supports object-oriented design. You should create a class that takes those commands you are referring to and always executes them; in each of the tests you are executing, you can then make a call to that class and the supporting method and execute it.
A great resource for doing this is here.
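To illustrate the object-oriented suggestion in Java (note this is WebDriver-style code rather than Selenium IDE, and the locators and names are placeholders): the fixed ~10 login steps live in one class that every test calls once.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Illustrative sketch: the common setup/teardown steps collected in one class.
public class TestSession {

    private final WebDriver driver;

    public TestSession(WebDriver driver) {
        this.driver = driver;
    }

    // Equivalent of the custom setUpTest command: clean state, open the app, log in.
    public void setUpTest(String baseUrl, String user, String password) {
        driver.manage().deleteAllCookies();
        driver.get(baseUrl);
        driver.findElement(By.name("username")).sendKeys(user);     // locators are placeholders
        driver.findElement(By.name("password")).sendKeys(password);
        driver.findElement(By.name("login")).submit();
    }

    // Equivalent of the custom tearDownTest command.
    public void tearDownTest() {
        driver.manage().deleteAllCookies();
    }
}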

Cross browsers testing - how to ensure uniqueness of test data?

My team is new to automation and plans to automate our cross-browser testing.
The thing we are not sure about is how to make sure the test data is unique for each browser's testing. The test data needs to be unique due to some business rules.
I have a few options in mind:
1- Run the tests in sequential order and restore the database after each test completes. The testing report for each test will be kept individually; if any error occurs, we have to reproduce it ourselves (the data has been reset).
2- Run the tests concurrently/sequentially. Add a prefix to each piece of test data to uniquely identify it for each browser's testing, e.g. FF_User1, IE_User1.
3- Run the tests concurrently/sequentially. Set up several test nodes connected to different databases; each test node runs the tests using a different browser and stores its test data in a different database.
Can anyone enlighten me on which is the best approach to use, or offer any other suggestion?
Do you need to run every test in all browsers? Otherwise, mix and match - pick which tests you want to run in which browser. You can organize your test data like in option 2 above.
Depending on which automation tool you're using, the data used during execution can be organized as iterations:
Browser | Username | VerifyText (example)
FF      | FF_User1 | User FF_User1 successfully logged in
IE      | IE_User1 | User IE_User1 successfully logged in
If you want to randomly pick any data that works for a test and only want to ensure that the browsers use their own data set, then separate the tables/data sources by browser type. The automation tool should have an if clause you can use to then select which data set gets picked for that test.
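A minimal Java sketch of that "if clause" / per-browser data-set idea (the data values mirror the table above; the class and method names are made up):

import java.util.HashMap;
import java.util.Map;

// Picks the data set for a test based on the browser it runs in,
// so each browser keeps its own unique data.
public class BrowserData {

    private static final Map<String, String> USERS = new HashMap<>();
    static {
        USERS.put("FF", "FF_User1");   // Firefox data set
        USERS.put("IE", "IE_User1");   // Internet Explorer data set
    }

    public static String userFor(String browser) {
        String user = USERS.get(browser);
        if (user == null) {
            throw new IllegalArgumentException("No data set defined for browser: " + browser);
        }
        return user;
    }

    public static String expectedLoginMessage(String browser) {
        return "User " + userFor(browser) + " successfully logged in";
    }
}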