I created multiple test files like this:
0_setup.e2e.js
1_otherTests.e2e.js
2_loggedInTests.e2e.js
But they were executed out of order. The actual execution order was:
0_setup.e2e.js
2_loggedInTests.e2e.js
1_otherTests.e2e.js
The answer is probably not. A more elaborate answer is that it is test-runner dependent. But you should not be creating any dependencies or order requirements between test suites; that's a bad practice in general.
I am working on the automation of test cases with the Cucumber JVM 1.2.2 framework and Selenium. Each test case corresponds to a Feature file.
I have multiple Features files organized in folders, and I need to be able to define the order of execution through a text file. For example, the text file can be like this:
file3.feature
file1.feature
file5.feature
file2.feature
file4.feature
At the moment, execution is triggered through a tag placed in the feature files to be executed.
First of all, the version you are using is very old. We are currently on 4.7.4 (see: https://github.com/cucumber/cucumber-jvm).
Note that the group-id changed from "info.cukes" to "io.cucumber" with v2.
Second of all, it is recommended to have your tests be independent of each other.
From the Cucumber FAQ:
"Each scenario should be independent; you should be able to run them in any order or in parallel without one scenario interfering with another.
Each scenario should test exactly one thing so that when it fails, it fails for a clear reason. This means you wouldn’t reuse one scenario inside another scenario.
If your scenarios use the same or similar steps, or perform similar actions on your system, you can extract helper methods to do those things."
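To make the "extract helper methods" advice concrete, here is a minimal sketch in Java (Cucumber-JVM 4.x package names; the step texts and the login logic are made up for illustration):

import io.cucumber.java.en.Given;
import io.cucumber.java.en.When;

public class AccountSteps {

    @Given("a logged-in user")
    public void aLoggedInUser() {
        loginAs("standard.user@example.com", "secret");
    }

    @When("an admin opens the report")
    public void anAdminOpensTheReport() {
        loginAs("admin@example.com", "secret");
        // ... navigate to the report ...
    }

    // Shared helper: both steps reuse the same login logic instead of one
    // scenario calling or depending on another scenario.
    private void loginAs(String email, String password) {
        // driver.get(loginUrl), fill in the form, submit, etc.
    }
}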
Cucumber runs feature files in alphabetical order.
Rename your feature files so that the alphabetical order matches the order you want, for example:
Afile3.feature
Bfile1.feature
Cfile5.feature
Dfile2.feature
Efile4.feature
Below is the structure of how my feature files are divided. I have created folders based on the functionalities and then added the scenarios inside them.
Now, I have to tag a few test cases among them as smoke test cases and get them executed.
The point is that I need them to run in a specific order, e.g.:
Add Asset
Run Test
Schedule Test
Delete Asset
Since I will add something first, then work on it, and delete it at the end.
I know by default Cucumber executes test cases alphabetically but that would not solve my problem.
How can I achieve that?
I am using Java
Cucumber features/scenarios are run in alphabetical order by feature file name.
However, if you explicitly specify features, they should run in the order declared. For example:
@CucumberOptions(features={"automatedTestingServices.feature", "smoketest.feature"})
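For reference, a complete runner using that option would look roughly like this (JUnit 4 with Cucumber-JVM 4.x package names; the feature paths depend on where the files live in your project):

import org.junit.runner.RunWith;
import io.cucumber.junit.Cucumber;
import io.cucumber.junit.CucumberOptions;

@RunWith(Cucumber.class)
@CucumberOptions(features = {
        "src/test/resources/features/automatedTestingServices.feature",
        "src/test/resources/features/smoketest.feature"  // listed second, so intended to run second
})
public class OrderedSmokeTestRunner {
}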
You can achieve this by setting a priority or dependency, which is supported in QAF, a TestNG implementation for BDD. Setting a priority on the scenario should do it. For example, with QAF a scenario in DeleteAssets.feature may look like the one below:
@priority:100
#or you can set dependencies like below
#@dependsOnGroups:['create','schedule']
@delete @otherGroup
Scenario: Delete existing Asset
Given ...
Note: Gherkin syntax doesn't support meta-data, so you need to use either QAF BDD or BDD2 syntax and the appropriate factory to run the tests.
Yes, you can set a priority in Cucumber, but not for whole scenarios; you can do it for the methods declared in the step definition file. Put an "Order" keyword over the method in the step definition file, and the methods will run as prioritized by that order.
NUnit (and the like) has method attributes which allow tests to be run multiple times with different arrange values. Is something similar possible with SpecFlow?
What I am aiming for is a way to run the same scenario tests in a feature file with as many browser drivers as I can, in one test run.
You can use scenario outlines. In the Examples of a scenario outline you can mention the driver name, and your code logic should take action according to that driver. Please see more details about scenario outlines below:
https://github.com/cucumber/cucumber/wiki/Scenario-outlines
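Since most of this page is Cucumber-JVM, here is the same idea sketched in Java rather than SpecFlow; a SpecFlow [Binding] class would look analogous. The step text, the URL, and the browser names are assumptions, and Selenium WebDriver is assumed to be on the classpath with the browser drivers installed:

import io.cucumber.java.en.When;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class BrowserSteps {

    private WebDriver driver;

    @When("I open the home page in {string}")
    public void iOpenTheHomePageIn(String browser) {
        driver = createDriver(browser);   // value comes from the Examples table
        driver.get("https://example.org");
    }

    // Pick the driver based on the <browser> column of the scenario outline.
    private WebDriver createDriver(String browser) {
        switch (browser.toLowerCase()) {
            case "firefox":
                return new FirefoxDriver();
            case "chrome":
            default:
                return new ChromeDriver();
        }
    }
}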
Examples are one solution, but in your case a little cumbersome, as you have to specify them for every scenario.
In your case, please have a look at the targets feature of the SpecFlow+ Runner. With that you can "multiply" your scenarios for different configurations. If you put the web driver that should be used into the configuration, you can test as many web drivers as you want.
Have a look at this example: https://github.com/techtalk/SpecFlow.Plus.Examples/tree/master/SeleniumWebTest
Full Disclosure: I am one of the developers of SpecFlow & SpecFlow+
Use scenario outlines and this tool if you want to use browsers as tags:
https://github.com/unickq/Unickq.SeleniumHelper
I am struggling a bit with how to write tests that reproduce an issue that has not yet been fixed.
Should one write the test with the (currently) wrong expectations, so that once the bug is fixed the developer sees the failure and adjusts the expectations? Or should one write the test with the correct expectations and disable it, and enable it again once the bug is fixed?
I would prefer to define the wrong expectations and add the correct ones in comments; then, as soon as I fix the issue, I immediately get a notification because the test fails. If I disable the test, I won't see it failing and it will probably stay disabled until someone rediscovers it.
Are there any other ways of doing this?
Thanks for your comments.
Martin
Ideally you would write a test that reproduces the bug and then fix said bug.
If for whatever reason that is not currently an option, I would say that your approach of having the wrong expectations is better than having an ignored test, assuming that you use a clear variable name / method name / comment to indicate that the test is more of a placeholder than the desired outcome.
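As an illustration, such a placeholder test might pin the current, wrong behaviour and keep the desired expectation in a comment; the issue number, values, and the roundPrice stand-in are invented for this sketch:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class RoundingTest {

    @Test
    public void knownBug_truncatesInsteadOfRoundingHalfUp_seeIssue123() {
        int result = roundPrice(2.5);
        // Desired behaviour once the bug is fixed: assertEquals(3, result);
        assertEquals(2, result);  // documents the current, buggy behaviour
    }

    // Stand-in for the production code under test.
    private int roundPrice(double value) {
        return (int) value;  // truncates, which is the bug being documented
    }
}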
One thing that I've done is write a test that is a "time bomb" reminder. I pick a date that is a few weeks/months out from now that I expect to be able to get back to it or have it fixed by. If I end up having to push the date out 2 or 3 times I end up deleting the test because it must not be that important.
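A rough sketch of such a "time bomb" reminder in JUnit 4 (the date and message are placeholders, and note the objection to this pattern in a later answer):

import static org.junit.Assert.fail;

import java.time.LocalDate;

import org.junit.Test;

public class ReminderTest {

    @Test
    public void revisitOpenBugByChosenDate() {
        LocalDate deadline = LocalDate.of(2024, 6, 1);  // arbitrary example date
        if (LocalDate.now().isAfter(deadline)) {
            fail("Reminder: revisit the open bug (see the tracker) or push this date out.");
        }
    }
}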
As @Jarred said, the best way is to write a test that expresses the correct expectations, check that it fails, then fix the production code and see the test pass.
If that's not an option, then remember that tests are not only there to test but also to document. So write a test that documents how your program actually works; if necessary, add a comment to the test. And don't write tests that are ignored - it's pointless. In the future you may refactor your code many times; you could accidentally fix this test or introduce even more errors in this area. Writing tests that are intended to be ignored long-term is just a waste of time.
Don't be afraid that you will forget about that particular bug/test; just create a ticket in your issue tracking system - that's what it's made for.
If you use a testing framework that supports groups, you can add all those tests to a group so that you can instantly exclude them if needed, as in the sketch below.
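A sketch of that "groups" idea with JUnit 4 categories; the KnownIssue marker interface is just an assumed convention, not something the framework prescribes:

import org.junit.Test;
import org.junit.experimental.categories.Category;

public class CommentLikingTest {

    /** Marker interface used as a JUnit category for tests covering open bugs. */
    public interface KnownIssue {}

    @Test
    @Category(KnownIssue.class)
    public void likingACommentIncrementsTheCounter() {
        // correct expectations for the still-open bug go here
    }
}

Such tests can then be excluded in one place, for example via Maven Surefire's excludedGroups setting pointing at the fully qualified name of the category interface.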
Also, I really don't like the concept of 'time bomb tests'. Your build MUST be reproducible - that's a fundamental assumption of release management, continuous integration, the ability to hand your code to another team, etc. Tests are not meant to track and remind you about issues; that's the job of the issue tracking system. Seriously, don't do it.
Actually, I thought about this again. We are using JUnit, and it supports defining expectations on exceptions via @Test(expected=Exception.class).
So what one can do is write the test with the desired expectations and annotate it with @Test(expected=AssertionError.class). Once the bug is fixed, the test starts failing and the developer has to remove the expectation.
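Spelled out, the trick looks like this; countComments is a stand-in for the real call, and its simulated return value just mimics the bug:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class KnownBugTest {

    // The assertion states the CORRECT expectation. While the bug is present it
    // throws an AssertionError, which this test declares as expected, so the
    // build stays green. Once the bug is fixed, no AssertionError is thrown,
    // JUnit fails the test, and the developer removes the "expected" attribute.
    @Test(expected = AssertionError.class)
    public void commentCountShouldIncludeReplies_bugStillOpen() {
        int count = countComments();
        assertEquals(3, count);
    }

    // Stand-in for the production code; returns the buggy value for this sketch.
    private int countComments() {
        return 2;
    }
}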
I am writing step definitions for scenarios described in Cucumber.
Say I am testing a scenario of liking a comment on a post.
Should I make sure in the steps themselves that there is a post with a comment in the first place?
Or should my test catch such a situation and log a message?
I am using Cucumber-JVM as of now.
Well, typically a good test technique is to test positives. It's a good idea to write your test against the future feature that will come out. In your case, write your Selenium test to make sure that you are able to like a comment on a post.
Ideally, when that feature comes out, the test will already be written, and will go from a failing state to a passing state.
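To make that concrete for the liking-a-comment example, the Given step can create the post and the comment itself, so the scenario never relies on data left behind by another scenario. This is a minimal, self-contained sketch; in a real suite the fields would be replaced by calls to your application's API or UI:

import static org.junit.Assert.assertEquals;

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;

public class LikeCommentSteps {

    private String post;
    private String comment;
    private int likes;

    @Given("a post with a comment exists")
    public void aPostWithACommentExists() {
        post = "hello world";    // create the post as part of this scenario's setup
        comment = "nice post";   // and the comment the scenario needs
    }

    @When("I like the comment")
    public void iLikeTheComment() {
        likes++;                 // would trigger the real "like" action in your app
    }

    @Then("the comment has {int} like(s)")
    public void theCommentHasLikes(int expected) {
        assertEquals(expected, likes);
    }
}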
Even more ideally, it's a good idea to have a development environment separate from your production one; that way you can run your tests against the development environment and then just point them at production once that feature is released.
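One simple way to do that switch, assuming you are free to choose the mechanism, is to read the base URL from a system property with a development default; the property name and URLs here are placeholders:

public final class Environment {

    private Environment() {}

    // Run with -DbaseUrl=https://www.example.com to point the same tests at production.
    public static String baseUrl() {
        return System.getProperty("baseUrl", "https://dev.example.com");
    }
}

Step definitions would then build their URLs from Environment.baseUrl() instead of hard-coding the host.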