I have many test methods in my project, and I want to run the tests in a different order each time.
For example, if I have 3 test methods, I want to run 6 test passes, one per permutation:
i) 1 2 3
ii) 1 3 2
iii) 2 1 3
iv) 3 1 2
v) 2 3 1
vi) 3 2 1
How can I achieve this in Xcode?
Note: The tests depend on each other by design: test 1 creates setup (input, UI state, and the like) for test 2. That's why I need to keep a sequence.
Tests should not have any effect on state at all. And they should not depend on previous state either.
Running 1, 2, 3 should have the exact same results as running 3, 2, 1.
Before running, each test should set up the required conditions that it is testing.
After running, each test should tear down the system so that there is nothing hanging around for the next test.
So, in answer to your question: I don't know if it's possible to specify an order, but you really shouldn't care what order they run in. If you do, that's a sign that your tests are not independent of each other.
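To illustrate, here is the pattern as a minimal sketch in Python's unittest rather than XCTest (the class and test names are made up): because setUp and tearDown rebuild the state around every test, all 3! = 6 orderings produce the same results.

import unittest

class OrderIndependentTests(unittest.TestCase):
    def setUp(self):
        # build everything this test needs from scratch;
        # never rely on leftovers from an earlier test
        self.log = []

    def tearDown(self):
        # clean up so nothing hangs around for the next test
        self.log.clear()

    def test_append_entry(self):
        self.log.append("entry")
        self.assertEqual(self.log, ["entry"])

    def test_starts_empty(self):
        # passes in any order because setUp rebuilt the state
        self.assertEqual(self.log, [])

if __name__ == "__main__":
    unittest.main()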
When running all of my feature files through Bamboo/Maven, using the "clean test" command, how do I force the scenarios inside each feature file to run in order while still running on multiple threads?
For example, if I have 100 feature files, with 20 scenarios in each feature file, when I run them with 5 threads, the order of the feature files doesn't matter; feature 10 can run before feature 15, but the scenarios inside each feature have to run in sequential order.
I need to run feature 10 scenario 1, then feature 10 scenario 2, and so on.
So with 5 threads:
thread 1 can run feature 1
thread 2 can run feature 10
thread 3 can run feature 3
thread 4 can run feature 2
thread 5 can run feature 4
But I need each scenario 1 through 20, to execute in order.
So with 5 threads:
thread 1: feature 1, scenario 1, then scenario 2, then scenario 3, etc.
thread 2: feature 10, scenario 1, then scenario 2, then scenario 3, etc.
thread 3: feature 3, scenario 1, then scenario 2, then scenario 3, etc.
thread 4: feature 2, scenario 1, then scenario 2, then scenario 3, etc.
thread 5: feature 4, scenario 1, then scenario 2, then scenario 3, etc.
Is @parallel=false the answer? Do I really need to add that to the top of every single feature file? Like I said, I could have 100 feature files in my repository, maybe more.
Or do I have to add @parallel=false on the command line? If so, would it be like the other options, "-Dparallel=false"?
If your Scenarios are written so that they depend on each other, this is a bad practice. Please read https://stackoverflow.com/a/46080568/143475 very carefully.
So yes, Karate does not support a "global" switch to enable the behavior you describe, and one of the reasons is to discourage bad practices.
You will have to add @parallel=false to all features. Even this may not have the desired effect in the 1.0 version, because of some behavior changes: https://github.com/intuit/karate/wiki/1.0-upgrade-guide
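For reference, the tag goes at the top of each affected feature file, above the Feature: line; here is a minimal sketch (the feature and scenario names are placeholders):

@parallel=false
Feature: feature 10

Scenario: scenario 1
* print 'runs first, on this thread'

Scenario: scenario 2
* print 'runs second, on the same thread'

With the tag in place, the scenarios of that feature run sequentially on one thread, while other features can still run in parallel.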
Scenario:
There are 5 links on the home page:
Link 1
Link 2
Link 3
Link 4
Link 5
Each of the above links is a separate test case, so there are a total of 5 test cases.
Not all the links are present on all the sites; it depends on the requirements.
So I need to write a Robot Framework test case which works dynamically for all the sites: one site may have only 3 links while another has all 5. So it's like SKIPPING a particular test case if that link is not present.
*** Keywords ***
Go to Manage Client Reports
    Click Link    link:Manage Client Reports
Can anyone help?
In the upcoming Robot Framework release 4.0, a new test status, skipped, will be introduced. Here is the release status, as shown on the project's milestone page:
Past due by 27 days, 87% complete
Major release concentrating on adding the skip status (#3622), IF/ELSE
(#3074) and enhancing the listener API (#3296 and #3538). Last major
release to support Python 2.
So it could be ready any time now.
The relevant issue is New SKIP status (#3622); it adds Skip and Skip If keywords, among others. Quoting the issue:
How to skip tests
There are going to be multiple ways:
A special exception that library keywords can use to mark a single test to be skipped. See also #3685.
BuiltIn keyword Skip (or Skip Test and Skip Task) that utilizes the aforementioned exception.
BuiltIn keyword Skip If to skip based on a condition (see the sketch after the quoted issue).
When the skipping exception is used in a suite setup, all tests in the suite are skipped.
Command line option --skip to unconditionally skip tests based on tags. Similar to --exclude but skipped tests are shown in logs/reports
with a skip status and not dropped from execution altogether.
Command line option --skiponfailure to skip tests if they fail. Similar effect to the current --noncritical.
What about criticality
As already discussed in #2087, the skip status is a very similar feature to Robot's current criticality concept. There are many people who would like to have both, but I don't think that's a good idea and believe it's better to remove criticality when skipping is added. Separate issue #3624 covers removing criticality and explains this in more detail.
Colors
Skip status needs a specific color to match current pass (green) and
fail (red). Yellow feels like a good candidate with a traffic light
metaphor, but I'm open for other ideas and we could possibly change
other colors as well. Probably should make colors configurable too --
currently only report background colors support it.
Report background color mentioned above needs some thinking as well.
Currently it's either green or red, but with the added skip status we
could use also yellow or whatever skip color we decide to use.
Different scenarios where different colors could be used are listed
below (assuming green/yellow/red scheme):
All tests pass. This is naturally green.
Any test fails. This is naturally red.
Any test is skipped (no failures). This probably should be green but could also be yellow.
All tests skipped. This could be yellow. Could also be green but that's a bit odd if all tests are yellow.
Depending on your deadlines, you might not be able to wait for this release; nevertheless, it is good to know about.
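For example, once 4.0 is available, a conditional skip for your link scenario could look roughly like this sketch (it assumes SeleniumLibrary's Page Should Contain Link keyword and the Skip If keyword described in #3622):

*** Test Cases ***
Go To Manage Client Reports
    ${present}=    Run Keyword And Return Status
    ...    Page Should Contain Link    Manage Client Reports
    Skip If    not ${present}    msg=Link is not present on this site
    Click Link    link:Manage Client Reports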
There is an advanced solution where you can generate your test cases at run time. To do so, you have to implement a small library that also acts as a listener. This way it can have a start_suite method that will be invoked with the suite(s) as Python object(s) (robot.running.model.TestSuite), and you can use such an object along with Robot Framework's API to create new test cases. The idea below was inspired by and is based on this blog post: Dynamically create test cases with Robot Framework.
DynamicTestLibrary.py:
from robot.running.model import TestSuite

class DynamicTestLibrary(object):
    ROBOT_LISTENER_API_VERSION = 3
    ROBOT_LIBRARY_SCOPE = 'GLOBAL'
    ROBOT_LIBRARY_VERSION = 0.1

    def __init__(self):
        self.ROBOT_LIBRARY_LISTENER = self  # the library acts as its own listener
        self.top_suite = None

    def _start_suite(self, suite, result):
        self.top_suite = suite
        self.top_suite.tests.clear()  # remove placeholder test

    def add_test_case(self, keyword, *args):
        # add a test that runs the given keyword with the given arguments
        tc = self.top_suite.tests.create(name=keyword)
        tc.keywords.create(name=keyword, args=args)

# make the module importable as a library under its own name
globals()[__name__] = DynamicTestLibrary
UPDATE for Robot Framework 4.0
Due to the backward-incompatible changes made in the 4.0 release (the running and result models have changed), the add_test_case function should be changed as below if you are using version 4.0 or above.
def add_test_case(self, name, keyword, *args):
    tc = self.top_suite.tests.create(name=name)
    tc.body.create_keyword(name=keyword, args=args)
You can utilize this library in a suite setup, in which you check which links are present and add test cases for the ones that are available.
test.robot
*** Settings ***
Library        DynamicTestLibrary
Suite Setup    Check Links And Generate Test Cases

*** Variables ***
#@{LINKS}    Manage Clients                                                     # test input 1
@{LINKS}     Manage Clients    Manage Client Hardware                           # test input 2
#@{LINKS}    Manage Clients    Manage Client Hardware    Manage Client Reports  # test input 3

*** Test Cases ***
Placeholder
    [Documentation]    Placeholder test that will be removed during execution.
    No Operation

*** Keywords ***
Check Links And Generate Test Cases
    FOR    ${link}    IN    @{LINKS}
        DynamicTestLibrary.Add Test Case    Go to ${link}
    END

Go to Manage Client Reports
    Log Many    Click Link    link:Manage Client Reports

Go to Manage Client Hardware
    Log Many    Click Link    link:Manage Client Hardware

Go to Manage Clients
    Log Many    Click Link    link:Manage Clients
Go to ${link} resolves to the appropriate keyword name, which will be called in a test case with the same name. You can check with each example input list that the number of executed tests equals the length of the list.
Here is the output:
# robot --pythonpath . test.robot
==============================================================================
Test
==============================================================================
Go to Manage Clients | PASS |
------------------------------------------------------------------------------
Go to Manage Client Hardware | PASS |
------------------------------------------------------------------------------
Test | PASS |
2 critical tests, 2 passed, 0 failed
2 tests total, 2 passed, 0 failed
==============================================================================
My JUnit report is generated in a way that all the tests in the class form the total test cases. I want the report to contain the totals per test method. E.g. my class contains 3 test methods, and currently when the report is generated it shows the total as 16. However, I want an individual report per test method:
Test1 has 10 cases, so it should be 10 passed and 0 failed.
Test2 has 4 cases, so it should be 2 passed and 2 failed.
Test3 has 2 cases, so it should be 2 passed and 0 failed.
Is there any way to get the executed test case details from multiple test cycles at once?
Currently I have 3 Cycle IDs, but I am making 3 separate GET API calls, one per cycle:
https://<JIRA HOST>/rest/zapi/latest/execution?projectId=<Project ID>&versionId=<Version ID>&cycleId=<Cycle ID 1>
https://<JIRA HOST>/rest/zapi/latest/execution?projectId=<Project ID>&versionId=<Version ID>&cycleId=<Cycle ID 2>
https://<JIRA HOST>/rest/zapi/latest/execution?projectId=<Project ID>&versionId=<Version ID>&cycleId=<Cycle ID 3>
Is there any way to get all the details in one shot for Cycle IDs 1, 2, and 3?
Yes, you can get execution results for multiple test cycles using the Zephyr ExecutionSearchResource API and a ZQL query.
Use this:
https://<JIRA HOST>/rest/zapi/latest/zql/executeSearch?zqlQuery=project="ProjectName" AND fixVersion="VersionName" AND cycleName IN ("CycleName1", "CycleName2", "CycleName3")
The above URL also lets you query for multiple projects or versions.
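If you are calling this from a script, the request might look like this minimal Python sketch (the host and credentials are placeholders, and the "executions" field in the response is an assumption based on the ZAPI docs, so verify it against your instance):

import requests

JIRA_HOST = "https://jira.example.com"  # placeholder host
ZQL = ('project="ProjectName" AND fixVersion="VersionName" '
       'AND cycleName IN ("CycleName1", "CycleName2", "CycleName3")')

# requests URL-encodes the ZQL string for us
resp = requests.get(
    f"{JIRA_HOST}/rest/zapi/latest/zql/executeSearch",
    params={"zqlQuery": ZQL},
    auth=("user", "password"),  # assumption: basic auth is enabled
)
resp.raise_for_status()

# one response now covers all three cycles
for execution in resp.json().get("executions", []):
    print(execution)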
Reference: http://docs.getzephyr.apiary.io/#
Let's say I have a test case with some steps in it. Now, let's say that step 3 needs to be repeated after you complete steps 4 and 5 ... so that when you do step 6 you are in the right place.
Is it good practice to tell the tester to repeat a step? Or would it be better to copy and paste the repeated step into the step where you would need to repeat it?
I am hearing arguments that it is not industry standard to tell the tester to repeat steps and that one might not pass certain certifications if test cases are written in this manner.
Example:
Step 1: Click the View Event Log button; Expected Results: Event Log window appears
Step 2: Close the event log window (X) or OK; Expected Results: The Event Log window disappears
Step 3: Repeat Step 1; Expected Results: Expected Results from Step 1
Step 4: Click the Cancel button; Expected Results: The Event Log window closes and any changes (such as clearing the log) are not applied
Step 5: Repeat Step 1; Expected Results: Expected Results from Step 1
Step 6: Click the Clear button and hit apply; Expected Results: The log is cleared
...
Some people think that I should be copying-and-pasting what is in Step 1 each time I need to repeat that step rather than just simply saying that the tester should repeat the step. Any input as to industry standards, potential downfalls, etc ... would be greatly appreciated.
Test case design does not really follow an industry standard, but if you were trying to get certified, telling the tester to repeat a step is a no-no. I personally think that's crap: I see no problem asking a tester to repeat a step. As a believer in agile methodology, I prefer much simpler test cases, so a tester has more time to test scenarios rather than design test cases (or a developer has more time to develop if you are on a cross-functional team). If you're looking for more input from a larger testing community, try http://www.qaforums.com/
Test cases should be as independent as possible and should not verify two outcomes in a single test case. Test cases should not be designed in a way where the tester has to repeat any previous step; in that case, a new test case should be written, because it is a new path. The pro of this approach is that at the end of execution you'll have a clear picture of the test coverage and the pass/fail % of the requirements, because all the test cases are independent.