I have been developing Selenium/Java-based test automation at my company as a beginner. I am using the Page Object Model and the TestNG framework. So far I have written test scripts from the perspective of one user (role: admin). Now I have to test the application functionality for different user types, e.g. technician, support team, service assistance, and so on.
Some users have the same permissions in the application as admin users. E.g. test case: create an invoice (service assistance and service advisors also have permission to create invoices, just like the admin). I have already created the test script for invoice creation, but I don't want to duplicate the same script for each user role. So I would like some suggestions on how to manage this kind of situation. If someone can provide documentation or an example project, that would be a great help. I also want to know how to manage these kinds of test cases in different test suites.
Some solutions I have thought of:
Adding a user role parameter to each test case/class
Using TestNG groups
You might do it in the following way:
1. Add a role parameter to your test configuration where you authorize the user, e.g. in a @BeforeTest method. Annotate your method with @Parameters, and also set the value in testng.xml (see the sketch after the tutorial link below).
This tutorial may help:
https://www.toolsqa.com/testng/testng-parameters/
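For illustration, a minimal sketch of what that could look like (the class name and the login helper are made up):

import org.testng.annotations.BeforeTest;
import org.testng.annotations.Parameters;

public class BaseRoleTest {

    protected String roleName;

    // TestNG injects the "roleName" parameter defined in testng.xml
    @BeforeTest
    @Parameters("roleName")
    public void logInAs(String roleName) {
        this.roleName = roleName;
        // look up the credentials for this role and log in through your
        // login page object, e.g. new LoginPage(driver).loginAs(roleName);
    }
}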
2. Create several TestNG xml suites. For a big project, I suggest creating a separate suite.xml per user role with all the test classes and packages you need. Then you'll be able to create a main TestNG xml that refers to the other xmls with suite-file, as shown below.
See https://stackoverflow.com/a/31851469/5226491
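A master suite file could then look something like this (file names are just placeholders):

<suite name="all-roles-suite">
    <suite-files>
        <suite-file path="roleA-suite.xml"/>
        <suite-file path="roleB-suite.xml"/>
        <suite-file path="roleC-suite.xml"/>
    </suite-files>
</suite>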
You can also organize everything in a single xml; it depends on what you prefer.
How to group tests
I suggest not mixing test methods for multiple roles within a single class; try to split them into separate classes. This keeps the test structure simple.
The main idea: define the role before the test class runs, and do not switch it while the class run is in progress.
Let's imagine you have roles A, B, C and test methods method1Test, ..., method7Test, where:
method1Test, method2Test, method3Test work for all roles
method4Test, method5Test work for roles A and B
method6Test, method7Test work for roles B and C
So the suggestion is to split these methods into 3 classes:
Class1Test with method1Test, method2Test, method3Test
Class2Test with method4Test, method5Test
Class3Test with method6Test, method7Test
Eventually, you'll be able to define which classes are used per role, and also to reuse some classes for multiple roles.
Example with a single TestNG xml file:
<suite name="test-suite">
    <test name="roleA-suite">
        <parameter name="roleName" value="roleA"/>
        <classes>
            <class name="Class1Test"/>
            <class name="Class2Test"/>
        </classes>
    </test>
    <test name="roleB-suite">
        <parameter name="roleName" value="roleB"/>
        <classes>
            <class name="Class1Test"/>
            <class name="Class2Test"/>
            <class name="Class3Test"/>
        </classes>
    </test>
    <test name="roleC-suite">
        <parameter name="roleName" value="roleC"/>
        <classes>
            <class name="Class1Test"/>
            <class name="Class3Test"/>
        </classes>
    </test>
</suite>
Using Groups
Groups are also an option for this task, but I personally don't like using them; IMHO they introduce a lot of complexity, and it may not be easy to combine them with other TestNG features, e.g. setting a priority on tests or the depends-on feature. Maybe other users can share positive experiences with groups.
Anyway, you might try the groups approach (a rough sketch follows the tutorial link below) and choose whichever you like best.
Look at this tutorial:
https://www.lambdatest.com/blog/grouping-test-cases-in-testng/
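If you do decide to try groups, a rough sketch of the idea (group and class names are made up): tag each test method with the roles it applies to, then include one group per <test> in testng.xml.

import org.testng.annotations.Test;

public class InvoiceTest {

    // runs for every role
    @Test(groups = {"roleA", "roleB", "roleC"})
    public void createInvoiceTest() { /* ... */ }

    // runs only for roles A and B
    @Test(groups = {"roleA", "roleB"})
    public void approveInvoiceTest() { /* ... */ }
}

And the matching testng.xml fragment:

<test name="roleA-tests">
    <groups>
        <run>
            <include name="roleA"/>
        </run>
    </groups>
    <classes>
        <class name="InvoiceTest"/>
    </classes>
</test>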
Related
I have a test suite (4 classes) which has a common login for all classes. When I run the suite with the login functionality only in the first class, and the login code commented out in the other 3 classes (redirecting the url), only the first class runs and the other 3 classes fail!
<classes>
    <class name="testcases.TestClass1"></class>
    <class name="testcases.TestClass2"></class>
    <class name="testcases.TestClass3"></class>
    <class name="testcases.TestClass4"></class>
</classes>
Please help with this.
You can try using the @BeforeSuite annotation available in TestNG.
@BeforeSuite: the annotated method will be run before all tests in this suite have run.
Create the driver instance as part of the before-suite method and access it across your test classes, as sketched below.
Refer to TestNG documentation for more information on the annotation.
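A minimal sketch of that idea (assuming ChromeDriver and a shared static field; adapt it to your setup). Each of your four test classes extends this base class and reuses the logged-in session instead of logging in again:

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.annotations.AfterSuite;
import org.testng.annotations.BeforeSuite;

public class BaseTest {

    // shared by every test class in the suite
    protected static WebDriver driver;

    @BeforeSuite
    public void setUpSuite() {
        driver = new ChromeDriver();
        // navigate to the application and perform the login once here
    }

    @AfterSuite
    public void tearDownSuite() {
        if (driver != null) {
            driver.quit();
        }
    }
}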
In xUnit, is there a way to get access to the current trait filter during test execution? For example, during our build process we set up a task to run our tests with a given trait (i.e. browser type). In our test setup code, I would like to know if the requested test run has received a trait filter, so I can use that to determine which Selenium WebDriver to use for that test run.
Thanks in advance for your assistance.
I am working on a project in which we keep one wiki platform in sync with the content of another. The way we do this is that a document edit on 'Wiki A' kicks off a data flow pipeline that transforms data from the format of 'Wiki A' to the format of 'Wiki B' and sends this data to 'Wiki B' for import.
I have 3 components:
'Wiki A', which is in PHP
Translation Service, which is a Ruby on Rails service
'Wiki B', which is in Java
I want to build an automated end-to-end testing framework. The main need for it is that my unit tests for each product cannot test the communication between the products and do not cover the whole end-to-end data flow. Ideally the framework should be able to:
Edit a page on 'Wiki A'
Test that it kicks off the data flow
Test that the TranslationService transformed the data
Test that 'Wiki B' imports the transformed data
Based on initial research, my options are recording tools such as Selenium. Selenium can handle the multiple products I want to test, but from what I have seen the resulting tests are fragile.
The other option is a development testing tool like Cucumber/Capybara, with which I can write robust tests, but I am not sure how it works in a multi-product architecture where each product is written in a different language.
Am I looking at it in the correct way? Am I too ambitious to attempt a single end-to-end testing framework spanning multiple products?
It is possible to write end-to-end tests spanning multiple products written in different languages as long as the products provide some kind of proper interface. Ideally this is a messaging interface (e.g. HTTP REST). I would suggest using the Wiki interface directly instead of accessing the UI through the browser.
I assume that 'Wiki A' provides such an interface for adding and changing content. Your integration test first uses this interface to change some data and trigger the whole process. Then you need to make sure that the content change has been processed, which you can verify on 'Wiki B'. Ideally 'Wiki B' also offers some kind of interface to get content. So your test should just use the messaging interfaces of 'Wiki A' and 'Wiki B':
1) Trigger 'Wiki A' change
2) Verify content on 'Wiki B'
Maybe you need to wait some time between steps 1 and 2 for the translation and import. You can write this kind of integration test fully automated with test frameworks like Citrus (http://citrusframework.org).
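To make the suggested flow concrete, here is a minimal sketch using only the JDK's HTTP client; the endpoints, payload, and page id are invented, and a real test would use your framework's assertions instead of main():

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class WikiSyncTest {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // 1) Trigger a content change on 'Wiki A' through its REST interface
        HttpRequest edit = HttpRequest.newBuilder(
                URI.create("https://wiki-a.example.com/api/pages/42"))
            .header("Content-Type", "application/json")
            .PUT(HttpRequest.BodyPublishers.ofString("{\"content\":\"updated text\"}"))
            .build();
        client.send(edit, HttpResponse.BodyHandlers.ofString());

        // 2) Poll 'Wiki B' until the translated content arrives, or give up
        HttpRequest check = HttpRequest.newBuilder(
                URI.create("https://wiki-b.example.com/api/pages/42")).GET().build();
        for (int i = 0; i < 30; i++) {
            String body = client.send(check, HttpResponse.BodyHandlers.ofString()).body();
            if (body.contains("updated text")) {
                System.out.println("sync verified");
                return;
            }
            Thread.sleep(2000); // allow time for translation and import
        }
        throw new AssertionError("'Wiki B' never received the translated content");
    }
}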
I have some test scenarios and cases written in SpecFlow/Selenium in Visual Studio, using MSTest. I just want to associate them with Microsoft Test Manager, so that a test case written there is linked to an automated test.
Is that possible? How?
More data: the tests were created using Scenario Outline with several lines of examples.
You can associate test cases with a work item in TFS/MTM, but we found it too cumbersome to do: it is a manual action in MTM that references the TestMethod by name. And because the TestMethod is generated by SpecFlow by combining the title of the Scenario Outline and the first column of your Examples table, it is difficult to maintain:
Whenever a Scenario Outline title is changed, or the term in the first column of the Examples table is changed, you have to re-couple the TestMethods to the work items
When you add new Examples or Scenarios to your feature, you have to remember to link them to the work item, one by one
Finding the correct TestMethod in the dll is nearly impossible once you approach a thousand scenarios.
What we did instead was use the WorkItem attribute in the feature to connect (parts of) the feature to a work item with a tag like @workitem:42. This is a little-noticed feature in SpecFlow:
MsTest: Support for MSTest's [Owner] and [WorkItem] attributes with tags like @owner:foo @workitem:123 (Issue 162, Pull 161)
and it creates a WorkItemAttribute on the method generated for the tagged Scenario (Outline) or Feature.
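For example, the tags sit right above the feature or scenario in the .feature file; a made-up illustration:

@workitem:42
Feature: Create invoice

  Scenario Outline: Create an invoice as <role>
    Given I am logged in as <role>
    When I create an invoice
    Then the invoice is saved

    Examples:
      | role              |
      | service advisor   |
      | service assistant |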
Then we imported all test cases into MTM with the Test Case Management tool and ran a custom-made tool (making use of the TeamFoundation namespace and the TestManagement and WorkItemTracking client libraries) that connected each imported test case to the correct work item.
Whenever a test ran, we could see the results in MTM, but also from the perspective of the connected work item.
Does anybody know of an integration between Rally ALM and robotframework?
I'm looking for something that would log test results in robotframework back to Rally test cases.
With the pyral Rally module for Python, it seems like it could be fairly straightforward.
As far as I can tell there is nothing out there to do this, but it's pretty easy to build: a simple integration that logs Robot Framework results to Rally test case results needs only about 50 lines of Python.
In my case, I have it log results for any test whose name starts with a Rally test case id (e.g. "TCXXXX My Test Name").
The trick is to use the Robot Framework listener API (see: Elapsed time and result of a test in variables) and pyral, the Rally Python API. Key for my needs was defining an "end_test" listener method:
import re  # at module level

# inside the listener class:
def end_test(self, name, attrs):
    # test names look like "TCXXXX My Test Name"
    match = re.search(r'^(TC\d+)\s*(.*)', name)
    if match:
        tcId = match.group(1)
        testName = match.group(2)
        tcr = self.__logTestCaseResultToRally(tcId, testName, attrs)
    self.__cleanTestCaseState()
In Robot Framework, I include this listener file, which also has some additional methods to add attachments and other information like notes to a test result (the listener file doubles as a library, so these methods can be called directly as keywords in your Robot Framework file):
# inside the listener class (os is imported at module level):
def addAttachment(self, attachment):
    # remember readable files so end_test() can attach them to the Rally result
    if os.path.isfile(attachment) and os.access(attachment, os.R_OK):
        self.attachments.append(attachment)
This method simply saves the attachment path in the listener object so that when end_test() is called, it has access to the file names to attach to the Rally test case. __cleanTestCaseState() zeroes these out so they are cleared before the next test case starts.
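For completeness: a listener with this (self, name, attrs) signature corresponds to listener API version 2, so the file should declare ROBOT_LISTENER_API_VERSION = 2, and it is registered on the command line (the file name here is made up):

robot --listener RallyListener.py tests/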
Actually, I've never used Rally!
But in my opinion, with Robot Framework, I like using TestLink for the test case management system and Jenkins for the CI system :)
You can search the internet for installation instructions.
Hope it's useful :)