TestNG - run tests in order: impossible scenario?

I've tried many ways with no success and I'm starting to believe this is not achievable in TestNG, so I'd like to confirm with you.
I have a web service I'm testing and I need to run a few basic scenarios.
My current test methods, each with the @Test annotation (each needs to be runnable as a single test):
dbResetTest
clearCacheTest
openURLTest
loginTest
actionXTest
actionYTest
I also need to run these scenarios, consisting of the above tests run IN ORDER:
Test login feature (openURLTest -> dbResetTest -> clearCacheTest -> loginTest)
Test X after login (openURLTest -> dbResetTest -> clearCacheTest -> loginTest -> actionXTest)
Test Y after clearing cache (clearCacheTest -> actionYTest)
The issue is, if I make the tests from scenarios 1 & 2 dependent on others, I won't be able to run scenario 3, because clearCacheTest does not depend on any other test in that particular scenario. I've tried running those tests in order through the XML and by using dependencies, but with no success.
Of course I could make actionYTest call clearCacheTest directly, but then if clearCacheTest fails the report will show actionYTest as the failing test, which is what I'm trying to avoid.
I'm pretty sure now that what I need is not achievable in TestNG, but maybe I'm wrong...

I think you should change your tactics slightly. Instead of treating these (dbResetTest, etc.) as test classes, you should make them test methods and use dependsOnMethods programmatically (not from XML) instead of dependsOnGroups. Then you will be able to implement your required logic rather easily (every test is unique --> @Test annotation, every test is executed in a certain priority --> use the priority parameter). Scenarios 1, 2 and 3 then become your test classes. So here is how you do it:
public class LoginFeature {

    @Test(priority = 1)
    public void openURLTest() {
    }

    @Test(priority = 2, dependsOnMethods = "openURLTest")
    public void dbResetTest() {
    }

    @Test(priority = 3, dependsOnMethods = "dbResetTest")
    public void clearCacheTest() {
    }

    @Test(priority = 4, dependsOnMethods = "clearCacheTest")
    public void loginTest() {
    }
}
This way, if something fails in between, the rest of the tests in the scenario will automatically be skipped and you won't need to call clearCacheTest directly.
Hope this helps!
Update
After OP's comment
Well, again, I think you kind of have a design issue. For your methods to be called multiple times, they need to sit somewhere they are accessible. You are almost there with your approach, but not quite. So here is how you can call the methods multiple times and run them every time from scratch (I'll show you the code first and then explain in detail):
parent class
public class TestBase {
    // include here all your important methods *without* the @Test annotation

    public void dbReset() {
        // perform db reset
    }

    public void clearCache() {
        // clear browser cache
    }

    public void login() {
        // perform login (called from the child class below)
    }

    public boolean openURL() {
        // try to open the test URL
        boolean didIReachTestURLSuccessfully = true; // placeholder result
        return didIReachTestURLSuccessfully;
    }
}
child class
public class LoginFeature extends TestBase {

    @Test(priority = 1)
    public void attemptToResetDataBase() {
        dbReset();
    }

    @Test(priority = 2, dependsOnMethods = "attemptToResetDataBase")
    public void clearCacheTest() {
        clearCache();
    }

    @Test(priority = 3, dependsOnMethods = "clearCacheTest")
    public void verifySuccessfulLogin() {
        login();
    }
}
So, you include all of your helper methods in a parent class, called TestBase. Then you create your test (the login feature, for example) as a class that extends TestBase. Now you can call your methods multiple times, treat each call as an individual test and connect the tests with dependencies according to your needs (i.e. here I have each one depending on the prior method, but you could rearrange them and make them all depend on one method, or on none).
Because your test class inherits from TestBase, you don't even need to create an object to access the internal methods; you can call them directly instead.
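For instance, the asker's scenario 3 could become its own child class. A hedged sketch (actionY() is assumed to be one more helper in TestBase, like dbReset() or clearCache()):

public class CacheFeature extends TestBase {

    @Test(priority = 1)
    public void clearCacheTest() {
        clearCache();
    }

    @Test(priority = 2, dependsOnMethods = "clearCacheTest")
    public void actionYTest() {
        actionY(); // assumed helper in TestBase
    }
}

If clearCacheTest fails here, actionYTest is reported as skipped rather than failed, which is exactly the reporting behaviour the question asks for.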
Hope this solves it for you, do not hesitate to write a comment if you need more info.

Related

TestNG: get group names without the @BeforeMethod

I know that using reflection like this we can get the list of all groups of the @Test:
@BeforeMethod
public void befrMethod(Method met) {
    Test t = met.getAnnotation(Test.class);
    System.out.println(Arrays.toString(t.groups()));
}
But is there a way to get the groups list while inside the @Test method and not the @BeforeMethod?
Because I am running tests in parallel and this method is not working well for me.
Test methods support ITestContext dependency injection, and that interface has a method called getIncludedGroups:
java.lang.String[] getIncludedGroups() - Returns: All the groups that are included for this test run.
"ITestContext (testng 7.3.0 API)" https://javadoc.io/doc/org.testng/testng/latest/org/testng/ITestContext.html
So try:
@Test
public void befrMethod(ITestContext met) {
    System.out.println(Arrays.toString(met.getIncludedGroups()));
}

NullPointerException when using multiple @Before in Cucumber-jvm

I am using cucumber-jvm.
I have an init method to initialize all the necessary stuff, such as the browser dimensions, application url etc.
I have put this init method under a @Before (cucumber.api) annotation.
@Before
public void initLoginPage() throws Exception {
    getBrowserDimension();
    setBrowserCapabilities();
    init(getApplicationUrl());
}
My life was fine with this running smoothly.
Now, I also wanted to use @Before for some tags at the scenario level.
Say my scenario looks like:
@myTag
Scenario: blah
When I do blah
Then I should get blah-blah
And I wanted to use something like:
@Before("@myTag")
public void beforeScenario() {
    blah = true;
}
But the moment I add another @Before, it starts throwing a NullPointerException. I tracked it back to the runBeforeHooks and runHookIfTagsMatch methods in Cucumber's Runtime class.
They throw the exception for the @Before on initLoginPage() itself.
Is there a conflict created by multiple @Before's?
How can I resolve this?
I found the solution to get this working.
The problem was that the @Before hooks were getting picked up in a random order. Execution isn't based on the assumption that a @Before without parameters will run before @Before("@myTag").
So the trick is to assign the order parameter (in @Before) an explicit value. The default order assigned to a @Before hook is 10000, so if we define the order value explicitly, it should work.
So basically, my code for the initializer could look like:
@Before(order = 1)
public void initLoginPage() throws Exception {
    getBrowserDimension();
    setBrowserCapabilities();
    init(getApplicationUrl());
}
That solved my problem.
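For the tag-scoped hook, the same idea applies: give it a later order so it is guaranteed to run after the initializer. A hedged sketch reusing the question's tag:

@Before(value = "@myTag", order = 2)
public void beforeScenario() {
    blah = true; // runs after initLoginPage() because 2 > 1
}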

Selenium Grid on Multiple Browsers: should each test case have separate class for each browser?

I'm trying to put together my first data-driven test framework that runs tests through Selenium Grid/WebDriver on multiple browsers. Right now, I have each test case in its own class, and I parametrize the browser, so it runs each test case once with each browser.
Is this common in big test frameworks? Or should each test case be copied and fine-tuned to each browser in its own class? So, if I'm testing Chrome, Firefox and IE, should there be classes for each, like "TestCase1Chrome", "TestCase1Firefox", "TestCase1IE"? Or just "TestCase1", parametrized to run three times, once per browser? Just wondering how others do it.
Parameterizing the tests into a single class per test case makes it easier to maintain the non-browser-specific code, while duplicating classes, one per browser, makes it easier to maintain the browser-specific code. By browser-specific code I mean, for example, clicking an item: with ChromeDriver you cannot click in the middle of some elements, whereas with FirefoxDriver you can. So you potentially need two different blocks of code just to click an element (when it's not clickable in the middle).
For those of you that are employed QA Engineers that use Selenium, what would be best practice here?
I am currently working on a project which runs around 75k - 90k tests on a daily basis. We pass the browser as a parameter to the tests. Reasons:
As you mentioned in your question, this helps with maintenance.
We don't see too much browser-specific code. If you have too much browser-specific code, then I would say there is a problem with the WebDriver usage itself, because one of the advantages of Selenium/WebDriver is: write the code once and run it against any supported browser.
The difference I see between my code structure and the one you mention is that I don't have a test class for each test case. Tests are divided based on the features I test, and each feature has a class. That class holds all the tests as methods. I use TestNG so that these methods can be invoked in parallel; a sketch follows. Maybe this won't suit your AUT.
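A minimal sketch of that setup, assuming a testng.xml that sets parallel="methods" and passes a browser parameter (the parameter name and driver choices are illustrative, not from the answer):

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.testng.annotations.*;

public class FeatureXTests {

    private WebDriver driver;

    // testng.xml supplies <parameter name="browser" value="chrome"/> per <test>
    @Parameters("browser")
    @BeforeMethod
    public void setUp(String browser) {
        driver = "chrome".equalsIgnoreCase(browser)
                ? new ChromeDriver()
                : new FirefoxDriver();
    }

    @AfterMethod
    public void tearDown() {
        driver.quit();
    }

    // @Test methods for this feature go here and run in parallel
}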
If you keep the code structure that you mention in the question, sooner or later maintaining it will become a nightmare. Try to stick to the rule: the same test code (written once) for all browsers (environments).
This condition will force you to solve two issues:
1) how to run the tests for all chosen browsers
2) how to apply specific browser workarounds without polluting the test code
Actually, this seems to be your question.
Here is how I solved the first issue.
First, I defined all the environments that I am going to test. I call 'environments' all the conditions under which I want to run my tests: browser name, version number, OS, etc. So, separately from test code, I created an enum like this:
public enum Environments {
    FF_18_WIN7("firefox", "18", Platform.WINDOWS),
    CHR_24_WIN7("chrome", "24", Platform.WINDOWS),
    IE_9_WIN7("internet explorer", "9", Platform.WINDOWS);

    private final DesiredCapabilities capabilities;
    private final String browserName;
    private final String version;
    private final Platform platform;

    Environments(final String browserName, final String version, final Platform platform) {
        this.browserName = browserName;
        this.version = version;
        this.platform = platform;
        capabilities = new DesiredCapabilities();
    }

    public DesiredCapabilities capabilities() {
        capabilities.setBrowserName(browserName);
        capabilities.setVersion(version);
        capabilities.setPlatform(platform);
        return this.capabilities;
    }

    public String browserName() {
        return browserName;
    }
}
It's easy to modify and add environments whenever you need to. As you can notice, I am using this to create and retrieve the DesiredCapabilities that will later be used to create a specific WebDriver.
In order to make the tests run for all the defined environments, I used JUnit's (4.10 in my case) org.junit.experimental.theories:
@RunWith(MyRunnerForSeleniumTests.class)
public class MyWebComponentTestClassIT {

    @Rule
    public MySeleniumRule selenium = new MySeleniumRule();

    @DataPoints
    public static Environments[] environments = Environments.values();

    @Theory
    public void sample_test(final Environments environment) {
        Page initialPage = LoginPage.login(selenium.driverFor(environment),
                selenium.getUserName(), selenium.getUserPassword());
        // your test code here
    }
}
The tests are annotated as @Theory (not as @Test, like in normal JUnit tests) and are passed a parameter. Each test will then run for all the defined values of this parameter, which should be an array of values annotated as @DataPoints. Also, you should use a runner that extends org.junit.experimental.theories.Theories. I use org.junit.rules to prepare my tests, putting all the necessary plumbing there. As you can see, I get the driver with the specific capabilities through the Rule, too, though you could use the following code right in your test:
RemoteWebDriver driver = new RemoteWebDriver(new URL(some_url_string), environment.capabilities());
The point is that having it in the Rule you write the code once and use it for all your tests.
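The answer doesn't show MySeleniumRule itself; here is a hedged sketch of what such a Rule could look like (the grid URL and the per-environment caching are assumptions, and the getUserName()/getUserPassword() accessors used above are omitted):

import java.net.MalformedURLException;
import java.net.URL;
import java.util.HashMap;
import java.util.Map;
import org.junit.rules.ExternalResource;
import org.openqa.selenium.remote.RemoteWebDriver;

public class MySeleniumRule extends ExternalResource {

    private final Map<Environments, RemoteWebDriver> drivers =
            new HashMap<Environments, RemoteWebDriver>();

    public RemoteWebDriver driverFor(final Environments environment) {
        // Lazily create one driver per environment and reuse it within the test.
        RemoteWebDriver driver = drivers.get(environment);
        if (driver == null) {
            try {
                driver = new RemoteWebDriver(new URL("http://localhost:4444/wd/hub"),
                        environment.capabilities());
            } catch (MalformedURLException e) {
                throw new IllegalStateException(e);
            }
            drivers.put(environment, driver);
        }
        return driver;
    }

    @Override
    protected void after() {
        // Quit every session the test opened.
        for (RemoteWebDriver driver : drivers.values()) {
            driver.quit();
        }
    }
}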
As for the Page class, it is a class where I put all the code that uses the driver's functionality (find an element, navigate, etc.). This way, again, the test code stays neat and clear, and, again, you write it once and use it in all your tests.
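A hedged sketch of that idea (the method names are illustrative, not from the answer):

import org.openqa.selenium.By;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.remote.RemoteWebDriver;

public class Page {

    protected final RemoteWebDriver driver;

    public Page(final RemoteWebDriver driver) {
        this.driver = driver;
    }

    public Page navigateTo(final String url) {
        driver.get(url);
        return this; // allows fluent chaining in tests
    }

    public WebElement elementById(final String id) {
        return driver.findElement(By.id(id));
    }
}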
So, this is the solution for the first issue. (I know that you can do a similar thing with TestNG, but I didn't try it.)
To solve the second issue, I created a special package where I keep all the code for browser-specific workarounds. It consists of an abstract class, e.g. BrowserSpecific, which contains the default implementation of behaviour that happens to differ (or be buggy) in some browser. In the same package I have a class for every browser used in the tests, and each of them extends BrowserSpecific.
Here is how it works for the Chrome driver bug that you mention. I create a method clickOnButton in BrowserSpecific with the common code for the affected behaviour:
public abstract class BrowserSpecific {

    protected final RemoteWebDriver driver;

    protected BrowserSpecific(final RemoteWebDriver driver) {
        this.driver = driver;
    }

    public static BrowserSpecific aBrowserSpecificFor(final RemoteWebDriver driver) {
        BrowserSpecific browserSpecific = null;
        if (Environments.FF_18_WIN7.browserName().contains(driver.getCapabilities().getBrowserName())) {
            browserSpecific = new FireFoxSpecific(driver);
        }
        if (Environments.CHR_24_WIN7.browserName().contains(driver.getCapabilities().getBrowserName())) {
            browserSpecific = new ChromeSpecific(driver);
        }
        if (Environments.IE_9_WIN7.browserName().contains(driver.getCapabilities().getBrowserName())) {
            browserSpecific = new InternetExplorerSpecific(driver);
        }
        return browserSpecific;
    }

    public void clickOnButton(final WebElement button) {
        button.click();
    }
}
and then I override this method in the specific class, e.g. ChromeSpecific, where I place the workaround code:
public class ChromeSpecific extends BrowserSpecific {

    ChromeSpecific(final RemoteWebDriver driver) {
        super(driver);
    }

    @Override
    public void clickOnButton(final WebElement button) {
        // This is the Chrome workaround. Plain concatenation avoids
        // MessageFormat's locale digit grouping (e.g. "1,234"), which would
        // break the generated JavaScript for y coordinates >= 1000.
        String script = "window.scrollTo(0, " + button.getLocation().getY() + ");";
        driver.executeScript(script);
        // Followed by the common behaviour for all browsers
        super.clickOnButton(button);
    }
}
When I have to take into account the specific behaviour of some browser, I do the following:
aBrowserSpecificFor(driver).clickOnButton(logoutButton);
instead of:
button.click();
This way, in my common code, I can easily identify where a workaround has been applied, and I keep the workarounds isolated from the common code. I find it easy to maintain, as the underlying bugs usually get fixed over time and the workarounds then have to be changed or eliminated.
One last word about executing the tests: as you are going to use Selenium Grid, you will want to run the tests in parallel, so remember to configure this feature for your JUnit tests (available since v. 4.7).
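A minimal sketch of one way to do that with JUnit's experimental ParallelComputer (the runner class here is illustrative):

import org.junit.experimental.ParallelComputer;
import org.junit.runner.JUnitCore;

public class ParallelRun {
    public static void main(String[] args) {
        // classes() runs the listed test classes in parallel
        // (methods() would instead parallelize the methods within a class).
        JUnitCore.runClasses(ParallelComputer.classes(),
                MyWebComponentTestClassIT.class);
    }
}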
We use TestNG in our organization and we use the parameter option that TestNG provides to specify the environment, i.e. the browser to use, the machine to run on and any other required env config. The browser name is sent through the XML file which controls what needs to run and where; it is set as a global variable. As an extra, we have custom annotations which can override these global variables: if a test must run only on Chrome and no other browser, we specify that on the custom annotation. So even if the parameter says to run on FF, a test annotated with Chrome will always run on Chrome.
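The answer doesn't show the annotation itself; a hedged sketch of what it could look like (the name and the listener plumbing are assumptions):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface RunOnBrowser {
    // e.g. "chrome"; a TestNG listener would read this at runtime
    // and override the browser set globally in testng.xml
    String value();
}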
I somehow believe making one class for each browser is not a good idea. Imagine the flow changes, or there are small tweaks here and there, and you have three classes to change instead of one. And if the number of browsers increases, that's one more class each time.
What I would suggest is to extract the code that is browser-specific. So, if the click behavior is browser-specific, then override it to do the appropriate checks or failure handling per browser.
I do it like this but keep in mind that this is pure WebDriver without the Grid or RC in mind:
// Utility class snippet
// Test classes import this with: import static utility.Utility.*;
// (the enclosing class name is assumed to match that static import)
public class Utility {

    public static WebDriver driver;

    public static void initializeBrowser(String type) {
        if (type.equalsIgnoreCase("firefox")) {
            driver = new FirefoxDriver();
        } else if (type.equalsIgnoreCase("ie")) {
            driver = new InternetExplorerDriver();
        }
        driver.manage().timeouts().implicitlyWait(10000, TimeUnit.MILLISECONDS);
        driver.manage().window().setPosition(new Point(200, 10));
        driver.manage().window().setSize(new Dimension(1200, 800));
    }
}
Now, using JUnit 4.11+ your parameters file needs to look something like this:
firefox, test1, param1, param2
firefox, test2, param1, param2
firefox, test3, param1, param2
ie, test1, param1, param2
ie, test2, param1, param2
ie, test3, param1, param2
Then, using a single CSV-parameterized test class (that you intend to start multiple browser types with), in the @Before annotated method, do the following (a sketch follows the list):
If the current parameterized test is the first test of this browser type, and no already open window exists, open a new browser window of the current type.
If a browser is already open and the browser type is the same, then just re-use the same driver object.
If a browser of a different type than the current test is open, then close it and re-open a browser of the correct type.
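A hedged sketch of that logic, assuming the first CSV column is injected into a browserType field by the parameterized runner and the static driver comes from the utility class above:

import static utility.Utility.*;

import org.junit.Before;

public class ParameterizedBrowserTest {

    private static String currentBrowserType; // type of the window that is open, if any
    private final String browserType; // first CSV column, set by the runner

    public ParameterizedBrowserTest(String browserType) {
        this.browserType = browserType;
    }

    @Before
    public void ensureCorrectBrowser() {
        if (driver == null) {
            initializeBrowser(browserType); // first test: open a new window
        } else if (!browserType.equalsIgnoreCase(currentBrowserType)) {
            driver.quit(); // wrong type: close and re-open
            initializeBrowser(browserType);
        } // same type: re-use the existing driver
        currentBrowserType = browserType;
    }
}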
Of course, my answer doesn't tell you how to handle the parameters: I leave that for you to figure out.

Test Case Structure

I am working on a Selenium project and have certain doubts about converting a manual test case into a Selenium test script.
Assume I have 2 test cases as follows.
First case:
1. Navigate to Gmail
2. Log in to Gmail with a valid username and password
3. Check the inbox for new emails
4. Read the email
5. Sign out
Second case:
1. Navigate to Gmail
2. Log in to Gmail with a valid username and password
3. Compose an email
4. Send the email
5. Sign out
My doubts:
Is each test case one class in Java?
Is each test step a method in Java?
Thanks, some input would help me.
It depends on the complexity and reusability of your Java-Selenium code.
Is each test case one class in Java?
---> In this case, you can write a method for the login functionality, where you pass the username and password as arguments. This method can be called inside any class (including classes written for other test cases) whenever you need to log in.
So, a test case can be a class. If it is a single class, it will be helpful for debugging and maintenance. If the test case is too complex, you can split the functionality into two or more classes.
Is each test step a method in Java?
---> Yes, it can be. When you are checking the login or signout functionality, you will be calling the login method or signout method respectively. Sometimes, if a step cannot be reused and is specific to one application only, it need not be a method; you write the logic inline instead of calling an existing method.
It's based on your requirement.
example
public class GmailTest {

    @BeforeClass
    public void beforeClass() {
        // 1. Navigate to Gmail
        // 2. Sign in
    }

    @BeforeMethod
    public void beforeMethod() {
    }

    @Test
    public void testInbox() {
        // Check the inbox for new emails, read the email
    }

    @Test
    public void testCompose() {
        // Compose an email, send the email
    }

    @AfterMethod
    public void afterMethod() {
    }

    @AfterClass
    public void afterClass() {
        // Sign out
    }
}
@BeforeClass: The annotated method will be run before the first test method in the current class is invoked.
@AfterClass: The annotated method will be run after all the test methods in the current class have been run.
@BeforeMethod: The annotated method will be run before each test method.
@AfterMethod: The annotated method will be run after each test method.
For more info regarding TestNG, see the TestNG documentation.
I once had the exact same problem (but I'm using Python).
So this is what I've done:
1) Each class is a test case
2) Each method is a test step
3) Within the class, set up and tear down completely back to the initial state (so it can be used for distribution later)
4) Create the logic of "if one method fails -> the rest of the methods in the class are not run (failed automatically)"
5) (!!) Create the logic of "if a method changes the state, then add a 'tear down' for it"

How do you mock your repositories?

I've used Moq to mock my repositories. However, someone recently said that they prefer to create hard-coded test implementations of their repository interfaces.
What are the pros and cons of each approach?
Edit: clarified meaning of repository with link to Fowler.
I generally see two scenarios with repositories: I ask for something and I get it, or I ask for something and it isn't there.
If you are mocking your repository, that means your system under test (SUT) is something that uses your repository. So you generally want to test that your SUT behaves correctly when it is given an object from the repository. And you also want to test that it handles the situation properly when you expect to get something back and don't, or aren't sure whether you will get something back.
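In Java terms (the question's Moq is the .NET counterpart of Mockito), those two scenarios might look like this; UserDao and User are borrowed from the answer below, while LoginService is a hypothetical class under test:

import static org.mockito.Mockito.*;

@Test
public void handlesFoundAndMissingUsers() {
    UserDao dao = mock(UserDao.class);
    when(dao.getUser("alice")).thenReturn(new User(1, "alice")); // assumed (id, login) constructor
    when(dao.getUser("nobody")).thenReturn(null); // ask and it isn't there

    LoginService sut = new LoginService(dao); // hypothetical class under test
    // assert that sut behaves correctly in both the found and not-found cases
}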
Hard-coded test doubles are OK if you are doing integration testing. Say you want to save an object and then get it back. But that is testing the interaction of two objects together, not just the behavior of the SUT; they are two different things. Also, if you start coding fake repositories, you need unit tests for those as well, otherwise you end up basing the success and failure of your code on untested code.
That's my opinion on Mocking vs. Test Doubles.
SCNR:
"You call yourself a repository? I've seen matchboxes with more capacity!"
I assume that by "repository" you mean a DAO; if not, then this answer won't apply.
Lately I've been making "in memory" "mock" (or test) implementations of my DAOs that basically operate on data (a List, Map, etc.) passed into the mock's constructor. This way the unit test class is free to throw in whatever data it needs for the test, change it, etc., without forcing all unit tests operating on the "in memory" DAO to use the same test data.
One plus I see in this approach is that if I have a dozen unit tests that need the same DAO (to inject into the class under test, for example), I don't need to remember all of the details of the test data each time (as you would if the "mock" was hardcoded) - each unit test creates its own test data. On the downside, this means each unit test has to spend a few lines creating and wiring up its test data; but that's a small downside to me.
A code example:
public interface UserDao {
    User getUser(int userid);
    User getUser(String login);
}

public class InMemoryUserDao implements UserDao {

    private final List<User> users;

    public InMemoryUserDao(List<User> users) {
        this.users = users;
    }

    public User getUser(int userid) {
        for (User user : users) {
            if (userid == user.getId()) {
                return user;
            }
        }
        return null;
    }

    public User getUser(String login) {
        for (User user : users) {
            if (login.equals(user.getLogin())) {
                return user;
            }
        }
        return null;
    }
}
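A hedged usage sketch (the User constructor is assumed, and the class under test is illustrative):

import java.util.Arrays;

// Each test wires up exactly the data it needs.
UserDao dao = new InMemoryUserDao(Arrays.asList(
        new User(1, "alice"), // assumed (id, login) constructor
        new User(2, "bob")));
// Inject dao into the class under test and assert on its behaviour
// for users that do and don't exist.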
}