Is there a way to set priorities among classes in TestNG?

Is there a way to set execution priorities among classes in TestNG?
Is there an annotation or an XML setting for this?
Thanks.

This can be achieved in the testng.xml file: the classes are run in the order in which they are listed there, so arrange them as needed and run that file directly.
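For example, a minimal testng.xml sketch might look like this (the suite, test, and class names are placeholders):

<!DOCTYPE suite SYSTEM "https://testng.org/testng-1.0.dtd">
<suite name="OrderedSuite">
  <test name="OrderedTest" preserve-order="true">
    <classes>
      <!-- classes are run in the order they are listed here -->
      <class name="com.example.tests.FirstClassTest"/>
      <class name="com.example.tests.SecondClassTest"/>
    </classes>
  </test>
</suite>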

Within a single class, you can also control the order with the priority attribute of @Test:

@Test(priority = 1)
public void testMethodA() {
    System.out.println("Executing - testMethodA");
}

@Test
public void testMethodB() {
    System.out.println("Executing - testMethodB");
}

@Test(priority = 2)
public void testMethodC() {
    System.out.println("Executing - testMethodC");
}
Output:

Executing - testMethodB
Executing - testMethodA
Executing - testMethodC

testMethodB is executed first because it has the default priority of 0.


TestNG - what is the maximum value for priority in the @Test annotation?

I have ordered the automated tests in a particular sequence by using priority=xxx in the @Test annotation.
For the last class to be tested, the priority values started at 10201 and above. However, this particular class was executed right after the first class, whose priorities ranged from 1-10.
Does anyone have any idea? I looked at the TestNG documentation, but the allowed values are not discussed.
I looked into the TestNG source code, and priority is an int, so the maximum value is 2147483647.
In fact, you can test it easily by running the following tests:
import org.testng.annotations.Test;

public class Testing {

    @Test(priority = 2147483647)
    public void testOne() {
        System.out.println("Test One"); // runs second, because it has the higher priority value
    }

    @Test(priority = 1)
    public void testTwo() {
        System.out.println("Test Two"); // runs first
    }
}
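Since priority is an int, you could also write the same maximum value as the Integer.MAX_VALUE constant, which is a bit more readable (a small sketch; the class and method names are just for illustration):

import org.testng.annotations.Test;

public class MaxPriorityTest {

    // Integer.MAX_VALUE is 2147483647, the largest value the priority attribute accepts
    @Test(priority = Integer.MAX_VALUE)
    public void runsLast() {
        System.out.println("Lowest-priority test, scheduled last");
    }
}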

In TechTalk SpecFlow, how do I abandon a Scenario?

My scenario reads a file with hundreds of lines. Each line calls an API service, but the service may not be running. If I get a non-200 response (available inside the 'Then' method), I want to abandon the scenario and save time.
How can I tell TechTalk SpecFlow not to carry on with the other tests?
You can use a concept like this:
[Binding]
public class binding
{
    public static FeatureContext _featureContext;

    // SpecFlow injects the FeatureContext into the binding's constructor
    public binding(FeatureContext featureContext)
    {
        _featureContext = featureContext;
    }

    [Given(@"user login")]
    public void login()
    {
        // do test
        bool testPassed = true; // set based on the test outcome
        binding._featureContext["testPass"] = testPassed;
    }
}
Then in BeforeScenario():

[BeforeScenario(Order = 1)]
public void BeforeScenario()
{
    Assert.IsTrue((bool)binding._featureContext["testPass"]);
}

How to catch test failure in Selenium Webdriver using JUnit4?

I need to write the test case Pass/Fail status into an Excel report. How do I capture the result in a simple way? The Ant XML/HTML reports display the status (Success/Failed), so I believe there must be some way to catch it, maybe in the @After method.
Can anyone help me out with this?
I am using Selenium WebDriver with JUnit 4 and Apache POI (which is giving me a hard time too!) for the Excel handling. Let me know in case you need more info.
Thanks :)
P.S.: As I am asking a lot of questions, it would be great if someone could suggest changes to make these questions and threads more helpful for others.
I see your problem, and this is what I do in all my test cases now.
In your test case, if you want to check whether two Strings are equal, you might be using this code:
Assert.assertTrue(value1.equals(value2));
If these values are not equal, an AssertionError is thrown.
Instead, you can change your code like this:
String testCaseStatus = "";
if (value1.equals(value2)) {
    testCaseStatus = "success";
} else {
    testCaseStatus = "fail";
}
Now you store this result in your Excel sheet by passing testCaseStatus to the code that appends a row to the sheet, which you have implemented using Apache POI. You can also handle an "error" status with a try/catch block: catch the exception and write the status as error to the Excel sheet.
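A minimal Apache POI sketch of such a helper might look like the following (the workbook file name, sheet name, and method name are assumptions for illustration; a real reporter would open an existing workbook and append rows instead of creating a new one each time):

import java.io.FileOutputStream;
import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.ss.usermodel.Workbook;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;

public class ExcelReporter {

    // Writes one row (test name, status) into a fresh workbook and saves it to disk.
    public static void writeStatus(String testName, String testCaseStatus) throws Exception {
        Workbook workbook = new XSSFWorkbook();
        Sheet sheet = workbook.createSheet("Results");
        Row row = sheet.createRow(0);
        row.createCell(0).setCellValue(testName);
        row.createCell(1).setCellValue(testCaseStatus);
        try (FileOutputStream out = new FileOutputStream("results.xlsx")) {
            workbook.write(out);
        }
        workbook.close();
    }
}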
Edited part of answer:
I just figured out how to use the TestResult class.
This is some sample code I'm posting.
This is the test case class, called ExampleTest:
import junit.framework.Assert;
import junit.framework.AssertionFailedError;
import junit.framework.TestResult;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class ExampleTest implements junit.framework.Test {

    @Before
    public void setUp() throws Exception {
    }

    @After
    public void tearDown() throws Exception {
    }

    @Test
    public void test(TestResult result) {
        try {
            Assert.assertEquals("hari", "");
        } catch (AssertionFailedError e) {
            result.addFailure(this, e);
        } catch (AssertionError e) {
            result.addError(this, e);
        }
    }

    @Override
    public int countTestCases() {
        // TODO Auto-generated method stub
        return 0;
    }

    @Override
    public void run(TestResult result) {
        test(result);
    }
}
I call the above test case from this code:

import junit.framework.TestResult;
import junit.framework.TestSuite;

public class Test {

    public static void main(String[] args) {
        TestResult result = new TestResult();
        TestSuite suite = new TestSuite();
        suite.addTest(new ExampleTest());
        suite.run(result);
        System.out.println(result.errorCount());
    }
}
You can run many test cases by simply adding them to this suite, and then get the overall result from the TestResult class using its failures() and errors() methods.
You can read more on this from here.
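For instance, a small sketch of reading the collected results could look like this (the iteration code below is mine, not part of the original answer):

import java.util.Enumeration;
import junit.framework.TestFailure;
import junit.framework.TestResult;

public class ResultPrinter {

    // Prints every failure and error recorded in a junit.framework.TestResult.
    public static void printResults(TestResult result) {
        System.out.println("Failures: " + result.failureCount() + ", errors: " + result.errorCount());

        Enumeration<TestFailure> failures = result.failures();
        while (failures.hasMoreElements()) {
            TestFailure failure = failures.nextElement();
            System.out.println("FAIL: " + failure.failedTest() + " -> " + failure.exceptionMessage());
        }

        Enumeration<TestFailure> errors = result.errors();
        while (errors.hasMoreElements()) {
            TestFailure error = errors.nextElement();
            System.out.println("ERROR: " + error.failedTest() + " -> " + error.exceptionMessage());
        }
    }
}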

TestNG Test Case failing with JMockit "Invalid context for the recording of expectations"

The following TestNG (6.3) test case generates the error "Invalid context for the recording of expectations"
@Listeners({ Initializer.class })
public final class ClassUnderTestTest {

    private ClassUnderTest cut;

    @SuppressWarnings("unused")
    @BeforeMethod
    private void initialise() {
        cut = new ClassUnderTest();
    }

    @Test
    public void doSomething() {
        new Expectations() {
            MockedClass tmc;
            {
                tmc.doMethod("Hello"); result = "Hello";
            }
        };
        String result = cut.doSomething();
        assertEquals(result, "Hello");
    }
}
The class under test is below.
public class ClassUnderTest {

    MockedClass service = new MockedClass();
    MockedInterface ifce = new MockedInterfaceImpl();

    public String doSomething() {
        return (String) service.doMethod("Hello");
    }

    public String doSomethingElse() {
        return (String) ifce.testMethod("Hello again");
    }
}
I am making the assumption that, because I am using the @Listeners annotation, I do not require the -javaagent command-line argument. This assumption may be wrong....
Can anyone point out what I have missed?
The JMockit-TestNG Initializer must run once for the whole test run, so using #Listeners on individual test classes won't work.
Instead, simply upgrade to JMockit 0.999.11, which works transparently with TestNG 6.2+, without any need to specify a listener or the -javaagent parameter (unless running on JDK 1.5).

Grails integration test suite

We have a set of integration tests which depend on the same set of static data. Since the amount of data is huge, we don't want to set it up for every test. Is it possible to set up the data once at the start, run the group of tests, and roll back the data at the end?
What we effectively want is a rollback at the test-suite level rather than at the test-case level. We are using Grails 1.3.1; any pointers would be highly helpful for us to proceed. Thanks in advance.
-Prakash
For one test case you could use:

@BeforeClass
public static void setUpBeforeClass() throws Exception {
    // set up the shared static data once for the whole class
}

@AfterClass
public static void tearDownAfterClass() throws Exception {
    // tear down / roll back the shared data once all tests in the class have run
}

I haven't tried a suite of test cases (yet).
I did have some trouble using findByName in the static methods and had to resort to saving an id and using get.
I did try rolling up a suite, but no joy; I got a "no runnable methods" error.
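For reference, a JUnit 4 suite is usually declared as below (a sketch with placeholder class names); the "no runnable methods" error typically appears when JUnit tries to run a class that has no @Test methods and no @RunWith runner:

import org.junit.runner.RunWith;
import org.junit.runners.Suite;

// The suite class itself contains no tests; the runner executes the listed classes in order.
@RunWith(Suite.class)
@Suite.SuiteClasses({
    FirstIntegrationTests.class,
    SecondIntegrationTests.class
})
public class AllIntegrationTests {
}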
You can take control of the transaction/rollback behaviour by marking your test case as non-transactional and managing data, transactions and rollbacks yourself. Example:
class SomeTests extends GrailsUnitTestCase {

    static transactional = false
    static boolean testDataGenerated = false

    protected void setUp() {
        if (!testDataGenerated) {
            generateTestData()
            testDataGenerated = true
        }
    }

    void testSomething() {
        ...test...
    }

    void testSomethingTransactionally() {
        DomainObject.withTransaction {
            ...test...
        }
    }

    void testSomethingTransactionallyWithRollback() {
        DomainObject.withTransaction { status ->
            ...test...
            status.setRollbackOnly()
        }
    }
}