How to execute all the methods sequentially in TestNG - Selenium

I have many methods in my class. When I run the code, the methods are called in random order, but in my class every method depends on its predecessor, i.e. the 2nd method depends on the 1st method, the 3rd method depends on the 2nd method, and so on. I want to execute all the methods sequentially.
I have tried the methods below and tested the code, but the methods are still called in random order:
// using sequential
@Test(sequential = true)
public void Method1(){
}
@Test(sequential = true)
public void Method2(){
}
// using singleThreaded
@Test(singleThreaded = true)
public void Method1(){
}
@Test(singleThreaded = true)
public void Method2(){
}
I have passed the following parameters in the testng.xml as well:
<suite name="Suite">
  <test name="Test" preserve-order="true" annotations="JDK">
    <classes>
      <class name="com.test">
        <methods>
          <include name="method1"/>
          <include name="method2"/>
          <include name="method3"/>...
        </methods>
      </class>
    </classes>
  </test>
</suite>
When I tested it with @Test(dependsOnMethod = ""), instead of executing the methods sequentially, the methods were skipped.
How do I execute the tests sequentially in TestNG?

If you want to run all your test methods in a specific order, then just add a priority to your @Test annotation. See the following:
@Test(priority = 0)
public void function1() {
}
@Test(priority = 1)
public void function2() {
}
@Test(priority = 5)
public void function3() {
}
@Test(priority = 3)
public void function4() {
}
@Test(priority = 2)
public void function5() {
}
In this case function1 is called first, then function2; after that function5 runs instead of function3, because priorities execute from lowest to highest (the full order is function1, function2, function5, function4, function3).

dependsOnMethods will mark your tests as skipped if the method they depend on fails. This is logically correct: if testB depends on testA and testA fails, then there's no use in running testB. If all you need is for testB to run after testA, and testB doesn't depend on the result of testA, then add alwaysRun = true to your @Test annotation. These are known as soft dependencies. Refer here.
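For example, a minimal sketch of such a soft dependency (the class and method names are illustrative):
import static org.testng.Assert.fail;
import org.testng.annotations.Test;

public class SoftDependencyTest {
    @Test
    public void testA() {
        fail("assume this one fails");
    }
    // alwaysRun = true turns this into a soft dependency: testB still
    // runs after testA, even though testA failed
    @Test(dependsOnMethods = {"testA"}, alwaysRun = true)
    public void testB() {
    }
}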

First off, you don't need the sequential or singleThreaded parameters. Listing the single methods in the suite is all you need. Your problem probably lies somewhere else. Make sure you're starting the test run from your suite file, not from the class itself.
In case you don't want to use suites every time (because it's tedious, error-prone and inflexible), here are a few solutions to this problem.
Put dependent steps into one test method and make extensive use of Reporter.log(String, boolean). This is similar to System.out.println(String), but additionally saves the String to the TestNG reports. For the second argument you always want to pass true - it tells TestNG to also print the message to STDOUT. If you do this before every step, the test output alone should be enough to identify the problematic (read: failing) test steps.
Additionally, when doing this you have the option of using soft assertions. This basically means that you don't have to abort and fail a whole test just because one optional step doesn't work. You can continue until the next critical point and abort there, or at the end. The errors will still be saved, and you can decide whether you want to mark a test run as failed or unstable.
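A minimal sketch combining both ideas, assuming a reasonably recent TestNG that ships org.testng.asserts.SoftAssert (the class name and step messages are illustrative):
import org.testng.Reporter;
import org.testng.annotations.Test;
import org.testng.asserts.SoftAssert;

public class LoginFlowTest {
    @Test
    public void endToEndFlow() {
        SoftAssert soft = new SoftAssert();

        // true = also print to STDOUT, not just the TestNG report
        Reporter.log("Step 1: open the login page", true);
        soft.assertTrue(true, "login page should be visible"); // replace true with a real check

        Reporter.log("Step 2: log in", true);
        soft.assertTrue(true, "user should be logged in"); // replace true with a real check

        // replays all recorded failures and fails the test if any step failed
        soft.assertAll();
    }
}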
Use @Test(priority = X) where X is a number. Mark all your test methods with a priority and they'll be executed in order of the priority number, from lowest to highest. The upside of this is that you can put methods in between existing steps and the single annotations stay independent of each other. The downside is that this doesn't enforce any hard dependencies, only the order, i.e. if method testA with priority = 1 fails, method testB with priority = 2 will be executed nonetheless.
You could probably work around that with a listener, though. I haven't tried this yet.
Use @Test(dependsOnMethods = {"testA"}). Note that the argument here is not a string, but a list of strings (you have that wrong in your post). The upside is hard dependencies, meaning that when testB depends on testA, a failure of testA marks testB as skipped. The downside of this annotation is that you have to put all your methods into a very strict chain where every method depends on one other method. If you break this chain, e.g. by having multiple methods that depend on nothing, or several methods that depend on the same method, you'll get into hell's kitchen...
Unfortunately, using both priority and dependsOnMethods together doesn't get you where you'd expect: priority is mostly ignored when hard dependencies are in play.

You can control the execution order using a custom @Priority annotation together with a listener. See this link - http://beust.com/weblog/2008/03/29/test-method-priorities-in-testng/
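A rough sketch of the approach described in that post: a custom @Priority annotation plus TestNG's IMethodInterceptor listener (the annotation and class names are illustrative, and the interceptor must be registered in testng.xml or via @Listeners):
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import org.testng.IMethodInstance;
import org.testng.IMethodInterceptor;
import org.testng.ITestContext;

// custom annotation carrying the priority value
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Priority {
    int value() default 0;
}

// reorders test methods by ascending @Priority before the run starts
public class PriorityInterceptor implements IMethodInterceptor {
    @Override
    public List<IMethodInstance> intercept(List<IMethodInstance> methods,
                                           ITestContext context) {
        List<IMethodInstance> ordered = new ArrayList<>(methods);
        ordered.sort(Comparator.comparingInt(m -> {
            Priority p = m.getMethod().getConstructorOrMethod()
                    .getMethod().getAnnotation(Priority.class);
            return p != null ? p.value() : 0;
        }));
        return ordered;
    }
}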

Try using the dependsOnMethods dependency from the TestNG framework.

I assume, based on your annotations, that you're using TestNG. I agree with others here that 'sequential' is not the way to go.
Option 1: dependsOnMethods
If your intent is that downstream methods are not even attempted when an upstream dependency fails, try dependsOnMethods. With this setup, if an earlier test fails, the tests that depend on it are skipped (rather than failed).
Like this:
// using dependsOnMethods
@Test
public void method1(){
    // this one passes
}
@Test(dependsOnMethods = {"method1"})
public void method2(){
    fail("assume this one fails");
}
@Test(dependsOnMethods = {"method1"})
public void method3(){
    // this one runs, since method1 (its dependency) passed
}
@Test(dependsOnMethods = {"method2"})
public void method4(){
    // this one is skipped, since method2 (its dependency) failed
}
Option 2: priority
If your intent is that all tests are executed, regardless of whether upstream tests pass, you can use priority.
Like this (a minimal sketch; the fail call just simulates a failing upstream test):
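// using priority: methods run in ascending priority order,
// even when an earlier one fails
@Test(priority = 1)
public void method1(){
    fail("assume this one fails");
}
@Test(priority = 2)
public void method2(){
    // this one still runs, because priority only orders tests
    // and creates no dependency on method1's result
}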
Other notes
I agree wholeheartedly with sircapsalot, that tests should be small and self-contained. If you're using a lot of dependencies, you probably have a problem with your overall test framework.
That said, there are instances where some tests need to run first, others need to run last, etc. And there are instances where tests should be skipped if others fail. For example, if you have the tests "updateOneRecord" and "updateManyRecords", you could logically skip updateManyRecords if updateOneRecord didn't work.
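A sketch of that skip relationship, using those hypothetical method names:
@Test
public void updateOneRecord() {
    // update a single record...
}
// skipped (not failed) if updateOneRecord fails
@Test(dependsOnMethods = {"updateOneRecord"})
public void updateManyRecords() {
    // update multiple records...
}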
(And as a final note, your method names should begin with lowercase characters.)

This might not be the best design, and I'd advise you to move away from it.
Good test architecture requires that each test be SELF-SUFFICIENT and not depend on other tests completing first. Why? Say Test 2 depends on Test 1, and Test 1 fails. Now Test 2 fails as well. Eventually you'll have Tests 1, 2, 3, 4 and 5 failing, and you won't even know what the cause was.
My recommendation to you would be to create self-sufficient, maintainable, and short tests.
This is a great read which will help you in your endeavours:
http://www.lw-tech.com/q1/ug_concepts.htm

Related

How to measure static test coverage?

So, DLang (effectively) comes with code coverage built in. That's cool.
My issue is that I do a lot of metaprogramming. I tend to test my templates with static asserts:
template CompileTimeFoo(size_t i)
{
    enum CompileTimeFoo = i + 3;
}

unittest
{
    static assert(CompileTimeFoo!5 == 8);
}
Unfortunately (and obviously), when running tests with coverage, the body of CompileTimeFoo is not counted as "hit" or even as "hittable" lines.
Now, let's consider a slightly more complicated template:
enum IdentifierLength(alias X) = __traits(identifier, X).length;
void foo(){}
static assert(IdentifierLength!foo == 3);
In this case there are still no "hits", but there is one hittable line (where foo is defined). Because of that, my coverage falls to 0% (in this example). If you look at this submodule of my pet project and at its coverage on codecov, you'll see this exact case - I've prepared a not-that-bad test suite, yet it looks like the whole module is wildland, because the coverage is 0% (or close to it).
Disclaimer: I want to keep my tests in a different source set. This is a matter of taste, but I dislike mixing tests with production code. I don't know exactly what would happen if I wrapped foo in version(unittest){...}, but I expect that (since this code is still pushed through the compiler) it wouldn't change much.
Again: I DO understand why this happens. I was wondering whether there is some trick to work around it. Specifically: is there a way to calculate coverage for things that get called ONLY at compile time?
I could hack my tests for coverage's sake when I do mixins, and just test code-generating things by comparing strings at runtime, but: 1. this would be ugly, and 2. it doesn't cover the case above.

Gmock - InSequence vs RetiresOnSaturation

I don't understand the following gmock example:
{
    InSequence s;
    for (int i = 1; i <= n; i++) {
        EXPECT_CALL(turtle, GetX())
            .WillOnce(Return(10 * i))
            .RetiresOnSaturation();
    }
}
When I remove .RetiresOnSaturation() the above code works the same way - GetX returns 10, 20 and so on. What is the reason to use .RetiresOnSaturation() when we also use an InSequence object? Could you explain that?
In the exact example given, RetiresOnSaturation() doesn't change anything. Once the final expectation in the sequence is saturated, that expectation remains active but saturated; a further call would cause the test to fail.
RetiresOnSaturation() is generally used when overlaying expectations. For example:
#include <gmock/gmock.h>

using ::testing::InSequence;
using ::testing::Return;

class Turtle {
public:
    virtual ~Turtle() = default;
    virtual int GetX() = 0;
};

class MockTurtle : public Turtle {
public:
    MOCK_METHOD0(GetX, int());
};

TEST(GmockStackoverflow, QuestionA)
{
    MockTurtle turtle;
    // General expectation - perhaps set on the fixture class?
    EXPECT_CALL(turtle, GetX()).WillOnce(Return(0));
    // Extra expectation overlaying the general one
    EXPECT_CALL(turtle, GetX()).WillOnce(Return(10)).RetiresOnSaturation();
    turtle.GetX();
    turtle.GetX();
}
This property can be used in combination with InSequence when the sequence of expected events overlays another expectation. In this scenario the last expectation in the sequence must be marked RetiresOnSaturation(). Note that only the last expectation needs to be marked, because when an expectation in a sequence is saturated, it retires its prerequisite expectations.
The example below demonstrates how this might work out in practice. Removing RetiresOnSaturation() causes the test to fail.
TEST(GmockStackoverflow, QuestionB)
{
    MockTurtle turtle;
    EXPECT_CALL(turtle, GetX()).WillOnce(Return(0));
    {
        InSequence s;
        EXPECT_CALL(turtle, GetX()).WillOnce(Return(10));
        EXPECT_CALL(turtle, GetX()).WillOnce(Return(10)).RetiresOnSaturation();
    }
    turtle.GetX();
    turtle.GetX();
    turtle.GetX();
}
In my experience, some (OK, possibly many) developers hit a problem, like a gtest error message, discover that RetiresOnSaturation() makes the problem go away, and then get into the habit of liberally sprinkling RetiresOnSaturation() throughout their unit tests, because it "solves problems". This is apparently easier than reasoning about what the test case is supposed to accomplish. On the other hand, I like to think in terms of what has to happen (according to the documented API contract) in what order, which can be a partial order if you use After() or don't have everything in the same sequence, and that makes the more expressive constructs like InSequence or After() come naturally to mind.
So, as Adam Casey stated, there is no technical reason; IMO it could be an issue of magical thinking or insufficient training.
I recommend avoiding RetiresOnSaturation(). There are some general issues with it (like causing confusing warning messages, see the example below), but mostly it is too low-level compared to the alternatives, and it is almost never needed if you have clean contracts and use the previously mentioned alternatives correctly. You could say it's the goto of gtest expectations…
Addendum A: Example of when a gratuitous RetiresOnSaturation() makes for worse messages, and yes, I have seen such code:
EXPECT_CALL(x, foo()).WillOnce(Return(42)).RetiresOnSaturation();
If x.foo() is called more than once, let's say twice, then without RetiresOnSaturation() you would have received an error message like "No matching expectation for foo() … Expected: to be called once … Actual: called twice (oversaturated)", which is about as specific as possible. But because of RetiresOnSaturation(), you only get an "Unexpected function call foo()" warning, which is confusing and meaningless.
Addendum B: In your example, it is also possible that a refactoring to use InSequence was made after the fact, and the person doing the refactoring didn't realize that RetiresOnSaturation() was now redundant. You could do a "blame" in your version control system to check.

How do I mark tests as Passed/Skipped/Ignored in serenity?

I keep getting a PhantomJS error - UnreachableBrowserException.
I want to mark the test as skipped or passed in the catch block of this managed exception. How do I do that?
You can use JUnit's Assume class at the beginning of your test to mark the test as ignored/skipped based on a condition.
For example:
@Test
public void myTest() {
    Assume.assumeTrue(yourBooleanCondition);
    // continue your test steps...
}
You can read about its different applications here.
However, being able to mark a failed test as passed goes against the testing mantra and defeats the purpose of writing such tests entirely. I don't know of any framework that would allow you to do that. If you absolutely have to, my guess is you'd need to fiddle with the results.

Equivalent of never(mock) in JMockit

I'm migrating some test cases from JMock to JMockit. It's been a pleasant journey so far but there's one feature from JMock that I'm not able to find in JMockit (version 0.999.17)
I want to check that a mock is never called (any method).
With JMock, all I needed is the following in my Expectations block:
never(mock)
Is it feasible somehow with JMockit?
EDIT:
I might have found a solution, but it's not very explicit.
If I record any method of this mock with times = 0 in my Expectations block, then this mock becomes strict, and I believe any method called on it would trigger an exception.
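Something like this (a sketch; someMethod is just a placeholder for one of the mock's methods):
new Expectations() {{
    // recording any method with times = 0 makes the mock strict,
    // so any call to it should now fail the test
    mock.someMethod(); times = 0;
}};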
Try an empty full verifications block; it should verify that no invocations at all occurred on the given mocks:
@Test
public void someTest(@Mocked SomeType mock)
{
    // Record expectations on other mocked types...

    // Exercise the tested code...

    new FullVerifications(mock) {};
}

Unit Testing in Visual Studio

I want to create a load of unit tests to make sure my stored procedures are working, but I'm failing (I'm new to testing in Visual Studio).
Basically I want to do the following:
<TestClass()>
Public Class StoredProcedureTests
    Dim myGlobalVariable As Integer

    <TestMethod()>
    Public Sub Test()
        ' use a stored procedure to insert a record
        ' set myGlobalVariable = result from the SP
    End Sub

    <TestMethod()>
    Public Sub Test2()
        ' use a stored procedure to modify the record we just added
    End Sub

    <TestMethod()>
    Public Sub Test3()
        ' use a stored procedure to delete the record we just added
    End Sub
End Class
The problem is that, because the tests don't run sequentially, tests 2 and 3 fail since the global variable isn't set.
Advice? :'(
The key word here is 'unit'.
A unit test should be self-contained, i.e. comprise the code to perform the test, and should not rely on other tests being executed first or affect the operation of other tests.
See the list of TDD anti-patterns here for things that you should avoid when writing tests.
http://blog.james-carr.org/2006/11/03/tdd-anti-patterns/
Check out the TestInitializeAttribute. You would place this on a method that should run before every test to allocate the appropriate resources.
One side note, since it looks like you're misinterpreting how these should work: unit tests should not require artifacts from other tests. If you're testing modifications, the initialize/setup method(s) should create the data that's to be modified.
Check out Why TestInitialize gets fired for every test in my Visual Studio unit tests?
I think that will point you in the correct direction. Instead of running it as a test, you could run it as a TestInitialize.
There are 'ordered tests', but they break the idea of each test running independently.
First off, the test you describe doesn't sound like a unit test, but more like an integration test. A unit test is typically testing a unit of functionality in your code, isolated from the rest of the system, and runs in memory. An integration test aims at verifying that the components of the system, when assembled together, work as intended.
Then, without going into the details of the system, it looks to me like I would approach it as one single test calling multiple methods - something along the lines of:
[Test]
public void Verify_CreateUpdateDelete()
{
    CreateEntity();
    // assert that the entity exists

    UpdateEntity();
    // assert that the entity has been updated

    DeleteEntity();
    // assert that the entity has been deleted
}