How do I mark tests as Passed/Skipped/Ignored in Serenity? - serenity-bdd

I keep getting a PhantomJS error - UnreachableBrowserException.
I want to mark the test as skipped or passed in the catch block of this managed exception. How do I do that?

You can use JUnit's Assume class at the beginning of your test to mark the test as ignored/skipped based on a condition.
For example:
@Test
public void myTest() {
    Assume.assumeTrue(yourBooleanCondition);
    // continue your test steps...
}
You can read about its different applications in the JUnit documentation.
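Applied to the question's scenario, a minimal sketch (assuming the exception surfaces inside the test body; exerciseBrowser() is an illustrative placeholder for your actual steps):
import org.junit.Assume;
import org.junit.Test;
import org.openqa.selenium.remote.UnreachableBrowserException;

public class BrowserDependentTest {

    @Test
    public void myTest() {
        try {
            // stands in for whatever WebDriver/Serenity steps you run
            exerciseBrowser();
        } catch (UnreachableBrowserException e) {
            // reports the test as skipped/ignored instead of failed
            Assume.assumeNoException(e);
        }
    }

    private void exerciseBrowser() {
        // ... your test steps ...
    }
}
Note that assumeNoException aborts the test at that point, so any steps after the catch block won't run.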
However, marking a failed test as passed goes against the whole point of testing and defeats the purpose of writing such tests in the first place. I don't know of any framework that allows it. If you absolutely have to, my guess is you would have to fiddle with the result files directly.

Related

Call functions from smart contract

Could I interact with the functions in my deployed contract without using truffle-contract?
I just want to run and play with my functions to check how they work.
I used MyContract.at("0x...").MyFunctionName(parameters, {from: "0x..."});
but it doesn't work.
Any ideas or suggestions?
Thanks
It's hard to know what you mean by "it doesn't work", but my guess is that you are not seeing any output when running MyContract.at("0x...").MyFunctionName(parameters, {from: "0x..."}); in the Truffle console?
If so, the reason is that invoking a method on a contract instance gives you a promise, and you must handle the result coming back from the call asynchronously. For example, if the function returns a value indicating that some computation has happened, you can print the returned value in the console with:
MyContract.at("0x...").MyFunctionName(parameters, {from: "0x..."}).then(console.log)
If you're writing unit tests (to be executed via truffle test), then you can handle the return value by doing this:
MyContract.at("0x...").MyFunctionName(parameters, {from: "0x..."}).then(function(returnedValue) {
    // do something with the returnedValue, e.g.
    // assert.equal(returnedValue, 3, "The returned value must be 3");
});

How to execute all the methods sequentially in TestNG

I have many methods in my class. When I run the code, the methods are called in random order, but in my class every method depends on its predecessor, i.e. the 2nd method depends on the 1st method, the 3rd method depends on the 2nd method, and so on. I want to execute all the methods sequentially.
I have tried the approaches below and tested the code, but the methods are still called randomly:
// using sequential
@Test(sequential = true)
public void Method1() {
}
@Test(sequential = true)
public void Method2() {
}

// using singleThreaded
@Test(singleThreaded = true)
public void Method1() {
}
@Test(singleThreaded = true)
public void Method2() {
}
I have passed the following parameters in the testng.xml as well:
<test name="Test" preserve-order="true" annotations="JDK">
  <classes>
    <class name="com.test">
      <methods>
        <include name="method1"/>
        <include name="method2"/>
        <include name="method3"/>...
      </methods>
    </class>
  </classes>
</test>
</suite>
When I tested it with @Test(dependsOnMethod = ""), instead of executing the methods sequentially, the methods were skipped.
How do I execute the tests sequentially in TestNG?
If you want to run all your test methods in a specific order, just add a priority to your @Test annotation. See the following:
@Test(priority = 0)
public void function1() {
}
@Test(priority = 1)
public void function2() {
}
@Test(priority = 5)
public void function3() {
}
@Test(priority = 3)
public void function4() {
}
@Test(priority = 2)
public void function5() {
}
In this case function1 is called first, then function2; after that, function5 is called instead of function3, because methods run in ascending priority order (function1, function2, function5, function4, function3).
dependsOnMethods will mark your tests as skipped if the method they depend on fails. This is logically correct: if testB depends on testA and testA fails, then there's no point in running testB. If all you need is for testB to run after testA, without depending on testA's result, add alwaysRun = true to your @Test annotation. These are known as soft dependencies; see the TestNG documentation.
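A minimal sketch of such a soft dependency (method names are illustrative):
import org.testng.annotations.Test;

public class SoftDependencyTest {

    @Test
    public void testA() {
        // may pass or fail
    }

    // runs after testA, and still runs even if testA failed
    @Test(dependsOnMethods = {"testA"}, alwaysRun = true)
    public void testB() {
    }
}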
First off, you don't need the sequential or singleThreaded parameters. Listing the individual methods in the suite is all you need. Your problem probably lies somewhere else. Make sure you're starting the test run from your suite file, not from the class itself.
In case you don't want to use suites every time (because it's tedious, error-prone and inflexible), here are a few solutions to this problem.
Put dependent methods into one method and make liberal use of Reporter.log(String, boolean). This is similar to System.out.println(String), but also saves the String to the TestNG reports. For the second argument you always want to pass true - it tells TestNG to print the message to STDOUT as well. When you do this before every step, the test output alone should be enough to identify the problematic (read: failing) test steps.
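For instance (a sketch; the step methods are illustrative placeholders):
import org.testng.Reporter;
import org.testng.annotations.Test;

public class CombinedStepsTest {

    @Test
    public void wholeScenario() {
        Reporter.log("Step 1: open the page", true); // written to the report and to STDOUT
        openPage();
        Reporter.log("Step 2: submit the form", true);
        submitForm();
    }

    private void openPage() { /* ... */ }

    private void submitForm() { /* ... */ }
}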
Additionally, when doing this you have the possibility of using soft assertions. This basically means that you don't have to abort and fail a whole test only because one optional step doesn't work. You can continue until the next critical point and abort there, or at the end. The errors will still be recorded, and you can decide whether to mark the test run as failed or unstable.
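In TestNG that looks roughly like this (a sketch using org.testng.asserts.SoftAssert; the checked methods are illustrative):
import org.testng.annotations.Test;
import org.testng.asserts.SoftAssert;

public class SoftAssertTest {

    @Test
    public void optionalChecks() {
        SoftAssert soft = new SoftAssert();
        soft.assertEquals(pageTitle(), "Home"); // a failure is recorded, but the test continues
        soft.assertTrue(bannerVisible());       // same here
        soft.assertAll();                       // fails the test now if anything above failed
    }

    private String pageTitle() { return "Home"; }

    private boolean bannerVisible() { return true; }
}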
Use @Test(priority = X) where X is a number. Mark all your test methods with a priority and they'll be executed in ascending order of the priority number. The upside is that you can insert methods between existing steps and the individual annotations stay independent. The downside, however, is that this doesn't enforce any hard dependencies, only the order. I.e. if method testA with priority = 1 fails, method testB with priority = 2 will be executed nonetheless.
You probably could work around this using a listener though. Haven't tried this yet.
Use @Test(dependsOnMethods = {"testA"}). Note that the argument here is not a string but a list of strings (you have that wrong in your post). The upside is hard dependencies: when testB depends on testA, a failure of testA marks testB as skipped. The downside of this annotation is that you have to put all your methods in a very strict chain where every method depends on one other method. If you break this chain, e.g. by having multiple methods that depend on nothing, or several methods that depend on the same method, you'll get into hell's kitchen...
Using both priority and dependsOnMethods together doesn't get you where you'd expect, unfortunately: priority is mostly ignored when hard dependencies are in play.
You can control the execution using @Priority with listeners. See this link - http://beust.com/weblog/2008/03/29/test-method-priorities-in-testng/
Try using the dependsOnMethods dependency from the TestNG framework.
I assume, based on your annotations, that you're using TestNG. I agree with others here that 'sequential' is not the way to go.
Option 1: dependsOnMethods
If your intent is that downstream methods are not even attempted when an upstream dependency fails, try dependsOnMethods instead. With this setup, if an earlier test fails, further tests are skipped (rather than failed).
Like this:
// using dependsOnMethods
@Test
public void method1() {
    // this one passes
}
@Test(dependsOnMethods = {"method1"})
public void method2() {
    fail("assume this one fails");
}
@Test(dependsOnMethods = {"method1"})
public void method3() {
    // this one runs, since it depends on method1 (which passed)
}
@Test(dependsOnMethods = {"method2"})
public void method4() {
    // this one is skipped, since it depends on method2 (which failed)
}
Option 2: priority
If your intent is that all tests are executed regardless of whether upstream tests pass, you can use priority.
Like this:
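A minimal sketch of priority-based ordering (method names are illustrative; fail() as above):
// using priority: every method runs, in ascending priority order
@Test(priority = 1)
public void method1() {
}
@Test(priority = 2)
public void method2() {
    fail("assume this one fails");
}
@Test(priority = 3)
public void method3() {
    // this one still runs, even though method2 failed
}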
Other notes
I agree wholeheartedly with sircapsalot, that tests should be small and self-contained. If you're using a lot of dependencies, you probably have a problem with your overall test framework.
That said, there are instances where some tests need to run first, others need to run last, etc. And there are instances where tests should be skipped if others fail. For example, if you have the tests "updateOneRecord" and "updateManyRecords", you could logically skip the many-records test if the single-record one didn't work.
(And as a final note, your method names should begin with lowercase characters.)
This might not be the best design. I'd advise that you make an attempt to move away from it.
Good test architecture requires that each method be SELF-SUFFICIENT and not depend on other tests completing first. Why? Say Test 2 depends on Test 1, and Test 1 fails. Now Test 2 will fail too. Eventually you'll have Tests 1, 2, 3, 4 and 5 all failing, and you won't even know what the root cause was.
My recommendation to you would be to create self-sufficient, maintainable, and short tests.
This is a great read which will help you in your endeavours:
http://www.lw-tech.com/q1/ug_concepts.htm

Equivalent of never(mock) in JMockit

I'm migrating some test cases from JMock to JMockit. It's been a pleasant journey so far but there's one feature from JMock that I'm not able to find in JMockit (version 0.999.17)
I want to check that a mock is never called (any method).
With JMock, all I needed is the following in my Expectations block:
never(mock)
Is it feasible somehow with JMockit?
EDIT:
I might have found a solution, but it's not very explicit:
if I record any method of this mock with times = 0 in my Expectations block, then this mock becomes strict, and I believe any method called on it would trigger an exception.
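For reference, that workaround looks roughly like this (a sketch; doSomething() stands in for any method of the mocked type):
new Expectations() {{
    // recording any method with times = 0 makes every call to the mock fail
    mock.doSomething();
    times = 0;
}};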
Try an empty full verification block; it verifies that no invocations occurred on any of the given mocks:
@Test
public void someTest(@Mocked SomeType mock)
{
    // Record expectations on other mocked types...

    // Exercise the tested code...

    new FullVerifications(mock) {};
}

Is it meaningful to verifyText() on an element that has just had type() executed on it?

I'm curious about whether the following functional test is possible. I'm working with PHPUnit_Extensions_SeleniumTestCase with Selenium-RC here, but the principle (I think) should apply everywhere.
Suppose I execute the following command on a particular div:
function testInput() {
    $locator = $this->get_magic_locator(); // for the sake of abstraction
    $this->type( $locator, "Beatles" );       // Selenium API call
    $this->verifyText( $locator, "Beatles" ); // Selenium API call
}
Conceptually, I feel that this test should work. I'm entering data into a particular field, and I simply want to verify that the text now exists as entered.
However, the result of my test (the verifyText assertion fails) suggests that the content of the $locator element is empty, even after input.
There was 1 failure:
1) test::testInput
Failed asserting that <string:> matches PCRE pattern "/Beatles/".
Has anyone else tried anything like this? Should it work? Am I making a simple mistake?
You should use verifyValue($locator, $textToVerify) rather than verifyText($locator, $value) for validating textbox values. Typed input is stored in the element's value attribute, whereas verifyText checks the element's inner text, which remains empty for an input field.
To answer your initial question ("Is it meaningful ..."), well, maybe. What you're testing at that point is the browser's ability to respond to keystrokes, which would be sort of lame. Unless you've got some JavaScript code wired to some of the field's properties, in which case it might be sort of important.
Standard programmer's answer - "It depends".

Unit Testing in Visual Studio

I want to create a load of unit tests to make sure my stored procedures are working, but I'm failing (I'm new to tests in Visual Studio).
Basically I want to do the following:
<TestClass()>
Public Class StoredProcedureTests
    Dim myGlobalVariable As Integer

    <TestMethod()>
    Public Sub Test1()
        ' use a stored procedure to insert a record
        ' set myGlobalVariable to the result from the SP
    End Sub

    <TestMethod()>
    Public Sub Test2()
        ' use a stored procedure to modify the record we just added
    End Sub

    <TestMethod()>
    Public Sub Test3()
        ' use a stored procedure to delete the record we just added
    End Sub
End Class
The problem is that because the tests don't run sequentially, tests 2 and 3 fail because the global variable isn't set.
Advice? :'(
The key word here is 'unit'.
A unit test should be self-contained, i.e. be comprised of the code to perform the test, and should not rely on other tests being executed first, or affect the operation of other tests.
See the list of TDD anti-patterns here for things that you should avoid when writing tests.
http://blog.james-carr.org/2006/11/03/tdd-anti-patterns/
Check out the TestInitializeAttribute. You place it on a method that should run before every test to allocate the appropriate resources.
One side note, since it looks like you're misinterpreting how these should work: unit tests should not require artifacts from other tests. If you're testing modifications, the initialize/setup method(s) should create the state that's to be modified.
Check out Why TestInitialize gets fired for every test in my Visual Studio unit tests?
I think that will point you in the correct direction. Instead of running it as a test, you could run it as a TestInitialize.
There are 'ordered tests', but they break the idea of each test running independently.
First off, the test you describe doesn't sound like a unit test, but more like an integration test. A unit test is typically testing a unit of functionality in your code, isolated from the rest of the system, and runs in memory. An integration test aims at verifying that the components of the system, when assembled together, work as intended.
Then, without going into the details of the system, it looks to me like I would approach it as one single test calling multiple methods - something along the lines of:
[Test]
public void Verify_CreateUpdateDelete()
{
    CreateEntity();
    // assert that the entity exists
    UpdateEntity();
    // assert that the entity has been updated
    DeleteEntity();
    // assert that the entity has been deleted
}